title | content | commands | url |
---|---|---|---|
Chapter 10. Viewing audit logs | Chapter 10. Viewing audit logs OpenShift Container Platform auditing provides a security-relevant chronological set of records documenting the sequence of activities that have affected the system by individual users, administrators, or other components of the system. 10.1. About the API audit log Audit works at the API server level, logging all requests coming to the server. Each audit log contains the following information: Table 10.1. Audit log fields Field Description level The audit level at which the event was generated. auditID A unique audit ID, generated for each request. stage The stage of the request handling when this event instance was generated. requestURI The request URI as sent by the client to a server. verb The Kubernetes verb associated with the request. For non-resource requests, this is the lowercase HTTP method. user The authenticated user information. impersonatedUser Optional. The impersonated user information, if the request is impersonating another user. sourceIPs Optional. The source IPs, from where the request originated and any intermediate proxies. userAgent Optional. The user agent string reported by the client. Note that the user agent is provided by the client, and must not be trusted. objectRef Optional. The object reference this request is targeted at. This does not apply for List -type requests, or non-resource requests. responseStatus Optional. The response status, populated even when the ResponseObject is not a Status type. For successful responses, this will only include the code. For non-status type error responses, this will be auto-populated with the error message. requestObject Optional. The API object from the request, in JSON format. The RequestObject is recorded as is in the request (possibly re-encoded as JSON), prior to version conversion, defaulting, admission or merging. It is an external versioned object type, and might not be a valid object on its own. This is omitted for non-resource requests and is only logged at request level and higher. responseObject Optional. The API object returned in the response, in JSON format. The ResponseObject is recorded after conversion to the external type, and serialized as JSON. This is omitted for non-resource requests and is only logged at response level. requestReceivedTimestamp The time that the request reached the API server. stageTimestamp The time that the request reached the current audit stage. annotations Optional. An unstructured key value map stored with an audit event that may be set by plugins invoked in the request serving chain, including authentication, authorization and admission plugins. Note that these annotations are for the audit event, and do not correspond to the metadata.annotations of the submitted object. Keys should uniquely identify the informing component to avoid name collisions, for example podsecuritypolicy.admission.k8s.io/policy . Values should be short. Annotations are included in the metadata level. 
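To see how these fields fit together in practice, the following is a minimal sketch, not part of the product procedure: it assumes jq is installed and uses node-1.example.com as a placeholder control plane node name. It summarizes each Kubernetes API server audit event as one tab-separated line of audit ID, verb, username, request URI, and response code: $ oc adm node-logs node-1.example.com --path=kube-apiserver/audit.log | jq -r '[.auditID, .verb, .user.username, .requestURI, (.responseStatus.code // "")] | @tsv' Because responseStatus is optional, the sketch substitutes an empty string when the code field is absent.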
Example output for the Kubernetes API server: {"kind":"Event","apiVersion":"audit.k8s.io/v1","level":"Metadata","auditID":"ad209ce1-fec7-4130-8192-c4cc63f1d8cd","stage":"ResponseComplete","requestURI":"/api/v1/namespaces/openshift-kube-controller-manager/configmaps/cert-recovery-controller-lock?timeout=35s","verb":"update","user":{"username":"system:serviceaccount:openshift-kube-controller-manager:localhost-recovery-client","uid":"dd4997e3-d565-4e37-80f8-7fc122ccd785","groups":["system:serviceaccounts","system:serviceaccounts:openshift-kube-controller-manager","system:authenticated"]},"sourceIPs":["::1"],"userAgent":"cluster-kube-controller-manager-operator/v0.0.0 (linux/amd64) kubernetes/USDFormat","objectRef":{"resource":"configmaps","namespace":"openshift-kube-controller-manager","name":"cert-recovery-controller-lock","uid":"5c57190b-6993-425d-8101-8337e48c7548","apiVersion":"v1","resourceVersion":"574307"},"responseStatus":{"metadata":{},"code":200},"requestReceivedTimestamp":"2020-04-02T08:27:20.200962Z","stageTimestamp":"2020-04-02T08:27:20.206710Z","annotations":{"authorization.k8s.io/decision":"allow","authorization.k8s.io/reason":"RBAC: allowed by ClusterRoleBinding \"system:openshift:operator:kube-controller-manager-recovery\" of ClusterRole \"cluster-admin\" to ServiceAccount \"localhost-recovery-client/openshift-kube-controller-manager\""}} 10.2. Viewing the audit logs You can view the logs for the OpenShift API server, Kubernetes API server, OpenShift OAuth API server, and OpenShift OAuth server for each control plane node. Procedure To view the audit logs: View the OpenShift API server audit logs: List the OpenShift API server audit logs that are available for each control plane node: USD oc adm node-logs --role=master --path=openshift-apiserver/ Example output ci-ln-m0wpfjb-f76d1-vnb5x-master-0 audit-2021-03-09T00-12-19.834.log ci-ln-m0wpfjb-f76d1-vnb5x-master-0 audit.log ci-ln-m0wpfjb-f76d1-vnb5x-master-1 audit-2021-03-09T00-11-49.835.log ci-ln-m0wpfjb-f76d1-vnb5x-master-1 audit.log ci-ln-m0wpfjb-f76d1-vnb5x-master-2 audit-2021-03-09T00-13-00.128.log ci-ln-m0wpfjb-f76d1-vnb5x-master-2 audit.log View a specific OpenShift API server audit log by providing the node name and the log name: USD oc adm node-logs <node_name> --path=openshift-apiserver/<log_name> For example: USD oc adm node-logs ci-ln-m0wpfjb-f76d1-vnb5x-master-0 --path=openshift-apiserver/audit-2021-03-09T00-12-19.834.log Example output {"kind":"Event","apiVersion":"audit.k8s.io/v1","level":"Metadata","auditID":"381acf6d-5f30-4c7d-8175-c9c317ae5893","stage":"ResponseComplete","requestURI":"/metrics","verb":"get","user":{"username":"system:serviceaccount:openshift-monitoring:prometheus-k8s","uid":"825b60a0-3976-4861-a342-3b2b561e8f82","groups":["system:serviceaccounts","system:serviceaccounts:openshift-monitoring","system:authenticated"]},"sourceIPs":["10.129.2.6"],"userAgent":"Prometheus/2.23.0","responseStatus":{"metadata":{},"code":200},"requestReceivedTimestamp":"2021-03-08T18:02:04.086545Z","stageTimestamp":"2021-03-08T18:02:04.107102Z","annotations":{"authorization.k8s.io/decision":"allow","authorization.k8s.io/reason":"RBAC: allowed by ClusterRoleBinding \"prometheus-k8s\" of ClusterRole \"prometheus-k8s\" to ServiceAccount \"prometheus-k8s/openshift-monitoring\""}} View the Kubernetes API server audit logs: List the Kubernetes API server audit logs that are available for each control plane node: USD oc adm node-logs --role=master --path=kube-apiserver/ Example output ci-ln-m0wpfjb-f76d1-vnb5x-master-0 
audit-2021-03-09T14-07-27.129.log ci-ln-m0wpfjb-f76d1-vnb5x-master-0 audit.log ci-ln-m0wpfjb-f76d1-vnb5x-master-1 audit-2021-03-09T19-24-22.620.log ci-ln-m0wpfjb-f76d1-vnb5x-master-1 audit.log ci-ln-m0wpfjb-f76d1-vnb5x-master-2 audit-2021-03-09T18-37-07.511.log ci-ln-m0wpfjb-f76d1-vnb5x-master-2 audit.log View a specific Kubernetes API server audit log by providing the node name and the log name: USD oc adm node-logs <node_name> --path=kube-apiserver/<log_name> For example: USD oc adm node-logs ci-ln-m0wpfjb-f76d1-vnb5x-master-0 --path=kube-apiserver/audit-2021-03-09T14-07-27.129.log Example output {"kind":"Event","apiVersion":"audit.k8s.io/v1","level":"Metadata","auditID":"cfce8a0b-b5f5-4365-8c9f-79c1227d10f9","stage":"ResponseComplete","requestURI":"/api/v1/namespaces/openshift-kube-scheduler/serviceaccounts/openshift-kube-scheduler-sa","verb":"get","user":{"username":"system:serviceaccount:openshift-kube-scheduler-operator:openshift-kube-scheduler-operator","uid":"2574b041-f3c8-44e6-a057-baef7aa81516","groups":["system:serviceaccounts","system:serviceaccounts:openshift-kube-scheduler-operator","system:authenticated"]},"sourceIPs":["10.128.0.8"],"userAgent":"cluster-kube-scheduler-operator/v0.0.0 (linux/amd64) kubernetes/USDFormat","objectRef":{"resource":"serviceaccounts","namespace":"openshift-kube-scheduler","name":"openshift-kube-scheduler-sa","apiVersion":"v1"},"responseStatus":{"metadata":{},"code":200},"requestReceivedTimestamp":"2021-03-08T18:06:42.512619Z","stageTimestamp":"2021-03-08T18:06:42.516145Z","annotations":{"authentication.k8s.io/legacy-token":"system:serviceaccount:openshift-kube-scheduler-operator:openshift-kube-scheduler-operator","authorization.k8s.io/decision":"allow","authorization.k8s.io/reason":"RBAC: allowed by ClusterRoleBinding \"system:openshift:operator:cluster-kube-scheduler-operator\" of ClusterRole \"cluster-admin\" to ServiceAccount \"openshift-kube-scheduler-operator/openshift-kube-scheduler-operator\""}} View the OpenShift OAuth API server audit logs: List the OpenShift OAuth API server audit logs that are available for each control plane node: USD oc adm node-logs --role=master --path=oauth-apiserver/ Example output ci-ln-m0wpfjb-f76d1-vnb5x-master-0 audit-2021-03-09T13-06-26.128.log ci-ln-m0wpfjb-f76d1-vnb5x-master-0 audit.log ci-ln-m0wpfjb-f76d1-vnb5x-master-1 audit-2021-03-09T18-23-21.619.log ci-ln-m0wpfjb-f76d1-vnb5x-master-1 audit.log ci-ln-m0wpfjb-f76d1-vnb5x-master-2 audit-2021-03-09T17-36-06.510.log ci-ln-m0wpfjb-f76d1-vnb5x-master-2 audit.log View a specific OpenShift OAuth API server audit log by providing the node name and the log name: USD oc adm node-logs <node_name> --path=oauth-apiserver/<log_name> For example: USD oc adm node-logs ci-ln-m0wpfjb-f76d1-vnb5x-master-0 --path=oauth-apiserver/audit-2021-03-09T13-06-26.128.log Example output {"kind":"Event","apiVersion":"audit.k8s.io/v1","level":"Metadata","auditID":"dd4c44e2-3ea1-4830-9ab7-c91a5f1388d6","stage":"ResponseComplete","requestURI":"/apis/user.openshift.io/v1/users/~","verb":"get","user":{"username":"system:serviceaccount:openshift-monitoring:prometheus-k8s","groups":["system:serviceaccounts","system:serviceaccounts:openshift-monitoring","system:authenticated"]},"sourceIPs":["10.0.32.4","10.128.0.1"],"userAgent":"dockerregistry/v0.0.0 (linux/amd64) 
kubernetes/USDFormat","objectRef":{"resource":"users","name":"~","apiGroup":"user.openshift.io","apiVersion":"v1"},"responseStatus":{"metadata":{},"code":200},"requestReceivedTimestamp":"2021-03-08T17:47:43.653187Z","stageTimestamp":"2021-03-08T17:47:43.660187Z","annotations":{"authorization.k8s.io/decision":"allow","authorization.k8s.io/reason":"RBAC: allowed by ClusterRoleBinding \"basic-users\" of ClusterRole \"basic-user\" to Group \"system:authenticated\""}} View the OpenShift OAuth server audit logs: List the OpenShift OAuth server audit logs that are available for each control plane node: USD oc adm node-logs --role=master --path=oauth-server/ Example output ci-ln-m0wpfjb-f76d1-vnb5x-master-0 audit-2022-05-11T18-57-32.395.log ci-ln-m0wpfjb-f76d1-vnb5x-master-0 audit.log ci-ln-m0wpfjb-f76d1-vnb5x-master-1 audit-2022-05-11T19-07-07.021.log ci-ln-m0wpfjb-f76d1-vnb5x-master-1 audit.log ci-ln-m0wpfjb-f76d1-vnb5x-master-2 audit-2022-05-11T19-06-51.844.log ci-ln-m0wpfjb-f76d1-vnb5x-master-2 audit.log View a specific OpenShift OAuth server audit log by providing the node name and the log name: USD oc adm node-logs <node_name> --path=oauth-server/<log_name> For example: USD oc adm node-logs ci-ln-m0wpfjb-f76d1-vnb5x-master-0 --path=oauth-server/audit-2022-05-11T18-57-32.395.log Example output {"kind":"Event","apiVersion":"audit.k8s.io/v1","level":"Metadata","auditID":"13c20345-f33b-4b7d-b3b6-e7793f805621","stage":"ResponseComplete","requestURI":"/login","verb":"post","user":{"username":"system:anonymous","groups":["system:unauthenticated"]},"sourceIPs":["10.128.2.6"],"userAgent":"Mozilla/5.0 (X11; Linux x86_64; rv:91.0) Gecko/20100101 Firefox/91.0","responseStatus":{"metadata":{},"code":302},"requestReceivedTimestamp":"2022-05-11T17:31:16.280155Z","stageTimestamp":"2022-05-11T17:31:16.297083Z","annotations":{"authentication.openshift.io/decision":"error","authentication.openshift.io/username":"kubeadmin","authorization.k8s.io/decision":"allow","authorization.k8s.io/reason":""}} The possible values for the authentication.openshift.io/decision annotation are allow , deny , or error . 10.3. Filtering audit logs You can use jq or another JSON parsing tool to filter the API server audit logs. Note The amount of information logged to the API server audit logs is controlled by the audit log policy that is set. The following procedure provides examples of using jq to filter audit logs on control plane node node-1.example.com . See the jq Manual for detailed information on using jq . Prerequisites You have access to the cluster as a user with the cluster-admin role. You have installed jq . 
Procedure Filter OpenShift API server audit logs by user: USD oc adm node-logs node-1.example.com \ --path=openshift-apiserver/audit.log \ | jq 'select(.user.username == "myusername")' Filter OpenShift API server audit logs by user agent: USD oc adm node-logs node-1.example.com \ --path=openshift-apiserver/audit.log \ | jq 'select(.userAgent == "cluster-version-operator/v0.0.0 (linux/amd64) kubernetes/USDFormat")' Filter Kubernetes API server audit logs by a certain API version and only output the user agent: USD oc adm node-logs node-1.example.com \ --path=kube-apiserver/audit.log \ | jq 'select(.requestURI | startswith("/apis/apiextensions.k8s.io/v1beta1")) | .userAgent' Filter OpenShift OAuth API server audit logs by excluding a verb: USD oc adm node-logs node-1.example.com \ --path=oauth-apiserver/audit.log \ | jq 'select(.verb != "get")' Filter OpenShift OAuth server audit logs by events that identified a username and failed with an error: USD oc adm node-logs node-1.example.com \ --path=oauth-server/audit.log \ | jq 'select(.annotations["authentication.openshift.io/username"] != null and .annotations["authentication.openshift.io/decision"] == "error")' 10.4. Gathering audit logs You can use the must-gather tool to collect the audit logs for debugging your cluster, which you can review or send to Red Hat Support. Procedure Run the oc adm must-gather command with -- /usr/bin/gather_audit_logs : USD oc adm must-gather -- /usr/bin/gather_audit_logs Create a compressed file from the must-gather directory that was just created in your working directory. For example, on a computer that uses a Linux operating system, run the following command: USD tar cvaf must-gather.tar.gz must-gather.local.472290403699006248 1 1 Replace must-gather.local.472290403699006248 with the actual directory name. Attach the compressed file to your support case on the Customer Support page of the Red Hat Customer Portal. 10.5. Additional resources Must-gather tool API audit log event structure Configuring the audit log policy About log forwarding | [
"{\"kind\":\"Event\",\"apiVersion\":\"audit.k8s.io/v1\",\"level\":\"Metadata\",\"auditID\":\"ad209ce1-fec7-4130-8192-c4cc63f1d8cd\",\"stage\":\"ResponseComplete\",\"requestURI\":\"/api/v1/namespaces/openshift-kube-controller-manager/configmaps/cert-recovery-controller-lock?timeout=35s\",\"verb\":\"update\",\"user\":{\"username\":\"system:serviceaccount:openshift-kube-controller-manager:localhost-recovery-client\",\"uid\":\"dd4997e3-d565-4e37-80f8-7fc122ccd785\",\"groups\":[\"system:serviceaccounts\",\"system:serviceaccounts:openshift-kube-controller-manager\",\"system:authenticated\"]},\"sourceIPs\":[\"::1\"],\"userAgent\":\"cluster-kube-controller-manager-operator/v0.0.0 (linux/amd64) kubernetes/USDFormat\",\"objectRef\":{\"resource\":\"configmaps\",\"namespace\":\"openshift-kube-controller-manager\",\"name\":\"cert-recovery-controller-lock\",\"uid\":\"5c57190b-6993-425d-8101-8337e48c7548\",\"apiVersion\":\"v1\",\"resourceVersion\":\"574307\"},\"responseStatus\":{\"metadata\":{},\"code\":200},\"requestReceivedTimestamp\":\"2020-04-02T08:27:20.200962Z\",\"stageTimestamp\":\"2020-04-02T08:27:20.206710Z\",\"annotations\":{\"authorization.k8s.io/decision\":\"allow\",\"authorization.k8s.io/reason\":\"RBAC: allowed by ClusterRoleBinding \\\"system:openshift:operator:kube-controller-manager-recovery\\\" of ClusterRole \\\"cluster-admin\\\" to ServiceAccount \\\"localhost-recovery-client/openshift-kube-controller-manager\\\"\"}}",
"oc adm node-logs --role=master --path=openshift-apiserver/",
"ci-ln-m0wpfjb-f76d1-vnb5x-master-0 audit-2021-03-09T00-12-19.834.log ci-ln-m0wpfjb-f76d1-vnb5x-master-0 audit.log ci-ln-m0wpfjb-f76d1-vnb5x-master-1 audit-2021-03-09T00-11-49.835.log ci-ln-m0wpfjb-f76d1-vnb5x-master-1 audit.log ci-ln-m0wpfjb-f76d1-vnb5x-master-2 audit-2021-03-09T00-13-00.128.log ci-ln-m0wpfjb-f76d1-vnb5x-master-2 audit.log",
"oc adm node-logs <node_name> --path=openshift-apiserver/<log_name>",
"oc adm node-logs ci-ln-m0wpfjb-f76d1-vnb5x-master-0 --path=openshift-apiserver/audit-2021-03-09T00-12-19.834.log",
"{\"kind\":\"Event\",\"apiVersion\":\"audit.k8s.io/v1\",\"level\":\"Metadata\",\"auditID\":\"381acf6d-5f30-4c7d-8175-c9c317ae5893\",\"stage\":\"ResponseComplete\",\"requestURI\":\"/metrics\",\"verb\":\"get\",\"user\":{\"username\":\"system:serviceaccount:openshift-monitoring:prometheus-k8s\",\"uid\":\"825b60a0-3976-4861-a342-3b2b561e8f82\",\"groups\":[\"system:serviceaccounts\",\"system:serviceaccounts:openshift-monitoring\",\"system:authenticated\"]},\"sourceIPs\":[\"10.129.2.6\"],\"userAgent\":\"Prometheus/2.23.0\",\"responseStatus\":{\"metadata\":{},\"code\":200},\"requestReceivedTimestamp\":\"2021-03-08T18:02:04.086545Z\",\"stageTimestamp\":\"2021-03-08T18:02:04.107102Z\",\"annotations\":{\"authorization.k8s.io/decision\":\"allow\",\"authorization.k8s.io/reason\":\"RBAC: allowed by ClusterRoleBinding \\\"prometheus-k8s\\\" of ClusterRole \\\"prometheus-k8s\\\" to ServiceAccount \\\"prometheus-k8s/openshift-monitoring\\\"\"}}",
"oc adm node-logs --role=master --path=kube-apiserver/",
"ci-ln-m0wpfjb-f76d1-vnb5x-master-0 audit-2021-03-09T14-07-27.129.log ci-ln-m0wpfjb-f76d1-vnb5x-master-0 audit.log ci-ln-m0wpfjb-f76d1-vnb5x-master-1 audit-2021-03-09T19-24-22.620.log ci-ln-m0wpfjb-f76d1-vnb5x-master-1 audit.log ci-ln-m0wpfjb-f76d1-vnb5x-master-2 audit-2021-03-09T18-37-07.511.log ci-ln-m0wpfjb-f76d1-vnb5x-master-2 audit.log",
"oc adm node-logs <node_name> --path=kube-apiserver/<log_name>",
"oc adm node-logs ci-ln-m0wpfjb-f76d1-vnb5x-master-0 --path=kube-apiserver/audit-2021-03-09T14-07-27.129.log",
"{\"kind\":\"Event\",\"apiVersion\":\"audit.k8s.io/v1\",\"level\":\"Metadata\",\"auditID\":\"cfce8a0b-b5f5-4365-8c9f-79c1227d10f9\",\"stage\":\"ResponseComplete\",\"requestURI\":\"/api/v1/namespaces/openshift-kube-scheduler/serviceaccounts/openshift-kube-scheduler-sa\",\"verb\":\"get\",\"user\":{\"username\":\"system:serviceaccount:openshift-kube-scheduler-operator:openshift-kube-scheduler-operator\",\"uid\":\"2574b041-f3c8-44e6-a057-baef7aa81516\",\"groups\":[\"system:serviceaccounts\",\"system:serviceaccounts:openshift-kube-scheduler-operator\",\"system:authenticated\"]},\"sourceIPs\":[\"10.128.0.8\"],\"userAgent\":\"cluster-kube-scheduler-operator/v0.0.0 (linux/amd64) kubernetes/USDFormat\",\"objectRef\":{\"resource\":\"serviceaccounts\",\"namespace\":\"openshift-kube-scheduler\",\"name\":\"openshift-kube-scheduler-sa\",\"apiVersion\":\"v1\"},\"responseStatus\":{\"metadata\":{},\"code\":200},\"requestReceivedTimestamp\":\"2021-03-08T18:06:42.512619Z\",\"stageTimestamp\":\"2021-03-08T18:06:42.516145Z\",\"annotations\":{\"authentication.k8s.io/legacy-token\":\"system:serviceaccount:openshift-kube-scheduler-operator:openshift-kube-scheduler-operator\",\"authorization.k8s.io/decision\":\"allow\",\"authorization.k8s.io/reason\":\"RBAC: allowed by ClusterRoleBinding \\\"system:openshift:operator:cluster-kube-scheduler-operator\\\" of ClusterRole \\\"cluster-admin\\\" to ServiceAccount \\\"openshift-kube-scheduler-operator/openshift-kube-scheduler-operator\\\"\"}}",
"oc adm node-logs --role=master --path=oauth-apiserver/",
"ci-ln-m0wpfjb-f76d1-vnb5x-master-0 audit-2021-03-09T13-06-26.128.log ci-ln-m0wpfjb-f76d1-vnb5x-master-0 audit.log ci-ln-m0wpfjb-f76d1-vnb5x-master-1 audit-2021-03-09T18-23-21.619.log ci-ln-m0wpfjb-f76d1-vnb5x-master-1 audit.log ci-ln-m0wpfjb-f76d1-vnb5x-master-2 audit-2021-03-09T17-36-06.510.log ci-ln-m0wpfjb-f76d1-vnb5x-master-2 audit.log",
"oc adm node-logs <node_name> --path=oauth-apiserver/<log_name>",
"oc adm node-logs ci-ln-m0wpfjb-f76d1-vnb5x-master-0 --path=oauth-apiserver/audit-2021-03-09T13-06-26.128.log",
"{\"kind\":\"Event\",\"apiVersion\":\"audit.k8s.io/v1\",\"level\":\"Metadata\",\"auditID\":\"dd4c44e2-3ea1-4830-9ab7-c91a5f1388d6\",\"stage\":\"ResponseComplete\",\"requestURI\":\"/apis/user.openshift.io/v1/users/~\",\"verb\":\"get\",\"user\":{\"username\":\"system:serviceaccount:openshift-monitoring:prometheus-k8s\",\"groups\":[\"system:serviceaccounts\",\"system:serviceaccounts:openshift-monitoring\",\"system:authenticated\"]},\"sourceIPs\":[\"10.0.32.4\",\"10.128.0.1\"],\"userAgent\":\"dockerregistry/v0.0.0 (linux/amd64) kubernetes/USDFormat\",\"objectRef\":{\"resource\":\"users\",\"name\":\"~\",\"apiGroup\":\"user.openshift.io\",\"apiVersion\":\"v1\"},\"responseStatus\":{\"metadata\":{},\"code\":200},\"requestReceivedTimestamp\":\"2021-03-08T17:47:43.653187Z\",\"stageTimestamp\":\"2021-03-08T17:47:43.660187Z\",\"annotations\":{\"authorization.k8s.io/decision\":\"allow\",\"authorization.k8s.io/reason\":\"RBAC: allowed by ClusterRoleBinding \\\"basic-users\\\" of ClusterRole \\\"basic-user\\\" to Group \\\"system:authenticated\\\"\"}}",
"oc adm node-logs --role=master --path=oauth-server/",
"ci-ln-m0wpfjb-f76d1-vnb5x-master-0 audit-2022-05-11T18-57-32.395.log ci-ln-m0wpfjb-f76d1-vnb5x-master-0 audit.log ci-ln-m0wpfjb-f76d1-vnb5x-master-1 audit-2022-05-11T19-07-07.021.log ci-ln-m0wpfjb-f76d1-vnb5x-master-1 audit.log ci-ln-m0wpfjb-f76d1-vnb5x-master-2 audit-2022-05-11T19-06-51.844.log ci-ln-m0wpfjb-f76d1-vnb5x-master-2 audit.log",
"oc adm node-logs <node_name> --path=oauth-server/<log_name>",
"oc adm node-logs ci-ln-m0wpfjb-f76d1-vnb5x-master-0 --path=oauth-server/audit-2022-05-11T18-57-32.395.log",
"{\"kind\":\"Event\",\"apiVersion\":\"audit.k8s.io/v1\",\"level\":\"Metadata\",\"auditID\":\"13c20345-f33b-4b7d-b3b6-e7793f805621\",\"stage\":\"ResponseComplete\",\"requestURI\":\"/login\",\"verb\":\"post\",\"user\":{\"username\":\"system:anonymous\",\"groups\":[\"system:unauthenticated\"]},\"sourceIPs\":[\"10.128.2.6\"],\"userAgent\":\"Mozilla/5.0 (X11; Linux x86_64; rv:91.0) Gecko/20100101 Firefox/91.0\",\"responseStatus\":{\"metadata\":{},\"code\":302},\"requestReceivedTimestamp\":\"2022-05-11T17:31:16.280155Z\",\"stageTimestamp\":\"2022-05-11T17:31:16.297083Z\",\"annotations\":{\"authentication.openshift.io/decision\":\"error\",\"authentication.openshift.io/username\":\"kubeadmin\",\"authorization.k8s.io/decision\":\"allow\",\"authorization.k8s.io/reason\":\"\"}}",
"oc adm node-logs node-1.example.com --path=openshift-apiserver/audit.log | jq 'select(.user.username == \"myusername\")'",
"oc adm node-logs node-1.example.com --path=openshift-apiserver/audit.log | jq 'select(.userAgent == \"cluster-version-operator/v0.0.0 (linux/amd64) kubernetes/USDFormat\")'",
"oc adm node-logs node-1.example.com --path=kube-apiserver/audit.log | jq 'select(.requestURI | startswith(\"/apis/apiextensions.k8s.io/v1beta1\")) | .userAgent'",
"oc adm node-logs node-1.example.com --path=oauth-apiserver/audit.log | jq 'select(.verb != \"get\")'",
"oc adm node-logs node-1.example.com --path=oauth-server/audit.log | jq 'select(.annotations[\"authentication.openshift.io/username\"] != null and .annotations[\"authentication.openshift.io/decision\"] == \"error\")'",
"oc adm must-gather -- /usr/bin/gather_audit_logs",
"tar cvaf must-gather.tar.gz must-gather.local.472290403699006248 1"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.15/html/security_and_compliance/audit-log-view |
Chapter 4. About OpenShift Kubernetes Engine | Chapter 4. About OpenShift Kubernetes Engine As of 27 April 2020, Red Hat has decided to rename Red Hat OpenShift Container Engine to Red Hat OpenShift Kubernetes Engine to better communicate what value the product offering delivers. Red Hat OpenShift Kubernetes Engine is a product offering from Red Hat that lets you use an enterprise class Kubernetes platform as a production platform for launching containers. You download and install OpenShift Kubernetes Engine the same way as OpenShift Container Platform as they are the same binary distribution, but OpenShift Kubernetes Engine offers a subset of the features that OpenShift Container Platform offers. 4.1. Similarities and differences You can see the similarities and differences between OpenShift Kubernetes Engine and OpenShift Container Platform in the following table: Table 4.1. Product comparison for OpenShift Kubernetes Engine and OpenShift Container Platform OpenShift Kubernetes Engine OpenShift Container Platform Fully Automated Installers Yes Yes Over the Air Smart Upgrades Yes Yes Enterprise Secured Kubernetes Yes Yes Kubectl and oc automated command line Yes Yes Operator Lifecycle Manager (OLM) Yes Yes Administrator Web console Yes Yes OpenShift Virtualization Yes Yes User Workload Monitoring Yes Cluster Monitoring Yes Yes Cost Management SaaS Service Yes Yes Platform Logging Yes Developer Web Console Yes Developer Application Catalog Yes Source to Image and Builder Automation (Tekton) Yes OpenShift Service Mesh (Maistra, Kiali, and Jaeger) Yes OpenShift distributed tracing (Jaeger) Yes OpenShift Serverless (Knative) Yes OpenShift Pipelines (Jenkins and Tekton) Yes Embedded Component of IBM Cloud(R) Pak and RHT MW Bundles Yes OpenShift sandboxed containers Yes 4.1.1. Core Kubernetes and container orchestration OpenShift Kubernetes Engine offers full access to an enterprise-ready Kubernetes environment that is easy to install and offers an extensive compatibility test matrix with many of the software elements that you might use in your data center. OpenShift Kubernetes Engine offers the same service level agreements, bug fixes, and common vulnerabilities and errors protection as OpenShift Container Platform. OpenShift Kubernetes Engine includes a Red Hat Enterprise Linux (RHEL) Virtual Datacenter and Red Hat Enterprise Linux CoreOS (RHCOS) entitlement that allows you to use an integrated Linux operating system with container runtime from the same technology provider. The OpenShift Kubernetes Engine subscription is compatible with the Red Hat OpenShift support for Windows Containers subscription. 4.1.2. Enterprise-ready configurations OpenShift Kubernetes Engine uses the same security options and default settings as the OpenShift Container Platform. Default security context constraints, pod security policies, best practice network and storage settings, service account configuration, SELinux integration, HAproxy edge routing configuration, and all other standard protections that OpenShift Container Platform offers are available in OpenShift Kubernetes Engine. OpenShift Kubernetes Engine offers full access to the integrated monitoring solution that OpenShift Container Platform uses, which is based on Prometheus and offers deep coverage and alerting for common Kubernetes issues. OpenShift Kubernetes Engine uses the same installation and upgrade automation as OpenShift Container Platform. 4.1.3. 
Standard infrastructure services With an OpenShift Kubernetes Engine subscription, you receive support for all storage plugins that OpenShift Container Platform supports. In terms of networking, OpenShift Kubernetes Engine offers full and supported access to the Kubernetes Container Network Interface (CNI) and therefore allows you to use any third-party SDN that supports OpenShift Container Platform. It also allows you to use the included Open vSwitch software defined network to its fullest extent. OpenShift Kubernetes Engine allows you to take full advantage of the OVN Kubernetes overlay, Multus, and Multus plugins that are supported on OpenShift Container Platform. OpenShift Kubernetes Engine allows customers to use a Kubernetes Network Policy to create microsegmentation between deployed application services on the cluster. You can also use the Route API objects that are found in OpenShift Container Platform, including its sophisticated integration with the HAproxy edge routing layer as an out of the box Kubernetes Ingress Controller. 4.1.4. Core user experience OpenShift Kubernetes Engine users have full access to Kubernetes Operators, pod deployment strategies, Helm, and OpenShift Container Platform templates. OpenShift Kubernetes Engine users can use both the oc and kubectl command line interfaces. OpenShift Kubernetes Engine also offers an administrator web-based console that shows all aspects of the deployed container services and offers a container-as-a service experience. OpenShift Kubernetes Engine grants access to the Operator Life Cycle Manager that helps you control access to content on the cluster and life cycle operator-enabled services that you use. With an OpenShift Kubernetes Engine subscription, you receive access to the Kubernetes namespace, the OpenShift Project API object, and cluster-level Prometheus monitoring metrics and events. 4.1.5. Maintained and curated content With an OpenShift Kubernetes Engine subscription, you receive access to the OpenShift Container Platform content from the Red Hat Ecosystem Catalog and Red Hat Connect ISV marketplace. You can access all maintained and curated content that the OpenShift Container Platform eco-system offers. 4.1.6. OpenShift Data Foundation compatible OpenShift Kubernetes Engine is compatible and supported with your purchase of OpenShift Data Foundation. 4.1.7. Red Hat Middleware compatible OpenShift Kubernetes Engine is compatible and supported with individual Red Hat Middleware product solutions. Red Hat Middleware Bundles that include OpenShift embedded in them only contain OpenShift Container Platform. 4.1.8. OpenShift Serverless OpenShift Kubernetes Engine does not include OpenShift Serverless support. Use OpenShift Container Platform for this support. 4.1.9. Quay Integration compatible OpenShift Kubernetes Engine is compatible and supported with a Red Hat Quay purchase. 4.1.10. OpenShift Virtualization OpenShift Kubernetes Engine includes support for the Red Hat product offerings derived from the kubevirt.io open source project. 4.1.11. Advanced cluster management OpenShift Kubernetes Engine is compatible with your additional purchase of Red Hat Advanced Cluster Management (RHACM) for Kubernetes. An OpenShift Kubernetes Engine subscription does not offer a cluster-wide log aggregation solution or support Elasticsearch, Fluentd, or Kibana-based logging solutions. 
Red Hat OpenShift Service Mesh capabilities derived from the open-source istio.io and kiali.io projects that offer OpenTracing observability for containerized services on OpenShift Container Platform are not supported in OpenShift Kubernetes Engine. 4.1.12. Advanced networking The standard networking solutions in OpenShift Container Platform are supported with an OpenShift Kubernetes Engine subscription. The OpenShift Container Platform Kubernetes CNI plugin for automation of multi-tenant network segmentation between OpenShift Container Platform projects is entitled for use with OpenShift Kubernetes Engine. OpenShift Kubernetes Engine offers all the granular control of the source IP addresses that are used by application services on the cluster. Those egress IP address controls are entitled for use with OpenShift Kubernetes Engine. OpenShift Container Platform offers ingress routing to on cluster services that use non-standard ports when no public cloud provider is in use via the VIP pods found in OpenShift Container Platform. That ingress solution is supported in OpenShift Kubernetes Engine. OpenShift Kubernetes Engine users are supported for the Kubernetes ingress control object, which offers integrations with public cloud providers. Red Hat Service Mesh, which is derived from the istio.io open source project, is not supported in OpenShift Kubernetes Engine. Also, the Kourier Ingress Controller found in OpenShift Serverless is not supported on OpenShift Kubernetes Engine. 4.1.13. OpenShift sandboxed containers OpenShift Kubernetes Engine does not include OpenShift sandboxed containers. Use OpenShift Container Platform for this support. 4.1.14. Developer experience With OpenShift Kubernetes Engine, the following capabilities are not supported: The OpenShift Container Platform developer experience utilities and tools, such as Red Hat OpenShift Dev Spaces. The OpenShift Container Platform pipeline feature that integrates a streamlined, Kubernetes-enabled Jenkins and Tekton experience in the user's project space. The OpenShift Container Platform source-to-image feature, which allows you to easily deploy source code, dockerfiles, or container images across the cluster. Build strategies, builder pods, or Tekton for end user container deployments. The odo developer command line. The developer persona in the OpenShift Container Platform web console. 4.1.15. Feature summary The following table is a summary of the feature availability in OpenShift Kubernetes Engine and OpenShift Container Platform. Where applicable, it includes the name of the Operator that enables a feature. Table 4.2. 
Features in OpenShift Kubernetes Engine and OpenShift Container Platform Feature OpenShift Kubernetes Engine OpenShift Container Platform Operator name Fully Automated Installers (IPI) Included Included N/A Customizable Installers (UPI) Included Included N/A Disconnected Installation Included Included N/A Red Hat Enterprise Linux (RHEL) or Red Hat Enterprise Linux CoreOS (RHCOS) entitlement Included Included N/A Existing RHEL manual attach to cluster (BYO) Included Included N/A CRIO Runtime Included Included N/A Over the Air Smart Upgrades and Operating System (RHCOS) Management Included Included N/A Enterprise Secured Kubernetes Included Included N/A Kubectl and oc automated command line Included Included N/A Auth Integrations, RBAC, SCC, Multi-Tenancy Admission Controller Included Included N/A Operator Lifecycle Manager (OLM) Included Included N/A Administrator web console Included Included N/A OpenShift Virtualization Included Included OpenShift Virtualization Operator Compliance Operator provided by Red Hat Included Included Compliance Operator File Integrity Operator Included Included File Integrity Operator Gatekeeper Operator Not Included - Requires separate subscription Not Included - Requires separate subscription Gatekeeper Operator Klusterlet Not Included - Requires separate subscription Not Included - Requires separate subscription N/A Kube Descheduler Operator provided by Red Hat Included Included Kube Descheduler Operator Local Storage provided by Red Hat Included Included Local Storage Operator Node Feature Discovery provided by Red Hat Included Included Node Feature Discovery Operator Performance Profile controller Included Included N/A PTP Operator provided by Red Hat Included Included PTP Operator Service Telemetry Operator provided by Red Hat Not Included Included Service Telemetry Operator SR-IOV Network Operator Included Included SR-IOV Network Operator Vertical Pod Autoscaler Included Included Vertical Pod Autoscaler Cluster Monitoring (Prometheus) Included Included Cluster Monitoring Device Manager (for example, GPU) Included Included N/A Log Forwarding Included Included Red Hat OpenShift Logging Operator Telemeter and Insights Connected Experience Included Included N/A Feature OpenShift Kubernetes Engine OpenShift Container Platform Operator name OpenShift Cloud Manager SaaS Service Included Included N/A OVS and OVN SDN Included Included N/A MetalLB Included Included MetalLB Operator HAProxy Ingress Controller Included Included N/A Ingress Cluster-wide Firewall Included Included N/A Egress Pod and Namespace Granular Control Included Included N/A Ingress Non-Standard Ports Included Included N/A Multus and Available Multus Plugins Included Included N/A Network Policies Included Included N/A IPv6 Single and Dual Stack Included Included N/A CNI Plugin ISV Compatibility Included Included N/A CSI Plugin ISV Compatibility Included Included N/A RHT and IBM(R) middleware a la carte purchases (not included in OpenShift Container Platform or OpenShift Kubernetes Engine) Included Included N/A ISV or Partner Operator and Container Compatibility (not included in OpenShift Container Platform or OpenShift Kubernetes Engine) Included Included N/A Embedded OperatorHub Included Included N/A Embedded Marketplace Included Included N/A Quay Compatibility (not included) Included Included N/A OpenShift API for Data Protection (OADP) Included Included OADP Operator RHEL Software Collections and RHT SSO Common Service (included) Included Included N/A Embedded Registry Included Included N/A 
Helm Included Included N/A User Workload Monitoring Not Included Included N/A Cost Management SaaS Service Included Included Cost Management Metrics Operator Platform Logging Not Included Included Red Hat OpenShift Logging Operator OpenShift Elasticsearch Operator provided by Red Hat Not Included Cannot be run standalone N/A Developer Web Console Not Included Included N/A Developer Application Catalog Not Included Included N/A Source to Image and Builder Automation (Tekton) Not Included Included N/A OpenShift Service Mesh Not Included Included OpenShift Service Mesh Operator Feature OpenShift Kubernetes Engine OpenShift Container Platform Operator name Red Hat OpenShift Serverless Not Included Included OpenShift Serverless Operator Web Terminal provided by Red Hat Not Included Included Web Terminal Operator Red Hat OpenShift Pipelines Operator Not Included Included OpenShift Pipelines Operator Embedded Component of IBM Cloud(R) Pak and RHT MW Bundles Not Included Included N/A Red Hat OpenShift GitOps Not Included Included OpenShift GitOps Red Hat OpenShift Dev Spaces Not Included Included Red Hat OpenShift Dev Spaces Red Hat OpenShift Local Not Included Included N/A Quay Bridge Operator provided by Red Hat Not Included Included Quay Bridge Operator Quay Container Security provided by Red Hat Not Included Included Quay Operator Red Hat OpenShift distributed tracing platform Not Included Included Red Hat OpenShift distributed tracing platform Operator Red Hat OpenShift Kiali Not Included Included Kiali Operator Metering provided by Red Hat (deprecated) Not Included Included N/A Migration Toolkit for Containers Operator Not Included Included Migration Toolkit for Containers Operator Cost management for OpenShift Not included Included N/A JBoss Web Server provided by Red Hat Not included Included JWS Operator Red Hat Build of Quarkus Not included Included N/A Kourier Ingress Controller Not included Included N/A RHT Middleware Bundles Sub Compatibility (not included in OpenShift Container Platform) Not included Included N/A IBM Cloud(R) Pak Sub Compatibility (not included in OpenShift Container Platform) Not included Included N/A OpenShift Do ( odo ) Not included Included N/A Source to Image and Tekton Builders Not included Included N/A OpenShift Serverless FaaS Not included Included N/A IDE Integrations Not included Included N/A OpenShift sandboxed containers Not included Not included OpenShift sandboxed containers Operator Windows Machine Config Operator Community Windows Machine Config Operator included - no subscription required Red Hat Windows Machine Config Operator included - Requires separate subscription Windows Machine Config Operator Red Hat Quay Not Included - Requires separate subscription Not Included - Requires separate subscription Quay Operator Red Hat Advanced Cluster Management Not Included - Requires separate subscription Not Included - Requires separate subscription Advanced Cluster Management for Kubernetes Red Hat Advanced Cluster Security Not Included - Requires separate subscription Not Included - Requires separate subscription N/A OpenShift Data Foundation Not Included - Requires separate subscription Not Included - Requires separate subscription OpenShift Data Foundation Feature OpenShift Kubernetes Engine OpenShift Container Platform Operator name Ansible Automation Platform Resource Operator Not Included - Requires separate subscription Not Included - Requires separate subscription Ansible Automation Platform Resource Operator Business Automation provided by Red Hat Not
Included - Requires separate subscription Not Included - Requires separate subscription Business Automation Operator Data Grid provided by Red Hat Not Included - Requires separate subscription Not Included - Requires separate subscription Data Grid Operator Red Hat Integration provided by Red Hat Not Included - Requires separate subscription Not Included - Requires separate subscription Red Hat Integration Operator Red Hat Integration - 3Scale provided by Red Hat Not Included - Requires separate subscription Not Included - Requires separate subscription 3scale Red Hat Integration - 3Scale APICast gateway provided by Red Hat Not Included - Requires separate subscription Not Included - Requires separate subscription 3scale APIcast Red Hat Integration - AMQ Broker Not Included - Requires separate subscription Not Included - Requires separate subscription AMQ Broker Red Hat Integration - AMQ Broker LTS Not Included - Requires separate subscription Not Included - Requires separate subscription Red Hat Integration - AMQ Interconnect Not Included - Requires separate subscription Not Included - Requires separate subscription AMQ Interconnect Red Hat Integration - AMQ Online Not Included - Requires separate subscription Not Included - Requires separate subscription Red Hat Integration - AMQ Streams Not Included - Requires separate subscription Not Included - Requires separate subscription AMQ Streams Red Hat Integration - Camel K Not Included - Requires separate subscription Not Included - Requires separate subscription Camel K Red Hat Integration - Fuse Console Not Included - Requires separate subscription Not Included - Requires separate subscription Fuse Console Red Hat Integration - Fuse Online Not Included - Requires separate subscription Not Included - Requires separate subscription Fuse Online Red Hat Integration - Service Registry Operator Not Included - Requires separate subscription Not Included - Requires separate subscription Service Registry API Designer provided by Red Hat Not Included - Requires separate subscription Not Included - Requires separate subscription API Designer JBoss EAP provided by Red Hat Not Included - Requires separate subscription Not Included - Requires separate subscription JBoss EAP Smart Gateway Operator Not Included - Requires separate subscription Not Included - Requires separate subscription Smart Gateway Operator Kubernetes NMState Operator Included Included N/A 4.2. Subscription limitations OpenShift Kubernetes Engine is a subscription offering that provides OpenShift Container Platform with a limited set of supported features at a lower list price. OpenShift Kubernetes Engine and OpenShift Container Platform are the same product and, therefore, all software and features are delivered in both. There is only one download, OpenShift Container Platform. OpenShift Kubernetes Engine uses the OpenShift Container Platform documentation and support services and bug errata for this reason. | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.18/html/about/oke-about |
Chapter 3. Setting Up DM-Multipath | Chapter 3. Setting Up DM-Multipath This chapter provides step-by-step example procedures for configuring DM-Multipath. It includes the following procedures: Basic DM-Multipath setup Ignoring local disks Adding more devices to the configuration file 3.1. Setting Up DM-Multipath Before setting up DM-Multipath on your system, ensure that your system has been updated and includes the device-mapper-multipath package. Use the following procedure to set up DM-Multipath for a basic failover configuration. Edit the /etc/multipath.conf file by commenting out the following lines at the top of the file. This section of the configuration file, in its initial state, blacklists all devices. You must comment it out to enable multipathing. After commenting out those lines, this section appears as follows. The default settings for DM-Multipath are compiled into the system and do not need to be explicitly set in the /etc/multipath.conf file. The default value of path_grouping_policy is set to failover, so in this example you do not need to change the default value. For information on changing the values in the configuration file to something other than the defaults, see Chapter 4, The DM-Multipath Configuration File . The initial defaults section of the configuration file configures your system so that the names of the multipath devices are of the form mpath n ; without this setting, the names of the multipath devices would be aliased to the WWID of the device. Save the configuration file and exit the editor. Execute the following commands: The multipath -v2 command prints out multipathed paths that show which devices are multipathed, but only for the devices created by this command. If the command does not yield any output, you can check your multipath devices as follows: Run the multipath -ll command. This lists all the multipath devices. If running the multipath -ll command does not show the device, verify that multipath is configured properly by checking the /etc/multipath.conf file and making sure that the SCSI devices you want to be multipathed exist on the system. If the SCSI devices do not appear, ensure that all SAN connections are set up properly. For further information on the multipath command and its output, see Section 5.1, "Multipath Command Output" , Section 5.2, "Multipath Queries with multipath Command" , and Section 5.3, "Multipath Command Options" . Execute the following command to ensure that the multipath daemon starts on bootup: Since the value of user_friendly_names is set to yes in the configuration file, the multipath devices will be created as /dev/mapper/mpath n . For information on setting the name of the device to an alias of your choosing, see Chapter 4, The DM-Multipath Configuration File . | [
"devnode_blacklist { devnode \"*\" }",
"devnode_blacklist { devnode \"*\" }",
"modprobe dm-multipath service multipathd start multipath -v2",
"chkconfig multipathd on"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/dm_multipath/mpio_setup |
Chapter 4. Running a multi-node environment | Chapter 4. Running a multi-node environment A multi-node environment comprises a number of nodes that operate as a cluster. You can have a cluster of replicated ZooKeeper nodes and a cluster of broker nodes, with topic replication across the brokers. Multi-node environments offer stability and availability. 4.1. Running a multi-node ZooKeeper cluster Configure and run ZooKeeper as a multi-node cluster. Prerequisites AMQ Streams is installed on all hosts which will be used as ZooKeeper cluster nodes. Running the cluster Create the myid file in /var/lib/zookeeper/ . Enter ID 1 for the first ZooKeeper node, 2 for the second ZooKeeper node, and so on. For example: Edit the ZooKeeper /opt/kafka/config/zookeeper.properties configuration file for the following: Set the option dataDir to /var/lib/zookeeper/ . Configure the initLimit and syncLimit options. Configure the reconfigEnabled and standaloneEnabled options. Add a list of all ZooKeeper nodes. The list should also include the current node. Example configuration for a node of ZooKeeper cluster with five members tickTime=2000 dataDir=/var/lib/zookeeper/ initLimit=5 syncLimit=2 reconfigEnabled=true standaloneEnabled=false listener.security.protocol.map=PLAINTEXT:PLAINTEXT,REPLICATION:PLAINTEXT server.1=172.17.0.1:2888:3888:participant;172.17.0.1:2181 server.2=172.17.0.2:2888:3888:participant;172.17.0.2:2181 server.3=172.17.0.3:2888:3888:participant;172.17.0.3:2181 server.4=172.17.0.4:2888:3888:participant;172.17.0.4:2181 server.5=172.17.0.5:2888:3888:participant;172.17.0.5:2181 Start ZooKeeper with the default configuration file. su - kafka /opt/kafka/bin/zookeeper-server-start.sh -daemon /opt/kafka/config/zookeeper.properties Verify that ZooKeeper is running. jcmd | grep zookeeper Returns: number org.apache.zookeeper.server.quorum.QuorumPeerMain /opt/kafka/config/zookeeper.properties Repeat this procedure on all the nodes of the cluster. Verify that all nodes are members of the cluster by sending a stat command to each of the nodes using the ncat utility. Use ncat stat to check the node status echo stat | ncat localhost 2181 To use four-letter word commands, like stat , you need to specify 4lw.commands.whitelist=* in zookeeper.properties . The output shows that a node is either a leader or follower . Example output from the ncat command 4.2. Running a multi-node Kafka cluster Configure and run Kafka as a multi-node cluster. Prerequisites AMQ Streams is installed on all hosts which will be used as Kafka brokers. A ZooKeeper cluster is configured and running . Running the cluster For each Kafka broker in your AMQ Streams cluster: Edit the /opt/kafka/config/server.properties Kafka configuration file as follows: Set the broker.id field to 0 for the first broker, 1 for the second broker, and so on. Configure the details for connecting to ZooKeeper in the zookeeper.connect option. Configure the Kafka listeners. Set the directories where the commit logs should be stored in the log.dirs option. Here we see an example configuration for a Kafka broker: broker.id=0 zookeeper.connect=zoo1.my-domain.com:2181,zoo2.my-domain.com:2181,zoo3.my-domain.com:2181 listeners=REPLICATION://:9091,PLAINTEXT://:9092 listener.security.protocol.map=PLAINTEXT:PLAINTEXT,REPLICATION:PLAINTEXT inter.broker.listener.name=REPLICATION log.dirs=/var/lib/kafka In a typical installation where each Kafka broker is running on identical hardware, only the broker.id configuration property will differ between each broker config.
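Because only broker.id typically differs between otherwise identical brokers, the configuration can be templated per host. The following is a minimal, hypothetical sketch (the BROKER_ID value and the backup file name are assumptions, not part of the product procedure) that sets the broker ID in the default configuration file on a given host and then prints it for verification: BROKER_ID=0   # use 0 on the first broker, 1 on the second, and so on
cp /opt/kafka/config/server.properties /opt/kafka/config/server.properties.orig   # keep a backup of the shipped configuration
sed -i "s/^broker.id=.*/broker.id=${BROKER_ID}/" /opt/kafka/config/server.properties   # set this host's broker ID
grep '^broker.id=' /opt/kafka/config/server.properties   # confirm the value that will be used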
Start the Kafka broker with the default configuration file. su - kafka /opt/kafka/bin/kafka-server-start.sh -daemon /opt/kafka/config/server.properties Verify that the Kafka broker is running. jcmd | grep Kafka Returns: number kafka.Kafka /opt/kafka/config/server.properties Verify that all nodes are members of the Kafka cluster by sending a dump command to one of the ZooKeeper nodes using the ncat utility. Use ncat dump to check all Kafka brokers registered in ZooKeeper echo dump | ncat zoo1.my-domain.com 2181 To use four-letter word commands, like dump , you need to specify 4lw.commands.whitelist=* in zookeeper.properties . The output must contain all Kafka brokers you just configured and started. Example output from the ncat command for a Kafka cluster with 3 nodes SessionTracker dump: org.apache.zookeeper.server.quorum.LearnerSessionTracker@28848ab9 ephemeral nodes dump: Sessions with Ephemerals (3): 0x20000015dd00000: /brokers/ids/1 0x10000015dc70000: /controller /brokers/ids/0 0x10000015dc70001: /brokers/ids/2 4.3. Performing a graceful rolling restart of Kafka brokers This procedure shows how to do a graceful rolling restart of brokers in a multi-node cluster. A rolling restart is usually required following an upgrade or change to the Kafka cluster configuration properties. Note Some broker configurations do not need a restart of the broker. For more information, see Updating Broker Configs in the Apache Kafka documentation. After you perform a restart of a broker, check for under-replicated topic partitions to make sure that replica partitions have caught up. You can only perform a graceful restart, with no loss of availability, if you are replicating topics and ensuring that at least one replica is in sync. For a multi-node cluster, the standard approach is to have a topic replication factor of at least 3 and a minimum number of in-sync replicas set to 1 less than the replication factor. If you are using acks=all in your producer configuration for data durability, check that the broker you restarted is in sync with all the partitions it's replicating before restarting the next broker. Single-node clusters are unavailable during a restart, since all partitions are on the same broker. Prerequisites AMQ Streams is installed on all hosts which will be used as Kafka brokers. A ZooKeeper cluster is configured and running . The Kafka cluster is operating as expected. Check for under-replicated partitions or any other issues affecting broker operation. The steps in this procedure describe how to check for under-replicated partitions. Procedure Perform the following steps on each Kafka broker. Complete the steps on the first broker before moving on to the next broker. Perform the steps on the broker that's the active controller last. Otherwise, the active controller needs to change on more than one restart. Stop the Kafka broker: /opt/kafka/bin/kafka-server-stop.sh Make any changes to the broker configuration that require a restart after completion. For further information, see the following: Configuring Kafka Upgrading Kafka brokers and ZooKeeper Restart the Kafka broker: /opt/kafka/bin/kafka-server-start.sh -daemon /opt/kafka/config/server.properties Check that Kafka is running: jcmd | grep kafka Returns: number kafka.Kafka /opt/kafka/config/server.properties Verify that all nodes are members of the Kafka cluster by sending a dump command to one of the ZooKeeper nodes using the ncat utility.
Use ncat dump to check all Kafka brokers registered in ZooKeeper echo dump | ncat zoo1.my-domain.com 2181 To use four-letter word commands, like dump , you need to specify 4lw.commands.whitelist=* in zookeeper.properties . The output must contain the Kafka broker you started. Example output from the ncat command for a Kafka cluster with 3 nodes SessionTracker dump: org.apache.zookeeper.server.quorum.LearnerSessionTracker@28848ab9 ephemeral nodes dump: Sessions with Ephemerals (3): 0x20000015dd00000: /brokers/ids/1 0x10000015dc70000: /controller /brokers/ids/0 0x10000015dc70001: /brokers/ids/2 Wait until the broker has zero under-replicated partitions. You can check from the command line or use metrics. Use the kafka-topics.sh command with the --under-replicated-partitions parameter: /opt/kafka/bin/kafka-topics.sh --bootstrap-server <bootstrap_address> --describe --under-replicated-partitions For example: /opt/kafka/bin/kafka-topics.sh --bootstrap-server localhost:9092 --describe --under-replicated-partitions The command provides a list of topics with under-replicated partitions in a cluster. Topics with under-replicated partitions Topic: topic3 Partition: 4 Leader: 2 Replicas: 2,3 Isr: 2 Topic: topic3 Partition: 5 Leader: 3 Replicas: 1,2 Isr: 1 Topic: topic1 Partition: 1 Leader: 3 Replicas: 1,3 Isr: 3 # ... Under-replicated partitions are listed if the ISR (in-sync replica) count is less than the number of replicas. If a list is not returned, there are no under-replicated partitions. Use the UnderReplicatedPartitions metric: kafka.server:type=ReplicaManager,name=UnderReplicatedPartitions The metric provides a count of partitions where replicas have not caught up. You wait until the count is zero. Tip Use the Kafka Exporter to create an alert when there are one or more under-replicated partitions for a topic. Checking logs when restarting If a broker fails to start, check the application logs for information. You can also check the status of a broker shutdown and restart in the /opt/kafka/logs/server.log application log. Log for the successful shutdown of a broker # ... [2022-06-08 14:32:29,885] INFO Terminating process due to signal SIGTERM (org.apache.kafka.common.utils.LoggingSignalHandler) [2022-06-08 14:32:29,886] INFO [KafkaServer id=0] shutting down (kafka.server.KafkaServer) [2022-06-08 14:32:29,887] INFO [KafkaServer id=0] Starting controlled shutdown (kafka.server.KafkaServer) [2022-06-08 14:32:29,896] INFO [KafkaServer id=0] Controlled shutdown request returned successfully after 6ms (kafka.server.KafkaServer) # ... Log for the successful restart of a broker # ... [2022-06-08 14:39:35,245] INFO [KafkaServer id=0] started (kafka.server.KafkaServer) # ... Additional resources Section 20.4, "Analyzing Kafka JMX metrics for troubleshooting" Chapter 10, Configuring logging for Kafka components Kafka configuration tuning | [
"su - kafka echo \" <NodeID> \" > /var/lib/zookeeper/myid",
"su - kafka echo \"1\" > /var/lib/zookeeper/myid",
"tickTime=2000 dataDir=/var/lib/zookeeper/ initLimit=5 syncLimit=2 reconfigEnabled=true standaloneEnabled=false listener.security.protocol.map=PLAINTEXT:PLAINTEXT,REPLICATION:PLAINTEXT server.1=172.17.0.1:2888:3888:participant;172.17.0.1:2181 server.2=172.17.0.2:2888:3888:participant;172.17.0.2:2181 server.3=172.17.0.3:2888:3888:participant;172.17.0.3:2181 server.4=172.17.0.4:2888:3888:participant;172.17.0.4:2181 server.5=172.17.0.5:2888:3888:participant;172.17.0.5:2181",
"su - kafka /opt/kafka/bin/zookeeper-server-start.sh -daemon /opt/kafka/config/zookeeper.properties",
"jcmd | grep zookeeper",
"number org.apache.zookeeper.server.quorum.QuorumPeerMain /opt/kafka/config/zookeeper.properties",
"echo stat | ncat localhost 2181",
"ZooKeeper version: 3.4.13-2d71af4dbe22557fda74f9a9b4309b15a7487f03, built on 06/29/2018 00:39 GMT Clients: /0:0:0:0:0:0:0:1:59726[0](queued=0,recved=1,sent=0) Latency min/avg/max: 0/0/0 Received: 2 Sent: 1 Connections: 1 Outstanding: 0 Zxid: 0x200000000 Mode: follower Node count: 4",
"broker.id=0 zookeeper.connect=zoo1.my-domain.com:2181,zoo2.my-domain.com:2181,zoo3.my-domain.com:2181 listeners=REPLICATION://:9091,PLAINTEXT://:9092 listener.security.protocol.map=PLAINTEXT:PLAINTEXT,REPLICATION:PLAINTEXT inter.broker.listener.name=REPLICATION log.dirs=/var/lib/kafka",
"su - kafka /opt/kafka/bin/kafka-server-start.sh -daemon /opt/kafka/config/server.properties",
"jcmd | grep Kafka",
"number kafka.Kafka /opt/kafka/config/server.properties",
"echo dump | ncat zoo1.my-domain.com 2181",
"SessionTracker dump: org.apache.zookeeper.server.quorum.LearnerSessionTracker@28848ab9 ephemeral nodes dump: Sessions with Ephemerals (3): 0x20000015dd00000: /brokers/ids/1 0x10000015dc70000: /controller /brokers/ids/0 0x10000015dc70001: /brokers/ids/2",
"/opt/kafka/bin/kafka-server-stop.sh",
"/opt/kafka/bin/kafka-server-start.sh -daemon /opt/kafka/config/server.properties",
"jcmd | grep kafka",
"number kafka.Kafka /opt/kafka/config/server.properties",
"echo dump | ncat zoo1.my-domain.com 2181",
"SessionTracker dump: org.apache.zookeeper.server.quorum.LearnerSessionTracker@28848ab9 ephemeral nodes dump: Sessions with Ephemerals (3): 0x20000015dd00000: /brokers/ids/1 0x10000015dc70000: /controller /brokers/ids/0 0x10000015dc70001: /brokers/ids/2",
"/opt/kafka/bin/kafka-topics.sh --bootstrap-server <bootstrap_address> --describe --under-replicated-partitions",
"/opt/kafka/bin/kafka-topics.sh --bootstrap-server localhost:9092 --describe --under-replicated-partitions",
"Topic: topic3 Partition: 4 Leader: 2 Replicas: 2,3 Isr: 2 Topic: topic3 Partition: 5 Leader: 3 Replicas: 1,2 Isr: 1 Topic: topic1 Partition: 1 Leader: 3 Replicas: 1,3 Isr: 3 ...",
"kafka.server:type=ReplicaManager,name=UnderReplicatedPartitions",
"[2022-06-08 14:32:29,885] INFO Terminating process due to signal SIGTERM (org.apache.kafka.common.utils.LoggingSignalHandler) [2022-06-08 14:32:29,886] INFO [KafkaServer id=0] shutting down (kafka.server.KafkaServer) [2022-06-08 14:32:29,887] INFO [KafkaServer id=0] Starting controlled shutdown (kafka.server.KafkaServer) [2022-06-08 14:32:29,896] INFO [KafkaServer id=0] Controlled shutdown request returned successfully after 6ms (kafka.server.KafkaServer)",
"[2022-06-08 14:39:35,245] INFO [KafkaServer id=0] started (kafka.server.KafkaServer)"
] | https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.5/html/using_amq_streams_on_rhel/assembly-multi-node-str |
Chapter 11. ClustersService | Chapter 11. ClustersService 11.1. GetClusterDefaultValues GET /v1/cluster-defaults 11.1.1. Description 11.1.2. Parameters 11.1.3. Return Type V1ClusterDefaultsResponse 11.1.4. Content Type application/json 11.1.5. Responses Table 11.1. HTTP Response Codes Code Message Datatype 200 A successful response. V1ClusterDefaultsResponse 0 An unexpected error response. GooglerpcStatus 11.1.6. Samples 11.1.7. Common object reference 11.1.7.1. GooglerpcStatus Field Name Required Nullable Type Description Format code Integer int32 message String details List of ProtobufAny 11.1.7.2. ProtobufAny Any contains an arbitrary serialized protocol buffer message along with a URL that describes the type of the serialized message. Protobuf library provides support to pack/unpack Any values in the form of utility functions or additional generated methods of the Any type. Example 1: Pack and unpack a message in C++. Example 2: Pack and unpack a message in Java. The pack methods provided by protobuf library will by default use 'type.googleapis.com/full.type.name' as the type URL and the unpack methods only use the fully qualified type name after the last '/' in the type URL, for example "foo.bar.com/x/y.z" will yield type name "y.z". 11.1.7.2.1. JSON representation The JSON representation of an Any value uses the regular representation of the deserialized, embedded message, with an additional field @type which contains the type URL. Example: If the embedded message type is well-known and has a custom JSON representation, that representation will be embedded adding a field value which holds the custom JSON in addition to the @type field. Example (for message [google.protobuf.Duration][]): Field Name Required Nullable Type Description Format @type String A URL/resource name that uniquely identifies the type of the serialized protocol buffer message. This string must contain at least one \"/\" character. The last segment of the URL's path must represent the fully qualified name of the type (as in path/google.protobuf.Duration ). The name should be in a canonical form (e.g., leading \".\" is not accepted). In practice, teams usually precompile into the binary all types that they expect it to use in the context of Any. However, for URLs which use the scheme http , https , or no scheme, one can optionally set up a type server that maps type URLs to message definitions as follows: * If no scheme is provided, https is assumed. * An HTTP GET on the URL must yield a [google.protobuf.Type][] value in binary format, or produce an error. * Applications are allowed to cache lookup results based on the URL, or have them precompiled into a binary to avoid any lookup. Therefore, binary compatibility needs to be preserved on changes to types. (Use versioned type names to manage breaking changes.) Note: this functionality is not currently available in the official protobuf release, and it is not used for type URLs beginning with type.googleapis.com. As of May 2023, there are no widely used type server implementations and no plans to implement one. Schemes other than http , https (or the empty scheme) might be used with implementation specific semantics. 11.1.7.3. V1ClusterDefaultsResponse Field Name Required Nullable Type Description Format mainImageRepository String collectorImageRepository String kernelSupportAvailable Boolean 11.2. GetKernelSupportAvailable GET /v1/clusters-env/kernel-support-available GetKernelSupportAvailable is deprecated in favor of GetClusterDefaultValues. 11.2.1. 
Description 11.2.2. Parameters 11.2.3. Return Type V1KernelSupportAvailableResponse 11.2.4. Content Type application/json 11.2.5. Responses Table 11.2. HTTP Response Codes Code Message Datatype 200 A successful response. V1KernelSupportAvailableResponse 0 An unexpected error response. GooglerpcStatus 11.2.6. Samples 11.2.7. Common object reference 11.2.7.1. GooglerpcStatus Field Name Required Nullable Type Description Format code Integer int32 message String details List of ProtobufAny 11.2.7.2. ProtobufAny Any contains an arbitrary serialized protocol buffer message along with a URL that describes the type of the serialized message. Protobuf library provides support to pack/unpack Any values in the form of utility functions or additional generated methods of the Any type. Example 1: Pack and unpack a message in C++. Example 2: Pack and unpack a message in Java. The pack methods provided by protobuf library will by default use 'type.googleapis.com/full.type.name' as the type URL and the unpack methods only use the fully qualified type name after the last '/' in the type URL, for example "foo.bar.com/x/y.z" will yield type name "y.z". 11.2.7.2.1. JSON representation The JSON representation of an Any value uses the regular representation of the deserialized, embedded message, with an additional field @type which contains the type URL. Example: If the embedded message type is well-known and has a custom JSON representation, that representation will be embedded adding a field value which holds the custom JSON in addition to the @type field. Example (for message [google.protobuf.Duration][]): Field Name Required Nullable Type Description Format @type String A URL/resource name that uniquely identifies the type of the serialized protocol buffer message. This string must contain at least one \"/\" character. The last segment of the URL's path must represent the fully qualified name of the type (as in path/google.protobuf.Duration ). The name should be in a canonical form (e.g., leading \".\" is not accepted). In practice, teams usually precompile into the binary all types that they expect it to use in the context of Any. However, for URLs which use the scheme http , https , or no scheme, one can optionally set up a type server that maps type URLs to message definitions as follows: * If no scheme is provided, https is assumed. * An HTTP GET on the URL must yield a [google.protobuf.Type][] value in binary format, or produce an error. * Applications are allowed to cache lookup results based on the URL, or have them precompiled into a binary to avoid any lookup. Therefore, binary compatibility needs to be preserved on changes to types. (Use versioned type names to manage breaking changes.) Note: this functionality is not currently available in the official protobuf release, and it is not used for type URLs beginning with type.googleapis.com. As of May 2023, there are no widely used type server implementations and no plans to implement one. Schemes other than http , https (or the empty scheme) might be used with implementation specific semantics. 11.2.7.3. V1KernelSupportAvailableResponse Field Name Required Nullable Type Description Format kernelSupportAvailable Boolean 11.3. GetClusters GET /v1/clusters 11.3.1. Description 11.3.2. Parameters 11.3.2.1. Query Parameters Name Description Required Default Pattern query - null 11.3.3. Return Type V1ClustersList 11.3.4. Content Type application/json 11.3.5. Responses Table 11.3. HTTP Response Codes Code Message Datatype 200 A successful response. 
V1ClustersList 0 An unexpected error response. GooglerpcStatus 11.3.6. Samples 11.3.7. Common object reference 11.3.7.1. ClusterHealthStatusHealthStatusLabel UNAVAILABLE: Only collector can have unavailable status Enum Values UNINITIALIZED UNAVAILABLE UNHEALTHY DEGRADED HEALTHY 11.3.7.2. ClusterUpgradeStatusUpgradability SENSOR_VERSION_HIGHER: SENSOR_VERSION_HIGHER occurs when we detect that the sensor is running a newer version than this Central. This is unexpected, but can occur depending on the patches a customer does. In this case, we will NOT automatically "upgrade" the sensor, since that would be a downgrade, even if the autoupgrade setting is on. The user will be allowed to manually trigger the upgrade, but they are strongly discouraged from doing so without upgrading Central first, since this is an unsupported configuration. Enum Values UNSET UP_TO_DATE MANUAL_UPGRADE_REQUIRED AUTO_UPGRADE_POSSIBLE SENSOR_VERSION_HIGHER 11.3.7.3. ClusterUpgradeStatusUpgradeProcessStatus Field Name Required Nullable Type Description Format active Boolean id String targetVersion String upgraderImage String initiatedAt Date date-time progress StorageUpgradeProgress type UpgradeProcessStatusUpgradeProcessType UPGRADE, CERT_ROTATION, 11.3.7.4. GooglerpcStatus Field Name Required Nullable Type Description Format code Integer int32 message String details List of ProtobufAny 11.3.7.5. ProtobufAny Any contains an arbitrary serialized protocol buffer message along with a URL that describes the type of the serialized message. Protobuf library provides support to pack/unpack Any values in the form of utility functions or additional generated methods of the Any type. Example 1: Pack and unpack a message in C++. Example 2: Pack and unpack a message in Java. The pack methods provided by protobuf library will by default use 'type.googleapis.com/full.type.name' as the type URL and the unpack methods only use the fully qualified type name after the last '/' in the type URL, for example "foo.bar.com/x/y.z" will yield type name "y.z". 11.3.7.5.1. JSON representation The JSON representation of an Any value uses the regular representation of the deserialized, embedded message, with an additional field @type which contains the type URL. Example: If the embedded message type is well-known and has a custom JSON representation, that representation will be embedded adding a field value which holds the custom JSON in addition to the @type field. Example (for message [google.protobuf.Duration][]): Field Name Required Nullable Type Description Format @type String A URL/resource name that uniquely identifies the type of the serialized protocol buffer message. This string must contain at least one \"/\" character. The last segment of the URL's path must represent the fully qualified name of the type (as in path/google.protobuf.Duration ). The name should be in a canonical form (e.g., leading \".\" is not accepted). In practice, teams usually precompile into the binary all types that they expect it to use in the context of Any. However, for URLs which use the scheme http , https , or no scheme, one can optionally set up a type server that maps type URLs to message definitions as follows: * If no scheme is provided, https is assumed. * An HTTP GET on the URL must yield a [google.protobuf.Type][] value in binary format, or produce an error. * Applications are allowed to cache lookup results based on the URL, or have them precompiled into a binary to avoid any lookup. 
Therefore, binary compatibility needs to be preserved on changes to types. (Use versioned type names to manage breaking changes.) Note: this functionality is not currently available in the official protobuf release, and it is not used for type URLs beginning with type.googleapis.com. As of May 2023, there are no widely used type server implementations and no plans to implement one. Schemes other than http , https (or the empty scheme) might be used with implementation specific semantics. 11.3.7.6. StorageAWSProviderMetadata Field Name Required Nullable Type Description Format accountId String 11.3.7.7. StorageAdmissionControlHealthInfo AdmissionControlHealthInfo carries data about admission control deployment but does not include admission control health status derived from this data. Aggregated admission control health status is not included because it is derived in central and not in the component that first reports AdmissionControlHealthInfo (sensor). The following fields are made optional/nullable because there can be errors when trying to obtain them and the default value of 0 might be confusing with the actual value 0. In case an error happens when trying to obtain a certain field, it will be absent (instead of having the default value). Field Name Required Nullable Type Description Format totalDesiredPods Integer int32 totalReadyPods Integer int32 statusErrors List of string Collection of errors that occurred while trying to obtain admission control health info. 11.3.7.8. StorageAdmissionControllerConfig Field Name Required Nullable Type Description Format enabled Boolean timeoutSeconds Integer int32 scanInline Boolean disableBypass Boolean enforceOnUpdates Boolean 11.3.7.9. StorageAuditLogFileState Field Name Required Nullable Type Description Format collectLogsSince Date date-time lastAuditId String 11.3.7.10. StorageAzureProviderMetadata Field Name Required Nullable Type Description Format subscriptionId String 11.3.7.11. StorageCluster Field Name Required Nullable Type Description Format id String name String type StorageClusterType GENERIC_CLUSTER, KUBERNETES_CLUSTER, OPENSHIFT_CLUSTER, OPENSHIFT4_CLUSTER, labels Map of string mainImage String collectorImage String centralApiEndpoint String runtimeSupport Boolean collectionMethod StorageCollectionMethod UNSET_COLLECTION, NO_COLLECTION, KERNEL_MODULE, EBPF, CORE_BPF, admissionController Boolean admissionControllerUpdates Boolean admissionControllerEvents Boolean status StorageClusterStatus dynamicConfig StorageDynamicClusterConfig tolerationsConfig StorageTolerationsConfig priority String int64 healthStatus StorageClusterHealthStatus slimCollector Boolean helmConfig StorageCompleteClusterConfig mostRecentSensorId StorageSensorDeploymentIdentification auditLogState Map of StorageAuditLogFileState For internal use only. initBundleId String managedBy StorageManagerType MANAGER_TYPE_UNKNOWN, MANAGER_TYPE_MANUAL, MANAGER_TYPE_HELM_CHART, MANAGER_TYPE_KUBERNETES_OPERATOR, 11.3.7.12. StorageClusterCertExpiryStatus Field Name Required Nullable Type Description Format sensorCertExpiry Date date-time sensorCertNotBefore Date date-time 11.3.7.13. 
StorageClusterHealthStatus Field Name Required Nullable Type Description Format id String collectorHealthInfo StorageCollectorHealthInfo admissionControlHealthInfo StorageAdmissionControlHealthInfo scannerHealthInfo StorageScannerHealthInfo sensorHealthStatus ClusterHealthStatusHealthStatusLabel UNINITIALIZED, UNAVAILABLE, UNHEALTHY, DEGRADED, HEALTHY, collectorHealthStatus ClusterHealthStatusHealthStatusLabel UNINITIALIZED, UNAVAILABLE, UNHEALTHY, DEGRADED, HEALTHY, overallHealthStatus ClusterHealthStatusHealthStatusLabel UNINITIALIZED, UNAVAILABLE, UNHEALTHY, DEGRADED, HEALTHY, admissionControlHealthStatus ClusterHealthStatusHealthStatusLabel UNINITIALIZED, UNAVAILABLE, UNHEALTHY, DEGRADED, HEALTHY, scannerHealthStatus ClusterHealthStatusHealthStatusLabel UNINITIALIZED, UNAVAILABLE, UNHEALTHY, DEGRADED, HEALTHY, lastContact Date date-time healthInfoComplete Boolean 11.3.7.14. StorageClusterMetadata ClusterMetadata contains metadata information about the cluster infrastructure. Field Name Required Nullable Type Description Format type StorageClusterMetadataType UNSPECIFIED, AKS, ARO, EKS, GKE, OCP, OSD, ROSA, name String Name represents the name under which the cluster is registered with the cloud provider. In case of self managed OpenShift it is the name chosen by the OpenShift installer. id String Id represents a unique ID under which the cluster is registered with the cloud provider. Not all cluster types have an id. For all OpenShift clusters, this is the Red Hat cluster_id registered with OCM. 11.3.7.15. StorageClusterMetadataType Enum Values UNSPECIFIED AKS ARO EKS GKE OCP OSD ROSA 11.3.7.16. StorageClusterStatus Field Name Required Nullable Type Description Format sensorVersion String DEPRECATEDLastContact Date This field has been deprecated starting release 49.0. Use healthStatus.lastContact instead. date-time providerMetadata StorageProviderMetadata orchestratorMetadata StorageOrchestratorMetadata upgradeStatus StorageClusterUpgradeStatus certExpiryStatus StorageClusterCertExpiryStatus 11.3.7.17. StorageClusterType Enum Values GENERIC_CLUSTER KUBERNETES_CLUSTER OPENSHIFT_CLUSTER OPENSHIFT4_CLUSTER 11.3.7.18. StorageClusterUpgradeStatus Field Name Required Nullable Type Description Format upgradability ClusterUpgradeStatusUpgradability UNSET, UP_TO_DATE, MANUAL_UPGRADE_REQUIRED, AUTO_UPGRADE_POSSIBLE, SENSOR_VERSION_HIGHER, upgradabilityStatusReason String mostRecentProcess ClusterUpgradeStatusUpgradeProcessStatus 11.3.7.19. StorageCollectionMethod Enum Values UNSET_COLLECTION NO_COLLECTION KERNEL_MODULE EBPF CORE_BPF 11.3.7.20. StorageCollectorHealthInfo CollectorHealthInfo carries data about collector deployment but does not include collector health status derived from this data. Aggregated collector health status is not included because it is derived in central and not in the component that first reports CollectorHealthInfo (sensor). Field Name Required Nullable Type Description Format version String totalDesiredPods Integer int32 totalReadyPods Integer int32 totalRegisteredNodes Integer int32 statusErrors List of string Collection of errors that occurred while trying to obtain collector health info. 11.3.7.21. StorageCompleteClusterConfig Encodes a complete cluster configuration minus ID/Name identifiers including static and dynamic settings. Field Name Required Nullable Type Description Format dynamicConfig StorageDynamicClusterConfig staticConfig StorageStaticClusterConfig configFingerprint String clusterLabels Map of string 11.3.7.22. 
StorageDynamicClusterConfig The difference between Static and Dynamic cluster config is that Dynamic values are sent over the Central to Sensor gRPC connection. This has the benefit of allowing for "hot reloading" of values without restarting Secured cluster components. Field Name Required Nullable Type Description Format admissionControllerConfig StorageAdmissionControllerConfig registryOverride String disableAuditLogs Boolean 11.3.7.23. StorageGoogleProviderMetadata Field Name Required Nullable Type Description Format project String clusterName String Deprecated in favor of providerMetadata.cluster.name. 11.3.7.24. StorageManagerType Enum Values MANAGER_TYPE_UNKNOWN MANAGER_TYPE_MANUAL MANAGER_TYPE_HELM_CHART MANAGER_TYPE_KUBERNETES_OPERATOR 11.3.7.25. StorageOrchestratorMetadata Field Name Required Nullable Type Description Format version String openshiftVersion String buildDate Date date-time apiVersions List of string 11.3.7.26. StorageProviderMetadata Field Name Required Nullable Type Description Format region String zone String google StorageGoogleProviderMetadata aws StorageAWSProviderMetadata azure StorageAzureProviderMetadata verified Boolean cluster StorageClusterMetadata 11.3.7.27. StorageScannerHealthInfo ScannerHealthInfo represents health info of a scanner instance that is deployed on a secured cluster (so called "local scanner"). When the scanner is deployed on a central cluster, the following message is NOT used. ScannerHealthInfo carries data about scanner deployment but does not include scanner health status derived from this data. Aggregated scanner health status is not included because it is derived in central and not in the component that first reports ScannerHealthInfo (sensor). The following fields are made optional/nullable because there can be errors when trying to obtain them and the default value of 0 might be confusing with the actual value 0. In case an error happens when trying to obtain a certain field, it will be absent (instead of having the default value). Field Name Required Nullable Type Description Format totalDesiredAnalyzerPods Integer int32 totalReadyAnalyzerPods Integer int32 totalDesiredDbPods Integer int32 totalReadyDbPods Integer int32 statusErrors List of string Collection of errors that occurred while trying to obtain scanner health info. 11.3.7.28. StorageSensorDeploymentIdentification StackRoxDeploymentIdentification aims at uniquely identifying a StackRox Sensor deployment. It is used to determine whether a sensor connection comes from a sensor pod that has restarted or was recreated (possibly after a network partition), or from a deployment in a different namespace or cluster. Field Name Required Nullable Type Description Format systemNamespaceId String defaultNamespaceId String appNamespace String appNamespaceId String appServiceaccountId String k8sNodeName String 11.3.7.29. StorageStaticClusterConfig The difference between Static and Dynamic cluster config is that Static values are not sent over the Central to Sensor gRPC connection. They are used, for example, to generate manifests that can be used to set up the Secured Cluster's k8s components. They are not dynamically reloaded. 
Field Name Required Nullable Type Description Format type StorageClusterType GENERIC_CLUSTER, KUBERNETES_CLUSTER, OPENSHIFT_CLUSTER, OPENSHIFT4_CLUSTER, mainImage String centralApiEndpoint String collectionMethod StorageCollectionMethod UNSET_COLLECTION, NO_COLLECTION, KERNEL_MODULE, EBPF, CORE_BPF, collectorImage String admissionController Boolean admissionControllerUpdates Boolean tolerationsConfig StorageTolerationsConfig slimCollector Boolean admissionControllerEvents Boolean 11.3.7.30. StorageTolerationsConfig Field Name Required Nullable Type Description Format disabled Boolean 11.3.7.31. StorageUpgradeProgress Field Name Required Nullable Type Description Format upgradeState UpgradeProgressUpgradeState UPGRADE_INITIALIZING, UPGRADER_LAUNCHING, UPGRADER_LAUNCHED, PRE_FLIGHT_CHECKS_COMPLETE, UPGRADE_OPERATIONS_DONE, UPGRADE_COMPLETE, UPGRADE_INITIALIZATION_ERROR, PRE_FLIGHT_CHECKS_FAILED, UPGRADE_ERROR_ROLLING_BACK, UPGRADE_ERROR_ROLLED_BACK, UPGRADE_ERROR_ROLLBACK_FAILED, UPGRADE_ERROR_UNKNOWN, UPGRADE_TIMED_OUT, upgradeStatusDetail String since Date date-time 11.3.7.32. UpgradeProcessStatusUpgradeProcessType UPGRADE: UPGRADE represents a sensor version upgrade. CERT_ROTATION: CERT_ROTATION represents an upgrade process that only rotates the TLS certs used by the cluster, without changing anything else. Enum Values UPGRADE CERT_ROTATION 11.3.7.33. UpgradeProgressUpgradeState UPGRADER_LAUNCHING: In-progress states. UPGRADE_COMPLETE: The success state. PLEASE NUMBER ALL IN-PROGRESS STATES ABOVE THIS AND ALL ERROR STATES BELOW THIS. UPGRADE_INITIALIZATION_ERROR: Error states. Enum Values UPGRADE_INITIALIZING UPGRADER_LAUNCHING UPGRADER_LAUNCHED PRE_FLIGHT_CHECKS_COMPLETE UPGRADE_OPERATIONS_DONE UPGRADE_COMPLETE UPGRADE_INITIALIZATION_ERROR PRE_FLIGHT_CHECKS_FAILED UPGRADE_ERROR_ROLLING_BACK UPGRADE_ERROR_ROLLED_BACK UPGRADE_ERROR_ROLLBACK_FAILED UPGRADE_ERROR_UNKNOWN UPGRADE_TIMED_OUT 11.3.7.34. V1ClustersList Field Name Required Nullable Type Description Format clusters List of StorageCluster clusterIdToRetentionInfo Map of V1DecommissionedClusterRetentionInfo 11.3.7.35. V1DecommissionedClusterRetentionInfo Field Name Required Nullable Type Description Format isExcluded Boolean daysUntilDeletion Integer int32 11.4. DeleteCluster DELETE /v1/clusters/{id} 11.4.1. Description 11.4.2. Parameters 11.4.2.1. Path Parameters Name Description Required Default Pattern id X null 11.4.3. Return Type Object 11.4.4. Content Type application/json 11.4.5. Responses Table 11.4. HTTP Response Codes Code Message Datatype 200 A successful response. Object 0 An unexpected error response. GooglerpcStatus 11.4.6. Samples 11.4.7. Common object reference 11.4.7.1. GooglerpcStatus Field Name Required Nullable Type Description Format code Integer int32 message String details List of ProtobufAny 11.4.7.2. ProtobufAny Any contains an arbitrary serialized protocol buffer message along with a URL that describes the type of the serialized message. Protobuf library provides support to pack/unpack Any values in the form of utility functions or additional generated methods of the Any type. Example 1: Pack and unpack a message in C++. Example 2: Pack and unpack a message in Java. The pack methods provided by protobuf library will by default use 'type.googleapis.com/full.type.name' as the type URL and the unpack methods only use the fully qualified type name after the last '/' in the type URL, for example "foo.bar.com/x/y.z" will yield type name "y.z". 11.4.7.2.1. 
JSON representation The JSON representation of an Any value uses the regular representation of the deserialized, embedded message, with an additional field @type which contains the type URL. Example: If the embedded message type is well-known and has a custom JSON representation, that representation will be embedded adding a field value which holds the custom JSON in addition to the @type field. Example (for message [google.protobuf.Duration][]): Field Name Required Nullable Type Description Format @type String A URL/resource name that uniquely identifies the type of the serialized protocol buffer message. This string must contain at least one \"/\" character. The last segment of the URL's path must represent the fully qualified name of the type (as in path/google.protobuf.Duration ). The name should be in a canonical form (e.g., leading \".\" is not accepted). In practice, teams usually precompile into the binary all types that they expect it to use in the context of Any. However, for URLs which use the scheme http , https , or no scheme, one can optionally set up a type server that maps type URLs to message definitions as follows: * If no scheme is provided, https is assumed. * An HTTP GET on the URL must yield a [google.protobuf.Type][] value in binary format, or produce an error. * Applications are allowed to cache lookup results based on the URL, or have them precompiled into a binary to avoid any lookup. Therefore, binary compatibility needs to be preserved on changes to types. (Use versioned type names to manage breaking changes.) Note: this functionality is not currently available in the official protobuf release, and it is not used for type URLs beginning with type.googleapis.com. As of May 2023, there are no widely used type server implementations and no plans to implement one. Schemes other than http , https (or the empty scheme) might be used with implementation specific semantics. 11.5. GetCluster GET /v1/clusters/{id} 11.5.1. Description 11.5.2. Parameters 11.5.2.1. Path Parameters Name Description Required Default Pattern id X null 11.5.3. Return Type V1ClusterResponse 11.5.4. Content Type application/json 11.5.5. Responses Table 11.5. HTTP Response Codes Code Message Datatype 200 A successful response. V1ClusterResponse 0 An unexpected error response. GooglerpcStatus 11.5.6. Samples 11.5.7. Common object reference 11.5.7.1. ClusterHealthStatusHealthStatusLabel UNAVAILABLE: Only collector can have unavailable status Enum Values UNINITIALIZED UNAVAILABLE UNHEALTHY DEGRADED HEALTHY 11.5.7.2. ClusterUpgradeStatusUpgradability SENSOR_VERSION_HIGHER: SENSOR_VERSION_HIGHER occurs when we detect that the sensor is running a newer version than this Central. This is unexpected, but can occur depending on the patches a customer does. In this case, we will NOT automatically "upgrade" the sensor, since that would be a downgrade, even if the autoupgrade setting is on. The user will be allowed to manually trigger the upgrade, but they are strongly discouraged from doing so without upgrading Central first, since this is an unsupported configuration. Enum Values UNSET UP_TO_DATE MANUAL_UPGRADE_REQUIRED AUTO_UPGRADE_POSSIBLE SENSOR_VERSION_HIGHER 11.5.7.3. ClusterUpgradeStatusUpgradeProcessStatus Field Name Required Nullable Type Description Format active Boolean id String targetVersion String upgraderImage String initiatedAt Date date-time progress StorageUpgradeProgress type UpgradeProcessStatusUpgradeProcessType UPGRADE, CERT_ROTATION, 11.5.7.4. 
GooglerpcStatus Field Name Required Nullable Type Description Format code Integer int32 message String details List of ProtobufAny 11.5.7.5. ProtobufAny Any contains an arbitrary serialized protocol buffer message along with a URL that describes the type of the serialized message. Protobuf library provides support to pack/unpack Any values in the form of utility functions or additional generated methods of the Any type. Example 1: Pack and unpack a message in C++. Example 2: Pack and unpack a message in Java. The pack methods provided by protobuf library will by default use 'type.googleapis.com/full.type.name' as the type URL and the unpack methods only use the fully qualified type name after the last '/' in the type URL, for example "foo.bar.com/x/y.z" will yield type name "y.z". 11.5.7.5.1. JSON representation The JSON representation of an Any value uses the regular representation of the deserialized, embedded message, with an additional field @type which contains the type URL. Example: If the embedded message type is well-known and has a custom JSON representation, that representation will be embedded adding a field value which holds the custom JSON in addition to the @type field. Example (for message [google.protobuf.Duration][]): Field Name Required Nullable Type Description Format @type String A URL/resource name that uniquely identifies the type of the serialized protocol buffer message. This string must contain at least one \"/\" character. The last segment of the URL's path must represent the fully qualified name of the type (as in path/google.protobuf.Duration ). The name should be in a canonical form (e.g., leading \".\" is not accepted). In practice, teams usually precompile into the binary all types that they expect it to use in the context of Any. However, for URLs which use the scheme http , https , or no scheme, one can optionally set up a type server that maps type URLs to message definitions as follows: * If no scheme is provided, https is assumed. * An HTTP GET on the URL must yield a [google.protobuf.Type][] value in binary format, or produce an error. * Applications are allowed to cache lookup results based on the URL, or have them precompiled into a binary to avoid any lookup. Therefore, binary compatibility needs to be preserved on changes to types. (Use versioned type names to manage breaking changes.) Note: this functionality is not currently available in the official protobuf release, and it is not used for type URLs beginning with type.googleapis.com. As of May 2023, there are no widely used type server implementations and no plans to implement one. Schemes other than http , https (or the empty scheme) might be used with implementation specific semantics. 11.5.7.6. StorageAWSProviderMetadata Field Name Required Nullable Type Description Format accountId String 11.5.7.7. StorageAdmissionControlHealthInfo AdmissionControlHealthInfo carries data about admission control deployment but does not include admission control health status derived from this data. Aggregated admission control health status is not included because it is derived in central and not in the component that first reports AdmissionControlHealthInfo (sensor). The following fields are made optional/nullable because there can be errors when trying to obtain them and the default value of 0 might be confusing with the actual value 0. In case an error happens when trying to obtain a certain field, it will be absent (instead of having the default value). 
Field Name Required Nullable Type Description Format totalDesiredPods Integer int32 totalReadyPods Integer int32 statusErrors List of string Collection of errors that occurred while trying to obtain admission control health info. 11.5.7.8. StorageAdmissionControllerConfig Field Name Required Nullable Type Description Format enabled Boolean timeoutSeconds Integer int32 scanInline Boolean disableBypass Boolean enforceOnUpdates Boolean 11.5.7.9. StorageAuditLogFileState Field Name Required Nullable Type Description Format collectLogsSince Date date-time lastAuditId String 11.5.7.10. StorageAzureProviderMetadata Field Name Required Nullable Type Description Format subscriptionId String 11.5.7.11. StorageCluster Field Name Required Nullable Type Description Format id String name String type StorageClusterType GENERIC_CLUSTER, KUBERNETES_CLUSTER, OPENSHIFT_CLUSTER, OPENSHIFT4_CLUSTER, labels Map of string mainImage String collectorImage String centralApiEndpoint String runtimeSupport Boolean collectionMethod StorageCollectionMethod UNSET_COLLECTION, NO_COLLECTION, KERNEL_MODULE, EBPF, CORE_BPF, admissionController Boolean admissionControllerUpdates Boolean admissionControllerEvents Boolean status StorageClusterStatus dynamicConfig StorageDynamicClusterConfig tolerationsConfig StorageTolerationsConfig priority String int64 healthStatus StorageClusterHealthStatus slimCollector Boolean helmConfig StorageCompleteClusterConfig mostRecentSensorId StorageSensorDeploymentIdentification auditLogState Map of StorageAuditLogFileState For internal use only. initBundleId String managedBy StorageManagerType MANAGER_TYPE_UNKNOWN, MANAGER_TYPE_MANUAL, MANAGER_TYPE_HELM_CHART, MANAGER_TYPE_KUBERNETES_OPERATOR, 11.5.7.12. StorageClusterCertExpiryStatus Field Name Required Nullable Type Description Format sensorCertExpiry Date date-time sensorCertNotBefore Date date-time 11.5.7.13. StorageClusterHealthStatus Field Name Required Nullable Type Description Format id String collectorHealthInfo StorageCollectorHealthInfo admissionControlHealthInfo StorageAdmissionControlHealthInfo scannerHealthInfo StorageScannerHealthInfo sensorHealthStatus ClusterHealthStatusHealthStatusLabel UNINITIALIZED, UNAVAILABLE, UNHEALTHY, DEGRADED, HEALTHY, collectorHealthStatus ClusterHealthStatusHealthStatusLabel UNINITIALIZED, UNAVAILABLE, UNHEALTHY, DEGRADED, HEALTHY, overallHealthStatus ClusterHealthStatusHealthStatusLabel UNINITIALIZED, UNAVAILABLE, UNHEALTHY, DEGRADED, HEALTHY, admissionControlHealthStatus ClusterHealthStatusHealthStatusLabel UNINITIALIZED, UNAVAILABLE, UNHEALTHY, DEGRADED, HEALTHY, scannerHealthStatus ClusterHealthStatusHealthStatusLabel UNINITIALIZED, UNAVAILABLE, UNHEALTHY, DEGRADED, HEALTHY, lastContact Date date-time healthInfoComplete Boolean 11.5.7.14. StorageClusterMetadata ClusterMetadata contains metadata information about the cluster infrastructure. Field Name Required Nullable Type Description Format type StorageClusterMetadataType UNSPECIFIED, AKS, ARO, EKS, GKE, OCP, OSD, ROSA, name String Name represents the name under which the cluster is registered with the cloud provider. In case of self managed OpenShift it is the name chosen by the OpenShift installer. id String Id represents a unique ID under which the cluster is registered with the cloud provider. Not all cluster types have an id. For all OpenShift clusters, this is the Red Hat cluster_id registered with OCM. 11.5.7.15. StorageClusterMetadataType Enum Values UNSPECIFIED AKS ARO EKS GKE OCP OSD ROSA 11.5.7.16. 
StorageClusterStatus Field Name Required Nullable Type Description Format sensorVersion String DEPRECATEDLastContact Date This field has been deprecated starting release 49.0. Use healthStatus.lastContact instead. date-time providerMetadata StorageProviderMetadata orchestratorMetadata StorageOrchestratorMetadata upgradeStatus StorageClusterUpgradeStatus certExpiryStatus StorageClusterCertExpiryStatus 11.5.7.17. StorageClusterType Enum Values GENERIC_CLUSTER KUBERNETES_CLUSTER OPENSHIFT_CLUSTER OPENSHIFT4_CLUSTER 11.5.7.18. StorageClusterUpgradeStatus Field Name Required Nullable Type Description Format upgradability ClusterUpgradeStatusUpgradability UNSET, UP_TO_DATE, MANUAL_UPGRADE_REQUIRED, AUTO_UPGRADE_POSSIBLE, SENSOR_VERSION_HIGHER, upgradabilityStatusReason String mostRecentProcess ClusterUpgradeStatusUpgradeProcessStatus 11.5.7.19. StorageCollectionMethod Enum Values UNSET_COLLECTION NO_COLLECTION KERNEL_MODULE EBPF CORE_BPF 11.5.7.20. StorageCollectorHealthInfo CollectorHealthInfo carries data about collector deployment but does not include collector health status derived from this data. Aggregated collector health status is not included because it is derived in central and not in the component that first reports CollectorHealthInfo (sensor). Field Name Required Nullable Type Description Format version String totalDesiredPods Integer int32 totalReadyPods Integer int32 totalRegisteredNodes Integer int32 statusErrors List of string Collection of errors that occurred while trying to obtain collector health info. 11.5.7.21. StorageCompleteClusterConfig Encodes a complete cluster configuration minus ID/Name identifiers including static and dynamic settings. Field Name Required Nullable Type Description Format dynamicConfig StorageDynamicClusterConfig staticConfig StorageStaticClusterConfig configFingerprint String clusterLabels Map of string 11.5.7.22. StorageDynamicClusterConfig The difference between Static and Dynamic cluster config is that Dynamic values are sent over the Central to Sensor gRPC connection. This has the benefit of allowing for "hot reloading" of values without restarting Secured cluster components. Field Name Required Nullable Type Description Format admissionControllerConfig StorageAdmissionControllerConfig registryOverride String disableAuditLogs Boolean 11.5.7.23. StorageGoogleProviderMetadata Field Name Required Nullable Type Description Format project String clusterName String Deprecated in favor of providerMetadata.cluster.name. 11.5.7.24. StorageManagerType Enum Values MANAGER_TYPE_UNKNOWN MANAGER_TYPE_MANUAL MANAGER_TYPE_HELM_CHART MANAGER_TYPE_KUBERNETES_OPERATOR 11.5.7.25. StorageOrchestratorMetadata Field Name Required Nullable Type Description Format version String openshiftVersion String buildDate Date date-time apiVersions List of string 11.5.7.26. StorageProviderMetadata Field Name Required Nullable Type Description Format region String zone String google StorageGoogleProviderMetadata aws StorageAWSProviderMetadata azure StorageAzureProviderMetadata verified Boolean cluster StorageClusterMetadata 11.5.7.27. StorageScannerHealthInfo ScannerHealthInfo represents health info of a scanner instance that is deployed on a secured cluster (so called "local scanner"). When the scanner is deployed on a central cluster, the following message is NOT used. ScannerHealthInfo carries data about scanner deployment but does not include scanner health status derived from this data. 
Aggregated scanner health status is not included because it is derived in central and not in the component that first reports ScannerHealthInfo (sensor). The following fields are made optional/nullable because there can be errors when trying to obtain them and the default value of 0 might be confusing with the actual value 0. In case an error happens when trying to obtain a certain field, it will be absent (instead of having the default value). Field Name Required Nullable Type Description Format totalDesiredAnalyzerPods Integer int32 totalReadyAnalyzerPods Integer int32 totalDesiredDbPods Integer int32 totalReadyDbPods Integer int32 statusErrors List of string Collection of errors that occurred while trying to obtain scanner health info. 11.5.7.28. StorageSensorDeploymentIdentification StackRoxDeploymentIdentification aims at uniquely identifying a StackRox Sensor deployment. It is used to determine whether a sensor connection comes from a sensor pod that has restarted or was recreated (possibly after a network partition), or from a deployment in a different namespace or cluster. Field Name Required Nullable Type Description Format systemNamespaceId String defaultNamespaceId String appNamespace String appNamespaceId String appServiceaccountId String k8sNodeName String 11.5.7.29. StorageStaticClusterConfig The difference between Static and Dynamic cluster config is that Static values are not sent over the Central to Sensor gRPC connection. They are used, for example, to generate manifests that can be used to set up the Secured Cluster's k8s components. They are not dynamically reloaded. Field Name Required Nullable Type Description Format type StorageClusterType GENERIC_CLUSTER, KUBERNETES_CLUSTER, OPENSHIFT_CLUSTER, OPENSHIFT4_CLUSTER, mainImage String centralApiEndpoint String collectionMethod StorageCollectionMethod UNSET_COLLECTION, NO_COLLECTION, KERNEL_MODULE, EBPF, CORE_BPF, collectorImage String admissionController Boolean admissionControllerUpdates Boolean tolerationsConfig StorageTolerationsConfig slimCollector Boolean admissionControllerEvents Boolean 11.5.7.30. StorageTolerationsConfig Field Name Required Nullable Type Description Format disabled Boolean 11.5.7.31. StorageUpgradeProgress Field Name Required Nullable Type Description Format upgradeState UpgradeProgressUpgradeState UPGRADE_INITIALIZING, UPGRADER_LAUNCHING, UPGRADER_LAUNCHED, PRE_FLIGHT_CHECKS_COMPLETE, UPGRADE_OPERATIONS_DONE, UPGRADE_COMPLETE, UPGRADE_INITIALIZATION_ERROR, PRE_FLIGHT_CHECKS_FAILED, UPGRADE_ERROR_ROLLING_BACK, UPGRADE_ERROR_ROLLED_BACK, UPGRADE_ERROR_ROLLBACK_FAILED, UPGRADE_ERROR_UNKNOWN, UPGRADE_TIMED_OUT, upgradeStatusDetail String since Date date-time 11.5.7.32. UpgradeProcessStatusUpgradeProcessType UPGRADE: UPGRADE represents a sensor version upgrade. CERT_ROTATION: CERT_ROTATION represents an upgrade process that only rotates the TLS certs used by the cluster, without changing anything else. Enum Values UPGRADE CERT_ROTATION 11.5.7.33. UpgradeProgressUpgradeState UPGRADER_LAUNCHING: In-progress states. UPGRADE_COMPLETE: The success state. PLEASE NUMBER ALL IN-PROGRESS STATES ABOVE THIS AND ALL ERROR STATES BELOW THIS. UPGRADE_INITIALIZATION_ERROR: Error states. Enum Values UPGRADE_INITIALIZING UPGRADER_LAUNCHING UPGRADER_LAUNCHED PRE_FLIGHT_CHECKS_COMPLETE UPGRADE_OPERATIONS_DONE UPGRADE_COMPLETE UPGRADE_INITIALIZATION_ERROR PRE_FLIGHT_CHECKS_FAILED UPGRADE_ERROR_ROLLING_BACK UPGRADE_ERROR_ROLLED_BACK UPGRADE_ERROR_ROLLBACK_FAILED UPGRADE_ERROR_UNKNOWN UPGRADE_TIMED_OUT 11.5.7.34. 
V1ClusterResponse Field Name Required Nullable Type Description Format cluster StorageCluster clusterRetentionInfo V1DecommissionedClusterRetentionInfo 11.5.7.35. V1DecommissionedClusterRetentionInfo Field Name Required Nullable Type Description Format isExcluded Boolean daysUntilDeletion Integer int32 11.6. PutCluster PUT /v1/clusters/{id} 11.6.1. Description 11.6.2. Parameters 11.6.2.1. Path Parameters Name Description Required Default Pattern id X null 11.6.2.2. Body Parameter Name Description Required Default Pattern body ClustersServicePutClusterBody X 11.6.3. Return Type V1ClusterResponse 11.6.4. Content Type application/json 11.6.5. Responses Table 11.6. HTTP Response Codes Code Message Datatype 200 A successful response. V1ClusterResponse 0 An unexpected error response. GooglerpcStatus 11.6.6. Samples 11.6.7. Common object reference 11.6.7.1. ClusterHealthStatusHealthStatusLabel UNAVAILABLE: Only collector can have unavailable status Enum Values UNINITIALIZED UNAVAILABLE UNHEALTHY DEGRADED HEALTHY 11.6.7.2. ClusterUpgradeStatusUpgradability SENSOR_VERSION_HIGHER: SENSOR_VERSION_HIGHER occurs when we detect that the sensor is running a newer version than this Central. This is unexpected, but can occur depending on the patches a customer does. In this case, we will NOT automatically "upgrade" the sensor, since that would be a downgrade, even if the autoupgrade setting is on. The user will be allowed to manually trigger the upgrade, but they are strongly discouraged from doing so without upgrading Central first, since this is an unsupported configuration. Enum Values UNSET UP_TO_DATE MANUAL_UPGRADE_REQUIRED AUTO_UPGRADE_POSSIBLE SENSOR_VERSION_HIGHER 11.6.7.3. ClusterUpgradeStatusUpgradeProcessStatus Field Name Required Nullable Type Description Format active Boolean id String targetVersion String upgraderImage String initiatedAt Date date-time progress StorageUpgradeProgress type UpgradeProcessStatusUpgradeProcessType UPGRADE, CERT_ROTATION, 11.6.7.4. ClustersServicePutClusterBody Field Name Required Nullable Type Description Format name String type StorageClusterType GENERIC_CLUSTER, KUBERNETES_CLUSTER, OPENSHIFT_CLUSTER, OPENSHIFT4_CLUSTER, labels Map of string mainImage String collectorImage String centralApiEndpoint String runtimeSupport Boolean collectionMethod StorageCollectionMethod UNSET_COLLECTION, NO_COLLECTION, KERNEL_MODULE, EBPF, CORE_BPF, admissionController Boolean admissionControllerUpdates Boolean admissionControllerEvents Boolean status StorageClusterStatus dynamicConfig StorageDynamicClusterConfig tolerationsConfig StorageTolerationsConfig priority String int64 healthStatus StorageClusterHealthStatus slimCollector Boolean helmConfig StorageCompleteClusterConfig mostRecentSensorId StorageSensorDeploymentIdentification auditLogState Map of StorageAuditLogFileState For internal use only. initBundleId String managedBy StorageManagerType MANAGER_TYPE_UNKNOWN, MANAGER_TYPE_MANUAL, MANAGER_TYPE_HELM_CHART, MANAGER_TYPE_KUBERNETES_OPERATOR, 11.6.7.5. GooglerpcStatus Field Name Required Nullable Type Description Format code Integer int32 message String details List of ProtobufAny 11.6.7.6. ProtobufAny Any contains an arbitrary serialized protocol buffer message along with a URL that describes the type of the serialized message. Protobuf library provides support to pack/unpack Any values in the form of utility functions or additional generated methods of the Any type. Example 1: Pack and unpack a message in C++. Example 2: Pack and unpack a message in Java. 
The pack methods provided by protobuf library will by default use 'type.googleapis.com/full.type.name' as the type URL and the unpack methods only use the fully qualified type name after the last '/' in the type URL, for example "foo.bar.com/x/y.z" will yield type name "y.z". 11.6.7.6.1. JSON representation The JSON representation of an Any value uses the regular representation of the deserialized, embedded message, with an additional field @type which contains the type URL. Example: If the embedded message type is well-known and has a custom JSON representation, that representation will be embedded adding a field value which holds the custom JSON in addition to the @type field. Example (for message [google.protobuf.Duration][]): Field Name Required Nullable Type Description Format @type String A URL/resource name that uniquely identifies the type of the serialized protocol buffer message. This string must contain at least one \"/\" character. The last segment of the URL's path must represent the fully qualified name of the type (as in path/google.protobuf.Duration ). The name should be in a canonical form (e.g., leading \".\" is not accepted). In practice, teams usually precompile into the binary all types that they expect it to use in the context of Any. However, for URLs which use the scheme http , https , or no scheme, one can optionally set up a type server that maps type URLs to message definitions as follows: * If no scheme is provided, https is assumed. * An HTTP GET on the URL must yield a [google.protobuf.Type][] value in binary format, or produce an error. * Applications are allowed to cache lookup results based on the URL, or have them precompiled into a binary to avoid any lookup. Therefore, binary compatibility needs to be preserved on changes to types. (Use versioned type names to manage breaking changes.) Note: this functionality is not currently available in the official protobuf release, and it is not used for type URLs beginning with type.googleapis.com. As of May 2023, there are no widely used type server implementations and no plans to implement one. Schemes other than http , https (or the empty scheme) might be used with implementation specific semantics. 11.6.7.7. StorageAWSProviderMetadata Field Name Required Nullable Type Description Format accountId String 11.6.7.8. StorageAdmissionControlHealthInfo AdmissionControlHealthInfo carries data about admission control deployment but does not include admission control health status derived from this data. Aggregated admission control health status is not included because it is derived in central and not in the component that first reports AdmissionControlHealthInfo (sensor). The following fields are made optional/nullable because there can be errors when trying to obtain them and the default value of 0 might be confusing with the actual value 0. In case an error happens when trying to obtain a certain field, it will be absent (instead of having the default value). Field Name Required Nullable Type Description Format totalDesiredPods Integer int32 totalReadyPods Integer int32 statusErrors List of string Collection of errors that occurred while trying to obtain admission control health info. 11.6.7.9. StorageAdmissionControllerConfig Field Name Required Nullable Type Description Format enabled Boolean timeoutSeconds Integer int32 scanInline Boolean disableBypass Boolean enforceOnUpdates Boolean 11.6.7.10. 
StorageAuditLogFileState Field Name Required Nullable Type Description Format collectLogsSince Date date-time lastAuditId String 11.6.7.11. StorageAzureProviderMetadata Field Name Required Nullable Type Description Format subscriptionId String 11.6.7.12. StorageCluster Field Name Required Nullable Type Description Format id String name String type StorageClusterType GENERIC_CLUSTER, KUBERNETES_CLUSTER, OPENSHIFT_CLUSTER, OPENSHIFT4_CLUSTER, labels Map of string mainImage String collectorImage String centralApiEndpoint String runtimeSupport Boolean collectionMethod StorageCollectionMethod UNSET_COLLECTION, NO_COLLECTION, KERNEL_MODULE, EBPF, CORE_BPF, admissionController Boolean admissionControllerUpdates Boolean admissionControllerEvents Boolean status StorageClusterStatus dynamicConfig StorageDynamicClusterConfig tolerationsConfig StorageTolerationsConfig priority String int64 healthStatus StorageClusterHealthStatus slimCollector Boolean helmConfig StorageCompleteClusterConfig mostRecentSensorId StorageSensorDeploymentIdentification auditLogState Map of StorageAuditLogFileState For internal use only. initBundleId String managedBy StorageManagerType MANAGER_TYPE_UNKNOWN, MANAGER_TYPE_MANUAL, MANAGER_TYPE_HELM_CHART, MANAGER_TYPE_KUBERNETES_OPERATOR, 11.6.7.13. StorageClusterCertExpiryStatus Field Name Required Nullable Type Description Format sensorCertExpiry Date date-time sensorCertNotBefore Date date-time 11.6.7.14. StorageClusterHealthStatus Field Name Required Nullable Type Description Format id String collectorHealthInfo StorageCollectorHealthInfo admissionControlHealthInfo StorageAdmissionControlHealthInfo scannerHealthInfo StorageScannerHealthInfo sensorHealthStatus ClusterHealthStatusHealthStatusLabel UNINITIALIZED, UNAVAILABLE, UNHEALTHY, DEGRADED, HEALTHY, collectorHealthStatus ClusterHealthStatusHealthStatusLabel UNINITIALIZED, UNAVAILABLE, UNHEALTHY, DEGRADED, HEALTHY, overallHealthStatus ClusterHealthStatusHealthStatusLabel UNINITIALIZED, UNAVAILABLE, UNHEALTHY, DEGRADED, HEALTHY, admissionControlHealthStatus ClusterHealthStatusHealthStatusLabel UNINITIALIZED, UNAVAILABLE, UNHEALTHY, DEGRADED, HEALTHY, scannerHealthStatus ClusterHealthStatusHealthStatusLabel UNINITIALIZED, UNAVAILABLE, UNHEALTHY, DEGRADED, HEALTHY, lastContact Date date-time healthInfoComplete Boolean 11.6.7.15. StorageClusterMetadata ClusterMetadata contains metadata information about the cluster infrastructure. Field Name Required Nullable Type Description Format type StorageClusterMetadataType UNSPECIFIED, AKS, ARO, EKS, GKE, OCP, OSD, ROSA, name String Name represents the name under which the cluster is registered with the cloud provider. In case of self managed OpenShift it is the name chosen by the OpenShift installer. id String Id represents a unique ID under which the cluster is registered with the cloud provider. Not all cluster types have an id. For all OpenShift clusters, this is the Red Hat cluster_id registered with OCM. 11.6.7.16. StorageClusterMetadataType Enum Values UNSPECIFIED AKS ARO EKS GKE OCP OSD ROSA 11.6.7.17. StorageClusterStatus Field Name Required Nullable Type Description Format sensorVersion String DEPRECATEDLastContact Date This field has been deprecated starting release 49.0. Use healthStatus.lastContact instead. date-time providerMetadata StorageProviderMetadata orchestratorMetadata StorageOrchestratorMetadata upgradeStatus StorageClusterUpgradeStatus certExpiryStatus StorageClusterCertExpiryStatus 11.6.7.18. 
StorageClusterType Enum Values GENERIC_CLUSTER KUBERNETES_CLUSTER OPENSHIFT_CLUSTER OPENSHIFT4_CLUSTER 11.6.7.19. StorageClusterUpgradeStatus Field Name Required Nullable Type Description Format upgradability ClusterUpgradeStatusUpgradability UNSET, UP_TO_DATE, MANUAL_UPGRADE_REQUIRED, AUTO_UPGRADE_POSSIBLE, SENSOR_VERSION_HIGHER, upgradabilityStatusReason String mostRecentProcess ClusterUpgradeStatusUpgradeProcessStatus 11.6.7.20. StorageCollectionMethod Enum Values UNSET_COLLECTION NO_COLLECTION KERNEL_MODULE EBPF CORE_BPF 11.6.7.21. StorageCollectorHealthInfo CollectorHealthInfo carries data about collector deployment but does not include collector health status derived from this data. Aggregated collector health status is not included because it is derived in central and not in the component that first reports CollectorHealthInfo (sensor). Field Name Required Nullable Type Description Format version String totalDesiredPods Integer int32 totalReadyPods Integer int32 totalRegisteredNodes Integer int32 statusErrors List of string Collection of errors that occurred while trying to obtain collector health info. 11.6.7.22. StorageCompleteClusterConfig Encodes a complete cluster configuration minus ID/Name identifiers including static and dynamic settings. Field Name Required Nullable Type Description Format dynamicConfig StorageDynamicClusterConfig staticConfig StorageStaticClusterConfig configFingerprint String clusterLabels Map of string 11.6.7.23. StorageDynamicClusterConfig The difference between Static and Dynamic cluster config is that Dynamic values are sent over the Central to Sensor gRPC connection. This has the benefit of allowing for "hot reloading" of values without restarting Secured cluster components. Field Name Required Nullable Type Description Format admissionControllerConfig StorageAdmissionControllerConfig registryOverride String disableAuditLogs Boolean 11.6.7.24. StorageGoogleProviderMetadata Field Name Required Nullable Type Description Format project String clusterName String Deprecated in favor of providerMetadata.cluster.name. 11.6.7.25. StorageManagerType Enum Values MANAGER_TYPE_UNKNOWN MANAGER_TYPE_MANUAL MANAGER_TYPE_HELM_CHART MANAGER_TYPE_KUBERNETES_OPERATOR 11.6.7.26. StorageOrchestratorMetadata Field Name Required Nullable Type Description Format version String openshiftVersion String buildDate Date date-time apiVersions List of string 11.6.7.27. StorageProviderMetadata Field Name Required Nullable Type Description Format region String zone String google StorageGoogleProviderMetadata aws StorageAWSProviderMetadata azure StorageAzureProviderMetadata verified Boolean cluster StorageClusterMetadata 11.6.7.28. StorageScannerHealthInfo ScannerHealthInfo represents health info of a scanner instance that is deployed on a secured cluster (so called "local scanner"). When the scanner is deployed on a central cluster, the following message is NOT used. ScannerHealthInfo carries data about scanner deployment but does not include scanner health status derived from this data. Aggregated scanner health status is not included because it is derived in central and not in the component that first reports ScannerHealthInfo (sensor). The following fields are made optional/nullable because there can be errors when trying to obtain them and the default value of 0 might be confusing with the actual value 0. In case an error happens when trying to obtain a certain field, it will be absent (instead of having the default value). 
Field Name Required Nullable Type Description Format totalDesiredAnalyzerPods Integer int32 totalReadyAnalyzerPods Integer int32 totalDesiredDbPods Integer int32 totalReadyDbPods Integer int32 statusErrors List of string Collection of errors that occurred while trying to obtain scanner health info. 11.6.7.29. StorageSensorDeploymentIdentification StackRoxDeploymentIdentification aims at uniquely identifying a StackRox Sensor deployment. It is used to determine whether a sensor connection comes from a sensor pod that has restarted or was recreated (possibly after a network partition), or from a deployment in a different namespace or cluster. Field Name Required Nullable Type Description Format systemNamespaceId String defaultNamespaceId String appNamespace String appNamespaceId String appServiceaccountId String k8sNodeName String 11.6.7.30. StorageStaticClusterConfig The difference between Static and Dynamic cluster config is that Static values are not sent over the Central to Sensor gRPC connection. They are used, for example, to generate manifests that can be used to set up the Secured Cluster's k8s components. They are not dynamically reloaded. Field Name Required Nullable Type Description Format type StorageClusterType GENERIC_CLUSTER, KUBERNETES_CLUSTER, OPENSHIFT_CLUSTER, OPENSHIFT4_CLUSTER, mainImage String centralApiEndpoint String collectionMethod StorageCollectionMethod UNSET_COLLECTION, NO_COLLECTION, KERNEL_MODULE, EBPF, CORE_BPF, collectorImage String admissionController Boolean admissionControllerUpdates Boolean tolerationsConfig StorageTolerationsConfig slimCollector Boolean admissionControllerEvents Boolean 11.6.7.31. StorageTolerationsConfig Field Name Required Nullable Type Description Format disabled Boolean 11.6.7.32. StorageUpgradeProgress Field Name Required Nullable Type Description Format upgradeState UpgradeProgressUpgradeState UPGRADE_INITIALIZING, UPGRADER_LAUNCHING, UPGRADER_LAUNCHED, PRE_FLIGHT_CHECKS_COMPLETE, UPGRADE_OPERATIONS_DONE, UPGRADE_COMPLETE, UPGRADE_INITIALIZATION_ERROR, PRE_FLIGHT_CHECKS_FAILED, UPGRADE_ERROR_ROLLING_BACK, UPGRADE_ERROR_ROLLED_BACK, UPGRADE_ERROR_ROLLBACK_FAILED, UPGRADE_ERROR_UNKNOWN, UPGRADE_TIMED_OUT, upgradeStatusDetail String since Date date-time 11.6.7.33. UpgradeProcessStatusUpgradeProcessType UPGRADE: UPGRADE represents a sensor version upgrade. CERT_ROTATION: CERT_ROTATION represents an upgrade process that only rotates the TLS certs used by the cluster, without changing anything else. Enum Values UPGRADE CERT_ROTATION 11.6.7.34. UpgradeProgressUpgradeState UPGRADER_LAUNCHING: In-progress states. UPGRADE_COMPLETE: The success state. PLEASE NUMBER ALL IN-PROGRESS STATES ABOVE THIS AND ALL ERROR STATES BELOW THIS. UPGRADE_INITIALIZATION_ERROR: Error states. Enum Values UPGRADE_INITIALIZING UPGRADER_LAUNCHING UPGRADER_LAUNCHED PRE_FLIGHT_CHECKS_COMPLETE UPGRADE_OPERATIONS_DONE UPGRADE_COMPLETE UPGRADE_INITIALIZATION_ERROR PRE_FLIGHT_CHECKS_FAILED UPGRADE_ERROR_ROLLING_BACK UPGRADE_ERROR_ROLLED_BACK UPGRADE_ERROR_ROLLBACK_FAILED UPGRADE_ERROR_UNKNOWN UPGRADE_TIMED_OUT 11.6.7.35. V1ClusterResponse Field Name Required Nullable Type Description Format cluster StorageCluster clusterRetentionInfo V1DecommissionedClusterRetentionInfo 11.6.7.36. V1DecommissionedClusterRetentionInfo Field Name Required Nullable Type Description Format isExcluded Boolean daysUntilDeletion Integer int32 11.7. PostCluster POST /v1/clusters 11.7.1. Description 11.7.2. Parameters 11.7.2.1. 
Body Parameter Name Description Required Default Pattern body StorageCluster X 11.7.3. Return Type V1ClusterResponse 11.7.4. Content Type application/json 11.7.5. Responses Table 11.7. HTTP Response Codes Code Message Datatype 200 A successful response. V1ClusterResponse 0 An unexpected error response. GooglerpcStatus 11.7.6. Samples 11.7.7. Common object reference 11.7.7.1. ClusterHealthStatusHealthStatusLabel UNAVAILABLE: Only collector can have unavailable status Enum Values UNINITIALIZED UNAVAILABLE UNHEALTHY DEGRADED HEALTHY 11.7.7.2. ClusterUpgradeStatusUpgradability SENSOR_VERSION_HIGHER: SENSOR_VERSION_HIGHER occurs when we detect that the sensor is running a newer version than this Central. This is unexpected, but can occur depending on the patches a customer does. In this case, we will NOT automatically "upgrade" the sensor, since that would be a downgrade, even if the autoupgrade setting is on. The user will be allowed to manually trigger the upgrade, but they are strongly discouraged from doing so without upgrading Central first, since this is an unsupported configuration. Enum Values UNSET UP_TO_DATE MANUAL_UPGRADE_REQUIRED AUTO_UPGRADE_POSSIBLE SENSOR_VERSION_HIGHER 11.7.7.3. ClusterUpgradeStatusUpgradeProcessStatus Field Name Required Nullable Type Description Format active Boolean id String targetVersion String upgraderImage String initiatedAt Date date-time progress StorageUpgradeProgress type UpgradeProcessStatusUpgradeProcessType UPGRADE, CERT_ROTATION, 11.7.7.4. GooglerpcStatus Field Name Required Nullable Type Description Format code Integer int32 message String details List of ProtobufAny 11.7.7.5. ProtobufAny Any contains an arbitrary serialized protocol buffer message along with a URL that describes the type of the serialized message. Protobuf library provides support to pack/unpack Any values in the form of utility functions or additional generated methods of the Any type. Example 1: Pack and unpack a message in C++. Example 2: Pack and unpack a message in Java. The pack methods provided by protobuf library will by default use 'type.googleapis.com/full.type.name' as the type URL and the unpack methods only use the fully qualified type name after the last '/' in the type URL, for example "foo.bar.com/x/y.z" will yield type name "y.z". 11.7.7.5.1. JSON representation The JSON representation of an Any value uses the regular representation of the deserialized, embedded message, with an additional field @type which contains the type URL. Example: If the embedded message type is well-known and has a custom JSON representation, that representation will be embedded adding a field value which holds the custom JSON in addition to the @type field. Example (for message [google.protobuf.Duration][]): Field Name Required Nullable Type Description Format @type String A URL/resource name that uniquely identifies the type of the serialized protocol buffer message. This string must contain at least one \"/\" character. The last segment of the URL's path must represent the fully qualified name of the type (as in path/google.protobuf.Duration ). The name should be in a canonical form (e.g., leading \".\" is not accepted). In practice, teams usually precompile into the binary all types that they expect it to use in the context of Any. However, for URLs which use the scheme http , https , or no scheme, one can optionally set up a type server that maps type URLs to message definitions as follows: * If no scheme is provided, https is assumed. 
* An HTTP GET on the URL must yield a [google.protobuf.Type][] value in binary format, or produce an error. * Applications are allowed to cache lookup results based on the URL, or have them precompiled into a binary to avoid any lookup. Therefore, binary compatibility needs to be preserved on changes to types. (Use versioned type names to manage breaking changes.) Note: this functionality is not currently available in the official protobuf release, and it is not used for type URLs beginning with type.googleapis.com. As of May 2023, there are no widely used type server implementations and no plans to implement one. Schemes other than http , https (or the empty scheme) might be used with implementation specific semantics. 11.7.7.6. StorageAWSProviderMetadata Field Name Required Nullable Type Description Format accountId String 11.7.7.7. StorageAdmissionControlHealthInfo AdmissionControlHealthInfo carries data about admission control deployment but does not include admission control health status derived from this data. Aggregated admission control health status is not included because it is derived in central and not in the component that first reports AdmissionControlHealthInfo (sensor). The following fields are made optional/nullable because there can be errors when trying to obtain them and the default value of 0 might be confusing with the actual value 0. In case an error happens when trying to obtain a certain field, it will be absent (instead of having the default value). Field Name Required Nullable Type Description Format totalDesiredPods Integer int32 totalReadyPods Integer int32 statusErrors List of string Collection of errors that occurred while trying to obtain admission control health info. 11.7.7.8. StorageAdmissionControllerConfig Field Name Required Nullable Type Description Format enabled Boolean timeoutSeconds Integer int32 scanInline Boolean disableBypass Boolean enforceOnUpdates Boolean 11.7.7.9. StorageAuditLogFileState Field Name Required Nullable Type Description Format collectLogsSince Date date-time lastAuditId String 11.7.7.10. StorageAzureProviderMetadata Field Name Required Nullable Type Description Format subscriptionId String 11.7.7.11. StorageCluster Field Name Required Nullable Type Description Format id String name String type StorageClusterType GENERIC_CLUSTER, KUBERNETES_CLUSTER, OPENSHIFT_CLUSTER, OPENSHIFT4_CLUSTER, labels Map of string mainImage String collectorImage String centralApiEndpoint String runtimeSupport Boolean collectionMethod StorageCollectionMethod UNSET_COLLECTION, NO_COLLECTION, KERNEL_MODULE, EBPF, CORE_BPF, admissionController Boolean admissionControllerUpdates Boolean admissionControllerEvents Boolean status StorageClusterStatus dynamicConfig StorageDynamicClusterConfig tolerationsConfig StorageTolerationsConfig priority String int64 healthStatus StorageClusterHealthStatus slimCollector Boolean helmConfig StorageCompleteClusterConfig mostRecentSensorId StorageSensorDeploymentIdentification auditLogState Map of StorageAuditLogFileState For internal use only. initBundleId String managedBy StorageManagerType MANAGER_TYPE_UNKNOWN, MANAGER_TYPE_MANUAL, MANAGER_TYPE_HELM_CHART, MANAGER_TYPE_KUBERNETES_OPERATOR, 11.7.7.12. StorageClusterCertExpiryStatus Field Name Required Nullable Type Description Format sensorCertExpiry Date date-time sensorCertNotBefore Date date-time 11.7.7.13. 
StorageClusterHealthStatus Field Name Required Nullable Type Description Format id String collectorHealthInfo StorageCollectorHealthInfo admissionControlHealthInfo StorageAdmissionControlHealthInfo scannerHealthInfo StorageScannerHealthInfo sensorHealthStatus ClusterHealthStatusHealthStatusLabel UNINITIALIZED, UNAVAILABLE, UNHEALTHY, DEGRADED, HEALTHY, collectorHealthStatus ClusterHealthStatusHealthStatusLabel UNINITIALIZED, UNAVAILABLE, UNHEALTHY, DEGRADED, HEALTHY, overallHealthStatus ClusterHealthStatusHealthStatusLabel UNINITIALIZED, UNAVAILABLE, UNHEALTHY, DEGRADED, HEALTHY, admissionControlHealthStatus ClusterHealthStatusHealthStatusLabel UNINITIALIZED, UNAVAILABLE, UNHEALTHY, DEGRADED, HEALTHY, scannerHealthStatus ClusterHealthStatusHealthStatusLabel UNINITIALIZED, UNAVAILABLE, UNHEALTHY, DEGRADED, HEALTHY, lastContact Date date-time healthInfoComplete Boolean 11.7.7.14. StorageClusterMetadata ClusterMetadata contains metadata information about the cluster infrastructure. Field Name Required Nullable Type Description Format type StorageClusterMetadataType UNSPECIFIED, AKS, ARO, EKS, GKE, OCP, OSD, ROSA, name String Name represents the name under which the cluster is registered with the cloud provider. In case of self managed OpenShift it is the name chosen by the OpenShift installer. id String Id represents a unique ID under which the cluster is registered with the cloud provider. Not all cluster types have an id. For all OpenShift clusters, this is the Red Hat cluster_id registered with OCM. 11.7.7.15. StorageClusterMetadataType Enum Values UNSPECIFIED AKS ARO EKS GKE OCP OSD ROSA 11.7.7.16. StorageClusterStatus Field Name Required Nullable Type Description Format sensorVersion String DEPRECATEDLastContact Date This field has been deprecated starting release 49.0. Use healthStatus.lastContact instead. date-time providerMetadata StorageProviderMetadata orchestratorMetadata StorageOrchestratorMetadata upgradeStatus StorageClusterUpgradeStatus certExpiryStatus StorageClusterCertExpiryStatus 11.7.7.17. StorageClusterType Enum Values GENERIC_CLUSTER KUBERNETES_CLUSTER OPENSHIFT_CLUSTER OPENSHIFT4_CLUSTER 11.7.7.18. StorageClusterUpgradeStatus Field Name Required Nullable Type Description Format upgradability ClusterUpgradeStatusUpgradability UNSET, UP_TO_DATE, MANUAL_UPGRADE_REQUIRED, AUTO_UPGRADE_POSSIBLE, SENSOR_VERSION_HIGHER, upgradabilityStatusReason String mostRecentProcess ClusterUpgradeStatusUpgradeProcessStatus 11.7.7.19. StorageCollectionMethod Enum Values UNSET_COLLECTION NO_COLLECTION KERNEL_MODULE EBPF CORE_BPF 11.7.7.20. StorageCollectorHealthInfo CollectorHealthInfo carries data about collector deployment but does not include collector health status derived from this data. Aggregated collector health status is not included because it is derived in central and not in the component that first reports CollectorHealthInfo (sensor). Field Name Required Nullable Type Description Format version String totalDesiredPods Integer int32 totalReadyPods Integer int32 totalRegisteredNodes Integer int32 statusErrors List of string Collection of errors that occurred while trying to obtain collector health info. 11.7.7.21. StorageCompleteClusterConfig Encodes a complete cluster configuration minus ID/Name identifiers including static and dynamic settings. Field Name Required Nullable Type Description Format dynamicConfig StorageDynamicClusterConfig staticConfig StorageStaticClusterConfig configFingerprint String clusterLabels Map of string 11.7.7.22. 
StorageDynamicClusterConfig The difference between Static and Dynamic cluster config is that Dynamic values are sent over the Central to Sensor gRPC connection. This has the benefit of allowing for "hot reloading" of values without restarting Secured cluster components. Field Name Required Nullable Type Description Format admissionControllerConfig StorageAdmissionControllerConfig registryOverride String disableAuditLogs Boolean 11.7.7.23. StorageGoogleProviderMetadata Field Name Required Nullable Type Description Format project String clusterName String Deprecated in favor of providerMetadata.cluster.name. 11.7.7.24. StorageManagerType Enum Values MANAGER_TYPE_UNKNOWN MANAGER_TYPE_MANUAL MANAGER_TYPE_HELM_CHART MANAGER_TYPE_KUBERNETES_OPERATOR 11.7.7.25. StorageOrchestratorMetadata Field Name Required Nullable Type Description Format version String openshiftVersion String buildDate Date date-time apiVersions List of string 11.7.7.26. StorageProviderMetadata Field Name Required Nullable Type Description Format region String zone String google StorageGoogleProviderMetadata aws StorageAWSProviderMetadata azure StorageAzureProviderMetadata verified Boolean cluster StorageClusterMetadata 11.7.7.27. StorageScannerHealthInfo ScannerHealthInfo represents health info of a scanner instance that is deployed on a secured cluster (so called "local scanner"). When the scanner is deployed on a central cluster, the following message is NOT used. ScannerHealthInfo carries data about scanner deployment but does not include scanner health status derived from this data. Aggregated scanner health status is not included because it is derived in central and not in the component that first reports ScannerHealthInfo (sensor). The following fields are made optional/nullable because there can be errors when trying to obtain them and the default value of 0 might be confusing with the actual value 0. In case an error happens when trying to obtain a certain field, it will be absent (instead of having the default value). Field Name Required Nullable Type Description Format totalDesiredAnalyzerPods Integer int32 totalReadyAnalyzerPods Integer int32 totalDesiredDbPods Integer int32 totalReadyDbPods Integer int32 statusErrors List of string Collection of errors that occurred while trying to obtain scanner health info. 11.7.7.28. StorageSensorDeploymentIdentification StackRoxDeploymentIdentification aims at uniquely identifying a StackRox Sensor deployment. It is used to determine whether a sensor connection comes from a sensor pod that has restarted or was recreated (possibly after a network partition), or from a deployment in a different namespace or cluster. Field Name Required Nullable Type Description Format systemNamespaceId String defaultNamespaceId String appNamespace String appNamespaceId String appServiceaccountId String k8sNodeName String 11.7.7.29. StorageStaticClusterConfig The difference between Static and Dynamic cluster config is that Static values are not sent over the Central to Sensor gRPC connection. They are used, for example, to generate manifests that can be used to set up the Secured Cluster's k8s components. They are not dynamically reloaded. 
Field Name Required Nullable Type Description Format type StorageClusterType GENERIC_CLUSTER, KUBERNETES_CLUSTER, OPENSHIFT_CLUSTER, OPENSHIFT4_CLUSTER, mainImage String centralApiEndpoint String collectionMethod StorageCollectionMethod UNSET_COLLECTION, NO_COLLECTION, KERNEL_MODULE, EBPF, CORE_BPF, collectorImage String admissionController Boolean admissionControllerUpdates Boolean tolerationsConfig StorageTolerationsConfig slimCollector Boolean admissionControllerEvents Boolean 11.7.7.30. StorageTolerationsConfig Field Name Required Nullable Type Description Format disabled Boolean 11.7.7.31. StorageUpgradeProgress Field Name Required Nullable Type Description Format upgradeState UpgradeProgressUpgradeState UPGRADE_INITIALIZING, UPGRADER_LAUNCHING, UPGRADER_LAUNCHED, PRE_FLIGHT_CHECKS_COMPLETE, UPGRADE_OPERATIONS_DONE, UPGRADE_COMPLETE, UPGRADE_INITIALIZATION_ERROR, PRE_FLIGHT_CHECKS_FAILED, UPGRADE_ERROR_ROLLING_BACK, UPGRADE_ERROR_ROLLED_BACK, UPGRADE_ERROR_ROLLBACK_FAILED, UPGRADE_ERROR_UNKNOWN, UPGRADE_TIMED_OUT, upgradeStatusDetail String since Date date-time 11.7.7.32. UpgradeProcessStatusUpgradeProcessType UPGRADE: UPGRADE represents a sensor version upgrade. CERT_ROTATION: CERT_ROTATION represents an upgrade process that only rotates the TLS certs used by the cluster, without changing anything else. Enum Values UPGRADE CERT_ROTATION 11.7.7.33. UpgradeProgressUpgradeState UPGRADER_LAUNCHING: In-progress states. UPGRADE_COMPLETE: The success state. PLEASE NUMBER ALL IN-PROGRESS STATES ABOVE THIS AND ALL ERROR STATES BELOW THIS. UPGRADE_INITIALIZATION_ERROR: Error states. Enum Values UPGRADE_INITIALIZING UPGRADER_LAUNCHING UPGRADER_LAUNCHED PRE_FLIGHT_CHECKS_COMPLETE UPGRADE_OPERATIONS_DONE UPGRADE_COMPLETE UPGRADE_INITIALIZATION_ERROR PRE_FLIGHT_CHECKS_FAILED UPGRADE_ERROR_ROLLING_BACK UPGRADE_ERROR_ROLLED_BACK UPGRADE_ERROR_ROLLBACK_FAILED UPGRADE_ERROR_UNKNOWN UPGRADE_TIMED_OUT 11.7.7.34. V1ClusterResponse Field Name Required Nullable Type Description Format cluster StorageCluster clusterRetentionInfo V1DecommissionedClusterRetentionInfo 11.7.7.35. V1DecommissionedClusterRetentionInfo Field Name Required Nullable Type Description Format isExcluded Boolean daysUntilDeletion Integer int32 | [
"Foo foo = ...; Any any; any.PackFrom(foo); if (any.UnpackTo(&foo)) { }",
"Foo foo = ...; Any any = Any.pack(foo); if (any.is(Foo.class)) { foo = any.unpack(Foo.class); } // or if (any.isSameTypeAs(Foo.getDefaultInstance())) { foo = any.unpack(Foo.getDefaultInstance()); }",
"Example 3: Pack and unpack a message in Python.",
"foo = Foo(...) any = Any() any.Pack(foo) if any.Is(Foo.DESCRIPTOR): any.Unpack(foo)",
"Example 4: Pack and unpack a message in Go",
"foo := &pb.Foo{...} any, err := anypb.New(foo) if err != nil { } foo := &pb.Foo{} if err := any.UnmarshalTo(foo); err != nil { }",
"package google.profile; message Person { string first_name = 1; string last_name = 2; }",
"{ \"@type\": \"type.googleapis.com/google.profile.Person\", \"firstName\": <string>, \"lastName\": <string> }",
"{ \"@type\": \"type.googleapis.com/google.protobuf.Duration\", \"value\": \"1.212s\" }",
"Foo foo = ...; Any any; any.PackFrom(foo); if (any.UnpackTo(&foo)) { }",
"Foo foo = ...; Any any = Any.pack(foo); if (any.is(Foo.class)) { foo = any.unpack(Foo.class); } // or if (any.isSameTypeAs(Foo.getDefaultInstance())) { foo = any.unpack(Foo.getDefaultInstance()); }",
"Example 3: Pack and unpack a message in Python.",
"foo = Foo(...) any = Any() any.Pack(foo) if any.Is(Foo.DESCRIPTOR): any.Unpack(foo)",
"Example 4: Pack and unpack a message in Go",
"foo := &pb.Foo{...} any, err := anypb.New(foo) if err != nil { } foo := &pb.Foo{} if err := any.UnmarshalTo(foo); err != nil { }",
"package google.profile; message Person { string first_name = 1; string last_name = 2; }",
"{ \"@type\": \"type.googleapis.com/google.profile.Person\", \"firstName\": <string>, \"lastName\": <string> }",
"{ \"@type\": \"type.googleapis.com/google.protobuf.Duration\", \"value\": \"1.212s\" }",
"Foo foo = ...; Any any; any.PackFrom(foo); if (any.UnpackTo(&foo)) { }",
"Foo foo = ...; Any any = Any.pack(foo); if (any.is(Foo.class)) { foo = any.unpack(Foo.class); } // or if (any.isSameTypeAs(Foo.getDefaultInstance())) { foo = any.unpack(Foo.getDefaultInstance()); }",
"Example 3: Pack and unpack a message in Python.",
"foo = Foo(...) any = Any() any.Pack(foo) if any.Is(Foo.DESCRIPTOR): any.Unpack(foo)",
"Example 4: Pack and unpack a message in Go",
"foo := &pb.Foo{...} any, err := anypb.New(foo) if err != nil { } foo := &pb.Foo{} if err := any.UnmarshalTo(foo); err != nil { }",
"package google.profile; message Person { string first_name = 1; string last_name = 2; }",
"{ \"@type\": \"type.googleapis.com/google.profile.Person\", \"firstName\": <string>, \"lastName\": <string> }",
"{ \"@type\": \"type.googleapis.com/google.protobuf.Duration\", \"value\": \"1.212s\" }",
"AuditLogFileState tracks the last audit log event timestamp and ID that was collected by Compliance For internal use only",
"next available tag: 3",
"Foo foo = ...; Any any; any.PackFrom(foo); if (any.UnpackTo(&foo)) { }",
"Foo foo = ...; Any any = Any.pack(foo); if (any.is(Foo.class)) { foo = any.unpack(Foo.class); } // or if (any.isSameTypeAs(Foo.getDefaultInstance())) { foo = any.unpack(Foo.getDefaultInstance()); }",
"Example 3: Pack and unpack a message in Python.",
"foo = Foo(...) any = Any() any.Pack(foo) if any.Is(Foo.DESCRIPTOR): any.Unpack(foo)",
"Example 4: Pack and unpack a message in Go",
"foo := &pb.Foo{...} any, err := anypb.New(foo) if err != nil { } foo := &pb.Foo{} if err := any.UnmarshalTo(foo); err != nil { }",
"package google.profile; message Person { string first_name = 1; string last_name = 2; }",
"{ \"@type\": \"type.googleapis.com/google.profile.Person\", \"firstName\": <string>, \"lastName\": <string> }",
"{ \"@type\": \"type.googleapis.com/google.protobuf.Duration\", \"value\": \"1.212s\" }",
"Foo foo = ...; Any any; any.PackFrom(foo); if (any.UnpackTo(&foo)) { }",
"Foo foo = ...; Any any = Any.pack(foo); if (any.is(Foo.class)) { foo = any.unpack(Foo.class); } // or if (any.isSameTypeAs(Foo.getDefaultInstance())) { foo = any.unpack(Foo.getDefaultInstance()); }",
"Example 3: Pack and unpack a message in Python.",
"foo = Foo(...) any = Any() any.Pack(foo) if any.Is(Foo.DESCRIPTOR): any.Unpack(foo)",
"Example 4: Pack and unpack a message in Go",
"foo := &pb.Foo{...} any, err := anypb.New(foo) if err != nil { } foo := &pb.Foo{} if err := any.UnmarshalTo(foo); err != nil { }",
"package google.profile; message Person { string first_name = 1; string last_name = 2; }",
"{ \"@type\": \"type.googleapis.com/google.profile.Person\", \"firstName\": <string>, \"lastName\": <string> }",
"{ \"@type\": \"type.googleapis.com/google.protobuf.Duration\", \"value\": \"1.212s\" }",
"AuditLogFileState tracks the last audit log event timestamp and ID that was collected by Compliance For internal use only",
"next available tag: 3",
"Foo foo = ...; Any any; any.PackFrom(foo); if (any.UnpackTo(&foo)) { }",
"Foo foo = ...; Any any = Any.pack(foo); if (any.is(Foo.class)) { foo = any.unpack(Foo.class); } // or if (any.isSameTypeAs(Foo.getDefaultInstance())) { foo = any.unpack(Foo.getDefaultInstance()); }",
"Example 3: Pack and unpack a message in Python.",
"foo = Foo(...) any = Any() any.Pack(foo) if any.Is(Foo.DESCRIPTOR): any.Unpack(foo)",
"Example 4: Pack and unpack a message in Go",
"foo := &pb.Foo{...} any, err := anypb.New(foo) if err != nil { } foo := &pb.Foo{} if err := any.UnmarshalTo(foo); err != nil { }",
"package google.profile; message Person { string first_name = 1; string last_name = 2; }",
"{ \"@type\": \"type.googleapis.com/google.profile.Person\", \"firstName\": <string>, \"lastName\": <string> }",
"{ \"@type\": \"type.googleapis.com/google.protobuf.Duration\", \"value\": \"1.212s\" }",
"AuditLogFileState tracks the last audit log event timestamp and ID that was collected by Compliance For internal use only",
"next available tag: 3",
"Foo foo = ...; Any any; any.PackFrom(foo); if (any.UnpackTo(&foo)) { }",
"Foo foo = ...; Any any = Any.pack(foo); if (any.is(Foo.class)) { foo = any.unpack(Foo.class); } // or if (any.isSameTypeAs(Foo.getDefaultInstance())) { foo = any.unpack(Foo.getDefaultInstance()); }",
"Example 3: Pack and unpack a message in Python.",
"foo = Foo(...) any = Any() any.Pack(foo) if any.Is(Foo.DESCRIPTOR): any.Unpack(foo)",
"Example 4: Pack and unpack a message in Go",
"foo := &pb.Foo{...} any, err := anypb.New(foo) if err != nil { } foo := &pb.Foo{} if err := any.UnmarshalTo(foo); err != nil { }",
"package google.profile; message Person { string first_name = 1; string last_name = 2; }",
"{ \"@type\": \"type.googleapis.com/google.profile.Person\", \"firstName\": <string>, \"lastName\": <string> }",
"{ \"@type\": \"type.googleapis.com/google.protobuf.Duration\", \"value\": \"1.212s\" }",
"AuditLogFileState tracks the last audit log event timestamp and ID that was collected by Compliance For internal use only",
"next available tag: 3"
] | https://docs.redhat.com/en/documentation/red_hat_advanced_cluster_security_for_kubernetes/4.6/html/api_reference/clustersservice |
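A minimal sketch of calling the PostCluster endpoint documented above: it sends a StorageCluster object as JSON to POST /v1/clusters and expects a V1ClusterResponse in return. The Central hostname, the ROX_API_TOKEN variable, the bearer-token header, and the image and endpoint values are illustrative assumptions rather than values taken from this reference, and only a few of the documented StorageCluster fields are shown.
Example request
# Create a secured cluster record; all values below are placeholders.
curl -sk -X POST "https://central.example.com/v1/clusters" \
  -H "Authorization: Bearer ${ROX_API_TOKEN}" \
  -H "Content-Type: application/json" \
  -d '{
        "name": "production-east",
        "type": "OPENSHIFT4_CLUSTER",
        "mainImage": "quay.io/stackrox-io/main",
        "centralApiEndpoint": "central.example.com:443",
        "collectionMethod": "CORE_BPF",
        "admissionController": true
      }'
# A 200 response body is a V1ClusterResponse: the stored cluster plus its clusterRetentionInfo.
The enum values used for type and collectionMethod come straight from the StorageClusterType and StorageCollectionMethod tables above; any other value listed there is equally valid.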
Chapter 7. Using image streams with Kubernetes resources | Chapter 7. Using image streams with Kubernetes resources Image streams, being OpenShift Container Platform native resources, work out of the box with all the rest of native resources available in OpenShift Container Platform, such as builds or deployments. It is also possible to make them work with native Kubernetes resources, such as jobs, replication controllers, replica sets or Kubernetes deployments. 7.1. Enabling image streams with Kubernetes resources When using image streams with Kubernetes resources, you can only reference image streams that reside in the same project as the resource. The image stream reference must consist of a single segment value, for example ruby:2.5 , where ruby is the name of an image stream that has a tag named 2.5 and resides in the same project as the resource making the reference. Note This feature can not be used in the default namespace, nor in any openshift- or kube- namespace. There are two ways to enable image streams with Kubernetes resources: Enabling image stream resolution on a specific resource. This allows only this resource to use the image stream name in the image field. Enabling image stream resolution on an image stream. This allows all resources pointing to this image stream to use it in the image field. Procedure You can use oc set image-lookup to enable image stream resolution on a specific resource or image stream resolution on an image stream. To allow all resources to reference the image stream named mysql , enter the following command: USD oc set image-lookup mysql This sets the Imagestream.spec.lookupPolicy.local field to true. Imagestream with image lookup enabled apiVersion: image.openshift.io/v1 kind: ImageStream metadata: annotations: openshift.io/display-name: mysql name: mysql namespace: myproject spec: lookupPolicy: local: true When enabled, the behavior is enabled for all tags within the image stream. Then you can query the image streams and see if the option is set: USD oc set image-lookup imagestream --list You can enable image lookup on a specific resource. To allow the Kubernetes deployment named mysql to use image streams, run the following command: USD oc set image-lookup deploy/mysql This sets the alpha.image.policy.openshift.io/resolve-names annotation on the deployment. Deployment with image lookup enabled apiVersion: apps/v1 kind: Deployment metadata: name: mysql namespace: myproject spec: replicas: 1 template: metadata: annotations: alpha.image.policy.openshift.io/resolve-names: '*' spec: containers: - image: mysql:latest imagePullPolicy: Always name: mysql You can disable image lookup. To disable image lookup, pass --enabled=false : USD oc set image-lookup deploy/mysql --enabled=false | [
"oc set image-lookup mysql",
"apiVersion: image.openshift.io/v1 kind: ImageStream metadata: annotations: openshift.io/display-name: mysql name: mysql namespace: myproject spec: lookupPolicy: local: true",
"oc set image-lookup imagestream --list",
"oc set image-lookup deploy/mysql",
"apiVersion: apps/v1 kind: Deployment metadata: name: mysql namespace: myproject spec: replicas: 1 template: metadata: annotations: alpha.image.policy.openshift.io/resolve-names: '*' spec: containers: - image: mysql:latest imagePullPolicy: Always name: mysql",
"oc set image-lookup deploy/mysql --enabled=false"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.10/html/images/using-imagestreams-with-kube-resources |
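Image lookup is not limited to deployments. Once lookupPolicy.local is true on the mysql image stream, any Kubernetes resource in the same project can put the image stream name in its image field. The following sketch assumes the myproject namespace and the mysql image stream from the examples above; the Job itself is hypothetical and only illustrates the resolution behavior.
Example Job using the image stream
cat <<'EOF' | oc apply -f -
apiVersion: batch/v1
kind: Job
metadata:
  name: mysql-image-lookup-check
  namespace: myproject
spec:
  template:
    spec:
      containers:
      - name: mysql
        # Resolved against the mysql image stream because lookup is enabled
        image: mysql:latest
        command: ["mysqld", "--version"]
      restartPolicy: Never
EOF
Because the reference is a single segment value ( mysql:latest ), it is resolved to the image stream tag's full pull specification when the Job's pods are created.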
Chapter 10. VolumeAttachment [storage.k8s.io/v1] | Chapter 10. VolumeAttachment [storage.k8s.io/v1] Description VolumeAttachment captures the intent to attach or detach the specified volume to/from the specified node. VolumeAttachment objects are non-namespaced. Type object Required spec 10.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard object metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec object VolumeAttachmentSpec is the specification of a VolumeAttachment request. status object VolumeAttachmentStatus is the status of a VolumeAttachment request. 10.1.1. .spec Description VolumeAttachmentSpec is the specification of a VolumeAttachment request. Type object Required attacher source nodeName Property Type Description attacher string attacher indicates the name of the volume driver that MUST handle this request. This is the name returned by GetPluginName(). nodeName string nodeName represents the node that the volume should be attached to. source object VolumeAttachmentSource represents a volume that should be attached. Right now only PersistentVolumes can be attached via an external attacher; in the future we may also allow inline volumes in pods. Exactly one member can be set. 10.1.2. .spec.source Description VolumeAttachmentSource represents a volume that should be attached. Right now only PersistentVolumes can be attached via an external attacher; in the future we may also allow inline volumes in pods. Exactly one member can be set. Type object Property Type Description inlineVolumeSpec PersistentVolumeSpec inlineVolumeSpec contains all the information necessary to attach a persistent volume defined by a pod's inline VolumeSource. This field is populated only for the CSIMigration feature. It contains translated fields from a pod's inline VolumeSource to a PersistentVolumeSpec. This field is beta-level and is only honored by servers that enabled the CSIMigration feature. persistentVolumeName string persistentVolumeName represents the name of the persistent volume to attach. 10.1.3. .status Description VolumeAttachmentStatus is the status of a VolumeAttachment request. Type object Required attached Property Type Description attachError object VolumeError captures an error encountered during a volume operation. attached boolean attached indicates the volume is successfully attached. This field must only be set by the entity completing the attach operation, i.e. the external-attacher. attachmentMetadata object (string) attachmentMetadata is populated with any information returned by the attach operation, upon successful attach, that must be passed into subsequent WaitForAttach or Mount calls. This field must only be set by the entity completing the attach operation, i.e. the external-attacher. detachError object VolumeError captures an error encountered during a volume operation. 10.1.4.
.status.attachError Description VolumeError captures an error encountered during a volume operation. Type object Property Type Description message string message represents the error encountered during an Attach or Detach operation. This string may be logged, so it should not contain sensitive information. time Time time represents the time the error was encountered. 10.1.5. .status.detachError Description VolumeError captures an error encountered during a volume operation. Type object Property Type Description message string message represents the error encountered during an Attach or Detach operation. This string may be logged, so it should not contain sensitive information. time Time time represents the time the error was encountered. 10.2. API endpoints The following API endpoints are available: /apis/storage.k8s.io/v1/volumeattachments DELETE : delete collection of VolumeAttachment GET : list or watch objects of kind VolumeAttachment POST : create a VolumeAttachment /apis/storage.k8s.io/v1/watch/volumeattachments GET : watch individual changes to a list of VolumeAttachment. deprecated: use the 'watch' parameter with a list operation instead. /apis/storage.k8s.io/v1/volumeattachments/{name} DELETE : delete a VolumeAttachment GET : read the specified VolumeAttachment PATCH : partially update the specified VolumeAttachment PUT : replace the specified VolumeAttachment /apis/storage.k8s.io/v1/watch/volumeattachments/{name} GET : watch changes to an object of kind VolumeAttachment. deprecated: use the 'watch' parameter with a list operation instead, filtered to a single item with the 'fieldSelector' parameter. /apis/storage.k8s.io/v1/volumeattachments/{name}/status GET : read status of the specified VolumeAttachment PATCH : partially update status of the specified VolumeAttachment PUT : replace status of the specified VolumeAttachment 10.2.1. /apis/storage.k8s.io/v1/volumeattachments HTTP method DELETE Description delete collection of VolumeAttachment Table 10.1. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed Table 10.2. HTTP responses HTTP code Response body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list or watch objects of kind VolumeAttachment Table 10.3. HTTP responses HTTP code Response body 200 - OK VolumeAttachmentList schema 401 - Unauthorized Empty HTTP method POST Description create a VolumeAttachment Table 10.4. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered.
The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 10.5. Body parameters Parameter Type Description body VolumeAttachment schema Table 10.6. HTTP responses HTTP code Response body 200 - OK VolumeAttachment schema 201 - Created VolumeAttachment schema 202 - Accepted VolumeAttachment schema 401 - Unauthorized Empty 10.2.2. /apis/storage.k8s.io/v1/watch/volumeattachments HTTP method GET Description watch individual changes to a list of VolumeAttachment. deprecated: use the 'watch' parameter with a list operation instead. Table 10.7. HTTP responses HTTP code Response body 200 - OK WatchEvent schema 401 - Unauthorized Empty 10.2.3. /apis/storage.k8s.io/v1/volumeattachments/{name} Table 10.8. Global path parameters Parameter Type Description name string name of the VolumeAttachment HTTP method DELETE Description delete a VolumeAttachment Table 10.9. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed Table 10.10. HTTP responses HTTP code Response body 200 - OK VolumeAttachment schema 202 - Accepted VolumeAttachment schema 401 - Unauthorized Empty HTTP method GET Description read the specified VolumeAttachment Table 10.11. HTTP responses HTTP code Response body 200 - OK VolumeAttachment schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified VolumeAttachment Table 10.12. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 10.13. HTTP responses HTTP code Response body 200 - OK VolumeAttachment schema 201 - Created VolumeAttachment schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified VolumeAttachment Table 10.14. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted.
An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 10.15. Body parameters Parameter Type Description body VolumeAttachment schema Table 10.16. HTTP responses HTTP code Response body 200 - OK VolumeAttachment schema 201 - Created VolumeAttachment schema 401 - Unauthorized Empty 10.2.4. /apis/storage.k8s.io/v1/watch/volumeattachments/{name} Table 10.17. Global path parameters Parameter Type Description name string name of the VolumeAttachment HTTP method GET Description watch changes to an object of kind VolumeAttachment. deprecated: use the 'watch' parameter with a list operation instead, filtered to a single item with the 'fieldSelector' parameter. Table 10.18. HTTP responses HTTP code Response body 200 - OK WatchEvent schema 401 - Unauthorized Empty 10.2.5. /apis/storage.k8s.io/v1/volumeattachments/{name}/status Table 10.19. Global path parameters Parameter Type Description name string name of the VolumeAttachment HTTP method GET Description read status of the specified VolumeAttachment Table 10.20. HTTP responses HTTP code Response body 200 - OK VolumeAttachment schema 401 - Unauthorized Empty HTTP method PATCH Description partially update status of the specified VolumeAttachment Table 10.21. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present.
The error returned from the server will contain all unknown and duplicate fields encountered. Table 10.22. HTTP responses HTTP code Response body 200 - OK VolumeAttachment schema 201 - Created VolumeAttachment schema 401 - Unauthorized Empty HTTP method PUT Description replace status of the specified VolumeAttachment Table 10.23. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 10.24. Body parameters Parameter Type Description body VolumeAttachment schema Table 10.25. HTTP responses HTTP code Response body 200 - OK VolumeAttachment schema 201 - Created VolumeAttachment schema 401 - Unauthorized Empty | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.18/html/storage_apis/volumeattachment-storage-k8s-io-v1
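These endpoints are usually exercised indirectly by the CSI external-attacher, but they are easy to inspect from the CLI. A brief sketch, assuming cluster-admin access; csi-123 is a placeholder attachment name.
Example commands
# VolumeAttachment is cluster-scoped, so no namespace is required.
oc get volumeattachments
# Show the attacher, the target node, and whether the attach succeeded.
oc get volumeattachment csi-123 -o jsonpath='{.spec.attacher}{"\n"}{.spec.nodeName}{"\n"}{.status.attached}{"\n"}'
The three JSONPath fields map directly to the spec.attacher, spec.nodeName, and status.attached properties described in the tables above.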
Chapter 8. Conclusion | Chapter 8. Conclusion In the sections of this guide, we walked through the primary workflows necessary to get started using automation services catalog. Following these workflows, you have: Created the necessary groups and users to use both Catalog and Approval through User Access, Connected to a source platform, Created portfolios, Added products from the source platform into the portfolio, Created and set approval processes for the portfolio, Shared the portfolio with users, Approved or denied orders created by users, and commented on orders where necessary. | null | https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.3/html/getting_started_with_automation_services_catalog/conclusion
Providing feedback on Red Hat documentation | Providing feedback on Red Hat documentation We appreciate your input on our documentation. Please let us know how we can improve it. To give feedback, create a Bugzilla ticket: Go to the Bugzilla website. In the Component section, choose documentation . Fill in the Description field with your suggestion for improvement. Include a link to the relevant parts of the documentation. Click Submit Bug . | null | https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.16/html/deploying_openshift_data_foundation_using_red_hat_openshift_service_on_aws_with_hosted_control_planes/providing-feedback-on-red-hat-documentation_rosa
Chapter 7. Installation configuration parameters for the Agent-based Installer | Chapter 7. Installation configuration parameters for the Agent-based Installer Before you deploy an OpenShift Container Platform cluster using the Agent-based Installer, you provide parameters to customize your cluster and the platform that hosts it. When you create the install-config.yaml and agent-config.yaml files, you must provide values for the required parameters, and you can use the optional parameters to customize your cluster further. 7.1. Available installation configuration parameters The following tables specify the required and optional installation configuration parameters that you can set as part of the Agent-based installation process. These values are specified in the install-config.yaml file. Note These settings are used for installation only, and cannot be modified after installation. 7.1.1. Required configuration parameters Required installation configuration parameters are described in the following table: Table 7.1. Required parameters Parameter Description Values The API version for the install-config.yaml content. The current version is v1 . The installation program may also support older API versions. String The base domain of your cloud provider. The base domain is used to create routes to your OpenShift Container Platform cluster components. The full DNS name for your cluster is a combination of the baseDomain and metadata.name parameter values that uses the <metadata.name>.<baseDomain> format. A fully-qualified domain or subdomain name, such as example.com . Kubernetes resource ObjectMeta , from which only the name parameter is consumed. Object The name of the cluster. DNS records for the cluster are all subdomains of {{.metadata.name}}.{{.baseDomain}} . When you do not provide metadata.name through either the install-config.yaml or agent-config.yaml files, for example when you use only ZTP manifests, the cluster name is set to agent-cluster . String of lowercase letters, hyphens ( - ), and periods ( . ), such as dev . The configuration for the specific platform upon which to perform the installation: baremetal , external , none , or vsphere . Object Get a pull secret from Red Hat OpenShift Cluster Manager to authenticate downloading container images for OpenShift Container Platform components from services such as Quay.io. { "auths":{ "cloud.openshift.com":{ "auth":"b3Blb=", "email":"[email protected]" }, "quay.io":{ "auth":"b3Blb=", "email":"[email protected]" } } } 7.1.2. Network configuration parameters You can customize your installation configuration based on the requirements of your existing network infrastructure. For example, you can expand the IP address block for the cluster network or provide different IP address blocks than the defaults. If you use the Red Hat OpenShift Networking OVN-Kubernetes network plugin, both IPv4 and IPv6 address families are supported. If you use the Red Hat OpenShift Networking OpenShift SDN network plugin, only the IPv4 address family is supported. If you configure your cluster to use both IP address families, review the following requirements: Both IP families must use the same network interface for the default gateway. Both IP families must have the default gateway. You must specify IPv4 and IPv6 addresses in the same order for all network configuration parameters. For example, in the following configuration IPv4 addresses are listed before IPv6 addresses. 
networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 - cidr: fd00:10:128::/56 hostPrefix: 64 serviceNetwork: - 172.30.0.0/16 - fd00:172:16::/112 Note Globalnet is not supported with Red Hat OpenShift Data Foundation disaster recovery solutions. For regional disaster recovery scenarios, ensure that you use a nonoverlapping range of private IP addresses for the cluster and service networks in each cluster. Table 7.2. Network parameters Parameter Description Values The configuration for the cluster network. Object Note You cannot modify parameters specified by the networking object after installation. The Red Hat OpenShift Networking network plugin to install. Either OpenShiftSDN or OVNKubernetes . OpenShiftSDN is a CNI plugin for all-Linux networks. OVNKubernetes is a CNI plugin for Linux networks and hybrid networks that contain both Linux and Windows servers. The default value is OVNKubernetes . The IP address blocks for pods. The default value is 10.128.0.0/14 with a host prefix of /23 . If you specify multiple IP address blocks, the blocks must not overlap. An array of objects. For example: networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 - cidr: fd01::/48 hostPrefix: 64 Required if you use networking.clusterNetwork . An IP address block. If you use the OpenShift SDN network plugin, specify an IPv4 network. If you use the OVN-Kubernetes network plugin, you can specify IPv4 and IPv6 networks. An IP address block in Classless Inter-Domain Routing (CIDR) notation. The prefix length for an IPv4 block is between 0 and 32 . The prefix length for an IPv6 block is between 0 and 128 . For example, 10.128.0.0/14 or fd01::/48 . The subnet prefix length to assign to each individual node. For example, if hostPrefix is set to 23 then each node is assigned a /23 subnet out of the given cidr . A hostPrefix value of 23 provides 510 (2^(32 - 23) - 2) pod IP addresses. A subnet prefix. For an IPv4 network the default value is 23 . For an IPv6 network the default value is 64 . The default value is also the minimum value for IPv6. The IP address block for services. The default value is 172.30.0.0/16 . The OpenShift SDN and OVN-Kubernetes network plugins support only a single IP address block for the service network. If you use the OVN-Kubernetes network plugin, you can specify an IP address block for both of the IPv4 and IPv6 address families. An array with an IP address block in CIDR format. For example: networking: serviceNetwork: - 172.30.0.0/16 - fd02::/112 The IP address blocks for machines. If you specify multiple IP address blocks, the blocks must not overlap. An array of objects. For example: networking: machineNetwork: - cidr: 10.0.0.0/16 Required if you use networking.machineNetwork . An IP address block. The default value is 10.0.0.0/16 for all platforms other than libvirt and IBM Power(R) Virtual Server. For libvirt, the default value is 192.168.126.0/24 . For IBM Power(R) Virtual Server, the default value is 192.168.0.0/24 . An IP network block in CIDR notation. For example, 10.0.0.0/16 or fd00::/48 . Note Set the networking.machineNetwork to match the CIDR that the preferred NIC resides in. 7.1.3. Optional configuration parameters Optional installation configuration parameters are described in the following table: Table 7.3. Optional parameters Parameter Description Values A PEM-encoded X.509 certificate bundle that is added to the nodes' trusted certificate store. This trust bundle may also be used when a proxy has been configured. 
String Controls the installation of optional core cluster components. You can reduce the footprint of your OpenShift Container Platform cluster by disabling optional components. For more information, see the "Cluster capabilities" page in Installing . String array Selects an initial set of optional capabilities to enable. Valid values are None , v4.11 , v4.12 and vCurrent . The default value is vCurrent . String Extends the set of optional capabilities beyond what you specify in baselineCapabilitySet . You may specify multiple capabilities in this parameter. String array Enables workload partitioning, which isolates OpenShift Container Platform services, cluster management workloads, and infrastructure pods to run on a reserved set of CPUs. Workload partitioning can only be enabled during installation and cannot be disabled after installation. While this field enables workload partitioning, it does not configure workloads to use specific CPUs. For more information, see the Workload partitioning page in the Scalability and Performance section. None or AllNodes . None is the default value. The configuration for the machines that comprise the compute nodes. Array of MachinePool objects. Determines the instruction set architecture of the machines in the pool. Currently, clusters with varied architectures are not supported. All pools must specify the same architecture. Valid values are amd64 and arm64 . String Whether to enable or disable simultaneous multithreading, or hyperthreading , on compute machines. By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. Important If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. Enabled or Disabled Required if you use compute . The name of the machine pool. worker Required if you use compute . Use this parameter to specify the cloud provider to host the worker machines. This parameter value must match the controlPlane.platform parameter value. baremetal , vsphere , or {} The number of compute machines, which are also known as worker machines, to provision. A positive integer greater than or equal to 2 . The default value is 3 . Enables the cluster for a feature set. A feature set is a collection of OpenShift Container Platform features that are not enabled by default. For more information about enabling a feature set during installation, see "Enabling features using feature gates". String. The name of the feature set to enable, such as TechPreviewNoUpgrade . The configuration for the machines that comprise the control plane. Array of MachinePool objects. Determines the instruction set architecture of the machines in the pool. Currently, clusters with varied architectures are not supported. All pools must specify the same architecture. Valid values are amd64 and arm64 . String Whether to enable or disable simultaneous multithreading, or hyperthreading , on control plane machines. By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. Important If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. Enabled or Disabled Required if you use controlPlane . The name of the machine pool. master Required if you use controlPlane . Use this parameter to specify the cloud provider that hosts the control plane machines. This parameter value must match the compute.platform parameter value. 
baremetal , vsphere , or {} The number of control plane machines to provision. Supported values are 3 , or 1 when deploying single-node OpenShift. The Cloud Credential Operator (CCO) mode. If no mode is specified, the CCO dynamically tries to determine the capabilities of the provided credentials, with a preference for mint mode on the platforms where multiple modes are supported. Mint , Passthrough , Manual or an empty string ( "" ). [1] Enable or disable FIPS mode. The default is false (disabled). If FIPS mode is enabled, the Red Hat Enterprise Linux CoreOS (RHCOS) machines that OpenShift Container Platform runs on bypass the default Kubernetes cryptography suite and use the cryptography modules that are provided with RHCOS instead. Important To enable FIPS mode for your cluster, you must run the installation program from a Red Hat Enterprise Linux (RHEL) computer configured to operate in FIPS mode. For more information about configuring FIPS mode on RHEL, see Installing the system in FIPS mode . When running Red Hat Enterprise Linux (RHEL) or Red Hat Enterprise Linux CoreOS (RHCOS) booted in FIPS mode, OpenShift Container Platform core components use the RHEL cryptographic libraries that have been submitted to NIST for FIPS 140-2/140-3 Validation on only the x86_64, ppc64le, and s390x architectures. Note If you are using Azure File storage, you cannot enable FIPS mode. false or true Sources and repositories for the release-image content. Array of objects. Includes a source and, optionally, mirrors , as described in the following rows of this table. Required if you use imageContentSources . Specify the repository that users refer to, for example, in image pull specifications. String Specify one or more repositories that may also contain the same images. Array of strings How to publish or expose the user-facing endpoints of your cluster, such as the Kubernetes API, OpenShift routes. Internal or External . The default value is External . Setting this field to Internal is not supported on non-cloud platforms. Important If the value of the field is set to Internal , the cluster will become non-functional. For more information, refer to BZ#1953035 . The SSH key to authenticate access to your cluster machines. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. For example, sshKey: ssh-ed25519 AAAA.. . Not all CCO modes are supported for all cloud providers. For more information about CCO modes, see the "Managing cloud provider credentials" entry in the Authentication and authorization content. 7.2. Available Agent configuration parameters The following tables specify the required and optional Agent configuration parameters that you can set as part of the Agent-based installation process. These values are specified in the agent-config.yaml file. Note These settings are used for installation only, and cannot be modified after installation. 7.2.1. Required configuration parameters Required Agent configuration parameters are described in the following table: Table 7.4. Required parameters Parameter Description Values The API version for the agent-config.yaml content. The current version is v1beta1 . The installation program might also support older API versions. String Kubernetes resource ObjectMeta , from which only the name parameter is consumed. Object The name of the cluster. DNS records for the cluster are all subdomains of {{.metadata.name}}.{{.baseDomain}} . 
The value entered in the agent-config.yaml file is ignored, and instead the value specified in the install-config.yaml file is used. When you do not provide metadata.name through either the install-config.yaml or agent-config.yaml files, for example when you use only ZTP manifests, the cluster name is set to agent-cluster . String of lowercase letters and hyphens ( - ), such as dev . 7.2.2. Optional configuration parameters Optional Agent configuration parameters are described in the following table: Table 7.5. Optional parameters Parameter Description Values The IP address of the node that performs the bootstrapping process as well as running the assisted-service component. You must provide the rendezvous IP address when you do not specify at least one host's IP address in the networkConfig parameter. If this address is not provided, one IP address is selected from the provided hosts' networkConfig . IPv4 or IPv6 address. The URL of the server to upload Preboot Execution Environment (PXE) assets to when using the Agent-based Installer to generate an iPXE script. For more information, see "Preparing PXE assets for OpenShift Container Platform". String. A list of Network Time Protocol (NTP) sources to be added to all cluster hosts, which are added to any NTP sources that are configured through other means. List of hostnames or IP addresses. Host configuration. An optional list of hosts. The number of hosts defined must not exceed the total number of hosts defined in the install-config.yaml file, which is the sum of the values of the compute.replicas and controlPlane.replicas parameters. An array of host configuration objects. Hostname. Overrides the hostname obtained from either the Dynamic Host Configuration Protocol (DHCP) or a reverse DNS lookup. Each host must have a unique hostname supplied by one of these methods, although configuring a hostname through this parameter is optional. String. Provides a table of the name and MAC address mappings for the interfaces on the host. If a NetworkConfig section is provided in the agent-config.yaml file, this table must be included and the values must match the mappings provided in the NetworkConfig section. An array of host configuration objects. The name of an interface on the host. String. The MAC address of an interface on the host. A MAC address such as the following example: 00-B0-D0-63-C2-26 . Defines whether the host is a master or worker node. If no role is defined in the agent-config.yaml file, roles will be assigned at random during cluster installation. master or worker . Enables provisioning of the Red Hat Enterprise Linux CoreOS (RHCOS) image to a particular device. The installation program examines the devices in the order it discovers them, and compares the discovered values with the hint values. It uses the first discovered device that matches the hint value. This is the device that the operating system is written on during installation. A dictionary of key-value pairs. For more information, see "Root device hints" in the "Setting up the environment for an OpenShift installation" page. The name of the device the RHCOS image is provisioned to. String. The host network definition. The configuration must match the Host Network Management API defined in the nmstate documentation . A dictionary of host network configuration objects. Additional resources Preparing PXE assets for OpenShift Container Platform Root device hints | [
"apiVersion:",
"baseDomain:",
"metadata:",
"metadata: name:",
"platform:",
"pullSecret:",
"{ \"auths\":{ \"cloud.openshift.com\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" }, \"quay.io\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" } } }",
"networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 - cidr: fd00:10:128::/56 hostPrefix: 64 serviceNetwork: - 172.30.0.0/16 - fd00:172:16::/112",
"networking:",
"networking: networkType:",
"networking: clusterNetwork:",
"networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 - cidr: fd01::/48 hostPrefix: 64",
"networking: clusterNetwork: cidr:",
"networking: clusterNetwork: hostPrefix:",
"networking: serviceNetwork:",
"networking: serviceNetwork: - 172.30.0.0/16 - fd02::/112",
"networking: machineNetwork:",
"networking: machineNetwork: - cidr: 10.0.0.0/16",
"networking: machineNetwork: cidr:",
"additionalTrustBundle:",
"capabilities:",
"capabilities: baselineCapabilitySet:",
"capabilities: additionalEnabledCapabilities:",
"cpuPartitioningMode:",
"compute:",
"compute: architecture:",
"compute: hyperthreading:",
"compute: name:",
"compute: platform:",
"compute: replicas:",
"featureSet:",
"controlPlane:",
"controlPlane: architecture:",
"controlPlane: hyperthreading:",
"controlPlane: name:",
"controlPlane: platform:",
"controlPlane: replicas:",
"credentialsMode:",
"fips:",
"imageContentSources:",
"imageContentSources: source:",
"imageContentSources: mirrors:",
"publish:",
"sshKey:",
"apiVersion:",
"metadata:",
"metadata: name:",
"rendezvousIP:",
"bootArtifactsBaseURL:",
"additionalNTPSources:",
"hosts:",
"hosts: hostname:",
"hosts: interfaces:",
"hosts: interfaces: name:",
"hosts: interfaces: macAddress:",
"hosts: role:",
"hosts: rootDeviceHints:",
"hosts: rootDeviceHints: deviceName:",
"hosts: networkConfig:"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.14/html/installing_an_on-premise_cluster_with_the_agent-based_installer/installation-config-parameters-agent |
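For reference, the required and optional Agent parameters described above can be combined into a single agent-config.yaml file. The following is a minimal sketch only: the kind value, cluster name, rendezvous IP address, NTP source, hostname, role, interface name, MAC address, and device name are illustrative assumptions rather than values taken from the tables above. apiVersion: v1beta1 kind: AgentConfig # assumed resource kind for the agent-config.yaml content metadata: name: example-cluster # the install-config.yaml metadata.name value takes precedence rendezvousIP: 192.168.111.80 # placeholder address of the node that runs the bootstrapping process additionalNTPSources: - 0.rhel.pool.ntp.org # placeholder NTP source hosts: - hostname: master-0 # placeholder hostname role: master rootDeviceHints: deviceName: /dev/sda # placeholder device for the RHCOS image interfaces: - name: eno1 # placeholder interface name macAddress: 00:B0:D0:63:C2:26 # placeholder MAC address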
Chapter 9. OpenStack Cloud Controller Manager reference guide | Chapter 9. OpenStack Cloud Controller Manager reference guide 9.1. The OpenStack Cloud Controller Manager Beginning with OpenShift Container Platform 4.12, clusters that run on Red Hat OpenStack Platform (RHOSP) were switched from the legacy OpenStack cloud provider to the external OpenStack Cloud Controller Manager (CCM). This change follows the move in Kubernetes from in-tree, legacy cloud providers to external cloud providers that are implemented by using the Cloud Controller Manager . To preserve user-defined configurations for the legacy cloud provider, existing configurations are mapped to new ones as part of the migration process. The migration process searches for a configuration called cloud-provider-config in the openshift-config namespace. Note The config map name cloud-provider-config is not statically configured. It is derived from the spec.cloudConfig.name value in the infrastructure/cluster CRD. Found configurations are synchronized to the cloud-conf config map in the openshift-cloud-controller-manager namespace. As part of this synchronization, the OpenStack CCM Operator alters the new config map such that its properties are compatible with the external cloud provider. The file is changed in the following ways: The [Global] secret-name , [Global] secret-namespace , and [Global] kubeconfig-path options are removed. They do not apply to the external cloud provider. The [Global] use-clouds , [Global] clouds-file , and [Global] cloud options are added. The entire [BlockStorage] section is removed. External cloud providers no longer perform storage operations. Block storage configuration is managed by the Cinder CSI driver. Additionally, the CCM Operator enforces a number of default options. Values for these options are always overridden as follows: [Global] use-clouds = true clouds-file = /etc/openstack/secret/clouds.yaml cloud = openstack ... [LoadBalancer] enabled = true 1 1 If the network is configured to use Kuryr, this value is false . The clouds-file value, /etc/openstack/secret/clouds.yaml , is mapped to the openstack-cloud-credentials config in the openshift-cloud-controller-manager namespace. You can modify the RHOSP cloud in this file as you do any other clouds.yaml file. 9.2. The OpenStack Cloud Controller Manager (CCM) config map An OpenStack CCM config map defines how your cluster interacts with your RHOSP cloud. By default, this configuration is stored under the cloud.conf key in the cloud-conf config map in the openshift-cloud-controller-manager namespace. Important The cloud-conf config map is generated from the cloud-provider-config config map in the openshift-config namespace. To change the settings that are described by the cloud-conf config map, modify the cloud-provider-config config map. As part of this synchronization, the CCM Operator overrides some options. For more information, see "The RHOSP Cloud Controller Manager". For example: An example cloud-conf config map apiVersion: v1 data: cloud.conf: | [Global] 1 secret-name = openstack-credentials secret-namespace = kube-system region = regionOne [LoadBalancer] enabled = True kind: ConfigMap metadata: creationTimestamp: "2022-12-20T17:01:08Z" name: cloud-conf namespace: openshift-cloud-controller-manager resourceVersion: "2519" uid: cbbeedaf-41ed-41c2-9f37-4885732d3677 1 Set global options by using a clouds.yaml file rather than modifying the config map. The following options are present in the config map.
Except when indicated otherwise, they are mandatory for clusters that run on RHOSP. 9.2.1. Load balancer options CCM supports several load balancer options for deployments that use Octavia. Note Neutron-LBaaS support is deprecated. Option Description enabled Whether or not to enable the LoadBalancer type of services integration. The default value is true . floating-network-id Optional. The external network used to create floating IP addresses for load balancer virtual IP addresses (VIPs). If there are multiple external networks in the cloud, this option must be set or the user must specify loadbalancer.openstack.org/floating-network-id in the service annotation. floating-subnet-id Optional. The external network subnet used to create floating IP addresses for the load balancer VIP. Can be overridden by the service annotation loadbalancer.openstack.org/floating-subnet-id . floating-subnet Optional. A name pattern (glob or regular expression if starting with ~ ) for the external network subnet used to create floating IP addresses for the load balancer VIP. Can be overridden by the service annotation loadbalancer.openstack.org/floating-subnet . If multiple subnets match the pattern, the first one with available IP addresses is used. floating-subnet-tags Optional. Tags for the external network subnet used to create floating IP addresses for the load balancer VIP. Can be overridden by the service annotation loadbalancer.openstack.org/floating-subnet-tags . If multiple subnets match these tags, the first one with available IP addresses is used. If the RHOSP network is configured with sharing disabled, for example, with the --no-share flag used during creation, this option is unsupported. Set the network to share to use this option. lb-method The load balancing algorithm used to create the load balancer pool. For the Amphora provider the value can be ROUND_ROBIN , LEAST_CONNECTIONS , or SOURCE_IP . The default value is ROUND_ROBIN . For the OVN provider, only the SOURCE_IP_PORT algorithm is supported. For the Amphora provider, if using the LEAST_CONNECTIONS or SOURCE_IP methods, configure the create-monitor option as true in the cloud-provider-config config map on the openshift-config namespace and ETP:Local on the load-balancer type service to allow balancing algorithm enforcement in the client to service endpoint connections. lb-provider Optional. Used to specify the provider of the load balancer, for example, amphora or octavia . Only the Amphora and Octavia providers are supported. lb-version Optional. The load balancer API version. Only "v2" is supported. subnet-id The ID of the Networking service subnet on which load balancer VIPs are created. For dual stack deployments, leave this option unset. The OpenStack cloud provider automatically selects which subnet to use for a load balancer. network-id The ID of the Networking service network on which load balancer VIPs are created. Unnecessary if subnet-id is set. If this property is not set, the network is automatically selected based on the network that cluster nodes use. create-monitor Whether or not to create a health monitor for the service load balancer. A health monitor is required for services that declare externalTrafficPolicy: Local . The default value is false . This option is unsupported if you use RHOSP earlier than version 17 with the ovn provider. monitor-delay The interval in seconds by which probes are sent to members of the load balancer. The default value is 5 . 
monitor-max-retries The number of successful checks that are required to change the operating status of a load balancer member to ONLINE . The valid range is 1 to 10 , and the default value is 1 . monitor-timeout The time in seconds that a monitor waits to connect to the back end before it times out. The default value is 3 . internal-lb Whether or not to create an internal load balancer without floating IP addresses. The default value is false . LoadBalancerClass "ClassName" This is a config section that comprises a set of options: floating-network-id floating-subnet-id floating-subnet floating-subnet-tags network-id subnet-id The behavior of these options is the same as that of the identically named options in the load balancer section of the CCM config file. You can set the ClassName value by specifying the service annotation loadbalancer.openstack.org/class . max-shared-lb The maximum number of services that can share a load balancer. The default value is 2 . 9.2.2. Options that the Operator overrides The CCM Operator overrides the following options, which you might recognize from configuring RHOSP. Do not configure them yourself. They are included in this document for informational purposes only. Option Description auth-url The RHOSP Identity service URL. For example, http://128.110.154.166/identity . os-endpoint-type The type of endpoint to use from the service catalog. username The Identity service user name. password The Identity service user password. domain-id The Identity service user domain ID. domain-name The Identity service user domain name. tenant-id The Identity service project ID. Leave this option unset if you are using Identity service application credentials. In version 3 of the Identity API, which changed the identifier tenant to project , the value of tenant-id is automatically mapped to the project construct in the API. tenant-name The Identity service project name. tenant-domain-id The Identity service project domain ID. tenant-domain-name The Identity service project domain name. user-domain-id The Identity service user domain ID. user-domain-name The Identity service user domain name. use-clouds Whether or not to fetch authorization credentials from a clouds.yaml file. Options set in this section are prioritized over values read from the clouds.yaml file. CCM searches for the file in the following places: The value of the clouds-file option. A file path stored in the environment variable OS_CLIENT_CONFIG_FILE . The directory pkg/openstack . The directory ~/.config/openstack . The directory /etc/openstack . clouds-file The file path of a clouds.yaml file. It is used if the use-clouds option is set to true . cloud The named cloud in the clouds.yaml file that you want to use. It is used if the use-clouds option is set to true . | [
"[Global] use-clouds = true clouds-file = /etc/openstack/secret/clouds.yaml cloud = openstack [LoadBalancer] enabled = true 1",
"apiVersion: v1 data: cloud.conf: | [Global] 1 secret-name = openstack-credentials secret-namespace = kube-system region = regionOne [LoadBalancer] enabled = True kind: ConfigMap metadata: creationTimestamp: \"2022-12-20T17:01:08Z\" name: cloud-conf namespace: openshift-cloud-controller-manager resourceVersion: \"2519\" uid: cbbeedaf-41ed-41c2-9f37-4885732d3677"
] | https://docs.redhat.com/en/documentation/openshift_container_platform_installation/4.14/html/installing_on_openstack/installing-openstack-cloud-config-reference |
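As an example of the per-service annotations listed above, a LoadBalancer-type service can pin its floating IP address to a specific external network. The following manifest is a sketch only: the service name, selector, ports, and network UUID are placeholder values, and the annotation shown is the loadbalancer.openstack.org/floating-network-id annotation described in the load balancer options table. apiVersion: v1 kind: Service metadata: name: example-lb # placeholder service name annotations: loadbalancer.openstack.org/floating-network-id: "9be23551-38e2-4d27-b5ea-ea2ea1321bd6" # placeholder external network UUID spec: type: LoadBalancer selector: app: example # placeholder selector ports: - port: 80 targetPort: 8080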
Chapter 33. MetadataService | Chapter 33. MetadataService 33.1. GetDatabaseBackupStatus GET /v1/backup/status 33.1.1. Description 33.1.2. Parameters 33.1.3. Return Type V1DatabaseBackupStatus 33.1.4. Content Type application/json 33.1.5. Responses Table 33.1. HTTP Response Codes Code Message Datatype 200 A successful response. V1DatabaseBackupStatus 0 An unexpected error response. RuntimeError 33.1.6. Samples 33.1.7. Common object reference 33.1.7.1. ProtobufAny Any contains an arbitrary serialized protocol buffer message along with a URL that describes the type of the serialized message. Protobuf library provides support to pack/unpack Any values in the form of utility functions or additional generated methods of the Any type. Example 1: Pack and unpack a message in C++. Example 2: Pack and unpack a message in Java. The pack methods provided by protobuf library will by default use 'type.googleapis.com/full.type.name' as the type URL and the unpack methods only use the fully qualified type name after the last '/' in the type URL, for example "foo.bar.com/x/y.z" will yield type name "y.z". 33.1.7.1.1. JSON representation The JSON representation of an Any value uses the regular representation of the deserialized, embedded message, with an additional field @type which contains the type URL. Example: If the embedded message type is well-known and has a custom JSON representation, that representation will be embedded adding a field value which holds the custom JSON in addition to the @type field. Example (for message [google.protobuf.Duration][]): Field Name Required Nullable Type Description Format typeUrl String A URL/resource name that uniquely identifies the type of the serialized protocol buffer message. This string must contain at least one \"/\" character. The last segment of the URL's path must represent the fully qualified name of the type (as in path/google.protobuf.Duration ). The name should be in a canonical form (e.g., leading \".\" is not accepted). In practice, teams usually precompile into the binary all types that they expect it to use in the context of Any. However, for URLs which use the scheme http , https , or no scheme, one can optionally set up a type server that maps type URLs to message definitions as follows: * If no scheme is provided, https is assumed. * An HTTP GET on the URL must yield a [google.protobuf.Type][] value in binary format, or produce an error. * Applications are allowed to cache lookup results based on the URL, or have them precompiled into a binary to avoid any lookup. Therefore, binary compatibility needs to be preserved on changes to types. (Use versioned type names to manage breaking changes.) Note: this functionality is not currently available in the official protobuf release, and it is not used for type URLs beginning with type.googleapis.com. As of May 2023, there are no widely used type server implementations and no plans to implement one. Schemes other than http , https (or the empty scheme) might be used with implementation specific semantics. value byte[] Must be a valid serialized protocol buffer of the above specified type. byte 33.1.7.2. RuntimeError Field Name Required Nullable Type Description Format error String code Integer int32 message String details List of ProtobufAny 33.1.7.3. StorageBackupInfo Field Name Required Nullable Type Description Format backupLastRunAt Date date-time status StorageOperationStatus FAIL, PASS, requestor StorageSlimUser 33.1.7.4. StorageOperationStatus Enum Values FAIL PASS 33.1.7.5. 
StorageSlimUser Field Name Required Nullable Type Description Format id String name String 33.1.7.6. V1DatabaseBackupStatus Field Name Required Nullable Type Description Format backupInfo StorageBackupInfo 33.2. GetCentralCapabilities GET /v1/central-capabilities 33.2.1. Description 33.2.2. Parameters 33.2.3. Return Type V1CentralServicesCapabilities 33.2.4. Content Type application/json 33.2.5. Responses Table 33.2. HTTP Response Codes Code Message Datatype 200 A successful response. V1CentralServicesCapabilities 0 An unexpected error response. RuntimeError 33.2.6. Samples 33.2.7. Common object reference 33.2.7.1. CentralServicesCapabilitiesCapabilityStatus CapabilityAvailable: CapabilityAvailable means that UI and APIs should be available for users to use. This does not automatically mean that the functionality is 100% available and any calls to APIs will result in successful execution. Rather it means that users should be allowed to leverage the functionality as opposed to CapabilityDisabled when functionality should be blocked. CapabilityDisabled: CapabilityDisabled means the corresponding UI should be disabled and attempts to use related APIs should lead to errors. Enum Values CapabilityAvailable CapabilityDisabled 33.2.7.2. ProtobufAny Any contains an arbitrary serialized protocol buffer message along with a URL that describes the type of the serialized message. Protobuf library provides support to pack/unpack Any values in the form of utility functions or additional generated methods of the Any type. Example 1: Pack and unpack a message in C++. Example 2: Pack and unpack a message in Java. The pack methods provided by protobuf library will by default use 'type.googleapis.com/full.type.name' as the type URL and the unpack methods only use the fully qualified type name after the last '/' in the type URL, for example "foo.bar.com/x/y.z" will yield type name "y.z". 33.2.7.2.1. JSON representation The JSON representation of an Any value uses the regular representation of the deserialized, embedded message, with an additional field @type which contains the type URL. Example: If the embedded message type is well-known and has a custom JSON representation, that representation will be embedded adding a field value which holds the custom JSON in addition to the @type field. Example (for message [google.protobuf.Duration][]): Field Name Required Nullable Type Description Format typeUrl String A URL/resource name that uniquely identifies the type of the serialized protocol buffer message. This string must contain at least one \"/\" character. The last segment of the URL's path must represent the fully qualified name of the type (as in path/google.protobuf.Duration ). The name should be in a canonical form (e.g., leading \".\" is not accepted). In practice, teams usually precompile into the binary all types that they expect it to use in the context of Any. However, for URLs which use the scheme http , https , or no scheme, one can optionally set up a type server that maps type URLs to message definitions as follows: * If no scheme is provided, https is assumed. * An HTTP GET on the URL must yield a [google.protobuf.Type][] value in binary format, or produce an error. * Applications are allowed to cache lookup results based on the URL, or have them precompiled into a binary to avoid any lookup. Therefore, binary compatibility needs to be preserved on changes to types. (Use versioned type names to manage breaking changes.) 
Note: this functionality is not currently available in the official protobuf release, and it is not used for type URLs beginning with type.googleapis.com. As of May 2023, there are no widely used type server implementations and no plans to implement one. Schemes other than http , https (or the empty scheme) might be used with implementation specific semantics. value byte[] Must be a valid serialized protocol buffer of the above specified type. byte 33.2.7.3. RuntimeError Field Name Required Nullable Type Description Format error String code Integer int32 message String details List of ProtobufAny 33.2.7.4. V1CentralServicesCapabilities Provides availability of certain functionality of Central Services in the current configuration. The initial intended use is to disable certain functionality that does not make sense in the Cloud Service context. Field Name Required Nullable Type Description Format centralScanningCanUseContainerIamRoleForEcr CentralServicesCapabilitiesCapabilityStatus CapabilityAvailable, CapabilityDisabled, centralCanUseCloudBackupIntegrations CentralServicesCapabilitiesCapabilityStatus CapabilityAvailable, CapabilityDisabled, centralCanDisplayDeclarativeConfigHealth CentralServicesCapabilitiesCapabilityStatus CapabilityAvailable, CapabilityDisabled, centralCanUpdateCert CentralServicesCapabilitiesCapabilityStatus CapabilityAvailable, CapabilityDisabled, centralCanUseAcscsEmailIntegration CentralServicesCapabilitiesCapabilityStatus CapabilityAvailable, CapabilityDisabled, 33.3. GetDatabaseStatus GET /v1/database/status 33.3.1. Description 33.3.2. Parameters 33.3.3. Return Type V1DatabaseStatus 33.3.4. Content Type application/json 33.3.5. Responses Table 33.3. HTTP Response Codes Code Message Datatype 200 A successful response. V1DatabaseStatus 0 An unexpected error response. RuntimeError 33.3.6. Samples 33.3.7. Common object reference 33.3.7.1. DatabaseStatusDatabaseType Enum Values Hidden RocksDB PostgresDB 33.3.7.2. ProtobufAny Any contains an arbitrary serialized protocol buffer message along with a URL that describes the type of the serialized message. Protobuf library provides support to pack/unpack Any values in the form of utility functions or additional generated methods of the Any type. Example 1: Pack and unpack a message in C++. Example 2: Pack and unpack a message in Java. The pack methods provided by protobuf library will by default use 'type.googleapis.com/full.type.name' as the type URL and the unpack methods only use the fully qualified type name after the last '/' in the type URL, for example "foo.bar.com/x/y.z" will yield type name "y.z". 33.3.7.2.1. JSON representation The JSON representation of an Any value uses the regular representation of the deserialized, embedded message, with an additional field @type which contains the type URL. Example: If the embedded message type is well-known and has a custom JSON representation, that representation will be embedded adding a field value which holds the custom JSON in addition to the @type field. Example (for message [google.protobuf.Duration][]): Field Name Required Nullable Type Description Format typeUrl String A URL/resource name that uniquely identifies the type of the serialized protocol buffer message. This string must contain at least one \"/\" character. The last segment of the URL's path must represent the fully qualified name of the type (as in path/google.protobuf.Duration ). The name should be in a canonical form (e.g., leading \".\" is not accepted). 
In practice, teams usually precompile into the binary all types that they expect it to use in the context of Any. However, for URLs which use the scheme http , https , or no scheme, one can optionally set up a type server that maps type URLs to message definitions as follows: * If no scheme is provided, https is assumed. * An HTTP GET on the URL must yield a [google.protobuf.Type][] value in binary format, or produce an error. * Applications are allowed to cache lookup results based on the URL, or have them precompiled into a binary to avoid any lookup. Therefore, binary compatibility needs to be preserved on changes to types. (Use versioned type names to manage breaking changes.) Note: this functionality is not currently available in the official protobuf release, and it is not used for type URLs beginning with type.googleapis.com. As of May 2023, there are no widely used type server implementations and no plans to implement one. Schemes other than http , https (or the empty scheme) might be used with implementation specific semantics. value byte[] Must be a valid serialized protocol buffer of the above specified type. byte 33.3.7.3. RuntimeError Field Name Required Nullable Type Description Format error String code Integer int32 message String details List of ProtobufAny 33.3.7.4. V1DatabaseStatus Field Name Required Nullable Type Description Format databaseAvailable Boolean databaseType DatabaseStatusDatabaseType Hidden, RocksDB, PostgresDB, databaseVersion String 33.4. GetMetadata GET /v1/metadata 33.4.1. Description 33.4.2. Parameters 33.4.3. Return Type V1Metadata 33.4.4. Content Type application/json 33.4.5. Responses Table 33.4. HTTP Response Codes Code Message Datatype 200 A successful response. V1Metadata 0 An unexpected error response. RuntimeError 33.4.6. Samples 33.4.7. Common object reference 33.4.7.1. MetadataLicenseStatus Enum Values NONE INVALID EXPIRED RESTARTING VALID 33.4.7.2. ProtobufAny Any contains an arbitrary serialized protocol buffer message along with a URL that describes the type of the serialized message. Protobuf library provides support to pack/unpack Any values in the form of utility functions or additional generated methods of the Any type. Example 1: Pack and unpack a message in C++. Example 2: Pack and unpack a message in Java. The pack methods provided by protobuf library will by default use 'type.googleapis.com/full.type.name' as the type URL and the unpack methods only use the fully qualified type name after the last '/' in the type URL, for example "foo.bar.com/x/y.z" will yield type name "y.z". 33.4.7.2.1. JSON representation The JSON representation of an Any value uses the regular representation of the deserialized, embedded message, with an additional field @type which contains the type URL. Example: If the embedded message type is well-known and has a custom JSON representation, that representation will be embedded adding a field value which holds the custom JSON in addition to the @type field. Example (for message [google.protobuf.Duration][]): Field Name Required Nullable Type Description Format typeUrl String A URL/resource name that uniquely identifies the type of the serialized protocol buffer message. This string must contain at least one \"/\" character. The last segment of the URL's path must represent the fully qualified name of the type (as in path/google.protobuf.Duration ). The name should be in a canonical form (e.g., leading \".\" is not accepted). 
In practice, teams usually precompile into the binary all types that they expect it to use in the context of Any. However, for URLs which use the scheme http , https , or no scheme, one can optionally set up a type server that maps type URLs to message definitions as follows: * If no scheme is provided, https is assumed. * An HTTP GET on the URL must yield a [google.protobuf.Type][] value in binary format, or produce an error. * Applications are allowed to cache lookup results based on the URL, or have them precompiled into a binary to avoid any lookup. Therefore, binary compatibility needs to be preserved on changes to types. (Use versioned type names to manage breaking changes.) Note: this functionality is not currently available in the official protobuf release, and it is not used for type URLs beginning with type.googleapis.com. As of May 2023, there are no widely used type server implementations and no plans to implement one. Schemes other than http , https (or the empty scheme) might be used with implementation specific semantics. value byte[] Must be a valid serialized protocol buffer of the above specified type. byte 33.4.7.3. RuntimeError Field Name Required Nullable Type Description Format error String code Integer int32 message String details List of ProtobufAny 33.4.7.4. V1Metadata Field Name Required Nullable Type Description Format version String buildFlavor String releaseBuild Boolean licenseStatus MetadataLicenseStatus NONE, INVALID, EXPIRED, RESTARTING, VALID, 33.5. TLSChallenge GET /v1/tls-challenge TLSChallenge 33.5.1. Description Returns all trusted CAs, i.e., secret/additional-ca and Central's cert chain. This is necessary if Central is running behind a load balancer with self-signed certificates. Does not require authentication. 33.5.2. Parameters 33.5.2.1. Query Parameters Name Description Required Default Pattern challengeToken generated challenge token by the service asking for TLS certs. - null 33.5.3. Return Type V1TLSChallengeResponse 33.5.4. Content Type application/json 33.5.5. Responses Table 33.5. HTTP Response Codes Code Message Datatype 200 A successful response. V1TLSChallengeResponse 0 An unexpected error response. RuntimeError 33.5.6. Samples 33.5.7. Common object reference 33.5.7.1. ProtobufAny Any contains an arbitrary serialized protocol buffer message along with a URL that describes the type of the serialized message. Protobuf library provides support to pack/unpack Any values in the form of utility functions or additional generated methods of the Any type. Example 1: Pack and unpack a message in C++. Example 2: Pack and unpack a message in Java. The pack methods provided by protobuf library will by default use 'type.googleapis.com/full.type.name' as the type URL and the unpack methods only use the fully qualified type name after the last '/' in the type URL, for example "foo.bar.com/x/y.z" will yield type name "y.z". 33.5.7.1.1. JSON representation The JSON representation of an Any value uses the regular representation of the deserialized, embedded message, with an additional field @type which contains the type URL. Example: If the embedded message type is well-known and has a custom JSON representation, that representation will be embedded adding a field value which holds the custom JSON in addition to the @type field. Example (for message [google.protobuf.Duration][]): Field Name Required Nullable Type Description Format typeUrl String A URL/resource name that uniquely identifies the type of the serialized protocol buffer message. 
This string must contain at least one \"/\" character. The last segment of the URL's path must represent the fully qualified name of the type (as in path/google.protobuf.Duration ). The name should be in a canonical form (e.g., leading \".\" is not accepted). In practice, teams usually precompile into the binary all types that they expect it to use in the context of Any. However, for URLs which use the scheme http , https , or no scheme, one can optionally set up a type server that maps type URLs to message definitions as follows: * If no scheme is provided, https is assumed. * An HTTP GET on the URL must yield a [google.protobuf.Type][] value in binary format, or produce an error. * Applications are allowed to cache lookup results based on the URL, or have them precompiled into a binary to avoid any lookup. Therefore, binary compatibility needs to be preserved on changes to types. (Use versioned type names to manage breaking changes.) Note: this functionality is not currently available in the official protobuf release, and it is not used for type URLs beginning with type.googleapis.com. As of May 2023, there are no widely used type server implementations and no plans to implement one. Schemes other than http , https (or the empty scheme) might be used with implementation specific semantics. value byte[] Must be a valid serialized protocol buffer of the above specified type. byte 33.5.7.2. RuntimeError Field Name Required Nullable Type Description Format error String code Integer int32 message String details List of ProtobufAny 33.5.7.3. V1TLSChallengeResponse Field Name Required Nullable Type Description Format trustInfoSerialized byte[] byte signature byte[] byte | [
"Foo foo = ...; Any any; any.PackFrom(foo); if (any.UnpackTo(&foo)) { }",
"Foo foo = ...; Any any = Any.pack(foo); if (any.is(Foo.class)) { foo = any.unpack(Foo.class); } // or if (any.isSameTypeAs(Foo.getDefaultInstance())) { foo = any.unpack(Foo.getDefaultInstance()); }",
"Example 3: Pack and unpack a message in Python.",
"foo = Foo(...) any = Any() any.Pack(foo) if any.Is(Foo.DESCRIPTOR): any.Unpack(foo)",
"Example 4: Pack and unpack a message in Go",
"foo := &pb.Foo{...} any, err := anypb.New(foo) if err != nil { } foo := &pb.Foo{} if err := any.UnmarshalTo(foo); err != nil { }",
"package google.profile; message Person { string first_name = 1; string last_name = 2; }",
"{ \"@type\": \"type.googleapis.com/google.profile.Person\", \"firstName\": <string>, \"lastName\": <string> }",
"{ \"@type\": \"type.googleapis.com/google.protobuf.Duration\", \"value\": \"1.212s\" }",
"Foo foo = ...; Any any; any.PackFrom(foo); if (any.UnpackTo(&foo)) { }",
"Foo foo = ...; Any any = Any.pack(foo); if (any.is(Foo.class)) { foo = any.unpack(Foo.class); } // or if (any.isSameTypeAs(Foo.getDefaultInstance())) { foo = any.unpack(Foo.getDefaultInstance()); }",
"Example 3: Pack and unpack a message in Python.",
"foo = Foo(...) any = Any() any.Pack(foo) if any.Is(Foo.DESCRIPTOR): any.Unpack(foo)",
"Example 4: Pack and unpack a message in Go",
"foo := &pb.Foo{...} any, err := anypb.New(foo) if err != nil { } foo := &pb.Foo{} if err := any.UnmarshalTo(foo); err != nil { }",
"package google.profile; message Person { string first_name = 1; string last_name = 2; }",
"{ \"@type\": \"type.googleapis.com/google.profile.Person\", \"firstName\": <string>, \"lastName\": <string> }",
"{ \"@type\": \"type.googleapis.com/google.protobuf.Duration\", \"value\": \"1.212s\" }",
"Foo foo = ...; Any any; any.PackFrom(foo); if (any.UnpackTo(&foo)) { }",
"Foo foo = ...; Any any = Any.pack(foo); if (any.is(Foo.class)) { foo = any.unpack(Foo.class); } // or if (any.isSameTypeAs(Foo.getDefaultInstance())) { foo = any.unpack(Foo.getDefaultInstance()); }",
"Example 3: Pack and unpack a message in Python.",
"foo = Foo(...) any = Any() any.Pack(foo) if any.Is(Foo.DESCRIPTOR): any.Unpack(foo)",
"Example 4: Pack and unpack a message in Go",
"foo := &pb.Foo{...} any, err := anypb.New(foo) if err != nil { } foo := &pb.Foo{} if err := any.UnmarshalTo(foo); err != nil { }",
"package google.profile; message Person { string first_name = 1; string last_name = 2; }",
"{ \"@type\": \"type.googleapis.com/google.profile.Person\", \"firstName\": <string>, \"lastName\": <string> }",
"{ \"@type\": \"type.googleapis.com/google.protobuf.Duration\", \"value\": \"1.212s\" }",
"Foo foo = ...; Any any; any.PackFrom(foo); if (any.UnpackTo(&foo)) { }",
"Foo foo = ...; Any any = Any.pack(foo); if (any.is(Foo.class)) { foo = any.unpack(Foo.class); } // or if (any.isSameTypeAs(Foo.getDefaultInstance())) { foo = any.unpack(Foo.getDefaultInstance()); }",
"Example 3: Pack and unpack a message in Python.",
"foo = Foo(...) any = Any() any.Pack(foo) if any.Is(Foo.DESCRIPTOR): any.Unpack(foo)",
"Example 4: Pack and unpack a message in Go",
"foo := &pb.Foo{...} any, err := anypb.New(foo) if err != nil { } foo := &pb.Foo{} if err := any.UnmarshalTo(foo); err != nil { }",
"package google.profile; message Person { string first_name = 1; string last_name = 2; }",
"{ \"@type\": \"type.googleapis.com/google.profile.Person\", \"firstName\": <string>, \"lastName\": <string> }",
"{ \"@type\": \"type.googleapis.com/google.protobuf.Duration\", \"value\": \"1.212s\" }",
"Foo foo = ...; Any any; any.PackFrom(foo); if (any.UnpackTo(&foo)) { }",
"Foo foo = ...; Any any = Any.pack(foo); if (any.is(Foo.class)) { foo = any.unpack(Foo.class); } // or if (any.isSameTypeAs(Foo.getDefaultInstance())) { foo = any.unpack(Foo.getDefaultInstance()); }",
"Example 3: Pack and unpack a message in Python.",
"foo = Foo(...) any = Any() any.Pack(foo) if any.Is(Foo.DESCRIPTOR): any.Unpack(foo)",
"Example 4: Pack and unpack a message in Go",
"foo := &pb.Foo{...} any, err := anypb.New(foo) if err != nil { } foo := &pb.Foo{} if err := any.UnmarshalTo(foo); err != nil { }",
"package google.profile; message Person { string first_name = 1; string last_name = 2; }",
"{ \"@type\": \"type.googleapis.com/google.profile.Person\", \"firstName\": <string>, \"lastName\": <string> }",
"{ \"@type\": \"type.googleapis.com/google.protobuf.Duration\", \"value\": \"1.212s\" }"
] | https://docs.redhat.com/en/documentation/red_hat_advanced_cluster_security_for_kubernetes/4.5/html/api_reference/metadataservice |
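The endpoint paths in this reference can be exercised directly against Central. The following curl sketches are illustrative only: the Central hostname is a placeholder, the API token is assumed to be available in an environment variable, and -k is used here only to tolerate self-signed certificates in a test setup. # GET /v1/metadata, sent with an assumed API token curl -k -H "Authorization: Bearer $API_TOKEN" https://central.example.com/v1/metadata # GET /v1/tls-challenge does not require authentication; the challengeToken value is a placeholder curl -k "https://central.example.com/v1/tls-challenge?challengeToken=<generated-token>"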
Preface | Preface Making open source more inclusive Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright's message . | null | https://docs.redhat.com/en/documentation/red_hat_build_of_apache_camel_extensions_for_quarkus/2.13/html/camel_extensions_for_quarkus_reference/pr01 |
C.2. Selection Criteria Operators | C.2. Selection Criteria Operators Table C.2, "Selection Criteria Grouping Operators" describes the selection criteria grouping operators. Table C.2. Selection Criteria Grouping Operators Grouping Operator Description ( ) Used for grouping statements [ ] Used to group strings into a string list (exact match) { } Used to group strings into a string list (subset match) Table C.3, "Selection Criteria Comparison Operators" describes the selection criteria comparison operators and the field types with which they can be used. Table C.3. Selection Criteria Comparison Operators Comparison Operator Description Field Type =~ Matching regular expression regex !~ Not matching regular expression. regex = Equal to number, size, percent, string, string list != Not equal to number, size, percent, string, string list >= Greater than or equal to number, size, percent > Greater than number, size, percent <= Less than or equal to number, size, percent < Less than number, size, percent Table C.4, "Selection Criteria Logical and Grouping Operators" describes the selection criteria logical and grouping operators. Table C.4. Selection Criteria Logical and Grouping Operators Logical and Grouping Operator Description && All fields must match , All fields must match (same as &&) || At least one field must match # At least one field must match (same as ||) ! Logical negation ( Left parenthesis (grouping operator) ) Right parenthesis (grouping operator) [ List start (grouping operator) ] List end (grouping operator) { List subset start (grouping operator) } List subset end (grouping operator) | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/logical_volume_manager_administration/selection_operators |
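For example, the comparison and logical operators above can be combined in the -S ( --select ) option of an LVM reporting command. The commands below are a sketch only: the field names, size threshold, and tag values are illustrative. # List logical volumes larger than 100MB whose name matches the regular expression "home" lvs -S 'lv_size > 100m && lv_name =~ "home"' # Subset match on a string list field; the tag values are placeholders lvs -S 'lv_tags = {"database", "backup"}'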
function::user_int | function::user_int Name function::user_int - Retrieves an int value stored in user space Synopsis Arguments addr the user space address to retrieve the int from Description Returns the int value from a given user space address. Returns zero when user space data is not accessible. | [
"user_int:long(addr:long)"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/systemtap_tapset_reference/api-user-int |
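As a usage sketch for user_int, the probe below reads an int through a pointer parameter of a user-space function. The binary path, function name, and parameter name ($buf) are hypothetical and would need to match your own program; the zero-on-failure behavior is as described above. probe process("/usr/bin/myapp").function("my_func") { # $buf is assumed to be a pointer parameter of my_func; user_int() returns 0 # when the user space address is not accessible printf("int at %p = %d\n", $buf, user_int($buf)) }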
Chapter 16. Control Bus | Chapter 16. Control Bus Only producer is supported The Control Bus from the EIP patterns allows for the integration system to be monitored and managed from within the framework. Use a Control Bus to manage an enterprise integration system. The Control Bus uses the same messaging mechanism used by the application data, but uses separate channels to transmit data that is relevant to the management of components involved in the message flow. In Camel you can manage and monitor using JMX, or by using a Java API from the CamelContext , or from the org.apache.camel.api.management package, or use the event notifier which has an example here. The ControlBus component provides easy management of Camel applications based on the Control Bus EIP pattern. For example, by sending a message to an Endpoint you can control the lifecycle of routes, or gather performance statistics. Where command can be any string to identify which type of command to use. 16.1. Commands Command Description route To control routes using the routeId and action parameter. language Allows you to specify a to use for evaluating the message body. If there is any result from the evaluation, then the result is put in the message body. 16.2. Configuring Options Camel components are configured on two separate levels: component level endpoint level 16.2.1. Configuring Component Options The component level is the highest level which holds general and common configurations that are inherited by the endpoints. For example a component may have security settings, credentials for authentication, urls for network connection and so forth. Some components only have a few options, and others may have many. Because components typically have pre configured defaults that are commonly used, then you may often only need to configure a few options on a component; or none at all. Configuring components can be done with the Component DSL , in a configuration file (application.properties|yaml), or directly with Java code. 16.2.2. Configuring Endpoint Options Where you find yourself configuring the most is on endpoints, as endpoints often have many options, which allows you to configure what you need the endpoint to do. The options are also categorized into whether the endpoint is used as consumer (from) or as a producer (to), or used for both. Configuring endpoints is most often done directly in the endpoint URI as path and query parameters. You can also use the Endpoint DSL as a type safe way of configuring endpoints. A good practice when configuring options is to use Property Placeholders , which allows to not hardcode urls, port numbers, sensitive information, and other settings. In other words placeholders allows to externalize the configuration from your code, and gives more flexibility and reuse. The following two sections lists all the options, firstly for the component followed by the endpoint. 16.3. Component Options The Control Bus component supports 2 options, which are listed below. Name Description Default Type lazyStartProducer (producer) Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. 
Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false boolean autowiredEnabled (advanced) Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true boolean 16.4. Endpoint Options The Control Bus endpoint is configured using URI syntax: with the following path and query parameters: 16.4.1. Path Parameters (2 parameters) Name Description Default Type command (producer) Required Command can be either route or language. Enum values: route language String language (producer) Allows you to specify the name of a Language to use for evaluating the message body. If there is any result from the evaluation, then the result is put in the message body. Enum values: bean constant el exchangeProperty file groovy header jsonpath mvel ognl ref simple spel sql terser tokenize xpath xquery xtokenize Language 16.4.1.1. Query Parameters (6 parameters) Name Description Default Type action (producer) To denote an action that can be either: start, stop, or status. To either start or stop a route, or to get the status of the route as output in the message body. You can use suspend and resume from Camel 2.11.1 onwards to either suspend or resume a route. And from Camel 2.11.1 onwards you can use stats to get performance statics returned in XML format; the routeId option can be used to define which route to get the performance stats for, if routeId is not defined, then you get statistics for the entire CamelContext. The restart action will restart the route. Enum values: start stop suspend resume restart status stats String async (producer) Whether to execute the control bus task asynchronously. Important: If this option is enabled, then any result from the task is not set on the Exchange. This is only possible if executing tasks synchronously. false boolean lazyStartProducer (producer) Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false boolean loggingLevel (producer) Logging level used for logging when task is done, or if any exceptions occurred during processing the task. Enum values: TRACE DEBUG INFO WARN ERROR OFF INFO LoggingLevel restartDelay (producer) The delay in millis to use when restarting a route. 1000 int routeId (producer) To specify a route by its id. The special keyword current indicates the current route. String 16.5. 
Using route command The route command allows you to do common tasks on a given route very easily. For example, to start a route, you can send an empty message to this endpoint: template.sendBody("controlbus:route?routeId=foo&action=start", null); To get the status of the route, you can do: String status = template.requestBody("controlbus:route?routeId=foo&action=status", null, String.class); 16.6. Getting performance statistics This requires JMX to be enabled (it is by default). You can then get the performance statistics per route, or for the CamelContext. For example, to get the statistics for a route named foo, we can do: String xml = template.requestBody("controlbus:route?routeId=foo&action=stats", null, String.class); The returned statistics are in XML format. It is the same data you can get from JMX with the dumpRouteStatsAsXml operation on the ManagedRouteMBean . To get statistics for the entire CamelContext you just omit the routeId parameter as shown below: String xml = template.requestBody("controlbus:route?action=stats", null, String.class); 16.7. Using Simple language You can use the Simple language with the control bus, for example to stop a specific route, you can send a message to the "controlbus:language:simple" endpoint containing the following message: template.sendBody("controlbus:language:simple", "USD{camelContext.getRouteController().stopRoute('myRoute')}"); As this is a void operation, no result is returned. However, if you want the route status you can do: String status = template.requestBody("controlbus:language:simple", "USD{camelContext.getRouteStatus('myRoute')}", String.class); It is easier to use the route command to control the lifecycle of routes. The language command allows you to execute a language script that has stronger powers such as Groovy or, to some extent, the Simple language. For example, to shut down Camel itself, you can do: template.sendBody("controlbus:language:simple?async=true", "USD{camelContext.stop()}"); We use async=true to stop Camel asynchronously, as otherwise we would be trying to stop Camel while it was in-flight processing the message we sent to the control bus component. Note You can also use other languages such as Groovy , etc. 16.8. Spring Boot Auto-Configuration When using controlbus with Spring Boot, make sure to use the following Maven dependency to have support for auto configuration: <dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-controlbus-starter</artifactId> </dependency> The component supports 3 options, which are listed below. Name Description Default Type camel.component.controlbus.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.controlbus.enabled Whether to enable auto configuration of the controlbus component. This is enabled by default. Boolean camel.component.controlbus.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started.
By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean | [
"controlbus:command[?options]",
"controlbus:command:language",
"template.sendBody(\"controlbus:route?routeId=foo&action=start\", null);",
"String status = template.requestBody(\"controlbus:route?routeId=foo&action=status\", null, String.class);",
"String xml = template.requestBody(\"controlbus:route?routeId=foo&action=stats\", null, String.class);",
"String xml = template.requestBody(\"controlbus:route?action=stats\", null, String.class);",
"template.sendBody(\"controlbus:language:simple\", \"USD{camelContext.getRouteController().stopRoute('myRoute')}\");",
"String status = template.requestBody(\"controlbus:language:simple\", \"USD{camelContext.getRouteStatus('myRoute')}\", String.class);",
"template.sendBody(\"controlbus:language:simple?async=true\", \"USD{camelContext.stop()}\");",
"<dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-controlbus-starter</artifactId> </dependency>"
] | https://docs.redhat.com/en/documentation/red_hat_build_of_apache_camel_for_spring_boot/3.20/html/camel_spring_boot_reference/csb-camel-control-bus-component-starter |
function::raise | function::raise Name function::raise - raise a signal in the current thread Synopsis Arguments signo signal number Description This function calls the kernel send_sig routine on the current thread, with the given raw unchecked signal number. It may raise an error if send_sig failed. It requires guru mode. | [
"raise(signo:long)"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/systemtap_tapset_reference/api-raise |
Configuring and managing networking | Configuring and managing networking Red Hat Enterprise Linux 9 Managing network interfaces and advanced networking features Red Hat Customer Content Services | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/configuring_and_managing_networking/index |
Working with distributed workloads | Working with distributed workloads Red Hat OpenShift AI Self-Managed 2.18 Use distributed workloads for faster and more efficient data processing and model training | null | https://docs.redhat.com/en/documentation/red_hat_openshift_ai_self-managed/2.18/html/working_with_distributed_workloads/index |
Chapter 2. Troubleshooting installation issues | Chapter 2. Troubleshooting installation issues To assist in troubleshooting a failed OpenShift Container Platform installation, you can gather logs from the bootstrap and control plane machines. You can also get debug information from the installation program. If you are unable to resolve the issue using the logs and debug information, see Determining where installation issues occur for component-specific troubleshooting. Note If your OpenShift Container Platform installation fails and the debug output or logs contain network timeouts or other connectivity errors, review the guidelines for configuring your firewall . Gathering logs from your firewall and load balancer can help you diagnose network-related errors. 2.1. Prerequisites You attempted to install an OpenShift Container Platform cluster and the installation failed. 2.2. Gathering logs from a failed installation If you gave an SSH key to your installation program, you can gather data about your failed installation. Note You use a different command to gather logs about an unsuccessful installation than to gather logs from a running cluster. If you must gather logs from a running cluster, use the oc adm must-gather command. Prerequisites Your OpenShift Container Platform installation failed before the bootstrap process finished. The bootstrap node is running and accessible through SSH. The ssh-agent process is active on your computer, and you provided the same SSH key to both the ssh-agent process and the installation program. If you tried to install a cluster on infrastructure that you provisioned, you must have the fully qualified domain names of the bootstrap and control plane nodes. Procedure Generate the commands that are required to obtain the installation logs from the bootstrap and control plane machines: If you used installer-provisioned infrastructure, change to the directory that contains the installation program and run the following command: USD ./openshift-install gather bootstrap --dir <installation_directory> 1 1 installation_directory is the directory you specified when you ran ./openshift-install create cluster . This directory contains the OpenShift Container Platform definition files that the installation program creates. For installer-provisioned infrastructure, the installation program stores information about the cluster, so you do not specify the hostnames or IP addresses. If you used infrastructure that you provisioned yourself, change to the directory that contains the installation program and run the following command: USD ./openshift-install gather bootstrap --dir <installation_directory> \ 1 --bootstrap <bootstrap_address> \ 2 --master <master_1_address> \ 3 --master <master_2_address> \ 4 --master <master_3_address> 5 1 For installation_directory , specify the same directory you specified when you ran ./openshift-install create cluster . This directory contains the OpenShift Container Platform definition files that the installation program creates. 2 <bootstrap_address> is the fully qualified domain name or IP address of the cluster's bootstrap machine. 3 4 5 For each control plane, or master, machine in your cluster, replace <master_*_address> with its fully qualified domain name or IP address. Note A default cluster contains three control plane machines. List all of your control plane machines as shown, no matter how many your cluster uses. 
Example output INFO Pulling debug logs from the bootstrap machine INFO Bootstrap gather logs captured here "<installation_directory>/log-bundle-<timestamp>.tar.gz" If you open a Red Hat support case about your installation failure, include the compressed logs in the case. 2.3. Manually gathering logs with SSH access to your host(s) Manually gather logs in situations where must-gather or automated collection methods do not work. Important By default, SSH access to the OpenShift Container Platform nodes is disabled on the Red Hat OpenStack Platform (RHOSP) based installations. Prerequisites You must have SSH access to your host(s). Procedure Collect the bootkube.service service logs from the bootstrap host using the journalctl command by running: USD journalctl -b -f -u bootkube.service Collect the bootstrap host's container logs using the podman logs. This is shown as a loop to get all of the container logs from the host: USD for pod in USD(sudo podman ps -a -q); do sudo podman logs USDpod; done Alternatively, collect the host's container logs using the tail command by running: # tail -f /var/lib/containers/storage/overlay-containers/*/userdata/ctr.log Collect the kubelet.service and crio.service service logs from the master and worker hosts using the journalctl command by running: USD journalctl -b -f -u kubelet.service -u crio.service Collect the master and worker host container logs using the tail command by running: USD sudo tail -f /var/log/containers/* 2.4. Manually gathering logs without SSH access to your host(s) Manually gather logs in situations where must-gather or automated collection methods do not work. If you do not have SSH access to your node, you can access the systems journal to investigate what is happening on your host. Prerequisites Your OpenShift Container Platform installation must be complete. Your API service is still functional. You have system administrator privileges. Procedure Access journald unit logs under /var/log by running: USD oc adm node-logs --role=master -u kubelet Access host file paths under /var/log by running: USD oc adm node-logs --role=master --path=openshift-apiserver 2.5. Getting debug information from the installation program You can use any of the following actions to get debug information from the installation program. Look at debug messages from a past installation in the hidden .openshift_install.log file. For example, enter: USD cat ~/<installation_directory>/.openshift_install.log 1 1 For installation_directory , specify the same directory you specified when you ran ./openshift-install create cluster . Change to the directory that contains the installation program and re-run it with --log-level=debug : USD ./openshift-install create cluster --dir <installation_directory> --log-level debug 1 1 For installation_directory , specify the same directory you specified when you ran ./openshift-install create cluster . 2.6. Reinstalling the OpenShift Container Platform cluster If you are unable to debug and resolve issues in the failed OpenShift Container Platform installation, consider installing a new OpenShift Container Platform cluster. Before starting the installation process again, you must complete thorough cleanup. For a user-provisioned infrastructure (UPI) installation, you must manually destroy the cluster and delete all associated resources. The following procedure is for an installer-provisioned infrastructure (IPI) installation. 
Procedure Destroy the cluster and remove all the resources associated with the cluster, including the hidden installer state files in the installation directory: USD ./openshift-install destroy cluster --dir <installation_directory> 1 1 installation_directory is the directory you specified when you ran ./openshift-install create cluster . This directory contains the OpenShift Container Platform definition files that the installation program creates. Before reinstalling the cluster, delete the installation directory: USD rm -rf <installation_directory> Follow the procedure for installing a new OpenShift Container Platform cluster. Additional resources Installing an OpenShift Container Platform cluster | [
"./openshift-install gather bootstrap --dir <installation_directory> 1",
"./openshift-install gather bootstrap --dir <installation_directory> \\ 1 --bootstrap <bootstrap_address> \\ 2 --master <master_1_address> \\ 3 --master <master_2_address> \\ 4 --master <master_3_address> 5",
"INFO Pulling debug logs from the bootstrap machine INFO Bootstrap gather logs captured here \"<installation_directory>/log-bundle-<timestamp>.tar.gz\"",
"journalctl -b -f -u bootkube.service",
"for pod in USD(sudo podman ps -a -q); do sudo podman logs USDpod; done",
"tail -f /var/lib/containers/storage/overlay-containers/*/userdata/ctr.log",
"journalctl -b -f -u kubelet.service -u crio.service",
"sudo tail -f /var/log/containers/*",
"oc adm node-logs --role=master -u kubelet",
"oc adm node-logs --role=master --path=openshift-apiserver",
"cat ~/<installation_directory>/.openshift_install.log 1",
"./openshift-install create cluster --dir <installation_directory> --log-level debug 1",
"./openshift-install destroy cluster --dir <installation_directory> 1",
"rm -rf <installation_directory>"
] | https://docs.redhat.com/en/documentation/openshift_container_platform_installation/4.15/html/validation_and_troubleshooting/installing-troubleshooting |
Making open source more inclusive | Making open source more inclusive Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright's message . | null | https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.15/html/managing_and_allocating_storage_resources/making-open-source-more-inclusive |
4.6. Clustering | 4.6. Clustering luci will not function with Red Hat Enterprise Linux 5 clusters unless each cluster node has ricci version 0.12.2-14 | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.1_technical_notes/ar01s04s06 |
Chapter 5. Postinstallation node tasks | Chapter 5. Postinstallation node tasks After installing OpenShift Container Platform, you can further expand and customize your cluster to your requirements through certain node tasks. 5.1. Adding RHEL compute machines to an OpenShift Container Platform cluster Understand and work with RHEL compute nodes. 5.1.1. About adding RHEL compute nodes to a cluster In OpenShift Container Platform 4.17, you have the option of using Red Hat Enterprise Linux (RHEL) machines as compute machines in your cluster if you use a user-provisioned or installer-provisioned infrastructure installation on the x86_64 architecture. You must use Red Hat Enterprise Linux CoreOS (RHCOS) machines for the control plane machines in your cluster. If you choose to use RHEL compute machines in your cluster, you are responsible for all operating system life cycle management and maintenance. You must perform system updates, apply patches, and complete all other required tasks. For installer-provisioned infrastructure clusters, you must manually add RHEL compute machines because automatic scaling in installer-provisioned infrastructure clusters adds Red Hat Enterprise Linux CoreOS (RHCOS) compute machines by default. Important Because removing OpenShift Container Platform from a machine in the cluster requires destroying the operating system, you must use dedicated hardware for any RHEL machines that you add to the cluster. Swap memory is disabled on all RHEL machines that you add to your OpenShift Container Platform cluster. You cannot enable swap memory on these machines. 5.1.2. System requirements for RHEL compute nodes The Red Hat Enterprise Linux (RHEL) compute machine hosts in your OpenShift Container Platform environment must meet the following minimum hardware specifications and system-level requirements: You must have an active OpenShift Container Platform subscription on your Red Hat account. If you do not, contact your sales representative for more information. Production environments must provide compute machines to support your expected workloads. As a cluster administrator, you must calculate the expected workload and add about 10% for overhead. For production environments, allocate enough resources so that a node host failure does not affect your maximum capacity. Each system must meet the following hardware requirements: Physical or virtual system, or an instance running on a public or private IaaS. Base operating system: Use RHEL 8.8 or a later version with the minimal installation option. Important Adding RHEL 7 compute machines to an OpenShift Container Platform cluster is not supported. If you have RHEL 7 compute machines that were previously supported in a past OpenShift Container Platform version, you cannot upgrade them to RHEL 8. You must deploy new RHEL 8 hosts, and the old RHEL 7 hosts should be removed. See the "Deleting nodes" section for more information. For the most recent list of major functionality that has been deprecated or removed within OpenShift Container Platform, refer to the Deprecated and removed features section of the OpenShift Container Platform release notes. If you deployed OpenShift Container Platform in FIPS mode, you must enable FIPS on the RHEL machine before you boot it. See Installing a RHEL 8 system with FIPS mode enabled in the RHEL 8 documentation. Important To enable FIPS mode for your cluster, you must run the installation program from a Red Hat Enterprise Linux (RHEL) computer configured to operate in FIPS mode. 
For more information about configuring FIPS mode on RHEL, see Switching RHEL to FIPS mode . When running Red Hat Enterprise Linux (RHEL) or Red Hat Enterprise Linux CoreOS (RHCOS) booted in FIPS mode, OpenShift Container Platform core components use the RHEL cryptographic libraries that have been submitted to NIST for FIPS 140-2/140-3 Validation on only the x86_64, ppc64le, and s390x architectures. NetworkManager 1.0 or later. 1 vCPU. Minimum 8 GB RAM. Minimum 15 GB hard disk space for the file system containing /var/ . Minimum 1 GB hard disk space for the file system containing /usr/local/bin/ . Minimum 1 GB hard disk space for the file system containing its temporary directory. The temporary system directory is determined according to the rules defined in the tempfile module in the Python standard library. Each system must meet any additional requirements for your system provider. For example, if you installed your cluster on VMware vSphere, your disks must be configured according to its storage guidelines and the disk.enableUUID=TRUE attribute must be set. Each system must be able to access the cluster's API endpoints by using DNS-resolvable hostnames. Any network security access control that is in place must allow system access to the cluster's API service endpoints. For clusters installed on Microsoft Azure: Ensure the system includes the hardware requirement of a Standard_D8s_v3 virtual machine. Enable Accelerated Networking. Accelerated Networking uses single root I/O virtualization (SR-IOV) to provide Microsoft Azure VMs with a more direct path to the switch. Additional resources Deleting nodes Accelerated Networking for Microsoft Azure VMs 5.1.2.1. Certificate signing requests management Because your cluster has limited access to automatic machine management when you use infrastructure that you provision, you must provide a mechanism for approving cluster certificate signing requests (CSRs) after installation. The kube-controller-manager only approves the kubelet client CSRs. The machine-approver cannot guarantee the validity of a serving certificate that is requested by using kubelet credentials because it cannot confirm that the correct machine issued the request. You must determine and implement a method of verifying the validity of the kubelet serving certificate requests and approving them. 5.1.3. Preparing the machine to run the playbook Before you can add compute machines that use Red Hat Enterprise Linux (RHEL) as the operating system to an OpenShift Container Platform 4.17 cluster, you must prepare a RHEL 8 machine to run an Ansible playbook that adds the new node to the cluster. This machine is not part of the cluster but must be able to access it. Prerequisites Install the OpenShift CLI ( oc ) on the machine that you run the playbook on. Log in as a user with cluster-admin permission. Procedure Ensure that the kubeconfig file for the cluster and the installation program that you used to install the cluster are on the RHEL 8 machine. One way to accomplish this is to use the same machine that you used to install the cluster. Configure the machine to access all of the RHEL hosts that you plan to use as compute machines. You can use any method that your company allows, including a bastion with an SSH proxy or a VPN. Configure a user on the machine that you run the playbook on that has SSH access to all of the RHEL hosts. Important If you use SSH key-based authentication, you must manage the key with an SSH agent. 
If you have not already done so, register the machine with RHSM and attach a pool with an OpenShift subscription to it: Register the machine with RHSM: # subscription-manager register --username=<user_name> --password=<password> Pull the latest subscription data from RHSM: # subscription-manager refresh List the available subscriptions: # subscription-manager list --available --matches '*OpenShift*' In the output for the command, find the pool ID for an OpenShift Container Platform subscription and attach it: # subscription-manager attach --pool=<pool_id> Enable the repositories required by OpenShift Container Platform 4.17: # subscription-manager repos \ --enable="rhel-8-for-x86_64-baseos-rpms" \ --enable="rhel-8-for-x86_64-appstream-rpms" \ --enable="rhocp-4.17-for-rhel-8-x86_64-rpms" Install the required packages, including openshift-ansible : # yum install openshift-ansible openshift-clients jq The openshift-ansible package provides installation program utilities and pulls in other packages that you require to add a RHEL compute node to your cluster, such as Ansible, playbooks, and related configuration files. The openshift-clients provides the oc CLI, and the jq package improves the display of JSON output on your command line. 5.1.4. Preparing a RHEL compute node Before you add a Red Hat Enterprise Linux (RHEL) machine to your OpenShift Container Platform cluster, you must register each host with Red Hat Subscription Manager (RHSM), attach an active OpenShift Container Platform subscription, and enable the required repositories. On each host, register with RHSM: # subscription-manager register --username=<user_name> --password=<password> Pull the latest subscription data from RHSM: # subscription-manager refresh List the available subscriptions: # subscription-manager list --available --matches '*OpenShift*' In the output for the command, find the pool ID for an OpenShift Container Platform subscription and attach it: # subscription-manager attach --pool=<pool_id> Disable all yum repositories: Disable all the enabled RHSM repositories: # subscription-manager repos --disable="*" List the remaining yum repositories and note their names under repo id , if any: # yum repolist Use yum-config-manager to disable the remaining yum repositories: # yum-config-manager --disable <repo_id> Alternatively, disable all repositories: # yum-config-manager --disable \* Note that this might take a few minutes if you have a large number of available repositories Enable only the repositories required by OpenShift Container Platform 4.17: # subscription-manager repos \ --enable="rhel-8-for-x86_64-baseos-rpms" \ --enable="rhel-8-for-x86_64-appstream-rpms" \ --enable="rhocp-4.17-for-rhel-8-x86_64-rpms" \ --enable="fast-datapath-for-rhel-8-x86_64-rpms" Stop and disable firewalld on the host: # systemctl disable --now firewalld.service Note You must not enable firewalld later. If you do, you cannot access OpenShift Container Platform logs on the worker. 5.1.5. Adding a RHEL compute machine to your cluster You can add compute machines that use Red Hat Enterprise Linux as the operating system to an OpenShift Container Platform 4.17 cluster. Prerequisites You installed the required packages and performed the necessary configuration on the machine that you run the playbook on. You prepared the RHEL hosts for installation. 
Procedure Perform the following steps on the machine that you prepared to run the playbook: Create an Ansible inventory file that is named /<path>/inventory/hosts that defines your compute machine hosts and required variables: 1 Specify the user name that runs the Ansible tasks on the remote compute machines. 2 If you do not specify root for the ansible_user , you must set ansible_become to True and assign the user sudo permissions. 3 Specify the path and file name of the kubeconfig file for your cluster. 4 List each RHEL machine to add to your cluster. You must provide the fully-qualified domain name for each host. This name is the hostname that the cluster uses to access the machine, so set the correct public or private name to access the machine. Navigate to the Ansible playbook directory: USD cd /usr/share/ansible/openshift-ansible Run the playbook: USD ansible-playbook -i /<path>/inventory/hosts playbooks/scaleup.yml 1 1 For <path> , specify the path to the Ansible inventory file that you created. 5.1.6. Required parameters for the Ansible hosts file You must define the following parameters in the Ansible hosts file before you add Red Hat Enterprise Linux (RHEL) compute machines to your cluster. Parameter Description Values ansible_user The SSH user that allows SSH-based authentication without requiring a password. If you use SSH key-based authentication, then you must manage the key with an SSH agent. A user name on the system. The default value is root . ansible_become If the values of ansible_user is not root, you must set ansible_become to True , and the user that you specify as the ansible_user must be configured for passwordless sudo access. True . If the value is not True , do not specify and define this parameter. openshift_kubeconfig_path Specifies a path and file name to a local directory that contains the kubeconfig file for your cluster. The path and name of the configuration file. 5.1.7. Optional: Removing RHCOS compute machines from a cluster After you add the Red Hat Enterprise Linux (RHEL) compute machines to your cluster, you can optionally remove the Red Hat Enterprise Linux CoreOS (RHCOS) compute machines to free up resources. Prerequisites You have added RHEL compute machines to your cluster. Procedure View the list of machines and record the node names of the RHCOS compute machines: USD oc get nodes -o wide For each RHCOS compute machine, delete the node: Mark the node as unschedulable by running the oc adm cordon command: USD oc adm cordon <node_name> 1 1 Specify the node name of one of the RHCOS compute machines. Drain all the pods from the node: USD oc adm drain <node_name> --force --delete-emptydir-data --ignore-daemonsets 1 1 Specify the node name of the RHCOS compute machine that you isolated. Delete the node: USD oc delete nodes <node_name> 1 1 Specify the node name of the RHCOS compute machine that you drained. Review the list of compute machines to ensure that only the RHEL nodes remain: USD oc get nodes -o wide Remove the RHCOS machines from the load balancer for your cluster's compute machines. You can delete the virtual machines or reimage the physical hardware for the RHCOS compute machines. 5.2. Adding RHCOS compute machines to an OpenShift Container Platform cluster You can add more Red Hat Enterprise Linux CoreOS (RHCOS) compute machines to your OpenShift Container Platform cluster on bare metal. Before you add more compute machines to a cluster that you installed on bare metal infrastructure, you must create RHCOS machines for it to use. 
You can either use an ISO image or network PXE booting to create the machines. 5.2.1. Prerequisites You installed a cluster on bare metal. You have installation media and Red Hat Enterprise Linux CoreOS (RHCOS) images that you used to create your cluster. If you do not have these files, you must obtain them by following the instructions in the installation procedure . 5.2.2. Creating RHCOS machines using an ISO image You can create more Red Hat Enterprise Linux CoreOS (RHCOS) compute machines for your bare metal cluster by using an ISO image to create the machines. Prerequisites Obtain the URL of the Ignition config file for the compute machines for your cluster. You uploaded this file to your HTTP server during installation. You must have the OpenShift CLI ( oc ) installed. Procedure Extract the Ignition config file from the cluster by running the following command: USD oc extract -n openshift-machine-api secret/worker-user-data-managed --keys=userData --to=- > worker.ign Upload the worker.ign Ignition config file you exported from your cluster to your HTTP server. Note the URLs of these files. You can validate that the ignition files are available on the URLs. The following example gets the Ignition config files for the compute node: USD curl -k http://<HTTP_server>/worker.ign You can access the ISO image for booting your new machine by running to following command: RHCOS_VHD_ORIGIN_URL=USD(oc -n openshift-machine-config-operator get configmap/coreos-bootimages -o jsonpath='{.data.stream}' | jq -r '.architectures.<architecture>.artifacts.metal.formats.iso.disk.location') Use the ISO file to install RHCOS on more compute machines. Use the same method that you used when you created machines before you installed the cluster: Burn the ISO image to a disk and boot it directly. Use ISO redirection with a LOM interface. Boot the RHCOS ISO image without specifying any options, or interrupting the live boot sequence. Wait for the installer to boot into a shell prompt in the RHCOS live environment. Note You can interrupt the RHCOS installation boot process to add kernel arguments. However, for this ISO procedure you must use the coreos-installer command as outlined in the following steps, instead of adding kernel arguments. Run the coreos-installer command and specify the options that meet your installation requirements. At a minimum, you must specify the URL that points to the Ignition config file for the node type, and the device that you are installing to: USD sudo coreos-installer install --ignition-url=http://<HTTP_server>/<node_type>.ign <device> --ignition-hash=sha512-<digest> 1 2 1 You must run the coreos-installer command by using sudo , because the core user does not have the required root privileges to perform the installation. 2 The --ignition-hash option is required when the Ignition config file is obtained through an HTTP URL to validate the authenticity of the Ignition config file on the cluster node. <digest> is the Ignition config file SHA512 digest obtained in a preceding step. Note If you want to provide your Ignition config files through an HTTPS server that uses TLS, you can add the internal certificate authority (CA) to the system trust store before running coreos-installer . The following example initializes a bootstrap node installation to the /dev/sda device. 
The Ignition config file for the bootstrap node is obtained from an HTTP web server with the IP address 192.168.1.2: USD sudo coreos-installer install --ignition-url=http://192.168.1.2:80/installation_directory/bootstrap.ign /dev/sda --ignition-hash=sha512-a5a2d43879223273c9b60af66b44202a1d1248fc01cf156c46d4a79f552b6bad47bc8cc78ddf0116e80c59d2ea9e32ba53bc807afbca581aa059311def2c3e3b Monitor the progress of the RHCOS installation on the console of the machine. Important Ensure that the installation is successful on each node before commencing with the OpenShift Container Platform installation. Observing the installation process can also help to determine the cause of RHCOS installation issues that might arise. Continue to create more compute machines for your cluster. 5.2.3. Creating RHCOS machines by PXE or iPXE booting You can create more Red Hat Enterprise Linux CoreOS (RHCOS) compute machines for your bare metal cluster by using PXE or iPXE booting. Prerequisites Obtain the URL of the Ignition config file for the compute machines for your cluster. You uploaded this file to your HTTP server during installation. Obtain the URLs of the RHCOS ISO image, compressed metal BIOS, kernel , and initramfs files that you uploaded to your HTTP server during cluster installation. You have access to the PXE booting infrastructure that you used to create the machines for your OpenShift Container Platform cluster during installation. The machines must boot from their local disks after RHCOS is installed on them. If you use UEFI, you have access to the grub.conf file that you modified during OpenShift Container Platform installation. Procedure Confirm that your PXE or iPXE installation for the RHCOS images is correct. For PXE: 1 Specify the location of the live kernel file that you uploaded to your HTTP server. 2 Specify locations of the RHCOS files that you uploaded to your HTTP server. The initrd parameter value is the location of the live initramfs file, the coreos.inst.ignition_url parameter value is the location of the worker Ignition config file, and the coreos.live.rootfs_url parameter value is the location of the live rootfs file. The coreos.inst.ignition_url and coreos.live.rootfs_url parameters only support HTTP and HTTPS. Note This configuration does not enable serial console access on machines with a graphical console. To configure a different console, add one or more console= arguments to the APPEND line. For example, add console=tty0 console=ttyS0 to set the first PC serial port as the primary console and the graphical console as a secondary console. For more information, see How does one set up a serial terminal and/or console in Red Hat Enterprise Linux? . For iPXE ( x86_64 + aarch64 ): 1 Specify the locations of the RHCOS files that you uploaded to your HTTP server. The kernel parameter value is the location of the kernel file, the initrd=main argument is needed for booting on UEFI systems, the coreos.live.rootfs_url parameter value is the location of the rootfs file, and the coreos.inst.ignition_url parameter value is the location of the worker Ignition config file. 2 If you use multiple NICs, specify a single interface in the ip option. For example, to use DHCP on a NIC that is named eno1 , set ip=eno1:dhcp . 3 Specify the location of the initramfs file that you uploaded to your HTTP server. Note This configuration does not enable serial console access on machines with a graphical console To configure a different console, add one or more console= arguments to the kernel line. 
For example, add console=tty0 console=ttyS0 to set the first PC serial port as the primary console and the graphical console as a secondary console. For more information, see How does one set up a serial terminal and/or console in Red Hat Enterprise Linux? and "Enabling the serial console for PXE and ISO installation" in the "Advanced RHCOS installation configuration" section. Note To network boot the CoreOS kernel on aarch64 architecture, you need to use a version of iPXE build with the IMAGE_GZIP option enabled. See IMAGE_GZIP option in iPXE . For PXE (with UEFI and GRUB as second stage) on aarch64 : 1 Specify the locations of the RHCOS files that you uploaded to your HTTP/TFTP server. The kernel parameter value is the location of the kernel file on your TFTP server. The coreos.live.rootfs_url parameter value is the location of the rootfs file, and the coreos.inst.ignition_url parameter value is the location of the worker Ignition config file on your HTTP Server. 2 If you use multiple NICs, specify a single interface in the ip option. For example, to use DHCP on a NIC that is named eno1 , set ip=eno1:dhcp . 3 Specify the location of the initramfs file that you uploaded to your TFTP server. Use the PXE or iPXE infrastructure to create the required compute machines for your cluster. 5.2.4. Approving the certificate signing requests for your machines When you add machines to a cluster, two pending certificate signing requests (CSRs) are generated for each machine that you added. You must confirm that these CSRs are approved or, if necessary, approve them yourself. The client requests must be approved first, followed by the server requests. Prerequisites You added machines to your cluster. Procedure Confirm that the cluster recognizes the machines: USD oc get nodes Example output NAME STATUS ROLES AGE VERSION master-0 Ready master 63m v1.30.3 master-1 Ready master 63m v1.30.3 master-2 Ready master 64m v1.30.3 The output lists all of the machines that you created. Note The preceding output might not include the compute nodes, also known as worker nodes, until some CSRs are approved. Review the pending CSRs and ensure that you see the client requests with the Pending or Approved status for each machine that you added to the cluster: USD oc get csr Example output NAME AGE REQUESTOR CONDITION csr-8b2br 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending csr-8vnps 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending ... In this example, two machines are joining the cluster. You might see more approved CSRs in the list. If the CSRs were not approved, after all of the pending CSRs for the machines you added are in Pending status, approve the CSRs for your cluster machines: Note Because the CSRs rotate automatically, approve your CSRs within an hour of adding the machines to the cluster. If you do not approve them within an hour, the certificates will rotate, and more than two certificates will be present for each node. You must approve all of these certificates. After the client CSR is approved, the Kubelet creates a secondary CSR for the serving certificate, which requires manual approval. Then, subsequent serving certificate renewal requests are automatically approved by the machine-approver if the Kubelet requests a new certificate with identical parameters. 
Note For clusters running on platforms that are not machine API enabled, such as bare metal and other user-provisioned infrastructure, you must implement a method of automatically approving the kubelet serving certificate requests (CSRs). If a request is not approved, then the oc exec , oc rsh , and oc logs commands cannot succeed, because a serving certificate is required when the API server connects to the kubelet. Any operation that contacts the Kubelet endpoint requires this certificate approval to be in place. The method must watch for new CSRs, confirm that the CSR was submitted by the node-bootstrapper service account in the system:node or system:admin groups, and confirm the identity of the node. To approve them individually, run the following command for each valid CSR: USD oc adm certificate approve <csr_name> 1 1 <csr_name> is the name of a CSR from the list of current CSRs. To approve all pending CSRs, run the following command: USD oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve Note Some Operators might not become available until some CSRs are approved. Now that your client requests are approved, you must review the server requests for each machine that you added to the cluster: USD oc get csr Example output NAME AGE REQUESTOR CONDITION csr-bfd72 5m26s system:node:ip-10-0-50-126.us-east-2.compute.internal Pending csr-c57lv 5m26s system:node:ip-10-0-95-157.us-east-2.compute.internal Pending ... If the remaining CSRs are not approved, and are in the Pending status, approve the CSRs for your cluster machines: To approve them individually, run the following command for each valid CSR: USD oc adm certificate approve <csr_name> 1 1 <csr_name> is the name of a CSR from the list of current CSRs. To approve all pending CSRs, run the following command: USD oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs oc adm certificate approve After all client and server CSRs have been approved, the machines have the Ready status. Verify this by running the following command: USD oc get nodes Example output NAME STATUS ROLES AGE VERSION master-0 Ready master 73m v1.30.3 master-1 Ready master 73m v1.30.3 master-2 Ready master 74m v1.30.3 worker-0 Ready worker 11m v1.30.3 worker-1 Ready worker 11m v1.30.3 Note It can take a few minutes after approval of the server CSRs for the machines to transition to the Ready status. Additional information Certificate Signing Requests 5.2.5. Adding a new RHCOS worker node with a custom /var partition in AWS OpenShift Container Platform supports partitioning devices during installation by using machine configs that are processed during the bootstrap. However, if you use /var partitioning, the device name must be determined at installation and cannot be changed. You cannot add different instance types as nodes if they have a different device naming schema. For example, if you configured the /var partition with the default AWS device name for m4.large instances, dev/xvdb , you cannot directly add an AWS m5.large instance, as m5.large instances use a /dev/nvme1n1 device by default. The device might fail to partition due to the different naming schema. The procedure in this section shows how to add a new Red Hat Enterprise Linux CoreOS (RHCOS) compute node with an instance that uses a different device name from what was configured at installation. 
You create a custom user data secret and configure a new compute machine set. These steps are specific to an AWS cluster. The principles apply to other cloud deployments also. However, the device naming schema is different for other deployments and should be determined on a per-case basis. Procedure On a command line, change to the openshift-machine-api namespace: USD oc project openshift-machine-api Create a new secret from the worker-user-data secret: Export the userData section of the secret to a text file: USD oc get secret worker-user-data --template='{{index .data.userData | base64decode}}' | jq > userData.txt Edit the text file to add the storage , filesystems , and systemd stanzas for the partitions you want to use for the new node. You can specify any Ignition configuration parameters as needed. Note Do not change the values in the ignition stanza. { "ignition": { "config": { "merge": [ { "source": "https:...." } ] }, "security": { "tls": { "certificateAuthorities": [ { "source": "data:text/plain;charset=utf-8;base64,.....==" } ] } }, "version": "3.2.0" }, "storage": { "disks": [ { "device": "/dev/nvme1n1", 1 "partitions": [ { "label": "var", "sizeMiB": 50000, 2 "startMiB": 0 3 } ] } ], "filesystems": [ { "device": "/dev/disk/by-partlabel/var", 4 "format": "xfs", 5 "path": "/var" 6 } ] }, "systemd": { "units": [ 7 { "contents": "[Unit]\nBefore=local-fs.target\n[Mount]\nWhere=/var\nWhat=/dev/disk/by-partlabel/var\nOptions=defaults,pquota\n[Install]\nWantedBy=local-fs.target\n", "enabled": true, "name": "var.mount" } ] } } 1 Specifies an absolute path to the AWS block device. 2 Specifies the size of the data partition in Mebibytes. 3 Specifies the start of the partition in Mebibytes. When adding a data partition to the boot disk, a minimum value of 25000 MB (Mebibytes) is recommended. The root file system is automatically resized to fill all available space up to the specified offset. If no value is specified, or if the specified value is smaller than the recommended minimum, the resulting root file system will be too small, and future reinstalls of RHCOS might overwrite the beginning of the data partition. 4 Specifies an absolute path to the /var partition. 5 Specifies the filesystem format. 6 Specifies the mount-point of the filesystem while Ignition is running relative to where the root filesystem will be mounted. This is not necessarily the same as where it should be mounted in the real root, but it is encouraged to make it the same. 7 Defines a systemd mount unit that mounts the /dev/disk/by-partlabel/var device to the /var partition. Extract the disableTemplating section from the work-user-data secret to a text file: USD oc get secret worker-user-data --template='{{index .data.disableTemplating | base64decode}}' | jq > disableTemplating.txt Create the new user data secret file from the two text files. This user data secret passes the additional node partition information in the userData.txt file to the newly created node. USD oc create secret generic worker-user-data-x5 --from-file=userData=userData.txt --from-file=disableTemplating=disableTemplating.txt Create a new compute machine set for the new node: Create a new compute machine set YAML file, similar to the following, which is configured for AWS. Add the required partitions and the newly-created user data secret: Tip Use an existing compute machine set as a template and change the parameters as needed for the new node. 
apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: auto-52-92tf4 name: worker-us-east-2-nvme1n1 1 namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: auto-52-92tf4 machine.openshift.io/cluster-api-machineset: auto-52-92tf4-worker-us-east-2b template: metadata: labels: machine.openshift.io/cluster-api-cluster: auto-52-92tf4 machine.openshift.io/cluster-api-machine-role: worker machine.openshift.io/cluster-api-machine-type: worker machine.openshift.io/cluster-api-machineset: auto-52-92tf4-worker-us-east-2b spec: metadata: {} providerSpec: value: ami: id: ami-0c2dbd95931a apiVersion: awsproviderconfig.openshift.io/v1beta1 blockDevices: - DeviceName: /dev/nvme1n1 2 ebs: encrypted: true iops: 0 volumeSize: 120 volumeType: gp2 - DeviceName: /dev/nvme1n2 3 ebs: encrypted: true iops: 0 volumeSize: 50 volumeType: gp2 credentialsSecret: name: aws-cloud-credentials deviceIndex: 0 iamInstanceProfile: id: auto-52-92tf4-worker-profile instanceType: m6i.large kind: AWSMachineProviderConfig metadata: creationTimestamp: null placement: availabilityZone: us-east-2b region: us-east-2 securityGroups: - filters: - name: tag:Name values: - auto-52-92tf4-worker-sg subnet: id: subnet-07a90e5db1 tags: - name: kubernetes.io/cluster/auto-52-92tf4 value: owned userDataSecret: name: worker-user-data-x5 4 1 Specifies a name for the new node. 2 Specifies an absolute path to the AWS block device, here an encrypted EBS volume. 3 Optional. Specifies an additional EBS volume. 4 Specifies the user data secret file. Create the compute machine set: USD oc create -f <file-name>.yaml The machines might take a few moments to become available. Verify that the new partition and nodes are created: Verify that the compute machine set is created: USD oc get machineset Example output NAME DESIRED CURRENT READY AVAILABLE AGE ci-ln-2675bt2-76ef8-bdgsc-worker-us-east-1a 1 1 1 1 124m ci-ln-2675bt2-76ef8-bdgsc-worker-us-east-1b 2 2 2 2 124m worker-us-east-2-nvme1n1 1 1 1 1 2m35s 1 1 This is the new compute machine set. Verify that the new node is created: USD oc get nodes Example output NAME STATUS ROLES AGE VERSION ip-10-0-128-78.ec2.internal Ready worker 117m v1.30.3 ip-10-0-146-113.ec2.internal Ready master 127m v1.30.3 ip-10-0-153-35.ec2.internal Ready worker 118m v1.30.3 ip-10-0-176-58.ec2.internal Ready master 126m v1.30.3 ip-10-0-217-135.ec2.internal Ready worker 2m57s v1.30.3 1 ip-10-0-225-248.ec2.internal Ready master 127m v1.30.3 ip-10-0-245-59.ec2.internal Ready worker 116m v1.30.3 1 This is new new node. Verify that the custom /var partition is created on the new node: USD oc debug node/<node-name> -- chroot /host lsblk For example: USD oc debug node/ip-10-0-217-135.ec2.internal -- chroot /host lsblk Example output NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT nvme0n1 202:0 0 120G 0 disk |-nvme0n1p1 202:1 0 1M 0 part |-nvme0n1p2 202:2 0 127M 0 part |-nvme0n1p3 202:3 0 384M 0 part /boot `-nvme0n1p4 202:4 0 119.5G 0 part /sysroot nvme1n1 202:16 0 50G 0 disk `-nvme1n1p1 202:17 0 48.8G 0 part /var 1 1 The nvme1n1 device is mounted to the /var partition. Additional resources For more information on how OpenShift Container Platform uses disk partitioning, see Disk partitioning . 5.3. Deploying machine health checks Understand and deploy machine health checks. Important You can use the advanced machine management and scaling capabilities only in clusters where the Machine API is operational. 
Clusters with user-provisioned infrastructure require additional validation and configuration to use the Machine API. Clusters with the infrastructure platform type none cannot use the Machine API. This limitation applies even if the compute machines that are attached to the cluster are installed on a platform that supports the feature. This parameter cannot be changed after installation. To view the platform type for your cluster, run the following command: USD oc get infrastructure cluster -o jsonpath='{.status.platform}' 5.3.1. About machine health checks Note You can only apply a machine health check to machines that are managed by compute machine sets or control plane machine sets. To monitor machine health, create a resource to define the configuration for a controller. Set a condition to check, such as staying in the NotReady status for five minutes or displaying a permanent condition in the node-problem-detector, and a label for the set of machines to monitor. The controller that observes a MachineHealthCheck resource checks for the defined condition. If a machine fails the health check, the machine is automatically deleted and one is created to take its place. When a machine is deleted, you see a machine deleted event. To limit disruptive impact of the machine deletion, the controller drains and deletes only one node at a time. If there are more unhealthy machines than the maxUnhealthy threshold allows for in the targeted pool of machines, remediation stops and therefore enables manual intervention. Note Consider the timeouts carefully, accounting for workloads and requirements. Long timeouts can result in long periods of downtime for the workload on the unhealthy machine. Too short timeouts can result in a remediation loop. For example, the timeout for checking the NotReady status must be long enough to allow the machine to complete the startup process. To stop the check, remove the resource. 5.3.1.1. Limitations when deploying machine health checks There are limitations to consider before deploying a machine health check: Only machines owned by a machine set are remediated by a machine health check. If the node for a machine is removed from the cluster, a machine health check considers the machine to be unhealthy and remediates it immediately. If the corresponding node for a machine does not join the cluster after the nodeStartupTimeout , the machine is remediated. A machine is remediated immediately if the Machine resource phase is Failed . Additional resources About control plane machine sets 5.3.2. Sample MachineHealthCheck resource The MachineHealthCheck resource for all cloud-based installation types, and other than bare metal, resembles the following YAML file: apiVersion: machine.openshift.io/v1beta1 kind: MachineHealthCheck metadata: name: example 1 namespace: openshift-machine-api spec: selector: matchLabels: machine.openshift.io/cluster-api-machine-role: <role> 2 machine.openshift.io/cluster-api-machine-type: <role> 3 machine.openshift.io/cluster-api-machineset: <cluster_name>-<label>-<zone> 4 unhealthyConditions: - type: "Ready" timeout: "300s" 5 status: "False" - type: "Ready" timeout: "300s" 6 status: "Unknown" maxUnhealthy: "40%" 7 nodeStartupTimeout: "10m" 8 1 Specify the name of the machine health check to deploy. 2 3 Specify a label for the machine pool that you want to check. 4 Specify the machine set to track in <cluster_name>-<label>-<zone> format. For example, prod-node-us-east-1a . 5 6 Specify the timeout duration for a node condition. 
If a condition is met for the duration of the timeout, the machine will be remediated. Long timeouts can result in long periods of downtime for a workload on an unhealthy machine. 7 Specify the amount of machines allowed to be concurrently remediated in the targeted pool. This can be set as a percentage or an integer. If the number of unhealthy machines exceeds the limit set by maxUnhealthy , remediation is not performed. 8 Specify the timeout duration that a machine health check must wait for a node to join the cluster before a machine is determined to be unhealthy. Note The matchLabels are examples only; you must map your machine groups based on your specific needs. 5.3.2.1. Short-circuiting machine health check remediation Short-circuiting ensures that machine health checks remediate machines only when the cluster is healthy. Short-circuiting is configured through the maxUnhealthy field in the MachineHealthCheck resource. If the user defines a value for the maxUnhealthy field, before remediating any machines, the MachineHealthCheck compares the value of maxUnhealthy with the number of machines within its target pool that it has determined to be unhealthy. Remediation is not performed if the number of unhealthy machines exceeds the maxUnhealthy limit. Important If maxUnhealthy is not set, the value defaults to 100% and the machines are remediated regardless of the state of the cluster. The appropriate maxUnhealthy value depends on the scale of the cluster you deploy and how many machines the MachineHealthCheck covers. For example, you can use the maxUnhealthy value to cover multiple compute machine sets across multiple availability zones so that if you lose an entire zone, your maxUnhealthy setting prevents further remediation within the cluster. In global Azure regions that do not have multiple availability zones, you can use availability sets to ensure high availability. Important If you configure a MachineHealthCheck resource for the control plane, set the value of maxUnhealthy to 1 . This configuration ensures that the machine health check takes no action when multiple control plane machines appear to be unhealthy. Multiple unhealthy control plane machines can indicate that the etcd cluster is degraded or that a scaling operation to replace a failed machine is in progress. If the etcd cluster is degraded, manual intervention might be required. If a scaling operation is in progress, the machine health check should allow it to finish. The maxUnhealthy field can be set as either an integer or percentage. There are different remediation implementations depending on the maxUnhealthy value. 5.3.2.1.1. Setting maxUnhealthy by using an absolute value If maxUnhealthy is set to 2 : Remediation will be performed if 2 or fewer nodes are unhealthy Remediation will not be performed if 3 or more nodes are unhealthy These values are independent of how many machines are being checked by the machine health check. 5.3.2.1.2. Setting maxUnhealthy by using percentages If maxUnhealthy is set to 40% and there are 25 machines being checked: Remediation will be performed if 10 or fewer nodes are unhealthy Remediation will not be performed if 11 or more nodes are unhealthy If maxUnhealthy is set to 40% and there are 6 machines being checked: Remediation will be performed if 2 or fewer nodes are unhealthy Remediation will not be performed if 3 or more nodes are unhealthy Note The allowed number of machines is rounded down when the percentage of maxUnhealthy machines that are checked is not a whole number. 
5.3.3. Creating a machine health check resource You can create a MachineHealthCheck resource for machine sets in your cluster. Note You can only apply a machine health check to machines that are managed by compute machine sets or control plane machine sets. Prerequisites Install the oc command line interface. Procedure Create a healthcheck.yml file that contains the definition of your machine health check. Apply the healthcheck.yml file to your cluster: USD oc apply -f healthcheck.yml 5.3.4. Scaling a compute machine set manually To add or remove an instance of a machine in a compute machine set, you can manually scale the compute machine set. This guidance is relevant to fully automated, installer-provisioned infrastructure installations. Customized, user-provisioned infrastructure installations do not have compute machine sets. Prerequisites Install an OpenShift Container Platform cluster and the oc command line. Log in to oc as a user with cluster-admin permission. Procedure View the compute machine sets that are in the cluster by running the following command: USD oc get machinesets.machine.openshift.io -n openshift-machine-api The compute machine sets are listed in the form of <clusterid>-worker-<aws-region-az> . View the compute machines that are in the cluster by running the following command: USD oc get machines.machine.openshift.io -n openshift-machine-api Set the annotation on the compute machine that you want to delete by running the following command: USD oc annotate machines.machine.openshift.io/<machine_name> -n openshift-machine-api machine.openshift.io/delete-machine="true" Scale the compute machine set by running one of the following commands: USD oc scale --replicas=2 machinesets.machine.openshift.io <machineset> -n openshift-machine-api Or: USD oc edit machinesets.machine.openshift.io <machineset> -n openshift-machine-api Tip You can alternatively apply the following YAML to scale the compute machine set: apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: name: <machineset> namespace: openshift-machine-api spec: replicas: 2 You can scale the compute machine set up or down. It takes several minutes for the new machines to be available. Important By default, the machine controller tries to drain the node that is backed by the machine until it succeeds. In some situations, such as with a misconfigured pod disruption budget, the drain operation might not be able to succeed. If the drain operation fails, the machine controller cannot proceed removing the machine. You can skip draining the node by annotating machine.openshift.io/exclude-node-draining in a specific machine. Verification Verify the deletion of the intended machine by running the following command: USD oc get machines.machine.openshift.io 5.3.5. Understanding the difference between compute machine sets and the machine config pool MachineSet objects describe OpenShift Container Platform nodes with respect to the cloud or machine provider. The MachineConfigPool object allows MachineConfigController components to define and provide the status of machines in the context of upgrades. The MachineConfigPool object allows users to configure how upgrades are rolled out to the OpenShift Container Platform nodes in the machine config pool. The NodeSelector object can be replaced with a reference to the MachineSet object. 5.4. Recommended node host practices The OpenShift Container Platform node configuration file contains important options. 
For example, two parameters control the maximum number of pods that can be scheduled to a node: podsPerCore and maxPods . When both options are in use, the lower of the two values limits the number of pods on a node. Exceeding these values can result in: Increased CPU utilization. Slow pod scheduling. Potential out-of-memory scenarios, depending on the amount of memory in the node. Exhausting the pool of IP addresses. Resource overcommitting, leading to poor user application performance. Important In Kubernetes, a pod that is holding a single container actually uses two containers. The second container is used to set up networking prior to the actual container starting. Therefore, a system running 10 pods will actually have 20 containers running. Note Disk IOPS throttling from the cloud provider might have an impact on CRI-O and kubelet. They might get overloaded when there are large number of I/O intensive pods running on the nodes. It is recommended that you monitor the disk I/O on the nodes and use volumes with sufficient throughput for the workload. The podsPerCore parameter sets the number of pods the node can run based on the number of processor cores on the node. For example, if podsPerCore is set to 10 on a node with 4 processor cores, the maximum number of pods allowed on the node will be 40 . kubeletConfig: podsPerCore: 10 Setting podsPerCore to 0 disables this limit. The default is 0 . The value of the podsPerCore parameter cannot exceed the value of the maxPods parameter. The maxPods parameter sets the number of pods the node can run to a fixed value, regardless of the properties of the node. kubeletConfig: maxPods: 250 5.4.1. Creating a KubeletConfig CRD to edit kubelet parameters The kubelet configuration is currently serialized as an Ignition configuration, so it can be directly edited. However, there is also a new kubelet-config-controller added to the Machine Config Controller (MCC). This lets you use a KubeletConfig custom resource (CR) to edit the kubelet parameters. Note As the fields in the kubeletConfig object are passed directly to the kubelet from upstream Kubernetes, the kubelet validates those values directly. Invalid values in the kubeletConfig object might cause cluster nodes to become unavailable. For valid values, see the Kubernetes documentation . Consider the following guidance: Edit an existing KubeletConfig CR to modify existing settings or add new settings, instead of creating a CR for each change. It is recommended that you create a CR only to modify a different machine config pool, or for changes that are intended to be temporary, so that you can revert the changes. Create one KubeletConfig CR for each machine config pool with all the config changes you want for that pool. As needed, create multiple KubeletConfig CRs with a limit of 10 per cluster. For the first KubeletConfig CR, the Machine Config Operator (MCO) creates a machine config appended with kubelet . With each subsequent CR, the controller creates another kubelet machine config with a numeric suffix. For example, if you have a kubelet machine config with a -2 suffix, the kubelet machine config is appended with -3 . Note If you are applying a kubelet or container runtime config to a custom machine config pool, the custom role in the machineConfigSelector must match the name of the custom machine config pool. 
For example, because the following custom machine config pool is named infra , the custom role must also be infra : apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfigPool metadata: name: infra spec: machineConfigSelector: matchExpressions: - {key: machineconfiguration.openshift.io/role, operator: In, values: [worker,infra]} # ... If you want to delete the machine configs, delete them in reverse order to avoid exceeding the limit. For example, you delete the kubelet-3 machine config before deleting the kubelet-2 machine config. Note If you have a machine config with a kubelet-9 suffix, and you create another KubeletConfig CR, a new machine config is not created, even if there are fewer than 10 kubelet machine configs. Example KubeletConfig CR USD oc get kubeletconfig NAME AGE set-kubelet-config 15m Example showing a KubeletConfig machine config USD oc get mc | grep kubelet ... 99-worker-generated-kubelet-1 b5c5119de007945b6fe6fb215db3b8e2ceb12511 3.2.0 26m ... The following procedure is an example that shows how to configure the maximum number of pods per node, the maximum number of PIDs per pod, and the maximum container log size on the worker nodes. Prerequisites Obtain the label associated with the static MachineConfigPool CR for the type of node you want to configure. Perform one of the following steps: View the machine config pool: USD oc describe machineconfigpool <name> For example: USD oc describe machineconfigpool worker Example output apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfigPool metadata: creationTimestamp: 2019-02-08T14:52:39Z generation: 1 labels: custom-kubelet: set-kubelet-config 1 1 If a label has been added, it appears under labels . If the label is not present, add a key/value pair: USD oc label machineconfigpool worker custom-kubelet=set-kubelet-config Procedure View the available machine configuration objects that you can select: USD oc get machineconfig By default, the two kubelet-related configs are 01-master-kubelet and 01-worker-kubelet . Check the current value for the maximum pods per node: USD oc describe node <node_name> For example: USD oc describe node ci-ln-5grqprb-f76d1-ncnqq-worker-a-mdv94 Look for value: pods: <value> in the Allocatable stanza: Example output Allocatable: attachable-volumes-aws-ebs: 25 cpu: 3500m hugepages-1Gi: 0 hugepages-2Mi: 0 memory: 15341844Ki pods: 250 Configure the worker nodes as needed: Create a YAML file similar to the following that contains the kubelet configuration: Important Kubelet configurations that target a specific machine config pool also affect any dependent pools. For example, creating a kubelet configuration for the pool containing worker nodes also applies to any subset pools, including the pool containing infrastructure nodes. To avoid this, you must create a new machine config pool with a selection expression that only includes worker nodes, and have your kubelet configuration target this new pool. apiVersion: machineconfiguration.openshift.io/v1 kind: KubeletConfig metadata: name: set-kubelet-config spec: machineConfigPoolSelector: matchLabels: custom-kubelet: set-kubelet-config 1 kubeletConfig: 2 podPidsLimit: 8192 containerLogMaxSize: 50Mi maxPods: 500 1 Enter the label from the machine config pool. 2 Add the kubelet configuration. For example: Use podPidsLimit to set the maximum number of PIDs in any pod. Use containerLogMaxSize to set the maximum size of the container log file before it is rotated. Use maxPods to set the maximum number of pods per node.
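After the KubeletConfig object has been created and rolled out in the steps that follow, you can optionally spot-check the rendered kubelet configuration directly on a worker node. This is an illustrative sketch that follows the same oc debug pattern used elsewhere in this document; the node name is a placeholder, and the grep pattern simply matches the three fields set in the example above.

USD oc debug node/<node_name>

sh-4.2# cat /host/etc/kubernetes/kubelet.conf | grep -E 'maxPods|podPidsLimit|containerLogMaxSize'

If the configuration has been applied, the output should list the values you set, for example maxPods: 500 .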
Note The rate at which the kubelet talks to the API server depends on queries per second (QPS) and burst values. The default values, 50 for kubeAPIQPS and 100 for kubeAPIBurst , are sufficient if there are limited pods running on each node. It is recommended to update the kubelet QPS and burst rates if there are enough CPU and memory resources on the node. apiVersion: machineconfiguration.openshift.io/v1 kind: KubeletConfig metadata: name: set-kubelet-config spec: machineConfigPoolSelector: matchLabels: custom-kubelet: set-kubelet-config kubeletConfig: maxPods: <pod_count> kubeAPIBurst: <burst_rate> kubeAPIQPS: <QPS> Update the machine config pool for workers with the label: USD oc label machineconfigpool worker custom-kubelet=set-kubelet-config Create the KubeletConfig object: USD oc create -f change-maxPods-cr.yaml Verification Verify that the KubeletConfig object is created: USD oc get kubeletconfig Example output NAME AGE set-kubelet-config 15m Depending on the number of worker nodes in the cluster, wait for the worker nodes to be rebooted one by one. For a cluster with 3 worker nodes, this could take about 10 to 15 minutes. Verify that the changes are applied to the node: Check on a worker node that the maxPods value changed: USD oc describe node <node_name> Locate the Allocatable stanza: ... Allocatable: attachable-volumes-gce-pd: 127 cpu: 3500m ephemeral-storage: 123201474766 hugepages-1Gi: 0 hugepages-2Mi: 0 memory: 14225400Ki pods: 500 1 ... 1 In this example, the pods parameter should report the value you set in the KubeletConfig object. Verify the change in the KubeletConfig object: USD oc get kubeletconfigs set-kubelet-config -o yaml This should show a status of True and type:Success , as shown in the following example: spec: kubeletConfig: containerLogMaxSize: 50Mi maxPods: 500 podPidsLimit: 8192 machineConfigPoolSelector: matchLabels: custom-kubelet: set-kubelet-config status: conditions: - lastTransitionTime: "2021-06-30T17:04:07Z" message: Success status: "True" type: Success 5.4.2. Modifying the number of unavailable worker nodes By default, only one machine is allowed to be unavailable when applying the kubelet-related configuration to the available worker nodes. For a large cluster, it can take a long time for the configuration change to be reflected. At any time, you can adjust the number of machines that are updating to speed up the process. Procedure Edit the worker machine config pool: USD oc edit machineconfigpool worker Add the maxUnavailable field and set the value: spec: maxUnavailable: <node_count> Important When setting the value, consider the number of worker nodes that can be unavailable without affecting the applications running on the cluster. 5.4.3. Control plane node sizing The control plane node resource requirements depend on the number and type of nodes and objects in the cluster. The following control plane node size recommendations are based on the results of a control plane density focused testing, or Cluster-density . 
This test creates the following objects across a given number of namespaces: 1 image stream 1 build 5 deployments, with 2 pod replicas in a sleep state, mounting 4 secrets, 4 config maps, and 1 downward API volume each 5 services, each one pointing to the TCP/8080 and TCP/8443 ports of one of the deployments 1 route pointing to the first of the services 10 secrets containing 2048 random string characters 10 config maps containing 2048 random string characters Number of worker nodes Cluster-density (namespaces) CPU cores Memory (GB) 24 500 4 16 120 1000 8 32 252 4000 16, but 24 if using the OVN-Kubernetes network plug-in 64, but 128 if using the OVN-Kubernetes network plug-in 501, but untested with the OVN-Kubernetes network plug-in 4000 16 96 The data from the table above is based on an OpenShift Container Platform running on top of AWS, using r5.4xlarge instances as control-plane nodes and m5.2xlarge instances as worker nodes. On a large and dense cluster with three control plane nodes, the CPU and memory usage will spike up when one of the nodes is stopped, rebooted, or fails. The failures can be due to unexpected issues with power, network, underlying infrastructure, or intentional cases where the cluster is restarted after shutting it down to save costs. The remaining two control plane nodes must handle the load in order to be highly available, which leads to increase in the resource usage. This is also expected during upgrades because the control plane nodes are cordoned, drained, and rebooted serially to apply the operating system updates, as well as the control plane Operators update. To avoid cascading failures, keep the overall CPU and memory resource usage on the control plane nodes to at most 60% of all available capacity to handle the resource usage spikes. Increase the CPU and memory on the control plane nodes accordingly to avoid potential downtime due to lack of resources. Important The node sizing varies depending on the number of nodes and object counts in the cluster. It also depends on whether the objects are actively being created on the cluster. During object creation, the control plane is more active in terms of resource usage compared to when the objects are in the Running phase. Operator Lifecycle Manager (OLM) runs on the control plane nodes and its memory footprint depends on the number of namespaces and user installed operators that OLM needs to manage on the cluster. Control plane nodes need to be sized accordingly to avoid OOM kills. Following data points are based on the results from cluster maximums testing. Number of namespaces OLM memory at idle state (GB) OLM memory with 5 user operators installed (GB) 500 0.823 1.7 1000 1.2 2.5 1500 1.7 3.2 2000 2 4.4 3000 2.7 5.6 4000 3.8 7.6 5000 4.2 9.02 6000 5.8 11.3 7000 6.6 12.9 8000 6.9 14.8 9000 8 17.7 10,000 9.9 21.6 Important You can modify the control plane node size in a running OpenShift Container Platform 4.17 cluster for the following configurations only: Clusters installed with a user-provisioned installation method. AWS clusters installed with an installer-provisioned infrastructure installation method. Clusters that use a control plane machine set to manage control plane machines. For all other configurations, you must estimate your total node count and use the suggested control plane node size during installation. Note In OpenShift Container Platform 4.17, half of a CPU core (500 millicore) is now reserved by the system by default compared to OpenShift Container Platform 3.11 and versions. 
The sizes are determined taking that into consideration. 5.4.4. Setting up CPU Manager To configure CPU manager, create a KubeletConfig custom resource (CR) and apply it to the desired set of nodes. Procedure Label a node by running the following command: # oc label node perf-node.example.com cpumanager=true To enable CPU Manager for all compute nodes, edit the CR by running the following command: # oc edit machineconfigpool worker Add the custom-kubelet: cpumanager-enabled label to metadata.labels section. metadata: creationTimestamp: 2020-xx-xxx generation: 3 labels: custom-kubelet: cpumanager-enabled Create a KubeletConfig , cpumanager-kubeletconfig.yaml , custom resource (CR). Refer to the label created in the step to have the correct nodes updated with the new kubelet config. See the machineConfigPoolSelector section: apiVersion: machineconfiguration.openshift.io/v1 kind: KubeletConfig metadata: name: cpumanager-enabled spec: machineConfigPoolSelector: matchLabels: custom-kubelet: cpumanager-enabled kubeletConfig: cpuManagerPolicy: static 1 cpuManagerReconcilePeriod: 5s 2 1 Specify a policy: none . This policy explicitly enables the existing default CPU affinity scheme, providing no affinity beyond what the scheduler does automatically. This is the default policy. static . This policy allows containers in guaranteed pods with integer CPU requests. It also limits access to exclusive CPUs on the node. If static , you must use a lowercase s . 2 Optional. Specify the CPU Manager reconcile frequency. The default is 5s . Create the dynamic kubelet config by running the following command: # oc create -f cpumanager-kubeletconfig.yaml This adds the CPU Manager feature to the kubelet config and, if needed, the Machine Config Operator (MCO) reboots the node. To enable CPU Manager, a reboot is not needed. Check for the merged kubelet config by running the following command: # oc get machineconfig 99-worker-XXXXXX-XXXXX-XXXX-XXXXX-kubelet -o json | grep ownerReference -A7 Example output "ownerReferences": [ { "apiVersion": "machineconfiguration.openshift.io/v1", "kind": "KubeletConfig", "name": "cpumanager-enabled", "uid": "7ed5616d-6b72-11e9-aae1-021e1ce18878" } ] Check the compute node for the updated kubelet.conf file by running the following command: # oc debug node/perf-node.example.com sh-4.2# cat /host/etc/kubernetes/kubelet.conf | grep cpuManager Example output cpuManagerPolicy: static 1 cpuManagerReconcilePeriod: 5s 2 1 cpuManagerPolicy is defined when you create the KubeletConfig CR. 2 cpuManagerReconcilePeriod is defined when you create the KubeletConfig CR. Create a project by running the following command: USD oc new-project <project_name> Create a pod that requests a core or multiple cores. Both limits and requests must have their CPU value set to a whole integer. 
That is the number of cores that will be dedicated to this pod: # cat cpumanager-pod.yaml Example output apiVersion: v1 kind: Pod metadata: generateName: cpumanager- spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault containers: - name: cpumanager image: gcr.io/google_containers/pause:3.2 resources: requests: cpu: 1 memory: "1G" limits: cpu: 1 memory: "1G" securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL] nodeSelector: cpumanager: "true" Create the pod: # oc create -f cpumanager-pod.yaml Verification Verify that the pod is scheduled to the node that you labeled by running the following command: # oc describe pod cpumanager Example output Name: cpumanager-6cqz7 Namespace: default Priority: 0 PriorityClassName: <none> Node: perf-node.example.com/xxx.xx.xx.xxx ... Limits: cpu: 1 memory: 1G Requests: cpu: 1 memory: 1G ... QoS Class: Guaranteed Node-Selectors: cpumanager=true Verify that a CPU has been exclusively assigned to the pod by running the following command: # oc describe node --selector='cpumanager=true' | grep -i cpumanager- -B2 Example output NAMESPACE NAME CPU Requests CPU Limits Memory Requests Memory Limits Age cpuman cpumanager-mlrrz 1 (28%) 1 (28%) 1G (13%) 1G (13%) 27m Verify that the cgroups are set up correctly. Get the process ID (PID) of the pause process by running the following commands: # oc debug node/perf-node.example.com sh-4.2# systemctl status | grep -B5 pause Note If the output returns multiple pause process entries, you must identify the correct pause process. Example output # ├─init.scope │ └─1 /usr/lib/systemd/systemd --switched-root --system --deserialize 17 └─kubepods.slice ├─kubepods-pod69c01f8e_6b74_11e9_ac0f_0a2b62178a22.slice │ ├─crio-b5437308f1a574c542bdf08563b865c0345c8f8c0b0a655612c.scope │ └─32706 /pause Verify that pods of quality of service (QoS) tier Guaranteed are placed within the kubepods.slice subdirectory by running the following commands: # cd /sys/fs/cgroup/kubepods.slice/kubepods-pod69c01f8e_6b74_11e9_ac0f_0a2b62178a22.slice/crio-b5437308f1ad1a7db0574c542bdf08563b865c0345c86e9585f8c0b0a655612c.scope # for i in `ls cpuset.cpus cgroup.procs` ; do echo -n "USDi "; cat USDi ; done Note Pods of other QoS tiers end up in child cgroups of the parent kubepods . Example output cpuset.cpus 1 tasks 32706 Check the allowed CPU list for the task by running the following command: # grep ^Cpus_allowed_list /proc/32706/status Example output Cpus_allowed_list: 1 Verify that another pod on the system cannot run on the core allocated for the Guaranteed pod. For example, to verify the pod in the besteffort QoS tier, run the following commands: # cat /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc494a073_6b77_11e9_98c0_06bba5c387ea.slice/crio-c56982f57b75a2420947f0afc6cafe7534c5734efc34157525fa9abbf99e3849.scope/cpuset.cpus # oc describe node perf-node.example.com Example output ... Capacity: attachable-volumes-aws-ebs: 39 cpu: 2 ephemeral-storage: 124768236Ki hugepages-1Gi: 0 hugepages-2Mi: 0 memory: 8162900Ki pods: 250 Allocatable: attachable-volumes-aws-ebs: 39 cpu: 1500m ephemeral-storage: 124768236Ki hugepages-1Gi: 0 hugepages-2Mi: 0 memory: 7548500Ki pods: 250 ------- ---- ------------ ---------- --------------- ------------- --- default cpumanager-6cqz7 1 (66%) 1 (66%) 1G (12%) 1G (12%) 29m Allocated resources: (Total limits may be over 100 percent, i.e., overcommitted.) Resource Requests Limits -------- -------- ------ cpu 1440m (96%) 1 (66%) This VM has two CPU cores. 
The system-reserved setting reserves 500 millicores, meaning that half of one core is subtracted from the total capacity of the node to arrive at the Node Allocatable amount. You can see that Allocatable CPU is 1500 millicores. This means you can run one of the CPU Manager pods since each will take one whole core. A whole core is equivalent to 1000 millicores. If you try to schedule a second pod, the system will accept the pod, but it will never be scheduled: NAME READY STATUS RESTARTS AGE cpumanager-6cqz7 1/1 Running 0 33m cpumanager-7qc2t 0/1 Pending 0 11s 5.5. Huge pages Understand and configure huge pages. 5.5.1. What huge pages do Memory is managed in blocks known as pages. On most systems, a page is 4Ki. 1Mi of memory is equal to 256 pages; 1Gi of memory is 256,000 pages, and so on. CPUs have a built-in memory management unit that manages a list of these pages in hardware. The Translation Lookaside Buffer (TLB) is a small hardware cache of virtual-to-physical page mappings. If the virtual address passed in a hardware instruction can be found in the TLB, the mapping can be determined quickly. If not, a TLB miss occurs, and the system falls back to slower, software-based address translation, resulting in performance issues. Since the size of the TLB is fixed, the only way to reduce the chance of a TLB miss is to increase the page size. A huge page is a memory page that is larger than 4Ki. On x86_64 architectures, there are two common huge page sizes: 2Mi and 1Gi. Sizes vary on other architectures. To use huge pages, code must be written so that applications are aware of them. Transparent Huge Pages (THP) attempt to automate the management of huge pages without application knowledge, but they have limitations. In particular, they are limited to 2Mi page sizes. THP can lead to performance degradation on nodes with high memory utilization or fragmentation due to defragmenting efforts of THP, which can lock memory pages. For this reason, some applications may be designed to (or recommend) usage of pre-allocated huge pages instead of THP. 5.5.2. How huge pages are consumed by apps Nodes must pre-allocate huge pages in order for the node to report its huge page capacity. A node can only pre-allocate huge pages for a single size. Huge pages can be consumed through container-level resource requirements using the resource name hugepages-<size> , where size is the most compact binary notation using integer values supported on a particular node. For example, if a node supports 2048KiB page sizes, it exposes a schedulable resource hugepages-2Mi . Unlike CPU or memory, huge pages do not support over-commitment. apiVersion: v1 kind: Pod metadata: generateName: hugepages-volume- spec: containers: - securityContext: privileged: true image: rhel7:latest command: - sleep - inf name: example volumeMounts: - mountPath: /dev/hugepages name: hugepage resources: limits: hugepages-2Mi: 100Mi 1 memory: "1Gi" cpu: "1" volumes: - name: hugepage emptyDir: medium: HugePages 1 Specify the amount of memory for hugepages as the exact amount to be allocated. Do not specify this value as the amount of memory for hugepages multiplied by the size of the page. For example, given a huge page size of 2MB, if you want to use 100MB of huge-page-backed RAM for your application, then you would allocate 50 huge pages. OpenShift Container Platform handles the math for you. As in the above example, you can specify 100MB directly. Allocating huge pages of a specific size Some platforms support multiple huge page sizes. 
To allocate huge pages of a specific size, precede the huge pages boot command parameters with a huge page size selection parameter hugepagesz=<size> . The <size> value must be specified in bytes with an optional scale suffix [ kKmMgG ]. The default huge page size can be defined with the default_hugepagesz=<size> boot parameter. Huge page requirements Huge page requests must equal the limits. This is the default if limits are specified, but requests are not. Huge pages are isolated at a pod scope. Container isolation is planned in a future iteration. EmptyDir volumes backed by huge pages must not consume more huge page memory than the pod request. Applications that consume huge pages via shmget() with SHM_HUGETLB must run with a supplemental group that matches proc/sys/vm/hugetlb_shm_group . 5.5.3. Configuring huge pages at boot time Nodes must pre-allocate huge pages used in an OpenShift Container Platform cluster. There are two ways of reserving huge pages: at boot time and at run time. Reserving at boot time increases the possibility of success because the memory has not yet been significantly fragmented. The Node Tuning Operator currently supports boot time allocation of huge pages on specific nodes. Procedure To minimize node reboots, the order of the steps below needs to be followed: Label all nodes that need the same huge pages setting by a label. USD oc label node <node_using_hugepages> node-role.kubernetes.io/worker-hp= Create a file with the following content and name it hugepages-tuned-boottime.yaml : apiVersion: tuned.openshift.io/v1 kind: Tuned metadata: name: hugepages 1 namespace: openshift-cluster-node-tuning-operator spec: profile: 2 - data: | [main] summary=Boot time configuration for hugepages include=openshift-node [bootloader] cmdline_openshift_node_hugepages=hugepagesz=2M hugepages=50 3 name: openshift-node-hugepages recommend: - machineConfigLabels: 4 machineconfiguration.openshift.io/role: "worker-hp" priority: 30 profile: openshift-node-hugepages 1 Set the name of the Tuned resource to hugepages . 2 Set the profile section to allocate huge pages. 3 Note the order of parameters is important as some platforms support huge pages of various sizes. 4 Enable machine config pool based matching. Create the Tuned hugepages object USD oc create -f hugepages-tuned-boottime.yaml Create a file with the following content and name it hugepages-mcp.yaml : apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfigPool metadata: name: worker-hp labels: worker-hp: "" spec: machineConfigSelector: matchExpressions: - {key: machineconfiguration.openshift.io/role, operator: In, values: [worker,worker-hp]} nodeSelector: matchLabels: node-role.kubernetes.io/worker-hp: "" Create the machine config pool: USD oc create -f hugepages-mcp.yaml Given enough non-fragmented memory, all the nodes in the worker-hp machine config pool should now have 50 2Mi huge pages allocated. USD oc get node <node_using_hugepages> -o jsonpath="{.status.allocatable.hugepages-2Mi}" 100Mi Note The TuneD bootloader plugin only supports Red Hat Enterprise Linux CoreOS (RHCOS) worker nodes. 5.6. Understanding device plugins The device plugin provides a consistent and portable solution to consume hardware devices across clusters. The device plugin provides support for these devices through an extension mechanism, which makes these devices available to Containers, provides health checks of these devices, and securely shares them. 
Important OpenShift Container Platform supports the device plugin API, but the device plugin Containers are supported by individual vendors. A device plugin is a gRPC service running on the nodes (external to the kubelet ) that is responsible for managing specific hardware resources. Any device plugin must support following remote procedure calls (RPCs): service DevicePlugin { // GetDevicePluginOptions returns options to be communicated with Device // Manager rpc GetDevicePluginOptions(Empty) returns (DevicePluginOptions) {} // ListAndWatch returns a stream of List of Devices // Whenever a Device state change or a Device disappears, ListAndWatch // returns the new list rpc ListAndWatch(Empty) returns (stream ListAndWatchResponse) {} // Allocate is called during container creation so that the Device // Plug-in can run device specific operations and instruct Kubelet // of the steps to make the Device available in the container rpc Allocate(AllocateRequest) returns (AllocateResponse) {} // PreStartcontainer is called, if indicated by Device Plug-in during // registration phase, before each container start. Device plug-in // can run device specific operations such as resetting the device // before making devices available to the container rpc PreStartcontainer(PreStartcontainerRequest) returns (PreStartcontainerResponse) {} } Example device plugins Nvidia GPU device plugin for COS-based operating system Nvidia official GPU device plugin Solarflare device plugin KubeVirt device plugins: vfio and kvm Kubernetes device plugin for IBM(R) Crypto Express (CEX) cards Note For easy device plugin reference implementation, there is a stub device plugin in the Device Manager code: vendor/k8s.io/kubernetes/pkg/kubelet/cm/deviceplugin/device_plugin_stub.go . 5.6.1. Methods for deploying a device plugin Daemon sets are the recommended approach for device plugin deployments. Upon start, the device plugin will try to create a UNIX domain socket at /var/lib/kubelet/device-plugin/ on the node to serve RPCs from Device Manager. Since device plugins must manage hardware resources, access to the host file system, as well as socket creation, they must be run in a privileged security context. More specific details regarding deployment steps can be found with each device plugin implementation. 5.6.2. Understanding the Device Manager Device Manager provides a mechanism for advertising specialized node hardware resources with the help of plugins known as device plugins. You can advertise specialized hardware without requiring any upstream code changes. Important OpenShift Container Platform supports the device plugin API, but the device plugin Containers are supported by individual vendors. Device Manager advertises devices as Extended Resources . User pods can consume devices, advertised by Device Manager, using the same Limit/Request mechanism, which is used for requesting any other Extended Resource . Upon start, the device plugin registers itself with Device Manager invoking Register on the /var/lib/kubelet/device-plugins/kubelet.sock and starts a gRPC service at /var/lib/kubelet/device-plugins/<plugin>.sock for serving Device Manager requests. Device Manager, while processing a new registration request, invokes ListAndWatch remote procedure call (RPC) at the device plugin service. In response, Device Manager gets a list of Device objects from the plugin over a gRPC stream. Device Manager will keep watching on the stream for new updates from the plugin. 
On the plugin side, the plugin will also keep the stream open and whenever there is a change in the state of any of the devices, a new device list is sent to the Device Manager over the same streaming connection. While handling a new pod admission request, Kubelet passes requested Extended Resources to the Device Manager for device allocation. Device Manager checks in its database to verify if a corresponding plugin exists or not. If the plugin exists and there are free allocatable devices as well as per local cache, Allocate RPC is invoked at that particular device plugin. Additionally, device plugins can also perform several other device-specific operations, such as driver installation, device initialization, and device resets. These functionalities vary from implementation to implementation. 5.6.3. Enabling Device Manager Enable Device Manager to implement a device plugin to advertise specialized hardware without any upstream code changes. Device Manager provides a mechanism for advertising specialized node hardware resources with the help of plugins known as device plugins. Obtain the label associated with the static MachineConfigPool CRD for the type of node you want to configure by entering the following command. Perform one of the following steps: View the machine config: # oc describe machineconfig <name> For example: # oc describe machineconfig 00-worker Example output Name: 00-worker Namespace: Labels: machineconfiguration.openshift.io/role=worker 1 1 Label required for the Device Manager. Procedure Create a custom resource (CR) for your configuration change. Sample configuration for a Device Manager CR apiVersion: machineconfiguration.openshift.io/v1 kind: KubeletConfig metadata: name: devicemgr 1 spec: machineConfigPoolSelector: matchLabels: machineconfiguration.openshift.io: devicemgr 2 kubeletConfig: feature-gates: - DevicePlugins=true 3 1 Assign a name to CR. 2 Enter the label from the Machine Config Pool. 3 Set DevicePlugins to 'true`. Create the Device Manager: USD oc create -f devicemgr.yaml Example output kubeletconfig.machineconfiguration.openshift.io/devicemgr created Ensure that Device Manager was actually enabled by confirming that /var/lib/kubelet/device-plugins/kubelet.sock is created on the node. This is the UNIX domain socket on which the Device Manager gRPC server listens for new plugin registrations. This sock file is created when the Kubelet is started only if Device Manager is enabled. 5.7. Taints and tolerations Understand and work with taints and tolerations. 5.7.1. Understanding taints and tolerations A taint allows a node to refuse a pod to be scheduled unless that pod has a matching toleration . You apply taints to a node through the Node specification ( NodeSpec ) and apply tolerations to a pod through the Pod specification ( PodSpec ). When you apply a taint to a node, the scheduler cannot place a pod on that node unless the pod can tolerate the taint. Example taint in a node specification apiVersion: v1 kind: Node metadata: name: my-node #... spec: taints: - effect: NoExecute key: key1 value: value1 #... Example toleration in a Pod spec apiVersion: v1 kind: Pod metadata: name: my-pod #... spec: tolerations: - key: "key1" operator: "Equal" value: "value1" effect: "NoExecute" tolerationSeconds: 3600 #... Taints and tolerations consist of a key, value, and effect. Table 5.1. Taint and toleration components Parameter Description key The key is any string, up to 253 characters. 
The key must begin with a letter or number, and may contain letters, numbers, hyphens, dots, and underscores. value The value is any string, up to 63 characters. The value must begin with a letter or number, and may contain letters, numbers, hyphens, dots, and underscores. effect The effect is one of the following: NoSchedule [1] New pods that do not match the taint are not scheduled onto that node. Existing pods on the node remain. PreferNoSchedule New pods that do not match the taint might be scheduled onto that node, but the scheduler tries not to. Existing pods on the node remain. NoExecute New pods that do not match the taint cannot be scheduled onto that node. Existing pods on the node that do not have a matching toleration are removed. operator Equal The key / value / effect parameters must match. This is the default. Exists The key / effect parameters must match. You must leave a blank value parameter, which matches any. If you add a NoSchedule taint to a control plane node, the node must have the node-role.kubernetes.io/master=:NoSchedule taint, which is added by default. For example: apiVersion: v1 kind: Node metadata: annotations: machine.openshift.io/machine: openshift-machine-api/ci-ln-62s7gtb-f76d1-v8jxv-master-0 machineconfiguration.openshift.io/currentConfig: rendered-master-cdc1ab7da414629332cc4c3926e6e59c name: my-node #... spec: taints: - effect: NoSchedule key: node-role.kubernetes.io/master #... A toleration matches a taint: If the operator parameter is set to Equal : the key parameters are the same; the value parameters are the same; the effect parameters are the same. If the operator parameter is set to Exists : the key parameters are the same; the effect parameters are the same. The following taints are built into OpenShift Container Platform: node.kubernetes.io/not-ready : The node is not ready. This corresponds to the node condition Ready=False . node.kubernetes.io/unreachable : The node is unreachable from the node controller. This corresponds to the node condition Ready=Unknown . node.kubernetes.io/memory-pressure : The node has memory pressure issues. This corresponds to the node condition MemoryPressure=True . node.kubernetes.io/disk-pressure : The node has disk pressure issues. This corresponds to the node condition DiskPressure=True . node.kubernetes.io/network-unavailable : The node network is unavailable. node.kubernetes.io/unschedulable : The node is unschedulable. node.cloudprovider.kubernetes.io/uninitialized : When the node controller is started with an external cloud provider, this taint is set on a node to mark it as unusable. After a controller from the cloud-controller-manager initializes this node, the kubelet removes this taint. node.kubernetes.io/pid-pressure : The node has pid pressure. This corresponds to the node condition PIDPressure=True . Important OpenShift Container Platform does not set a default pid.available evictionHard . 5.7.2. Adding taints and tolerations You add tolerations to pods and taints to nodes to allow the node to control which pods should or should not be scheduled on them. For existing pods and nodes, you should add the toleration to the pod first, then add the taint to the node to avoid pods being removed from the node before you can add the toleration. Procedure Add a toleration to a pod by editing the Pod spec to include a tolerations stanza: Sample pod configuration file with an Equal operator apiVersion: v1 kind: Pod metadata: name: my-pod #... 
spec: tolerations: - key: "key1" 1 value: "value1" operator: "Equal" effect: "NoExecute" tolerationSeconds: 3600 2 #... 1 The toleration parameters, as described in the Taint and toleration components table. 2 The tolerationSeconds parameter specifies how long a pod can remain bound to a node before being evicted. For example: Sample pod configuration file with an Exists operator apiVersion: v1 kind: Pod metadata: name: my-pod #... spec: tolerations: - key: "key1" operator: "Exists" 1 effect: "NoExecute" tolerationSeconds: 3600 #... 1 The Exists operator does not take a value . This example places a taint on node1 that has key key1 , value value1 , and taint effect NoExecute . Add a taint to a node by using the following command with the parameters described in the Taint and toleration components table: USD oc adm taint nodes <node_name> <key>=<value>:<effect> For example: USD oc adm taint nodes node1 key1=value1:NoExecute This command places a taint on node1 that has key key1 , value value1 , and effect NoExecute . Note If you add a NoSchedule taint to a control plane node, the node must have the node-role.kubernetes.io/master=:NoSchedule taint, which is added by default. For example: apiVersion: v1 kind: Node metadata: annotations: machine.openshift.io/machine: openshift-machine-api/ci-ln-62s7gtb-f76d1-v8jxv-master-0 machineconfiguration.openshift.io/currentConfig: rendered-master-cdc1ab7da414629332cc4c3926e6e59c name: my-node #... spec: taints: - effect: NoSchedule key: node-role.kubernetes.io/master #... The tolerations on the pod match the taint on the node. A pod with either toleration can be scheduled onto node1 . 5.7.3. Adding taints and tolerations using a compute machine set You can add taints to nodes using a compute machine set. All nodes associated with the MachineSet object are updated with the taint. Tolerations respond to taints added by a compute machine set in the same manner as taints added directly to the nodes. Procedure Add a toleration to a pod by editing the Pod spec to include a tolerations stanza: Sample pod configuration file with Equal operator apiVersion: v1 kind: Pod metadata: name: my-pod #... spec: tolerations: - key: "key1" 1 value: "value1" operator: "Equal" effect: "NoExecute" tolerationSeconds: 3600 2 #... 1 The toleration parameters, as described in the Taint and toleration components table. 2 The tolerationSeconds parameter specifies how long a pod is bound to a node before being evicted. For example: Sample pod configuration file with Exists operator apiVersion: v1 kind: Pod metadata: name: my-pod #... spec: tolerations: - key: "key1" operator: "Exists" effect: "NoExecute" tolerationSeconds: 3600 #... Add the taint to the MachineSet object: Edit the MachineSet YAML for the nodes you want to taint or you can create a new MachineSet object: USD oc edit machineset <machineset> Add the taint to the spec.template.spec section: Example taint in a compute machine set specification apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: name: my-machineset #... spec: #... template: #... spec: taints: - effect: NoExecute key: key1 value: value1 #... This example places a taint that has the key key1 , value value1 , and taint effect NoExecute on the nodes. 
Scale down the compute machine set to 0: USD oc scale --replicas=0 machineset <machineset> -n openshift-machine-api Tip You can alternatively apply the following YAML to scale the compute machine set: apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: name: <machineset> namespace: openshift-machine-api spec: replicas: 0 Wait for the machines to be removed. Scale up the compute machine set as needed: USD oc scale --replicas=2 machineset <machineset> -n openshift-machine-api Or: USD oc edit machineset <machineset> -n openshift-machine-api Wait for the machines to start. The taint is added to the nodes associated with the MachineSet object. 5.7.4. Binding a user to a node using taints and tolerations If you want to dedicate a set of nodes for exclusive use by a particular set of users, add a toleration to their pods. Then, add a corresponding taint to those nodes. The pods with the tolerations are allowed to use the tainted nodes or any other nodes in the cluster. If you want to ensure that the pods are scheduled only to those tainted nodes, also add a label to the same set of nodes and add a node affinity to the pods so that the pods can only be scheduled onto nodes with that label. Procedure To configure a node so that users can use only that node: Add a corresponding taint to those nodes: For example: USD oc adm taint nodes node1 dedicated=groupName:NoSchedule Tip You can alternatively apply the following YAML to add the taint: kind: Node apiVersion: v1 metadata: name: my-node #... spec: taints: - key: dedicated value: groupName effect: NoSchedule #... Add a toleration to the pods by writing a custom admission controller. 5.7.5. Controlling nodes with special hardware using taints and tolerations In a cluster where a small subset of nodes have specialized hardware, you can use taints and tolerations to keep pods that do not need the specialized hardware off those nodes, leaving the nodes for pods that do need the specialized hardware. You can also require pods that need specialized hardware to use specific nodes. You can achieve this by adding a toleration to pods that need the special hardware and tainting the nodes that have the specialized hardware. Procedure To ensure nodes with specialized hardware are reserved for specific pods: Add a toleration to pods that need the special hardware. For example: apiVersion: v1 kind: Pod metadata: name: my-pod #... spec: tolerations: - key: "disktype" value: "ssd" operator: "Equal" effect: "NoSchedule" tolerationSeconds: 3600 #... Taint the nodes that have the specialized hardware using one of the following commands: USD oc adm taint nodes <node-name> disktype=ssd:NoSchedule Or: USD oc adm taint nodes <node-name> disktype=ssd:PreferNoSchedule Tip You can alternatively apply the following YAML to add the taint: kind: Node apiVersion: v1 metadata: name: my-node #... spec: taints: - key: disktype value: ssd effect: PreferNoSchedule #... 5.7.6. Removing taints and tolerations You can remove taints from nodes and tolerations from pods as needed. You should add the toleration to the pod first, then add the taint to the node to avoid pods being removed from the node before you can add the toleration.
Procedure To remove taints and tolerations: To remove a taint from a node: USD oc adm taint nodes <node-name> <key>- For example: USD oc adm taint nodes ip-10-0-132-248.ec2.internal key1- Example output node/ip-10-0-132-248.ec2.internal untainted To remove a toleration from a pod, edit the Pod spec to remove the toleration: apiVersion: v1 kind: Pod metadata: name: my-pod #... spec: tolerations: - key: "key2" operator: "Exists" effect: "NoExecute" tolerationSeconds: 3600 #... 5.8. Topology Manager Understand and work with Topology Manager. 5.8.1. Topology Manager policies Topology Manager aligns Pod resources of all Quality of Service (QoS) classes by collecting topology hints from Hint Providers, such as CPU Manager and Device Manager, and using the collected hints to align the Pod resources. Topology Manager supports four allocation policies, which you assign in the KubeletConfig custom resource (CR) named cpumanager-enabled : none policy This is the default policy and does not perform any topology alignment. best-effort policy For each container in a pod with the best-effort topology management policy, kubelet calls each Hint Provider to discover their resource availability. Using this information, the Topology Manager stores the preferred NUMA Node affinity for that container. If the affinity is not preferred, Topology Manager stores this and admits the pod to the node. restricted policy For each container in a pod with the restricted topology management policy, kubelet calls each Hint Provider to discover their resource availability. Using this information, the Topology Manager stores the preferred NUMA Node affinity for that container. If the affinity is not preferred, Topology Manager rejects this pod from the node, resulting in a pod in a Terminated state with a pod admission failure. single-numa-node policy For each container in a pod with the single-numa-node topology management policy, kubelet calls each Hint Provider to discover their resource availability. Using this information, the Topology Manager determines if a single NUMA Node affinity is possible. If it is, the pod is admitted to the node. If a single NUMA Node affinity is not possible, the Topology Manager rejects the pod from the node. This results in a pod in a Terminated state with a pod admission failure. 5.8.2. Setting up Topology Manager To use Topology Manager, you must configure an allocation policy in the KubeletConfig custom resource (CR) named cpumanager-enabled . This file might exist if you have set up CPU Manager. If the file does not exist, you can create the file. Prerequisites Configure the CPU Manager policy to be static . Procedure To activate Topology Manager: Configure the Topology Manager allocation policy in the custom resource. USD oc edit KubeletConfig cpumanager-enabled apiVersion: machineconfiguration.openshift.io/v1 kind: KubeletConfig metadata: name: cpumanager-enabled spec: machineConfigPoolSelector: matchLabels: custom-kubelet: cpumanager-enabled kubeletConfig: cpuManagerPolicy: static 1 cpuManagerReconcilePeriod: 5s topologyManagerPolicy: single-numa-node 2 1 This parameter must be static with a lowercase s . 2 Specify your selected Topology Manager allocation policy. Here, the policy is single-numa-node . Acceptable values are: default , best-effort , restricted , single-numa-node . 5.8.3. Pod interactions with Topology Manager policies The example Pod specs below help illustrate pod interactions with Topology Manager. 
The following pod runs in the BestEffort QoS class because no resource requests or limits are specified. spec: containers: - name: nginx image: nginx The pod runs in the Burstable QoS class because requests are less than limits. spec: containers: - name: nginx image: nginx resources: limits: memory: "200Mi" requests: memory: "100Mi" If the selected policy is anything other than none , Topology Manager would not consider either of these Pod specifications. The last example pod below runs in the Guaranteed QoS class because requests are equal to limits. spec: containers: - name: nginx image: nginx resources: limits: memory: "200Mi" cpu: "2" example.com/device: "1" requests: memory: "200Mi" cpu: "2" example.com/device: "1" Topology Manager would consider this pod. The Topology Manager would consult the hint providers, which are CPU Manager and Device Manager, to get topology hints for the pod. Topology Manager will use this information to store the best topology for this container. In the case of this pod, CPU Manager and Device Manager will use this stored information at the resource allocation stage. 5.9. Resource requests and overcommitment For each compute resource, a container may specify a resource request and limit. Scheduling decisions are made based on the request to ensure that a node has enough capacity available to meet the requested value. If a container specifies limits, but omits requests, the requests are defaulted to the limits. A container is not able to exceed the specified limit on the node. The enforcement of limits is dependent upon the compute resource type. If a container makes no request or limit, the container is scheduled to a node with no resource guarantees. In practice, the container is able to consume as much of the specified resource as is available with the lowest local priority. In low resource situations, containers that specify no resource requests are given the lowest quality of service. Scheduling is based on resources requested, while quota and hard limits refer to resource limits, which can be set higher than requested resources. The difference between request and limit determines the level of overcommit; for instance, if a container is given a memory request of 1Gi and a memory limit of 2Gi, it is scheduled based on the 1Gi request being available on the node, but could use up to 2Gi; so it is 100% overcommitted. 5.10. Cluster-level overcommit using the Cluster Resource Override Operator The Cluster Resource Override Operator is an admission webhook that allows you to control the level of overcommit and manage container density across all the nodes in your cluster. The Operator controls how nodes in specific projects can exceed defined memory and CPU limits. The Operator modifies the ratio between the requests and limits that are set on developer containers. In conjunction with a per-project limit range that specifies limits and defaults, you can achieve the desired level of overcommit. You must install the Cluster Resource Override Operator by using the OpenShift Container Platform console or CLI as shown in the following sections. After you deploy the Cluster Resource Override Operator, the Operator modifies all new pods in specific namespaces. The Operator does not edit pods that existed before you deployed the Operator. 
During the installation, you create a ClusterResourceOverride custom resource (CR), where you set the level of overcommit, as shown in the following example: apiVersion: operator.autoscaling.openshift.io/v1 kind: ClusterResourceOverride metadata: name: cluster 1 spec: podResourceOverride: spec: memoryRequestToLimitPercent: 50 2 cpuRequestToLimitPercent: 25 3 limitCPUToMemoryPercent: 200 4 # ... 1 The name must be cluster . 2 Optional. If a container memory limit has been specified or defaulted, the memory request is overridden to this percentage of the limit, between 1-100. The default is 50. 3 Optional. If a container CPU limit has been specified or defaulted, the CPU request is overridden to this percentage of the limit, between 1-100. The default is 25. 4 Optional. If a container memory limit has been specified or defaulted, the CPU limit is overridden to a percentage of the memory limit, if specified. Scaling 1Gi of RAM at 100 percent is equal to 1 CPU core. This is processed prior to overriding the CPU request (if configured). The default is 200. Note The Cluster Resource Override Operator overrides have no effect if limits have not been set on containers. Create a LimitRange object with default limits per individual project or configure limits in Pod specs for the overrides to apply. When configured, you can enable overrides on a per-project basis by applying the following label to the Namespace object for each project where you want the overrides to apply. For example, you can configure the override so that infrastructure components are not subject to the overrides. apiVersion: v1 kind: Namespace metadata: # ... labels: clusterresourceoverrides.admission.autoscaling.openshift.io/enabled: "true" # ... The Operator watches for the ClusterResourceOverride CR and ensures that the ClusterResourceOverride admission webhook is installed into the same namespace as the operator. For example, suppose a pod has the following resource limits: apiVersion: v1 kind: Pod metadata: name: my-pod namespace: my-namespace # ... spec: containers: - name: hello-openshift image: openshift/hello-openshift resources: limits: memory: "512Mi" cpu: "2000m" # ... The Cluster Resource Override Operator intercepts the original pod request, then overrides the resources according to the configuration set in the ClusterResourceOverride object. apiVersion: v1 kind: Pod metadata: name: my-pod namespace: my-namespace # ... spec: containers: - image: openshift/hello-openshift name: hello-openshift resources: limits: cpu: "1" 1 memory: 512Mi requests: cpu: 250m 2 memory: 256Mi # ... 1 The CPU limit has been overridden to 1 because the limitCPUToMemoryPercent parameter is set to 200 in the ClusterResourceOverride object. As such, 200 percent of the 512Mi memory limit, expressed in CPU terms, is 1 CPU core. 2 The CPU request is now 250m because the cpuRequestToLimitPercent parameter is set to 25 in the ClusterResourceOverride object. As such, 25 percent of the 1 CPU core is 250m. 5.10.1. Installing the Cluster Resource Override Operator using the web console You can use the OpenShift Container Platform web console to install the Cluster Resource Override Operator to help control overcommit in your cluster. By default, the installation process creates a Cluster Resource Override Operator pod on a worker node in the clusterresourceoverride-operator namespace. You can move this pod to another node, such as an infrastructure node, as needed. Infrastructure nodes are not counted toward the total number of subscriptions that are required to run the environment.
For more information, see "Moving the Cluster Resource Override Operator pods". Prerequisites The Cluster Resource Override Operator has no effect if limits have not been set on containers. You must specify default limits for a project using a LimitRange object or configure limits in Pod specs for the overrides to apply. Procedure To install the Cluster Resource Override Operator using the OpenShift Container Platform web console: In the OpenShift Container Platform web console, navigate to Home Projects Click Create Project . Specify clusterresourceoverride-operator as the name of the project. Click Create . Navigate to Operators OperatorHub . Choose ClusterResourceOverride Operator from the list of available Operators and click Install . On the Install Operator page, make sure A specific Namespace on the cluster is selected for Installation Mode . Make sure clusterresourceoverride-operator is selected for Installed Namespace . Select an Update Channel and Approval Strategy . Click Install . On the Installed Operators page, click ClusterResourceOverride . On the ClusterResourceOverride Operator details page, click Create ClusterResourceOverride . On the Create ClusterResourceOverride page, click YAML view and edit the YAML template to set the overcommit values as needed: apiVersion: operator.autoscaling.openshift.io/v1 kind: ClusterResourceOverride metadata: name: cluster 1 spec: podResourceOverride: spec: memoryRequestToLimitPercent: 50 2 cpuRequestToLimitPercent: 25 3 limitCPUToMemoryPercent: 200 4 1 The name must be cluster . 2 Optional: Specify the percentage to override the container memory limit, if used, between 1-100. The default is 50 . 3 Optional: Specify the percentage to override the container CPU limit, if used, between 1-100. The default is 25 . 4 Optional: Specify the percentage to override the container memory limit, if used. Scaling 1 Gi of RAM at 100 percent is equal to 1 CPU core. This is processed before overriding the CPU request, if configured. The default is 200 . Click Create . Check the current state of the admission webhook by checking the status of the cluster custom resource: On the ClusterResourceOverride Operator page, click cluster . On the ClusterResourceOverride Details page, click YAML . The mutatingWebhookConfigurationRef section appears when the webhook is called. apiVersion: operator.autoscaling.openshift.io/v1 kind: ClusterResourceOverride metadata: annotations: kubectl.kubernetes.io/last-applied-configuration: | {"apiVersion":"operator.autoscaling.openshift.io/v1","kind":"ClusterResourceOverride","metadata":{"annotations":{},"name":"cluster"},"spec":{"podResourceOverride":{"spec":{"cpuRequestToLimitPercent":25,"limitCPUToMemoryPercent":200,"memoryRequestToLimitPercent":50}}}} creationTimestamp: "2019-12-18T22:35:02Z" generation: 1 name: cluster resourceVersion: "127622" selfLink: /apis/operator.autoscaling.openshift.io/v1/clusterresourceoverrides/cluster uid: 978fc959-1717-4bd1-97d0-ae00ee111e8d spec: podResourceOverride: spec: cpuRequestToLimitPercent: 25 limitCPUToMemoryPercent: 200 memoryRequestToLimitPercent: 50 status: # ... mutatingWebhookConfigurationRef: 1 apiVersion: admissionregistration.k8s.io/v1 kind: MutatingWebhookConfiguration name: clusterresourceoverrides.admission.autoscaling.openshift.io resourceVersion: "127621" uid: 98b3b8ae-d5ce-462b-8ab5-a729ea8f38f3 # ... 1 Reference to the ClusterResourceOverride admission webhook. 5.10.2. 
Installing the Cluster Resource Override Operator using the CLI You can use the OpenShift Container Platform CLI to install the Cluster Resource Override Operator to help control overcommit in your cluster. By default, the installation process creates a Cluster Resource Override Operator pod on a worker node in the clusterresourceoverride-operator namespace. You can move this pod to another node, such as an infrastructure node, as needed. Infrastructure nodes are not counted toward the total number of subscriptions that are required to run the environment. For more information, see "Moving the Cluster Resource Override Operator pods". Prerequisites The Cluster Resource Override Operator has no effect if limits have not been set on containers. You must specify default limits for a project using a LimitRange object or configure limits in Pod specs for the overrides to apply. Procedure To install the Cluster Resource Override Operator using the CLI: Create a namespace for the Cluster Resource Override Operator: Create a Namespace object YAML file (for example, cro-namespace.yaml ) for the Cluster Resource Override Operator: apiVersion: v1 kind: Namespace metadata: name: clusterresourceoverride-operator Create the namespace: USD oc create -f <file-name>.yaml For example: USD oc create -f cro-namespace.yaml Create an Operator group: Create an OperatorGroup object YAML file (for example, cro-og.yaml) for the Cluster Resource Override Operator: apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: clusterresourceoverride-operator namespace: clusterresourceoverride-operator spec: targetNamespaces: - clusterresourceoverride-operator Create the Operator Group: USD oc create -f <file-name>.yaml For example: USD oc create -f cro-og.yaml Create a subscription: Create a Subscription object YAML file (for example, cro-sub.yaml) for the Cluster Resource Override Operator: apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: clusterresourceoverride namespace: clusterresourceoverride-operator spec: channel: "stable" name: clusterresourceoverride source: redhat-operators sourceNamespace: openshift-marketplace Create the subscription: USD oc create -f <file-name>.yaml For example: USD oc create -f cro-sub.yaml Create a ClusterResourceOverride custom resource (CR) object in the clusterresourceoverride-operator namespace: Change to the clusterresourceoverride-operator namespace. USD oc project clusterresourceoverride-operator Create a ClusterResourceOverride object YAML file (for example, cro-cr.yaml) for the Cluster Resource Override Operator: apiVersion: operator.autoscaling.openshift.io/v1 kind: ClusterResourceOverride metadata: name: cluster 1 spec: podResourceOverride: spec: memoryRequestToLimitPercent: 50 2 cpuRequestToLimitPercent: 25 3 limitCPUToMemoryPercent: 200 4 1 The name must be cluster . 2 Optional: Specify the percentage to override the container memory limit, if used, between 1-100. The default is 50 . 3 Optional: Specify the percentage to override the container CPU limit, if used, between 1-100. The default is 25 . 4 Optional: Specify the percentage to override the container memory limit, if used. Scaling 1 Gi of RAM at 100 percent is equal to 1 CPU core. This is processed before overriding the CPU request, if configured. The default is 200 . 
Create the ClusterResourceOverride object: USD oc create -f <file-name>.yaml For example: USD oc create -f cro-cr.yaml Verify the current state of the admission webhook by checking the status of the cluster custom resource. USD oc get clusterresourceoverride cluster -n clusterresourceoverride-operator -o yaml The mutatingWebhookConfigurationRef section appears when the webhook is called. Example output apiVersion: operator.autoscaling.openshift.io/v1 kind: ClusterResourceOverride metadata: annotations: kubectl.kubernetes.io/last-applied-configuration: | {"apiVersion":"operator.autoscaling.openshift.io/v1","kind":"ClusterResourceOverride","metadata":{"annotations":{},"name":"cluster"},"spec":{"podResourceOverride":{"spec":{"cpuRequestToLimitPercent":25,"limitCPUToMemoryPercent":200,"memoryRequestToLimitPercent":50}}}} creationTimestamp: "2019-12-18T22:35:02Z" generation: 1 name: cluster resourceVersion: "127622" selfLink: /apis/operator.autoscaling.openshift.io/v1/clusterresourceoverrides/cluster uid: 978fc959-1717-4bd1-97d0-ae00ee111e8d spec: podResourceOverride: spec: cpuRequestToLimitPercent: 25 limitCPUToMemoryPercent: 200 memoryRequestToLimitPercent: 50 status: # ... mutatingWebhookConfigurationRef: 1 apiVersion: admissionregistration.k8s.io/v1 kind: MutatingWebhookConfiguration name: clusterresourceoverrides.admission.autoscaling.openshift.io resourceVersion: "127621" uid: 98b3b8ae-d5ce-462b-8ab5-a729ea8f38f3 # ... 1 Reference to the ClusterResourceOverride admission webhook. 5.10.3. Configuring cluster-level overcommit The Cluster Resource Override Operator requires a ClusterResourceOverride custom resource (CR) and a label for each project where you want the Operator to control overcommit. By default, the installation process creates two Cluster Resource Override pods on the control plane nodes in the clusterresourceoverride-operator namespace. You can move these pods to other nodes, such as infrastructure nodes, as needed. Infrastructure nodes are not counted toward the total number of subscriptions that are required to run the environment. For more information, see "Moving the Cluster Resource Override Operator pods". Prerequisites The Cluster Resource Override Operator has no effect if limits have not been set on containers. You must specify default limits for a project using a LimitRange object or configure limits in Pod specs for the overrides to apply. Procedure To modify cluster-level overcommit: Edit the ClusterResourceOverride CR: apiVersion: operator.autoscaling.openshift.io/v1 kind: ClusterResourceOverride metadata: name: cluster spec: podResourceOverride: spec: memoryRequestToLimitPercent: 50 1 cpuRequestToLimitPercent: 25 2 limitCPUToMemoryPercent: 200 3 # ... 1 Optional: Specify the percentage to override the container memory limit, if used, between 1-100. The default is 50 . 2 Optional: Specify the percentage to override the container CPU limit, if used, between 1-100. The default is 25 . 3 Optional: Specify the percentage to override the container memory limit, if used. Scaling 1Gi of RAM at 100 percent is equal to 1 CPU core. This is processed before overriding the CPU request, if configured. The default is 200 . Ensure the following label has been added to the Namespace object for each project where you want the Cluster Resource Override Operator to control overcommit: apiVersion: v1 kind: Namespace metadata: # ... labels: clusterresourceoverrides.admission.autoscaling.openshift.io/enabled: "true" 1 # ... 1 Add this label to each project. 5.11. 
Node-level overcommit You can use various ways to control overcommit on specific nodes, such as quality of service (QOS) guarantees, CPU limits, or resource reservations. You can also disable overcommit for specific nodes and specific projects. 5.11.1. Understanding compute resources and containers The node-enforced behavior for compute resources is specific to the resource type. 5.11.1.1. Understanding container CPU requests A container is guaranteed the amount of CPU it requests and is additionally able to consume excess CPU available on the node, up to any limit specified by the container. If multiple containers are attempting to use excess CPU, CPU time is distributed based on the amount of CPU requested by each container. For example, if one container requested 500m of CPU time and another container requested 250m of CPU time, then any extra CPU time available on the node is distributed among the containers in a 2:1 ratio. If a container specifies a limit, it is throttled so that it cannot use more CPU than the specified limit. CPU requests are enforced using the CFS shares support in the Linux kernel. By default, CPU limits are enforced using the CFS quota support in the Linux kernel over a 100ms measuring interval, though this can be disabled. 5.11.1.2. Understanding container memory requests A container is guaranteed the amount of memory it requests. A container can use more memory than requested, but once it exceeds its requested amount, it could be terminated in a low memory situation on the node. If a container uses less memory than requested, it will not be terminated unless system tasks or daemons need more memory than was accounted for in the node's resource reservation. If a container specifies a limit on memory, it is immediately terminated if it exceeds the limit amount. 5.11.2. Understanding overcommitment and quality of service classes A node is overcommitted when it has a pod scheduled that makes no request, or when the sum of limits across all pods on that node exceeds available machine capacity. In an overcommitted environment, it is possible that the pods on the node will attempt to use more compute resources than are available at any given point in time. When this occurs, the node must give priority to one pod over another. The facility used to make this decision is referred to as a Quality of Service (QoS) Class. A pod is designated as one of three QoS classes with decreasing order of priority: Table 5.2. Quality of Service Classes Priority Class Name Description 1 (highest) Guaranteed If limits and optionally requests are set (not equal to 0) for all resources and they are equal, then the pod is classified as Guaranteed . 2 Burstable If requests and optionally limits are set (not equal to 0) for all resources, and they are not equal, then the pod is classified as Burstable . 3 (lowest) BestEffort If requests and limits are not set for any of the resources, then the pod is classified as BestEffort . Memory is an incompressible resource, so in low memory situations, containers that have the lowest priority are terminated first: Guaranteed containers are considered top priority, and are guaranteed to only be terminated if they exceed their limits, or if the system is under memory pressure and there are no lower priority containers that can be evicted. Burstable containers under system memory pressure are more likely to be terminated once they exceed their requests and no other BestEffort containers exist. BestEffort containers are treated with the lowest priority.
Processes in these containers are first to be terminated if the system runs out of memory. 5.11.2.1. Understanding how to reserve memory across quality of service tiers You can use the qos-reserved parameter to specify a percentage of memory to be reserved by a pod in a particular QoS level. This feature attempts to reserve requested resources to prevent pods in lower QoS classes from using resources requested by pods in higher QoS classes. OpenShift Container Platform uses the qos-reserved parameter as follows: A value of qos-reserved=memory=100% will prevent the Burstable and BestEffort QoS classes from consuming memory that was requested by a higher QoS class. This increases the risk of inducing OOM on BestEffort and Burstable workloads in favor of increasing memory resource guarantees for Guaranteed and Burstable workloads. A value of qos-reserved=memory=50% will allow the Burstable and BestEffort QoS classes to consume half of the memory requested by a higher QoS class. A value of qos-reserved=memory=0% will allow the Burstable and BestEffort QoS classes to consume up to the full node allocatable amount if available, but increases the risk that a Guaranteed workload will not have access to requested memory. This condition effectively disables this feature. 5.11.3. Understanding swap memory and QOS You can disable swap by default on your nodes to preserve quality of service (QOS) guarantees. Otherwise, physical resources on a node can be oversubscribed, affecting the resource guarantees the Kubernetes scheduler makes during pod placement. For example, if two guaranteed pods have reached their memory limit, each container could start using swap memory. Eventually, if there is not enough swap space, processes in the pods can be terminated due to the system being oversubscribed. Failing to disable swap results in nodes not recognizing that they are experiencing MemoryPressure , resulting in pods not receiving the memory they requested during scheduling. As a result, additional pods are placed on the node to further increase memory pressure, ultimately increasing your risk of experiencing a system out of memory (OOM) event. Important If swap is enabled, any out-of-resource handling eviction thresholds for available memory will not work as expected. Take advantage of out-of-resource handling to allow pods to be evicted from a node when it is under memory pressure, and rescheduled on an alternative node that has no such pressure. 5.11.4. Understanding nodes overcommitment In an overcommitted environment, it is important to properly configure your node to provide best system behavior. When the node starts, it ensures that the kernel tunable flags for memory management are set properly. The kernel should never fail memory allocations unless it runs out of physical memory. To ensure this behavior, OpenShift Container Platform configures the kernel to always overcommit memory by setting the vm.overcommit_memory parameter to 1 , overriding the default operating system setting. OpenShift Container Platform also configures the kernel not to panic when it runs out of memory by setting the vm.panic_on_oom parameter to 0 . A setting of 0 instructs the kernel to call oom_killer in an Out of Memory (OOM) condition, which kills processes based on priority. You can view the current settings by running the following commands on your nodes: USD sysctl -a |grep commit Example output #... vm.overcommit_memory = 1 #... USD sysctl -a |grep panic Example output #... vm.panic_on_oom = 0 #...
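To spot-check these settings on a particular node without opening an interactive shell session, you can run sysctl through a debug pod. This is a minimal sketch, assuming cluster-admin access; the node name is a placeholder: USD oc debug node/<node_name> -- chroot /host sysctl vm.overcommit_memory vm.panic_on_oom If the defaults described above are in place, the output reports vm.overcommit_memory = 1 and vm.panic_on_oom = 0 .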
Note The above flags should already be set on nodes, and no further action is required. You can also perform the following configurations for each node: Disable or enforce CPU limits using CPU CFS quotas Reserve resources for system processes Reserve memory across quality of service tiers 5.11.5. Disabling or enforcing CPU limits using CPU CFS quotas Nodes by default enforce specified CPU limits using the Completely Fair Scheduler (CFS) quota support in the Linux kernel. If you disable CPU limit enforcement, it is important to understand the impact on your node: If a container has a CPU request, the request continues to be enforced by CFS shares in the Linux kernel. If a container does not have a CPU request, but does have a CPU limit, the CPU request defaults to the specified CPU limit, and is enforced by CFS shares in the Linux kernel. If a container has both a CPU request and limit, the CPU request is enforced by CFS shares in the Linux kernel, and the CPU limit has no impact on the node. Prerequisites Obtain the label associated with the static MachineConfigPool CRD for the type of node you want to configure by entering the following command: USD oc edit machineconfigpool <name> For example: USD oc edit machineconfigpool worker Example output apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfigPool metadata: creationTimestamp: "2022-11-16T15:34:25Z" generation: 4 labels: pools.operator.machineconfiguration.openshift.io/worker: "" 1 name: worker 1 The label appears under Labels. Tip If the label is not present, add a key/value pair such as: USD oc label machineconfigpool worker custom-kubelet=small-pods Procedure Create a custom resource (CR) for your configuration change. Sample configuration for disabling CPU limits apiVersion: machineconfiguration.openshift.io/v1 kind: KubeletConfig metadata: name: disable-cpu-units 1 spec: machineConfigPoolSelector: matchLabels: pools.operator.machineconfiguration.openshift.io/worker: "" 2 kubeletConfig: cpuCfsQuota: false 3 1 Assign a name to the CR. 2 Specify the label from the machine config pool. 3 Set the cpuCfsQuota parameter to false . Run the following command to create the CR: USD oc create -f <file_name>.yaml 5.11.6. Reserving resources for system processes To provide more reliable scheduling and minimize node resource overcommitment, each node can reserve a portion of its resources for use by system daemons that are required to run on your node for your cluster to function. In particular, it is recommended that you reserve resources for incompressible resources such as memory. Procedure To explicitly reserve resources for non-pod processes, allocate node resources by specifying resources available for scheduling. For more details, see Allocating Resources for Nodes. 5.11.7. Disabling overcommitment for a node When enabled, overcommitment can be disabled on each node. Procedure To disable overcommitment in a node, run the following command on that node: USD sysctl -w vm.overcommit_memory=0 5.12. Project-level limits To help control overcommit, you can set per-project resource limit ranges, specifying memory and CPU limits and defaults for a project that overcommit cannot exceed. For information on project-level resource limits, see Additional resources. Alternatively, you can disable overcommitment for specific projects. 5.12.1. Disabling overcommitment for a project When enabled, overcommitment can be disabled per-project. For example, you can allow infrastructure components to be configured independently of overcommitment.
Procedure Create or edit the namespace object file. Add the following annotation: apiVersion: v1 kind: Namespace metadata: annotations: quota.openshift.io/cluster-resource-override-enabled: "false" 1 # ... 1 Setting this annotation to false disables overcommit for this namespace. 5.13. Freeing node resources using garbage collection Understand and use garbage collection. 5.13.1. Understanding how terminated containers are removed through garbage collection Container garbage collection removes terminated containers by using eviction thresholds. When eviction thresholds are set for garbage collection, the node tries to keep any container for any pod accessible from the API. If the pod has been deleted, the containers will be as well. Containers are preserved as long as the pod is not deleted and the eviction threshold is not reached. If the node is under disk pressure, it will remove containers and their logs will no longer be accessible using oc logs . eviction-soft - A soft eviction threshold pairs an eviction threshold with a required administrator-specified grace period. eviction-hard - A hard eviction threshold has no grace period, and if observed, OpenShift Container Platform takes immediate action. The following table lists the eviction thresholds: Table 5.3. Variables for configuring container garbage collection Node condition Eviction signal Description MemoryPressure memory.available The available memory on the node. DiskPressure nodefs.available nodefs.inodesFree imagefs.available imagefs.inodesFree The available disk space or inodes on the node root file system, nodefs , or image file system, imagefs . Note For evictionHard you must specify all of these parameters. If you do not specify all parameters, only the specified parameters are applied and the garbage collection will not function properly. If a node is oscillating above and below a soft eviction threshold, but not exceeding its associated grace period, the corresponding node would constantly oscillate between true and false . As a consequence, the scheduler could make poor scheduling decisions. To protect against this oscillation, use the eviction-pressure-transition-period flag to control how long OpenShift Container Platform must wait before transitioning out of a pressure condition. OpenShift Container Platform will not set an eviction threshold as being met for the specified pressure condition for the period specified before toggling the condition back to false. Note Setting the evictionPressureTransitionPeriod parameter to 0 configures the default value of 5 minutes. You cannot set an eviction pressure transition period to zero seconds. 5.13.2. Understanding how images are removed through garbage collection Image garbage collection removes images that are not referenced by any running pods. OpenShift Container Platform determines which images to remove from a node based on the disk usage that is reported by cAdvisor . The policy for image garbage collection is based on two conditions: The percent of disk usage (expressed as an integer) which triggers image garbage collection. The default is 85 . The percent of disk usage (expressed as an integer) to which image garbage collection attempts to free. The default is 80 . For image garbage collection, you can modify any of the following variables using a custom resource. Table 5.4. Variables for configuring image garbage collection Setting Description imageMinimumGCAge The minimum age for an unused image before the image is removed by garbage collection. The default is 2m .
imageGCHighThresholdPercent The percent of disk usage, expressed as an integer, which triggers image garbage collection. The default is 85 . This value must be greater than the imageGCLowThresholdPercent value. imageGCLowThresholdPercent The percent of disk usage, expressed as an integer, to which image garbage collection attempts to free. The default is 80 . This value must be less than the imageGCHighThresholdPercent value. Two lists of images are retrieved in each garbage collector run: A list of images currently running in at least one pod. A list of images available on a host. As new containers are run, new images appear. All images are marked with a time stamp. If the image is running (the first list above) or is newly detected (the second list above), it is marked with the current time. The remaining images are already marked from previous runs. All images are then sorted by the time stamp. Once the collection starts, the oldest images get deleted first until the stopping criterion is met. 5.13.3. Configuring garbage collection for containers and images As an administrator, you can configure how OpenShift Container Platform performs garbage collection by creating a kubeletConfig object for each machine config pool. Note OpenShift Container Platform supports only one kubeletConfig object for each machine config pool. You can configure any combination of the following: Soft eviction for containers Hard eviction for containers Eviction for images Container garbage collection removes terminated containers. Image garbage collection removes images that are not referenced by any running pods. Prerequisites Obtain the label associated with the static MachineConfigPool CRD for the type of node you want to configure by entering the following command: USD oc edit machineconfigpool <name> For example: USD oc edit machineconfigpool worker Example output apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfigPool metadata: creationTimestamp: "2022-11-16T15:34:25Z" generation: 4 labels: pools.operator.machineconfiguration.openshift.io/worker: "" 1 name: worker #... 1 The label appears under Labels. Tip If the label is not present, add a key/value pair such as: USD oc label machineconfigpool worker custom-kubelet=small-pods Procedure Create a custom resource (CR) for your configuration change. Important If there is one file system, or if /var/lib/kubelet and /var/lib/containers/ are in the same file system, the settings with the highest values trigger evictions, as those are met first. The file system triggers the eviction. Sample configuration for a container garbage collection CR apiVersion: machineconfiguration.openshift.io/v1 kind: KubeletConfig metadata: name: worker-kubeconfig 1 spec: machineConfigPoolSelector: matchLabels: pools.operator.machineconfiguration.openshift.io/worker: "" 2 kubeletConfig: evictionSoft: 3 memory.available: "500Mi" 4 nodefs.available: "10%" nodefs.inodesFree: "5%" imagefs.available: "15%" imagefs.inodesFree: "10%" evictionSoftGracePeriod: 5 memory.available: "1m30s" nodefs.available: "1m30s" nodefs.inodesFree: "1m30s" imagefs.available: "1m30s" imagefs.inodesFree: "1m30s" evictionHard: 6 memory.available: "200Mi" nodefs.available: "5%" nodefs.inodesFree: "4%" imagefs.available: "10%" imagefs.inodesFree: "5%" evictionPressureTransitionPeriod: 3m 7 imageMinimumGCAge: 5m 8 imageGCHighThresholdPercent: 80 9 imageGCLowThresholdPercent: 75 10 #... 1 Name for the object. 2 Specify the label from the machine config pool. 3 For container garbage collection: Type of eviction: evictionSoft or evictionHard .
4 For container garbage collection: Eviction thresholds based on a specific eviction trigger signal. 5 For container garbage collection: Grace periods for the soft eviction. This parameter does not apply to eviction-hard . 6 For container garbage collection: Eviction thresholds based on a specific eviction trigger signal. For evictionHard you must specify all of these parameters. If you do not specify all parameters, only the specified parameters are applied and the garbage collection will not function properly. 7 For container garbage collection: The duration to wait before transitioning out of an eviction pressure condition. Setting the evictionPressureTransitionPeriod parameter to 0 configures the default value of 5 minutes. 8 For image garbage collection: The minimum age for an unused image before the image is removed by garbage collection. 9 For image garbage collection: Image garbage collection is triggered at the specified percent of disk usage (expressed as an integer). This value must be greater than the imageGCLowThresholdPercent value. 10 For image garbage collection: Image garbage collection attempts to free resources to the specified percent of disk usage (expressed as an integer). This value must be less than the imageGCHighThresholdPercent value. Run the following command to create the CR: USD oc create -f <file_name>.yaml For example: USD oc create -f gc-container.yaml Example output kubeletconfig.machineconfiguration.openshift.io/gc-container created Verification Verify that garbage collection is active by entering the following command. The Machine Config Pool you specified in the custom resource appears with UPDATING as true until the change is fully implemented: USD oc get machineconfigpool Example output NAME CONFIG UPDATED UPDATING master rendered-master-546383f80705bd5aeaba93 True False worker rendered-worker-b4c51bb33ccaae6fc4a6a5 False True 5.14. Using the Node Tuning Operator Understand and use the Node Tuning Operator. Purpose The Node Tuning Operator helps you manage node-level tuning by orchestrating the TuneD daemon and achieves low latency performance by using the Performance Profile controller. The majority of high-performance applications require some level of kernel tuning. The Node Tuning Operator provides a unified management interface to users of node-level sysctls and more flexibility to add custom tuning specified by user needs. The Operator manages the containerized TuneD daemon for OpenShift Container Platform as a Kubernetes daemon set. It ensures the custom tuning specification is passed to all containerized TuneD daemons running in the cluster in the format that the daemons understand. The daemons run on all nodes in the cluster, one per node. Node-level settings applied by the containerized TuneD daemon are rolled back on an event that triggers a profile change or when the containerized TuneD daemon is terminated gracefully by receiving and handling a termination signal. The Node Tuning Operator uses the Performance Profile controller to implement automatic tuning to achieve low latency performance for OpenShift Container Platform applications. The cluster administrator configures a performance profile to define node-level settings such as the following: Updating the kernel to kernel-rt. Choosing CPUs for housekeeping. Choosing CPUs for running workloads. The Node Tuning Operator is part of a standard OpenShift Container Platform installation in version 4.1 and later.
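As an illustration of the performance profile settings listed above, the following is a hedged sketch of a PerformanceProfile custom resource. The profile name, CPU ranges, and node selector label are placeholders, and the exact fields you need depend on your hardware and workload:
apiVersion: performance.openshift.io/v2
kind: PerformanceProfile
metadata:
  name: example-performanceprofile
spec:
  cpu:
    isolated: "2-7"   # CPUs dedicated to running workloads
    reserved: "0-1"   # CPUs reserved for housekeeping (kubelet and system daemons)
  realTimeKernel:
    enabled: true     # switch the selected nodes to the kernel-rt kernel
  nodeSelector:
    node-role.kubernetes.io/worker-cnf: ""  # apply the profile only to nodes with this role label
The Operator turns a profile like this into the underlying tuning objects, such as TuneD and machine configuration changes, for the selected nodes.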
Note In earlier versions of OpenShift Container Platform, the Performance Addon Operator was used to implement automatic tuning to achieve low latency performance for OpenShift applications. In OpenShift Container Platform 4.11 and later, this functionality is part of the Node Tuning Operator. 5.14.1. Accessing an example Node Tuning Operator specification Use this process to access an example Node Tuning Operator specification. Procedure Run the following command to access an example Node Tuning Operator specification: USD oc get tuned.tuned.openshift.io/default -o yaml -n openshift-cluster-node-tuning-operator The default CR is meant for delivering standard node-level tuning for the OpenShift Container Platform and it can only be modified to set the Operator Management state. Any other custom changes to the default CR will be overwritten by the Operator. For custom tuning, create your own Tuned CRs. Newly created CRs will be combined with the default CR and custom tuning applied to OpenShift Container Platform nodes based on node or pod labels and profile priorities. Warning While in certain situations the support for pod labels can be a convenient way of automatically delivering required tuning, this practice is discouraged and strongly advised against, especially in large-scale clusters. The default Tuned CR ships without pod label matching. If a custom profile is created with pod label matching, then the functionality will be enabled at that time. The pod label functionality will be deprecated in future versions of the Node Tuning Operator. 5.14.2. Custom tuning specification The custom resource (CR) for the Operator has two major sections. The first section, profile: , is a list of TuneD profiles and their names. The second, recommend: , defines the profile selection logic. Multiple custom tuning specifications can co-exist as multiple CRs in the Operator's namespace. The existence of new CRs or the deletion of old CRs is detected by the Operator. All existing custom tuning specifications are merged and appropriate objects for the containerized TuneD daemons are updated. Management state The Operator Management state is set by adjusting the default Tuned CR. By default, the Operator is in the Managed state and the spec.managementState field is not present in the default Tuned CR. Valid values for the Operator Management state are as follows: Managed: the Operator will update its operands as configuration resources are updated Unmanaged: the Operator will ignore changes to the configuration resources Removed: the Operator will remove its operands and resources the Operator provisioned Profile data The profile: section lists TuneD profiles and their names. profile: - name: tuned_profile_1 data: | # TuneD profile specification [main] summary=Description of tuned_profile_1 profile [sysctl] net.ipv4.ip_forward=1 # ... other sysctl's or other TuneD daemon plugins supported by the containerized TuneD # ... - name: tuned_profile_n data: | # TuneD profile specification [main] summary=Description of tuned_profile_n profile # tuned_profile_n profile settings Recommended profiles The profile: selection logic is defined by the recommend: section of the CR. The recommend: section is a list of items to recommend the profiles based on selection criteria. recommend: <recommend-item-1> # ...
<recommend-item-n> The individual items of the list: - machineConfigLabels: 1 <mcLabels> 2 match: 3 <match> 4 priority: <priority> 5 profile: <tuned_profile_name> 6 operand: 7 debug: <bool> 8 tunedConfig: reapply_sysctl: <bool> 9 1 Optional. 2 A dictionary of key/value MachineConfig labels. The keys must be unique. 3 If omitted, profile match is assumed unless a profile with a higher priority matches first or machineConfigLabels is set. 4 An optional list. 5 Profile ordering priority. Lower numbers mean higher priority ( 0 is the highest priority). 6 A TuneD profile to apply on a match. For example tuned_profile_1 . 7 Optional operand configuration. 8 Turn debugging on or off for the TuneD daemon. Options are true for on or false for off. The default is false . 9 Turn reapply_sysctl functionality on or off for the TuneD daemon. Options are true for on and false for off. <match> is an optional list recursively defined as follows: - label: <label_name> 1 value: <label_value> 2 type: <label_type> 3 <match> 4 1 Node or pod label name. 2 Optional node or pod label value. If omitted, the presence of <label_name> is enough to match. 3 Optional object type ( node or pod ). If omitted, node is assumed. 4 An optional <match> list. If <match> is not omitted, all nested <match> sections must also evaluate to true . Otherwise, false is assumed and the profile with the respective <match> section will not be applied or recommended. Therefore, the nesting (child <match> sections) works as logical AND operator. Conversely, if any item of the <match> list matches, the entire <match> list evaluates to true . Therefore, the list acts as logical OR operator. If machineConfigLabels is defined, machine config pool based matching is turned on for the given recommend: list item. <mcLabels> specifies the labels for a machine config. The machine config is created automatically to apply host settings, such as kernel boot parameters, for the profile <tuned_profile_name> . This involves finding all machine config pools with machine config selector matching <mcLabels> and setting the profile <tuned_profile_name> on all nodes that are assigned the found machine config pools. To target nodes that have both master and worker roles, you must use the master role. The list items match and machineConfigLabels are connected by the logical OR operator. The match item is evaluated first in a short-circuit manner. Therefore, if it evaluates to true , the machineConfigLabels item is not considered. Important When using machine config pool based matching, it is advised to group nodes with the same hardware configuration into the same machine config pool. Not following this practice might result in TuneD operands calculating conflicting kernel parameters for two or more nodes sharing the same machine config pool. Example: Node or pod label based matching - match: - label: tuned.openshift.io/elasticsearch match: - label: node-role.kubernetes.io/master - label: node-role.kubernetes.io/infra type: pod priority: 10 profile: openshift-control-plane-es - match: - label: node-role.kubernetes.io/master - label: node-role.kubernetes.io/infra priority: 20 profile: openshift-control-plane - priority: 30 profile: openshift-node The CR above is translated for the containerized TuneD daemon into its recommend.conf file based on the profile priorities. The profile with the highest priority ( 10 ) is openshift-control-plane-es and, therefore, it is considered first. 
The containerized TuneD daemon running on a given node looks to see if there is a pod running on the same node with the tuned.openshift.io/elasticsearch label set. If not, the entire <match> section evaluates as false . If there is such a pod with the label, in order for the <match> section to evaluate to true , the node label also needs to be node-role.kubernetes.io/master or node-role.kubernetes.io/infra . If the labels for the profile with priority 10 matched, openshift-control-plane-es profile is applied and no other profile is considered. If the node/pod label combination did not match, the second highest priority profile ( openshift-control-plane ) is considered. This profile is applied if the containerized TuneD pod runs on a node with labels node-role.kubernetes.io/master or node-role.kubernetes.io/infra . Finally, the profile openshift-node has the lowest priority of 30 . It lacks the <match> section and, therefore, will always match. It acts as a profile catch-all to set openshift-node profile, if no other profile with higher priority matches on a given node. Example: Machine config pool based matching apiVersion: tuned.openshift.io/v1 kind: Tuned metadata: name: openshift-node-custom namespace: openshift-cluster-node-tuning-operator spec: profile: - data: | [main] summary=Custom OpenShift node profile with an additional kernel parameter include=openshift-node [bootloader] cmdline_openshift_node_custom=+skew_tick=1 name: openshift-node-custom recommend: - machineConfigLabels: machineconfiguration.openshift.io/role: "worker-custom" priority: 20 profile: openshift-node-custom To minimize node reboots, label the target nodes with a label the machine config pool's node selector will match, then create the Tuned CR above and finally create the custom machine config pool itself. Cloud provider-specific TuneD profiles With this functionality, all Cloud provider-specific nodes can conveniently be assigned a TuneD profile specifically tailored to a given Cloud provider on a OpenShift Container Platform cluster. This can be accomplished without adding additional node labels or grouping nodes into machine config pools. This functionality takes advantage of spec.providerID node object values in the form of <cloud-provider>://<cloud-provider-specific-id> and writes the file /var/lib/ocp-tuned/provider with the value <cloud-provider> in NTO operand containers. The content of this file is then used by TuneD to load provider-<cloud-provider> profile if such profile exists. The openshift profile that both openshift-control-plane and openshift-node profiles inherit settings from is now updated to use this functionality through the use of conditional profile loading. Neither NTO nor TuneD currently include any Cloud provider-specific profiles. However, it is possible to create a custom profile provider-<cloud-provider> that will be applied to all Cloud provider-specific cluster nodes. Example GCE Cloud provider profile apiVersion: tuned.openshift.io/v1 kind: Tuned metadata: name: provider-gce namespace: openshift-cluster-node-tuning-operator spec: profile: - data: | [main] summary=GCE Cloud provider-specific profile # Your tuning for GCE Cloud provider goes here. name: provider-gce Note Due to profile inheritance, any setting specified in the provider-<cloud-provider> profile will be overwritten by the openshift profile and its child profiles. 5.14.3. Default profiles set on a cluster The following are the default profiles set on a cluster. 
apiVersion: tuned.openshift.io/v1 kind: Tuned metadata: name: default namespace: openshift-cluster-node-tuning-operator spec: profile: - data: | [main] summary=Optimize systems running OpenShift (provider specific parent profile) include=-provider-USD{f:exec:cat:/var/lib/ocp-tuned/provider},openshift name: openshift recommend: - profile: openshift-control-plane priority: 30 match: - label: node-role.kubernetes.io/master - label: node-role.kubernetes.io/infra - profile: openshift-node priority: 40 Starting with OpenShift Container Platform 4.9, all OpenShift TuneD profiles are shipped with the TuneD package. You can use the oc exec command to view the contents of these profiles: USD oc exec USDtuned_pod -n openshift-cluster-node-tuning-operator -- find /usr/lib/tuned/openshift{,-control-plane,-node} -name tuned.conf -exec grep -H ^ {} \; 5.14.4. Supported TuneD daemon plugins Excluding the [main] section, the following TuneD plugins are supported when using custom profiles defined in the profile: section of the Tuned CR: audio cpu disk eeepc_she modules mounts net scheduler scsi_host selinux sysctl sysfs usb video vm bootloader There is some dynamic tuning functionality provided by some of these plugins that is not supported. The following TuneD plugins are currently not supported: script systemd Note The TuneD bootloader plugin only supports Red Hat Enterprise Linux CoreOS (RHCOS) worker nodes. Additional resources Available TuneD Plugins Getting Started with TuneD 5.15. Configuring the maximum number of pods per node Two parameters control the maximum number of pods that can be scheduled to a node: podsPerCore and maxPods . If you use both options, the lower of the two limits the number of pods on a node. For example, if podsPerCore is set to 10 on a node with 4 processor cores, the maximum number of pods allowed on the node will be 40. Prerequisites Obtain the label associated with the static MachineConfigPool CRD for the type of node you want to configure by entering the following command: USD oc edit machineconfigpool <name> For example: USD oc edit machineconfigpool worker Example output apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfigPool metadata: creationTimestamp: "2022-11-16T15:34:25Z" generation: 4 labels: pools.operator.machineconfiguration.openshift.io/worker: "" 1 name: worker #... 1 The label appears under Labels. Tip If the label is not present, add a key/value pair such as: Procedure Create a custom resource (CR) for your configuration change. Sample configuration for a max-pods CR apiVersion: machineconfiguration.openshift.io/v1 kind: KubeletConfig metadata: name: set-max-pods 1 spec: machineConfigPoolSelector: matchLabels: pools.operator.machineconfiguration.openshift.io/worker: "" 2 kubeletConfig: podsPerCore: 10 3 maxPods: 250 4 #... 1 Assign a name to CR. 2 Specify the label from the machine config pool. 3 Specify the number of pods the node can run based on the number of processor cores on the node. 4 Specify the number of pods the node can run to a fixed value, regardless of the properties of the node. Note Setting podsPerCore to 0 disables this limit. In the above example, the default value for podsPerCore is 10 and the default value for maxPods is 250 . This means that unless the node has 25 cores or more, by default, podsPerCore will be the limiting factor. Run the following command to create the CR: USD oc create -f <file_name>.yaml Verification List the MachineConfigPool CRDs to see if the change is applied. 
The UPDATING column reports True if the change is picked up by the Machine Config Controller: USD oc get machineconfigpools Example output NAME CONFIG UPDATED UPDATING DEGRADED master master-9cc2c72f205e103bb534 False False False worker worker-8cecd1236b33ee3f8a5e False True False Once the change is complete, the UPDATED column reports True . USD oc get machineconfigpools Example output NAME CONFIG UPDATED UPDATING DEGRADED master master-9cc2c72f205e103bb534 False True False worker worker-8cecd1236b33ee3f8a5e True False False 5.16. Machine scaling with static IP addresses After you deployed your cluster to run nodes with static IP addresses, you can scale an instance of a machine or a machine set to use one of these static IP addresses. Additional resources Static IP addresses for vSphere nodes 5.16.1. Scaling machines to use static IP addresses You can scale additional machine sets to use pre-defined static IP addresses on your cluster. For this configuration, you need to create a machine resource YAML file and then define static IP addresses in this file. Prerequisites You deployed a cluster that runs at least one node with a configured static IP address. Procedure Create a machine resource YAML file and define static IP address network information in the network parameter. Example of a machine resource YAML file with static IP address information defined in the network parameter. apiVersion: machine.openshift.io/v1beta1 kind: Machine metadata: creationTimestamp: null labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machine-role: <role> machine.openshift.io/cluster-api-machine-type: <role> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> name: <infrastructure_id>-<role> namespace: openshift-machine-api spec: lifecycleHooks: {} metadata: {} providerSpec: value: apiVersion: machine.openshift.io/v1beta1 credentialsSecret: name: vsphere-cloud-credentials diskGiB: 120 kind: VSphereMachineProviderSpec memoryMiB: 8192 metadata: creationTimestamp: null network: devices: - gateway: 192.168.204.1 1 ipAddrs: - 192.168.204.8/24 2 nameservers: 3 - 192.168.204.1 networkName: qe-segment-204 numCPUs: 4 numCoresPerSocket: 2 snapshot: "" template: <vm_template_name> userDataSecret: name: worker-user-data workspace: datacenter: <vcenter_data_center_name> datastore: <vcenter_datastore_name> folder: <vcenter_vm_folder_path> resourcepool: <vsphere_resource_pool> server: <vcenter_server_ip> status: {} 1 The IP address for the default gateway for the network interface. 2 Lists IPv4, IPv6, or both IP addresses that installation program passes to the network interface. Both IP families must use the same network interface for the default network. 3 Lists a DNS nameserver. You can define up to 3 DNS nameservers. Consider defining more than one DNS nameserver to take advantage of DNS resolution if that one DNS nameserver becomes unreachable. Create a machine custom resource (CR) by entering the following command in your terminal: USD oc create -f <file_name>.yaml 5.16.2. Machine set scaling of machines with configured static IP addresses You can use a machine set to scale machines with configured static IP addresses. After you configure a machine set to request a static IP address for a machine, the machine controller creates an IPAddressClaim resource in the openshift-machine-api namespace. The external controller then creates an IPAddress resource and binds any static IP addresses to the IPAddressClaim resource. 
Important Your organization might use numerous types of IP address management (IPAM) services. If you want to enable a particular IPAM service on OpenShift Container Platform, you might need to manually create the IPAddressClaim resource in a YAML definition and then bind a static IP address to this resource by entering the following command in your oc CLI: USD oc create -f <ipaddressclaim_filename> The following demonstrates an example of an IPAddressClaim resource: kind: IPAddressClaim metadata: finalizers: - machine.openshift.io/ip-claim-protection name: cluster-dev-9n5wg-worker-0-m7529-claim-0-0 namespace: openshift-machine-api spec: poolRef: apiGroup: ipamcontroller.example.io kind: IPPool name: static-ci-pool status: {} The machine controller updates the machine with a status of IPAddressClaimed to indicate that a static IP address has successfully bound to the IPAddressClaim resource. The machine controller applies the same status to a machine with multiple IPAddressClaim resources that each contain a bound static IP address.The machine controller then creates a virtual machine and applies static IP addresses to any nodes listed in the providerSpec of a machine's configuration. 5.16.3. Using a machine set to scale machines with configured static IP addresses You can use a machine set to scale machines with configured static IP addresses. The example in the procedure demonstrates the use of controllers for scaling machines in a machine set. Prerequisites You deployed a cluster that runs at least one node with a configured static IP address. Procedure Configure a machine set by specifying IP pool information in the network.devices.addressesFromPools schema of the machine set's YAML file: apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: annotations: machine.openshift.io/memoryMb: "8192" machine.openshift.io/vCPU: "4" labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> name: <infrastructure_id>-<role> namespace: openshift-machine-api spec: replicas: 0 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> template: metadata: labels: ipam: "true" machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machine-role: worker machine.openshift.io/cluster-api-machine-type: worker machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> spec: lifecycleHooks: {} metadata: {} providerSpec: value: apiVersion: machine.openshift.io/v1beta1 credentialsSecret: name: vsphere-cloud-credentials diskGiB: 120 kind: VSphereMachineProviderSpec memoryMiB: 8192 metadata: {} network: devices: - addressesFromPools: 1 - group: ipamcontroller.example.io name: static-ci-pool resource: IPPool nameservers: - "192.168.204.1" 2 networkName: qe-segment-204 numCPUs: 4 numCoresPerSocket: 2 snapshot: "" template: rvanderp4-dev-9n5wg-rhcos-generated-region-generated-zone userDataSecret: name: worker-user-data workspace: datacenter: IBMCdatacenter datastore: /IBMCdatacenter/datastore/vsanDatastore folder: /IBMCdatacenter/vm/rvanderp4-dev-9n5wg resourcePool: /IBMCdatacenter/host/IBMCcluster//Resources server: vcenter.ibmc.devcluster.openshift.com 1 Specifies an IP pool, which lists a static IP address or a range of static IP addresses. The IP Pool can either be a reference to a custom resource definition (CRD) or a resource supported by the IPAddressClaims resource handler. 
The machine controller accesses static IP addresses listed in the machine set's configuration and then allocates each address to each machine. 2 Lists a nameserver. You must specify a nameserver for nodes that receive static IP address, because the Dynamic Host Configuration Protocol (DHCP) network configuration does not support static IP addresses. Scale the machine set by entering the following commands in your oc CLI: USD oc scale --replicas=2 machineset <machineset> -n openshift-machine-api Or: USD oc edit machineset <machineset> -n openshift-machine-api After each machine is scaled up, the machine controller creates an IPAddressClaim resource. Optional: Check that the IPAddressClaim resource exists in the openshift-machine-api namespace by entering the following command: USD oc get ipaddressclaims.ipam.cluster.x-k8s.io -n openshift-machine-api Example oc CLI output that lists two IP pools listed in the openshift-machine-api namespace NAME POOL NAME POOL KIND cluster-dev-9n5wg-worker-0-m7529-claim-0-0 static-ci-pool IPPool cluster-dev-9n5wg-worker-0-wdqkt-claim-0-0 static-ci-pool IPPool Create an IPAddress resource by entering the following command: USD oc create -f ipaddress.yaml The following example shows an IPAddress resource with defined network configuration information and one defined static IP address: apiVersion: ipam.cluster.x-k8s.io/v1alpha1 kind: IPAddress metadata: name: cluster-dev-9n5wg-worker-0-m7529-ipaddress-0-0 namespace: openshift-machine-api spec: address: 192.168.204.129 claimRef: 1 name: cluster-dev-9n5wg-worker-0-m7529-claim-0-0 gateway: 192.168.204.1 poolRef: 2 apiGroup: ipamcontroller.example.io kind: IPPool name: static-ci-pool prefix: 23 1 The name of the target IPAddressClaim resource. 2 Details information about the static IP address or addresses from your nodes. Note By default, the external controller automatically scans any resources in the machine set for recognizable address pool types. When the external controller finds kind: IPPool defined in the IPAddress resource, the controller binds any static IP addresses to the IPAddressClaim resource. Update the IPAddressClaim status with a reference to the IPAddress resource: USD oc --type=merge patch IPAddressClaim cluster-dev-9n5wg-worker-0-m7529-claim-0-0 -p='{"status":{"addressRef": {"name": "cluster-dev-9n5wg-worker-0-m7529-ipaddress-0-0"}}}' -n openshift-machine-api --subresource=status | [
"subscription-manager register --username=<user_name> --password=<password>",
"subscription-manager refresh",
"subscription-manager list --available --matches '*OpenShift*'",
"subscription-manager attach --pool=<pool_id>",
"subscription-manager repos --enable=\"rhel-8-for-x86_64-baseos-rpms\" --enable=\"rhel-8-for-x86_64-appstream-rpms\" --enable=\"rhocp-4.17-for-rhel-8-x86_64-rpms\"",
"yum install openshift-ansible openshift-clients jq",
"subscription-manager register --username=<user_name> --password=<password>",
"subscription-manager refresh",
"subscription-manager list --available --matches '*OpenShift*'",
"subscription-manager attach --pool=<pool_id>",
"subscription-manager repos --disable=\"*\"",
"yum repolist",
"yum-config-manager --disable <repo_id>",
"yum-config-manager --disable \\*",
"subscription-manager repos --enable=\"rhel-8-for-x86_64-baseos-rpms\" --enable=\"rhel-8-for-x86_64-appstream-rpms\" --enable=\"rhocp-4.17-for-rhel-8-x86_64-rpms\" --enable=\"fast-datapath-for-rhel-8-x86_64-rpms\"",
"systemctl disable --now firewalld.service",
"[all:vars] ansible_user=root 1 #ansible_become=True 2 openshift_kubeconfig_path=\"~/.kube/config\" 3 [new_workers] 4 mycluster-rhel8-0.example.com mycluster-rhel8-1.example.com",
"cd /usr/share/ansible/openshift-ansible",
"ansible-playbook -i /<path>/inventory/hosts playbooks/scaleup.yml 1",
"oc get nodes -o wide",
"oc adm cordon <node_name> 1",
"oc adm drain <node_name> --force --delete-emptydir-data --ignore-daemonsets 1",
"oc delete nodes <node_name> 1",
"oc get nodes -o wide",
"oc extract -n openshift-machine-api secret/worker-user-data-managed --keys=userData --to=- > worker.ign",
"curl -k http://<HTTP_server>/worker.ign",
"RHCOS_VHD_ORIGIN_URL=USD(oc -n openshift-machine-config-operator get configmap/coreos-bootimages -o jsonpath='{.data.stream}' | jq -r '.architectures.<architecture>.artifacts.metal.formats.iso.disk.location')",
"sudo coreos-installer install --ignition-url=http://<HTTP_server>/<node_type>.ign <device> --ignition-hash=sha512-<digest> 1 2",
"sudo coreos-installer install --ignition-url=http://192.168.1.2:80/installation_directory/bootstrap.ign /dev/sda --ignition-hash=sha512-a5a2d43879223273c9b60af66b44202a1d1248fc01cf156c46d4a79f552b6bad47bc8cc78ddf0116e80c59d2ea9e32ba53bc807afbca581aa059311def2c3e3b",
"DEFAULT pxeboot TIMEOUT 20 PROMPT 0 LABEL pxeboot KERNEL http://<HTTP_server>/rhcos-<version>-live-kernel-<architecture> 1 APPEND initrd=http://<HTTP_server>/rhcos-<version>-live-initramfs.<architecture>.img coreos.inst.install_dev=/dev/sda coreos.inst.ignition_url=http://<HTTP_server>/worker.ign coreos.live.rootfs_url=http://<HTTP_server>/rhcos-<version>-live-rootfs.<architecture>.img 2",
"kernel http://<HTTP_server>/rhcos-<version>-live-kernel-<architecture> initrd=main coreos.live.rootfs_url=http://<HTTP_server>/rhcos-<version>-live-rootfs.<architecture>.img coreos.inst.install_dev=/dev/sda coreos.inst.ignition_url=http://<HTTP_server>/worker.ign 1 2 initrd --name main http://<HTTP_server>/rhcos-<version>-live-initramfs.<architecture>.img 3 boot",
"menuentry 'Install CoreOS' { linux rhcos-<version>-live-kernel-<architecture> coreos.live.rootfs_url=http://<HTTP_server>/rhcos-<version>-live-rootfs.<architecture>.img coreos.inst.install_dev=/dev/sda coreos.inst.ignition_url=http://<HTTP_server>/worker.ign 1 2 initrd rhcos-<version>-live-initramfs.<architecture>.img 3 }",
"oc get nodes",
"NAME STATUS ROLES AGE VERSION master-0 Ready master 63m v1.30.3 master-1 Ready master 63m v1.30.3 master-2 Ready master 64m v1.30.3",
"oc get csr",
"NAME AGE REQUESTOR CONDITION csr-8b2br 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending csr-8vnps 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending",
"oc adm certificate approve <csr_name> 1",
"oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve",
"oc get csr",
"NAME AGE REQUESTOR CONDITION csr-bfd72 5m26s system:node:ip-10-0-50-126.us-east-2.compute.internal Pending csr-c57lv 5m26s system:node:ip-10-0-95-157.us-east-2.compute.internal Pending",
"oc adm certificate approve <csr_name> 1",
"oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs oc adm certificate approve",
"oc get nodes",
"NAME STATUS ROLES AGE VERSION master-0 Ready master 73m v1.30.3 master-1 Ready master 73m v1.30.3 master-2 Ready master 74m v1.30.3 worker-0 Ready worker 11m v1.30.3 worker-1 Ready worker 11m v1.30.3",
"oc project openshift-machine-api",
"oc get secret worker-user-data --template='{{index .data.userData | base64decode}}' | jq > userData.txt",
"{ \"ignition\": { \"config\": { \"merge\": [ { \"source\": \"https:....\" } ] }, \"security\": { \"tls\": { \"certificateAuthorities\": [ { \"source\": \"data:text/plain;charset=utf-8;base64,.....==\" } ] } }, \"version\": \"3.2.0\" }, \"storage\": { \"disks\": [ { \"device\": \"/dev/nvme1n1\", 1 \"partitions\": [ { \"label\": \"var\", \"sizeMiB\": 50000, 2 \"startMiB\": 0 3 } ] } ], \"filesystems\": [ { \"device\": \"/dev/disk/by-partlabel/var\", 4 \"format\": \"xfs\", 5 \"path\": \"/var\" 6 } ] }, \"systemd\": { \"units\": [ 7 { \"contents\": \"[Unit]\\nBefore=local-fs.target\\n[Mount]\\nWhere=/var\\nWhat=/dev/disk/by-partlabel/var\\nOptions=defaults,pquota\\n[Install]\\nWantedBy=local-fs.target\\n\", \"enabled\": true, \"name\": \"var.mount\" } ] } }",
"oc get secret worker-user-data --template='{{index .data.disableTemplating | base64decode}}' | jq > disableTemplating.txt",
"oc create secret generic worker-user-data-x5 --from-file=userData=userData.txt --from-file=disableTemplating=disableTemplating.txt",
"apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: auto-52-92tf4 name: worker-us-east-2-nvme1n1 1 namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: auto-52-92tf4 machine.openshift.io/cluster-api-machineset: auto-52-92tf4-worker-us-east-2b template: metadata: labels: machine.openshift.io/cluster-api-cluster: auto-52-92tf4 machine.openshift.io/cluster-api-machine-role: worker machine.openshift.io/cluster-api-machine-type: worker machine.openshift.io/cluster-api-machineset: auto-52-92tf4-worker-us-east-2b spec: metadata: {} providerSpec: value: ami: id: ami-0c2dbd95931a apiVersion: awsproviderconfig.openshift.io/v1beta1 blockDevices: - DeviceName: /dev/nvme1n1 2 ebs: encrypted: true iops: 0 volumeSize: 120 volumeType: gp2 - DeviceName: /dev/nvme1n2 3 ebs: encrypted: true iops: 0 volumeSize: 50 volumeType: gp2 credentialsSecret: name: aws-cloud-credentials deviceIndex: 0 iamInstanceProfile: id: auto-52-92tf4-worker-profile instanceType: m6i.large kind: AWSMachineProviderConfig metadata: creationTimestamp: null placement: availabilityZone: us-east-2b region: us-east-2 securityGroups: - filters: - name: tag:Name values: - auto-52-92tf4-worker-sg subnet: id: subnet-07a90e5db1 tags: - name: kubernetes.io/cluster/auto-52-92tf4 value: owned userDataSecret: name: worker-user-data-x5 4",
"oc create -f <file-name>.yaml",
"oc get machineset",
"NAME DESIRED CURRENT READY AVAILABLE AGE ci-ln-2675bt2-76ef8-bdgsc-worker-us-east-1a 1 1 1 1 124m ci-ln-2675bt2-76ef8-bdgsc-worker-us-east-1b 2 2 2 2 124m worker-us-east-2-nvme1n1 1 1 1 1 2m35s 1",
"oc get nodes",
"NAME STATUS ROLES AGE VERSION ip-10-0-128-78.ec2.internal Ready worker 117m v1.30.3 ip-10-0-146-113.ec2.internal Ready master 127m v1.30.3 ip-10-0-153-35.ec2.internal Ready worker 118m v1.30.3 ip-10-0-176-58.ec2.internal Ready master 126m v1.30.3 ip-10-0-217-135.ec2.internal Ready worker 2m57s v1.30.3 1 ip-10-0-225-248.ec2.internal Ready master 127m v1.30.3 ip-10-0-245-59.ec2.internal Ready worker 116m v1.30.3",
"oc debug node/<node-name> -- chroot /host lsblk",
"oc debug node/ip-10-0-217-135.ec2.internal -- chroot /host lsblk",
"NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT nvme0n1 202:0 0 120G 0 disk |-nvme0n1p1 202:1 0 1M 0 part |-nvme0n1p2 202:2 0 127M 0 part |-nvme0n1p3 202:3 0 384M 0 part /boot `-nvme0n1p4 202:4 0 119.5G 0 part /sysroot nvme1n1 202:16 0 50G 0 disk `-nvme1n1p1 202:17 0 48.8G 0 part /var 1",
"oc get infrastructure cluster -o jsonpath='{.status.platform}'",
"apiVersion: machine.openshift.io/v1beta1 kind: MachineHealthCheck metadata: name: example 1 namespace: openshift-machine-api spec: selector: matchLabels: machine.openshift.io/cluster-api-machine-role: <role> 2 machine.openshift.io/cluster-api-machine-type: <role> 3 machine.openshift.io/cluster-api-machineset: <cluster_name>-<label>-<zone> 4 unhealthyConditions: - type: \"Ready\" timeout: \"300s\" 5 status: \"False\" - type: \"Ready\" timeout: \"300s\" 6 status: \"Unknown\" maxUnhealthy: \"40%\" 7 nodeStartupTimeout: \"10m\" 8",
"oc apply -f healthcheck.yml",
"oc get machinesets.machine.openshift.io -n openshift-machine-api",
"oc get machines.machine.openshift.io -n openshift-machine-api",
"oc annotate machines.machine.openshift.io/<machine_name> -n openshift-machine-api machine.openshift.io/delete-machine=\"true\"",
"oc scale --replicas=2 machinesets.machine.openshift.io <machineset> -n openshift-machine-api",
"oc edit machinesets.machine.openshift.io <machineset> -n openshift-machine-api",
"apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: name: <machineset> namespace: openshift-machine-api spec: replicas: 2",
"oc get machines.machine.openshift.io",
"kubeletConfig: podsPerCore: 10",
"kubeletConfig: maxPods: 250",
"apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfigPool metadata: name: infra spec: machineConfigSelector: matchExpressions: - {key: machineconfiguration.openshift.io/role, operator: In, values: [worker,infra]}",
"oc get kubeletconfig",
"NAME AGE set-kubelet-config 15m",
"oc get mc | grep kubelet",
"99-worker-generated-kubelet-1 b5c5119de007945b6fe6fb215db3b8e2ceb12511 3.2.0 26m",
"oc describe machineconfigpool <name>",
"oc describe machineconfigpool worker",
"apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfigPool metadata: creationTimestamp: 2019-02-08T14:52:39Z generation: 1 labels: custom-kubelet: set-kubelet-config 1",
"oc label machineconfigpool worker custom-kubelet=set-kubelet-config",
"oc get machineconfig",
"oc describe node <node_name>",
"oc describe node ci-ln-5grqprb-f76d1-ncnqq-worker-a-mdv94",
"Allocatable: attachable-volumes-aws-ebs: 25 cpu: 3500m hugepages-1Gi: 0 hugepages-2Mi: 0 memory: 15341844Ki pods: 250",
"apiVersion: machineconfiguration.openshift.io/v1 kind: KubeletConfig metadata: name: set-kubelet-config spec: machineConfigPoolSelector: matchLabels: custom-kubelet: set-kubelet-config 1 kubeletConfig: 2 podPidsLimit: 8192 containerLogMaxSize: 50Mi maxPods: 500",
"apiVersion: machineconfiguration.openshift.io/v1 kind: KubeletConfig metadata: name: set-kubelet-config spec: machineConfigPoolSelector: matchLabels: custom-kubelet: set-kubelet-config kubeletConfig: maxPods: <pod_count> kubeAPIBurst: <burst_rate> kubeAPIQPS: <QPS>",
"oc label machineconfigpool worker custom-kubelet=set-kubelet-config",
"oc create -f change-maxPods-cr.yaml",
"oc get kubeletconfig",
"NAME AGE set-kubelet-config 15m",
"oc describe node <node_name>",
"Allocatable: attachable-volumes-gce-pd: 127 cpu: 3500m ephemeral-storage: 123201474766 hugepages-1Gi: 0 hugepages-2Mi: 0 memory: 14225400Ki pods: 500 1",
"oc get kubeletconfigs set-kubelet-config -o yaml",
"spec: kubeletConfig: containerLogMaxSize: 50Mi maxPods: 500 podPidsLimit: 8192 machineConfigPoolSelector: matchLabels: custom-kubelet: set-kubelet-config status: conditions: - lastTransitionTime: \"2021-06-30T17:04:07Z\" message: Success status: \"True\" type: Success",
"oc edit machineconfigpool worker",
"spec: maxUnavailable: <node_count>",
"oc label node perf-node.example.com cpumanager=true",
"oc edit machineconfigpool worker",
"metadata: creationTimestamp: 2020-xx-xxx generation: 3 labels: custom-kubelet: cpumanager-enabled",
"apiVersion: machineconfiguration.openshift.io/v1 kind: KubeletConfig metadata: name: cpumanager-enabled spec: machineConfigPoolSelector: matchLabels: custom-kubelet: cpumanager-enabled kubeletConfig: cpuManagerPolicy: static 1 cpuManagerReconcilePeriod: 5s 2",
"oc create -f cpumanager-kubeletconfig.yaml",
"oc get machineconfig 99-worker-XXXXXX-XXXXX-XXXX-XXXXX-kubelet -o json | grep ownerReference -A7",
"\"ownerReferences\": [ { \"apiVersion\": \"machineconfiguration.openshift.io/v1\", \"kind\": \"KubeletConfig\", \"name\": \"cpumanager-enabled\", \"uid\": \"7ed5616d-6b72-11e9-aae1-021e1ce18878\" } ]",
"oc debug node/perf-node.example.com sh-4.2# cat /host/etc/kubernetes/kubelet.conf | grep cpuManager",
"cpuManagerPolicy: static 1 cpuManagerReconcilePeriod: 5s 2",
"oc new-project <project_name>",
"cat cpumanager-pod.yaml",
"apiVersion: v1 kind: Pod metadata: generateName: cpumanager- spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault containers: - name: cpumanager image: gcr.io/google_containers/pause:3.2 resources: requests: cpu: 1 memory: \"1G\" limits: cpu: 1 memory: \"1G\" securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL] nodeSelector: cpumanager: \"true\"",
"oc create -f cpumanager-pod.yaml",
"oc describe pod cpumanager",
"Name: cpumanager-6cqz7 Namespace: default Priority: 0 PriorityClassName: <none> Node: perf-node.example.com/xxx.xx.xx.xxx Limits: cpu: 1 memory: 1G Requests: cpu: 1 memory: 1G QoS Class: Guaranteed Node-Selectors: cpumanager=true",
"oc describe node --selector='cpumanager=true' | grep -i cpumanager- -B2",
"NAMESPACE NAME CPU Requests CPU Limits Memory Requests Memory Limits Age cpuman cpumanager-mlrrz 1 (28%) 1 (28%) 1G (13%) 1G (13%) 27m",
"oc debug node/perf-node.example.com",
"sh-4.2# systemctl status | grep -B5 pause",
"├─init.scope │ └─1 /usr/lib/systemd/systemd --switched-root --system --deserialize 17 └─kubepods.slice ├─kubepods-pod69c01f8e_6b74_11e9_ac0f_0a2b62178a22.slice │ ├─crio-b5437308f1a574c542bdf08563b865c0345c8f8c0b0a655612c.scope │ └─32706 /pause",
"cd /sys/fs/cgroup/kubepods.slice/kubepods-pod69c01f8e_6b74_11e9_ac0f_0a2b62178a22.slice/crio-b5437308f1ad1a7db0574c542bdf08563b865c0345c86e9585f8c0b0a655612c.scope",
"for i in `ls cpuset.cpus cgroup.procs` ; do echo -n \"USDi \"; cat USDi ; done",
"cpuset.cpus 1 tasks 32706",
"grep ^Cpus_allowed_list /proc/32706/status",
"Cpus_allowed_list: 1",
"cat /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc494a073_6b77_11e9_98c0_06bba5c387ea.slice/crio-c56982f57b75a2420947f0afc6cafe7534c5734efc34157525fa9abbf99e3849.scope/cpuset.cpus",
"oc describe node perf-node.example.com",
"Capacity: attachable-volumes-aws-ebs: 39 cpu: 2 ephemeral-storage: 124768236Ki hugepages-1Gi: 0 hugepages-2Mi: 0 memory: 8162900Ki pods: 250 Allocatable: attachable-volumes-aws-ebs: 39 cpu: 1500m ephemeral-storage: 124768236Ki hugepages-1Gi: 0 hugepages-2Mi: 0 memory: 7548500Ki pods: 250 ------- ---- ------------ ---------- --------------- ------------- --- default cpumanager-6cqz7 1 (66%) 1 (66%) 1G (12%) 1G (12%) 29m Allocated resources: (Total limits may be over 100 percent, i.e., overcommitted.) Resource Requests Limits -------- -------- ------ cpu 1440m (96%) 1 (66%)",
"NAME READY STATUS RESTARTS AGE cpumanager-6cqz7 1/1 Running 0 33m cpumanager-7qc2t 0/1 Pending 0 11s",
"apiVersion: v1 kind: Pod metadata: generateName: hugepages-volume- spec: containers: - securityContext: privileged: true image: rhel7:latest command: - sleep - inf name: example volumeMounts: - mountPath: /dev/hugepages name: hugepage resources: limits: hugepages-2Mi: 100Mi 1 memory: \"1Gi\" cpu: \"1\" volumes: - name: hugepage emptyDir: medium: HugePages",
"oc label node <node_using_hugepages> node-role.kubernetes.io/worker-hp=",
"apiVersion: tuned.openshift.io/v1 kind: Tuned metadata: name: hugepages 1 namespace: openshift-cluster-node-tuning-operator spec: profile: 2 - data: | [main] summary=Boot time configuration for hugepages include=openshift-node [bootloader] cmdline_openshift_node_hugepages=hugepagesz=2M hugepages=50 3 name: openshift-node-hugepages recommend: - machineConfigLabels: 4 machineconfiguration.openshift.io/role: \"worker-hp\" priority: 30 profile: openshift-node-hugepages",
"oc create -f hugepages-tuned-boottime.yaml",
"apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfigPool metadata: name: worker-hp labels: worker-hp: \"\" spec: machineConfigSelector: matchExpressions: - {key: machineconfiguration.openshift.io/role, operator: In, values: [worker,worker-hp]} nodeSelector: matchLabels: node-role.kubernetes.io/worker-hp: \"\"",
"oc create -f hugepages-mcp.yaml",
"oc get node <node_using_hugepages> -o jsonpath=\"{.status.allocatable.hugepages-2Mi}\" 100Mi",
"service DevicePlugin { // GetDevicePluginOptions returns options to be communicated with Device // Manager rpc GetDevicePluginOptions(Empty) returns (DevicePluginOptions) {} // ListAndWatch returns a stream of List of Devices // Whenever a Device state change or a Device disappears, ListAndWatch // returns the new list rpc ListAndWatch(Empty) returns (stream ListAndWatchResponse) {} // Allocate is called during container creation so that the Device // Plug-in can run device specific operations and instruct Kubelet // of the steps to make the Device available in the container rpc Allocate(AllocateRequest) returns (AllocateResponse) {} // PreStartcontainer is called, if indicated by Device Plug-in during // registration phase, before each container start. Device plug-in // can run device specific operations such as resetting the device // before making devices available to the container rpc PreStartcontainer(PreStartcontainerRequest) returns (PreStartcontainerResponse) {} }",
"oc describe machineconfig <name>",
"oc describe machineconfig 00-worker",
"Name: 00-worker Namespace: Labels: machineconfiguration.openshift.io/role=worker 1",
"apiVersion: machineconfiguration.openshift.io/v1 kind: KubeletConfig metadata: name: devicemgr 1 spec: machineConfigPoolSelector: matchLabels: machineconfiguration.openshift.io: devicemgr 2 kubeletConfig: feature-gates: - DevicePlugins=true 3",
"oc create -f devicemgr.yaml",
"kubeletconfig.machineconfiguration.openshift.io/devicemgr created",
"apiVersion: v1 kind: Node metadata: name: my-node # spec: taints: - effect: NoExecute key: key1 value: value1 #",
"apiVersion: v1 kind: Pod metadata: name: my-pod # spec: tolerations: - key: \"key1\" operator: \"Equal\" value: \"value1\" effect: \"NoExecute\" tolerationSeconds: 3600 #",
"apiVersion: v1 kind: Node metadata: annotations: machine.openshift.io/machine: openshift-machine-api/ci-ln-62s7gtb-f76d1-v8jxv-master-0 machineconfiguration.openshift.io/currentConfig: rendered-master-cdc1ab7da414629332cc4c3926e6e59c name: my-node # spec: taints: - effect: NoSchedule key: node-role.kubernetes.io/master #",
"apiVersion: v1 kind: Pod metadata: name: my-pod # spec: tolerations: - key: \"key1\" 1 value: \"value1\" operator: \"Equal\" effect: \"NoExecute\" tolerationSeconds: 3600 2 #",
"apiVersion: v1 kind: Pod metadata: name: my-pod # spec: tolerations: - key: \"key1\" operator: \"Exists\" 1 effect: \"NoExecute\" tolerationSeconds: 3600 #",
"oc adm taint nodes <node_name> <key>=<value>:<effect>",
"oc adm taint nodes node1 key1=value1:NoExecute",
"apiVersion: v1 kind: Node metadata: annotations: machine.openshift.io/machine: openshift-machine-api/ci-ln-62s7gtb-f76d1-v8jxv-master-0 machineconfiguration.openshift.io/currentConfig: rendered-master-cdc1ab7da414629332cc4c3926e6e59c name: my-node # spec: taints: - effect: NoSchedule key: node-role.kubernetes.io/master #",
"apiVersion: v1 kind: Pod metadata: name: my-pod # spec: tolerations: - key: \"key1\" 1 value: \"value1\" operator: \"Equal\" effect: \"NoExecute\" tolerationSeconds: 3600 2 #",
"apiVersion: v1 kind: Pod metadata: name: my-pod # spec: tolerations: - key: \"key1\" operator: \"Exists\" effect: \"NoExecute\" tolerationSeconds: 3600 #",
"oc edit machineset <machineset>",
"apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: name: my-machineset # spec: # template: # spec: taints: - effect: NoExecute key: key1 value: value1 #",
"oc scale --replicas=0 machineset <machineset> -n openshift-machine-api",
"apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: name: <machineset> namespace: openshift-machine-api spec: replicas: 0",
"oc scale --replicas=2 machineset <machineset> -n openshift-machine-api",
"oc edit machineset <machineset> -n openshift-machine-api",
"oc adm taint nodes node1 dedicated=groupName:NoSchedule",
"kind: Node apiVersion: v1 metadata: name: my-node # spec: taints: - key: dedicated value: groupName effect: NoSchedule #",
"apiVersion: v1 kind: Pod metadata: name: my-pod # spec: tolerations: - key: \"disktype\" value: \"ssd\" operator: \"Equal\" effect: \"NoSchedule\" tolerationSeconds: 3600 #",
"oc adm taint nodes <node-name> disktype=ssd:NoSchedule",
"oc adm taint nodes <node-name> disktype=ssd:PreferNoSchedule",
"kind: Node apiVersion: v1 metadata: name: my_node # spec: taints: - key: disktype value: ssd effect: PreferNoSchedule #",
"oc adm taint nodes <node-name> <key>-",
"oc adm taint nodes ip-10-0-132-248.ec2.internal key1-",
"node/ip-10-0-132-248.ec2.internal untainted",
"apiVersion: v1 kind: Pod metadata: name: my-pod # spec: tolerations: - key: \"key2\" operator: \"Exists\" effect: \"NoExecute\" tolerationSeconds: 3600 #",
"oc edit KubeletConfig cpumanager-enabled",
"apiVersion: machineconfiguration.openshift.io/v1 kind: KubeletConfig metadata: name: cpumanager-enabled spec: machineConfigPoolSelector: matchLabels: custom-kubelet: cpumanager-enabled kubeletConfig: cpuManagerPolicy: static 1 cpuManagerReconcilePeriod: 5s topologyManagerPolicy: single-numa-node 2",
"spec: containers: - name: nginx image: nginx",
"spec: containers: - name: nginx image: nginx resources: limits: memory: \"200Mi\" requests: memory: \"100Mi\"",
"spec: containers: - name: nginx image: nginx resources: limits: memory: \"200Mi\" cpu: \"2\" example.com/device: \"1\" requests: memory: \"200Mi\" cpu: \"2\" example.com/device: \"1\"",
"apiVersion: operator.autoscaling.openshift.io/v1 kind: ClusterResourceOverride metadata: name: cluster 1 spec: podResourceOverride: spec: memoryRequestToLimitPercent: 50 2 cpuRequestToLimitPercent: 25 3 limitCPUToMemoryPercent: 200 4",
"apiVersion: v1 kind: Namespace metadata: labels: clusterresourceoverrides.admission.autoscaling.openshift.io/enabled: \"true\"",
"apiVersion: v1 kind: Pod metadata: name: my-pod namespace: my-namespace spec: containers: - name: hello-openshift image: openshift/hello-openshift resources: limits: memory: \"512Mi\" cpu: \"2000m\"",
"apiVersion: v1 kind: Pod metadata: name: my-pod namespace: my-namespace spec: containers: - image: openshift/hello-openshift name: hello-openshift resources: limits: cpu: \"1\" 1 memory: 512Mi requests: cpu: 250m 2 memory: 256Mi",
"apiVersion: operator.autoscaling.openshift.io/v1 kind: ClusterResourceOverride metadata: name: cluster 1 spec: podResourceOverride: spec: memoryRequestToLimitPercent: 50 2 cpuRequestToLimitPercent: 25 3 limitCPUToMemoryPercent: 200 4",
"apiVersion: operator.autoscaling.openshift.io/v1 kind: ClusterResourceOverride metadata: annotations: kubectl.kubernetes.io/last-applied-configuration: | {\"apiVersion\":\"operator.autoscaling.openshift.io/v1\",\"kind\":\"ClusterResourceOverride\",\"metadata\":{\"annotations\":{},\"name\":\"cluster\"},\"spec\":{\"podResourceOverride\":{\"spec\":{\"cpuRequestToLimitPercent\":25,\"limitCPUToMemoryPercent\":200,\"memoryRequestToLimitPercent\":50}}}} creationTimestamp: \"2019-12-18T22:35:02Z\" generation: 1 name: cluster resourceVersion: \"127622\" selfLink: /apis/operator.autoscaling.openshift.io/v1/clusterresourceoverrides/cluster uid: 978fc959-1717-4bd1-97d0-ae00ee111e8d spec: podResourceOverride: spec: cpuRequestToLimitPercent: 25 limitCPUToMemoryPercent: 200 memoryRequestToLimitPercent: 50 status: mutatingWebhookConfigurationRef: 1 apiVersion: admissionregistration.k8s.io/v1 kind: MutatingWebhookConfiguration name: clusterresourceoverrides.admission.autoscaling.openshift.io resourceVersion: \"127621\" uid: 98b3b8ae-d5ce-462b-8ab5-a729ea8f38f3",
"apiVersion: v1 kind: Namespace metadata: name: clusterresourceoverride-operator",
"oc create -f <file-name>.yaml",
"oc create -f cro-namespace.yaml",
"apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: clusterresourceoverride-operator namespace: clusterresourceoverride-operator spec: targetNamespaces: - clusterresourceoverride-operator",
"oc create -f <file-name>.yaml",
"oc create -f cro-og.yaml",
"apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: clusterresourceoverride namespace: clusterresourceoverride-operator spec: channel: \"stable\" name: clusterresourceoverride source: redhat-operators sourceNamespace: openshift-marketplace",
"oc create -f <file-name>.yaml",
"oc create -f cro-sub.yaml",
"oc project clusterresourceoverride-operator",
"apiVersion: operator.autoscaling.openshift.io/v1 kind: ClusterResourceOverride metadata: name: cluster 1 spec: podResourceOverride: spec: memoryRequestToLimitPercent: 50 2 cpuRequestToLimitPercent: 25 3 limitCPUToMemoryPercent: 200 4",
"oc create -f <file-name>.yaml",
"oc create -f cro-cr.yaml",
"oc get clusterresourceoverride cluster -n clusterresourceoverride-operator -o yaml",
"apiVersion: operator.autoscaling.openshift.io/v1 kind: ClusterResourceOverride metadata: annotations: kubectl.kubernetes.io/last-applied-configuration: | {\"apiVersion\":\"operator.autoscaling.openshift.io/v1\",\"kind\":\"ClusterResourceOverride\",\"metadata\":{\"annotations\":{},\"name\":\"cluster\"},\"spec\":{\"podResourceOverride\":{\"spec\":{\"cpuRequestToLimitPercent\":25,\"limitCPUToMemoryPercent\":200,\"memoryRequestToLimitPercent\":50}}}} creationTimestamp: \"2019-12-18T22:35:02Z\" generation: 1 name: cluster resourceVersion: \"127622\" selfLink: /apis/operator.autoscaling.openshift.io/v1/clusterresourceoverrides/cluster uid: 978fc959-1717-4bd1-97d0-ae00ee111e8d spec: podResourceOverride: spec: cpuRequestToLimitPercent: 25 limitCPUToMemoryPercent: 200 memoryRequestToLimitPercent: 50 status: mutatingWebhookConfigurationRef: 1 apiVersion: admissionregistration.k8s.io/v1 kind: MutatingWebhookConfiguration name: clusterresourceoverrides.admission.autoscaling.openshift.io resourceVersion: \"127621\" uid: 98b3b8ae-d5ce-462b-8ab5-a729ea8f38f3",
"apiVersion: operator.autoscaling.openshift.io/v1 kind: ClusterResourceOverride metadata: name: cluster spec: podResourceOverride: spec: memoryRequestToLimitPercent: 50 1 cpuRequestToLimitPercent: 25 2 limitCPUToMemoryPercent: 200 3",
"apiVersion: v1 kind: Namespace metadata: labels: clusterresourceoverrides.admission.autoscaling.openshift.io/enabled: \"true\" 1",
"sysctl -a |grep commit",
"# vm.overcommit_memory = 0 #",
"sysctl -a |grep panic",
"# vm.panic_on_oom = 0 #",
"oc edit machineconfigpool <name>",
"oc edit machineconfigpool worker",
"apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfigPool metadata: creationTimestamp: \"2022-11-16T15:34:25Z\" generation: 4 labels: pools.operator.machineconfiguration.openshift.io/worker: \"\" 1 name: worker",
"oc label machineconfigpool worker custom-kubelet=small-pods",
"apiVersion: machineconfiguration.openshift.io/v1 kind: KubeletConfig metadata: name: disable-cpu-units 1 spec: machineConfigPoolSelector: matchLabels: pools.operator.machineconfiguration.openshift.io/worker: \"\" 2 kubeletConfig: cpuCfsQuota: false 3",
"oc create -f <file_name>.yaml",
"sysctl -w vm.overcommit_memory=0",
"apiVersion: v1 kind: Namespace metadata: annotations: quota.openshift.io/cluster-resource-override-enabled: \"false\" <.>",
"oc edit machineconfigpool <name>",
"oc edit machineconfigpool worker",
"apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfigPool metadata: creationTimestamp: \"2022-11-16T15:34:25Z\" generation: 4 labels: pools.operator.machineconfiguration.openshift.io/worker: \"\" 1 name: worker #",
"oc label machineconfigpool worker custom-kubelet=small-pods",
"apiVersion: machineconfiguration.openshift.io/v1 kind: KubeletConfig metadata: name: worker-kubeconfig 1 spec: machineConfigPoolSelector: matchLabels: pools.operator.machineconfiguration.openshift.io/worker: \"\" 2 kubeletConfig: evictionSoft: 3 memory.available: \"500Mi\" 4 nodefs.available: \"10%\" nodefs.inodesFree: \"5%\" imagefs.available: \"15%\" imagefs.inodesFree: \"10%\" evictionSoftGracePeriod: 5 memory.available: \"1m30s\" nodefs.available: \"1m30s\" nodefs.inodesFree: \"1m30s\" imagefs.available: \"1m30s\" imagefs.inodesFree: \"1m30s\" evictionHard: 6 memory.available: \"200Mi\" nodefs.available: \"5%\" nodefs.inodesFree: \"4%\" imagefs.available: \"10%\" imagefs.inodesFree: \"5%\" evictionPressureTransitionPeriod: 3m 7 imageMinimumGCAge: 5m 8 imageGCHighThresholdPercent: 80 9 imageGCLowThresholdPercent: 75 10 #",
"oc create -f <file_name>.yaml",
"oc create -f gc-container.yaml",
"kubeletconfig.machineconfiguration.openshift.io/gc-container created",
"oc get machineconfigpool",
"NAME CONFIG UPDATED UPDATING master rendered-master-546383f80705bd5aeaba93 True False worker rendered-worker-b4c51bb33ccaae6fc4a6a5 False True",
"get tuned.tuned.openshift.io/default -o yaml -n openshift-cluster-node-tuning-operator",
"profile: - name: tuned_profile_1 data: | # TuneD profile specification [main] summary=Description of tuned_profile_1 profile [sysctl] net.ipv4.ip_forward=1 # ... other sysctl's or other TuneD daemon plugins supported by the containerized TuneD - name: tuned_profile_n data: | # TuneD profile specification [main] summary=Description of tuned_profile_n profile # tuned_profile_n profile settings",
"recommend: <recommend-item-1> <recommend-item-n>",
"- machineConfigLabels: 1 <mcLabels> 2 match: 3 <match> 4 priority: <priority> 5 profile: <tuned_profile_name> 6 operand: 7 debug: <bool> 8 tunedConfig: reapply_sysctl: <bool> 9",
"- label: <label_name> 1 value: <label_value> 2 type: <label_type> 3 <match> 4",
"- match: - label: tuned.openshift.io/elasticsearch match: - label: node-role.kubernetes.io/master - label: node-role.kubernetes.io/infra type: pod priority: 10 profile: openshift-control-plane-es - match: - label: node-role.kubernetes.io/master - label: node-role.kubernetes.io/infra priority: 20 profile: openshift-control-plane - priority: 30 profile: openshift-node",
"apiVersion: tuned.openshift.io/v1 kind: Tuned metadata: name: openshift-node-custom namespace: openshift-cluster-node-tuning-operator spec: profile: - data: | [main] summary=Custom OpenShift node profile with an additional kernel parameter include=openshift-node [bootloader] cmdline_openshift_node_custom=+skew_tick=1 name: openshift-node-custom recommend: - machineConfigLabels: machineconfiguration.openshift.io/role: \"worker-custom\" priority: 20 profile: openshift-node-custom",
"apiVersion: tuned.openshift.io/v1 kind: Tuned metadata: name: provider-gce namespace: openshift-cluster-node-tuning-operator spec: profile: - data: | [main] summary=GCE Cloud provider-specific profile # Your tuning for GCE Cloud provider goes here. name: provider-gce",
"apiVersion: tuned.openshift.io/v1 kind: Tuned metadata: name: default namespace: openshift-cluster-node-tuning-operator spec: profile: - data: | [main] summary=Optimize systems running OpenShift (provider specific parent profile) include=-provider-USD{f:exec:cat:/var/lib/ocp-tuned/provider},openshift name: openshift recommend: - profile: openshift-control-plane priority: 30 match: - label: node-role.kubernetes.io/master - label: node-role.kubernetes.io/infra - profile: openshift-node priority: 40",
"oc exec USDtuned_pod -n openshift-cluster-node-tuning-operator -- find /usr/lib/tuned/openshift{,-control-plane,-node} -name tuned.conf -exec grep -H ^ {} \\;",
"oc edit machineconfigpool <name>",
"oc edit machineconfigpool worker",
"apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfigPool metadata: creationTimestamp: \"2022-11-16T15:34:25Z\" generation: 4 labels: pools.operator.machineconfiguration.openshift.io/worker: \"\" 1 name: worker #",
"oc label machineconfigpool worker custom-kubelet=small-pods",
"apiVersion: machineconfiguration.openshift.io/v1 kind: KubeletConfig metadata: name: set-max-pods 1 spec: machineConfigPoolSelector: matchLabels: pools.operator.machineconfiguration.openshift.io/worker: \"\" 2 kubeletConfig: podsPerCore: 10 3 maxPods: 250 4 #",
"oc create -f <file_name>.yaml",
"oc get machineconfigpools",
"NAME CONFIG UPDATED UPDATING DEGRADED master master-9cc2c72f205e103bb534 False False False worker worker-8cecd1236b33ee3f8a5e False True False",
"oc get machineconfigpools",
"NAME CONFIG UPDATED UPDATING DEGRADED master master-9cc2c72f205e103bb534 False True False worker worker-8cecd1236b33ee3f8a5e True False False",
"apiVersion: machine.openshift.io/v1beta1 kind: Machine metadata: creationTimestamp: null labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machine-role: <role> machine.openshift.io/cluster-api-machine-type: <role> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> name: <infrastructure_id>-<role> namespace: openshift-machine-api spec: lifecycleHooks: {} metadata: {} providerSpec: value: apiVersion: machine.openshift.io/v1beta1 credentialsSecret: name: vsphere-cloud-credentials diskGiB: 120 kind: VSphereMachineProviderSpec memoryMiB: 8192 metadata: creationTimestamp: null network: devices: - gateway: 192.168.204.1 1 ipAddrs: - 192.168.204.8/24 2 nameservers: 3 - 192.168.204.1 networkName: qe-segment-204 numCPUs: 4 numCoresPerSocket: 2 snapshot: \"\" template: <vm_template_name> userDataSecret: name: worker-user-data workspace: datacenter: <vcenter_data_center_name> datastore: <vcenter_datastore_name> folder: <vcenter_vm_folder_path> resourcepool: <vsphere_resource_pool> server: <vcenter_server_ip> status: {}",
"oc create -f <file_name>.yaml",
"oc create -f <ipaddressclaim_filename>",
"kind: IPAddressClaim metadata: finalizers: - machine.openshift.io/ip-claim-protection name: cluster-dev-9n5wg-worker-0-m7529-claim-0-0 namespace: openshift-machine-api spec: poolRef: apiGroup: ipamcontroller.example.io kind: IPPool name: static-ci-pool status: {}",
"apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: annotations: machine.openshift.io/memoryMb: \"8192\" machine.openshift.io/vCPU: \"4\" labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> name: <infrastructure_id>-<role> namespace: openshift-machine-api spec: replicas: 0 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> template: metadata: labels: ipam: \"true\" machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machine-role: worker machine.openshift.io/cluster-api-machine-type: worker machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> spec: lifecycleHooks: {} metadata: {} providerSpec: value: apiVersion: machine.openshift.io/v1beta1 credentialsSecret: name: vsphere-cloud-credentials diskGiB: 120 kind: VSphereMachineProviderSpec memoryMiB: 8192 metadata: {} network: devices: - addressesFromPools: 1 - group: ipamcontroller.example.io name: static-ci-pool resource: IPPool nameservers: - \"192.168.204.1\" 2 networkName: qe-segment-204 numCPUs: 4 numCoresPerSocket: 2 snapshot: \"\" template: rvanderp4-dev-9n5wg-rhcos-generated-region-generated-zone userDataSecret: name: worker-user-data workspace: datacenter: IBMCdatacenter datastore: /IBMCdatacenter/datastore/vsanDatastore folder: /IBMCdatacenter/vm/rvanderp4-dev-9n5wg resourcePool: /IBMCdatacenter/host/IBMCcluster//Resources server: vcenter.ibmc.devcluster.openshift.com",
"oc scale --replicas=2 machineset <machineset> -n openshift-machine-api",
"oc edit machineset <machineset> -n openshift-machine-api",
"oc get ipaddressclaims.ipam.cluster.x-k8s.io -n openshift-machine-api",
"NAME POOL NAME POOL KIND cluster-dev-9n5wg-worker-0-m7529-claim-0-0 static-ci-pool IPPool cluster-dev-9n5wg-worker-0-wdqkt-claim-0-0 static-ci-pool IPPool",
"oc create -f ipaddress.yaml",
"apiVersion: ipam.cluster.x-k8s.io/v1alpha1 kind: IPAddress metadata: name: cluster-dev-9n5wg-worker-0-m7529-ipaddress-0-0 namespace: openshift-machine-api spec: address: 192.168.204.129 claimRef: 1 name: cluster-dev-9n5wg-worker-0-m7529-claim-0-0 gateway: 192.168.204.1 poolRef: 2 apiGroup: ipamcontroller.example.io kind: IPPool name: static-ci-pool prefix: 23",
"oc --type=merge patch IPAddressClaim cluster-dev-9n5wg-worker-0-m7529-claim-0-0 -p='{\"status\":{\"addressRef\": {\"name\": \"cluster-dev-9n5wg-worker-0-m7529-ipaddress-0-0\"}}}' -n openshift-machine-api --subresource=status"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.17/html/postinstallation_configuration/post-install-node-tasks |
Chapter 7. Configuring Single Sign-On for the RHEL 8 web console in the IdM domain | Chapter 7. Configuring Single Sign-On for the RHEL 8 web console in the IdM domain You can use Single Sign-on (SSO) authentication provided by Identity Management (IdM) in the RHEL 8 web console to leverage the following advantages: IdM domain administrators can use the RHEL 8 web console to manage local machines. Users with a Kerberos ticket in the IdM domain do not need to provide login credentials to access the web console. All hosts known to the IdM domain are accessible via SSH from the local instance of the RHEL 8 web console. Certificate configuration is not necessary. The console's web server automatically switches to a certificate issued by the IdM certificate authority and accepted by browsers. Configuring SSO for logging into the RHEL web console requires you to: Add machines to the IdM domain using the RHEL 8 web console. If you want to use Kerberos for authentication, you must obtain a Kerberos ticket on your machine. Allow administrators on the IdM server to run any command on any host. Prerequisites The RHEL web console installed on RHEL 8 systems. For details, see Installing the web console . IdM client installed on systems with the RHEL web console. For details, see IdM client installation . 7.1. Joining a RHEL 8 system to an IdM domain using the web console You can use the web console to join the Red Hat Enterprise Linux 8 system to the Identity Management (IdM) domain. Prerequisites The IdM domain is running and reachable from the client you want to join. You have the IdM domain administrator credentials. You have installed the RHEL 8 web console. For instructions, see Installing and enabling the web console . Procedure Log in to the RHEL 8 web console. For details, see Logging in to the web console . In the Configuration field of the Overview tab, click Join Domain . In the Join a Domain dialog box, enter the host name of the IdM server in the Domain Address field. In the Domain administrator name field, enter the user name of the IdM administration account. In the Domain administrator password field, enter the password. Click Join . Verification If the RHEL 8 web console did not display an error, the system has been joined to the IdM domain and you can see the domain name in the System screen. To verify that the user is a member of the domain, click the Terminal page and type the id command: Additional resources Planning Identity Management Installing Identity Management Managing IdM users, groups, hosts, and access control rules 7.2. Logging in to the web console using Kerberos authentication Configure the RHEL 8 system to use Kerberos authentication. Important With SSO, you usually do not have any administrative privileges in the web console. This only works if you configure passwordless sudo. The web console does not interactively ask for a sudo password. Prerequisites IdM domain running and reachable in your company environment. For details, see Joining a RHEL 8 system to an IdM domain using the web console . You have installed the RHEL 8 web console. For instructions, see Installing and enabling the web console . If the system does not use a Kerberos ticket managed by the SSSD client, request the ticket with the kinit utility manually. Procedure Log in to the RHEL web console by entering the following URL in your web browser: At this point, you are successfully connected to the RHEL web console and you can start with configuration. | [
"id euid=548800004(example_user) gid=548800004(example_user) groups=548800004(example_user) context=unconfined_u:unconfined_r:unconfined_t:s0-s0:c0.c1023",
"https:// <dns_name> :9090"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/managing_systems_using_the_rhel_8_web_console/configuring_single_sign_on_for_the_rhel_8_web_console_in_the_idm_domain_system-management-using-the-rhel-8-web-console |
Chapter 2. Eclipse Temurin features | Chapter 2. Eclipse Temurin features Eclipse Temurin does not contain structural changes from the upstream distribution of OpenJDK. For the list of changes and security fixes that the latest OpenJDK 8 release of Eclipse Temurin includes, see OpenJDK 8u382 Released . New features and enhancements Review the following release notes to understand new features and feature enhancements included with the Eclipse Temurin 8.0.382 release: Support for GB18030-2022 The Chinese Electronics Standardization Institute (CESI) recently published GB18030-2022 as an update to the GB18030 standard, synchronizing the character set with Unicode 11.0. The GB18030-2022 standard is now the default GB18030 character set that OpenJDK 8.0.382 uses. However, this updated character set contains incompatible changes compared with GB18030-2000, which previous releases of OpenJDK 8 used. From OpenJDK 8.0.382 onward, if you want to use the previous version of the character set, ensure that the new system property jdk.charset.GB18030 is set to 2000 . See JDK-8301119 (JDK Bug System) . Additional characters for GB18030-2022 (Level 2) support allowed To support "Implementation Level 2" of the GB18030-2022 standard, OpenJDK must support the use of characters that are in the Chinese Japanese Korean (CJK) Unified Ideographs Extension E block of Unicode 8.0. Maintenance Release 5 of the Java SE 8 specification adds support for these characters, which OpenJDK 8.0.382 implements through the addition of a new UnicodeBlock instance, Character.UnicodeBlock.CJK_UNIFIED_IDEOGRAPHS_EXTENSION_E . See JDK-8305681 (JDK Bug System) . Enhanced validation of JAR signature You can now configure the maximum number of bytes that are allowed for the signature-related files in a Java archive (JAR) file by setting a new system property, jdk.jar.maxSignatureFileSize . By default, the jdk.jar.maxSignatureFileSize property is set to 8000000 bytes (8 MB). JDK bug system reference ID: JDK-8300596. GTS root certificate authority (CA) certificates added In the OpenJDK 8.0.382 release, the cacerts truststore includes four Google Trust Services (GTS) root certificates: Certificate 1 Name: Google Trust Services LLC Alias name: gtsrootcar1 Distinguished name: CN=GTS Root R1, O=Google Trust Services LLC, C=US Certificate 2 Name: Google Trust Services LLC Alias name: gtsrootcar2 Distinguished name: CN=GTS Root R2, O=Google Trust Services LLC, C=US Certificate 3 Name: Google Trust Services LLC Alias name: gtsrootcar3 Distinguished name: CN=GTS Root R3, O=Google Trust Services LLC, C=US Certificate 4 Name: Google Trust Services LLC Alias name: gtsrootcar4 Distinguished name: CN=GTS Root R4, O=Google Trust Services LLC, C=US See JDK-8307134 (JDK Bug System) . Microsoft Corporation root CA certificates added In the OpenJDK 8.0.382 release, the cacerts truststore includes two Microsoft Corporation root certificates: Certificate 1 Name: Microsoft Corporation Alias name: microsoftecc2017 Distinguished name: CN=Microsoft ECC Root Certificate Authority 2017, O=Microsoft Corporation, C=US Certificate 2 Name: Microsoft Corporation Alias name: microsoftrsa2017 Distinguished name: CN=Microsoft RSA Root Certificate Authority 2017, O=Microsoft Corporation, C=US See JDK-8304760 (JDK Bug System) .
TWCA root CA certificate added In the OpenJDK 8.0.382 release, the cacerts truststore includes the Taiwan Certificate Authority (TWCA) root certificate: Name: TWCA Alias name: twcaglobalrootca Distinguished name: CN=TWCA Global Root CA, OU=Root CA, O=TAIWAN-CA, C=TW See JDK-8305975 (JDK Bug System) . Revised on 2024-05-10 09:07:26 UTC | null | https://docs.redhat.com/en/documentation/red_hat_build_of_openjdk/8/html/eclipse_temurin_8.0.382_release_notes/openjdk-temurin-features-8.0.382_openjdk |
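The GB18030 variant and the new Unicode block described above can be observed from plain application code. The following Java sketch is illustrative only and is not part of the release notes; the class name and the sample code point U+2B820 are assumptions, while the jdk.charset.GB18030 and jdk.jar.maxSignatureFileSize property names come from the notes above. Run it with -Djdk.charset.GB18030=2000 to select the legacy GB18030-2000 mapping instead of the new default.

import java.nio.charset.Charset;

public class GB18030Check {
    public static void main(String[] args) {
        // null means the default mapping is in use (GB18030-2022 on OpenJDK 8.0.382 and later)
        System.out.println("jdk.charset.GB18030 = " + System.getProperty("jdk.charset.GB18030"));
        System.out.println("GB18030 supported: " + Charset.isSupported("GB18030"));

        // U+2B820 lies in CJK Unified Ideographs Extension E (Unicode 8.0)
        int codePoint = 0x2B820;
        Character.UnicodeBlock block = Character.UnicodeBlock.of(codePoint);
        // On 8.0.382 (Java SE 8 Maintenance Release 5) this is expected to print
        // CJK_UNIFIED_IDEOGRAPHS_EXTENSION_E; older 8u builds may report null or a different block
        System.out.println("Block of U+2B820: " + block);

        // The JAR signature limit is likewise an ordinary system property,
        // for example -Djdk.jar.maxSignatureFileSize=16000000
        System.out.println("jdk.jar.maxSignatureFileSize = " + System.getProperty("jdk.jar.maxSignatureFileSize"));
    }
}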
4.205. openmotif | 4.205. openmotif 4.205.1. RHBA-2011:1228 - openmotif bug fix update An updated openmotif package that fixes one bug is now available for Red Hat Enterprise Linux 6. The openmotif package includes the Motif shared libraries needed to run applications that are dynamically linked against Motif, as well as the Motif Window Manager (MWM). Bug Fix BZ# 584300 Previously, under certain circumstances, LabelGadget could have drawn over a parent window with the background color and, if using the Xft fonts, also over the text. With this update, the text and background drawing functionality has been fixed so that the aforementioned problems do not occur anymore. All users of openmotif are advised to upgrade to this updated package, which fixes this bug. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.2_technical_notes/openmotif |
9.4. Threading Considerations | 9.4. Threading Considerations Although you can find information about all JBoss Data Virtualization settings using the Management CLI (see Section 10.1, "JBoss Data Virtualization Settings" ), this section provides some additional information about those settings related to threading. max-threads Default is 64. The query engine has several settings that determine its thread utilization. max-threads sets the total number of threads available in the process pool for query engine work (such as processing plans, transaction control operations, and processing source queries). You should consider increasing the maximum threads on systems with a large number of available processors and/or when it is necessary to issue non-transactional queries involving a large number of concurrent source requests. max-active-plans Default is 20. This value should always be smaller than max-threads . By default, thread-count-for-source-concurrency is calculated by (max-threads / max-active-plans) * 2 to determine the threads available for processing concurrent source requests for each user query. Increasing max-active-plans should be considered for workloads with a high number of long running queries and/or systems with a large number of available processors. If memory issues arise from increasing max-threads and max-active-plans, then consider decreasing the amount of heap held by the buffer manager or decreasing buffer-service-processor-batch-size to limit the base number of memory rows consumed by each plan. thread-count-for-source-concurrency Default is 0. This value should always be smaller than max-threads . This property sets the number of concurrently executing source queries per user request. 0 indicates to use the default calculated value based on 2 * ( max-threads / max-active-plans ). Setting this to 1 forces serial execution of all source queries by the processing thread. Any number greater than 1 limits the maximum number of concurrently executing source requests accordingly. Using the defaults, each user request would be allowed 6 concurrently executing source queries. If the default calculated value is not applicable to your workload, for example, if you have queries that generate more concurrent long running source queries, you should adjust this value. Also see Section 9.6, "Transport Considerations" for max-socket-threads . | null | https://docs.redhat.com/en/documentation/red_hat_jboss_data_virtualization/6.4/html/administration_and_configuration_guide/threading_considerations1 |
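The relationship between these three settings can be made concrete with a small worked example. The Java sketch below is purely illustrative and is not JBoss Data Virtualization code; the class and method names are assumptions, and the integer rounding is inferred from the worked figure of 6 concurrent source queries per user request quoted above.

public class SourceConcurrencyDefaults {
    // default thread-count-for-source-concurrency = (max-threads / max-active-plans) * 2
    static int defaultSourceConcurrency(int maxThreads, int maxActivePlans) {
        return (maxThreads / maxActivePlans) * 2; // integer division assumed
    }

    public static void main(String[] args) {
        // Shipped defaults: max-threads=64, max-active-plans=20 -> 6 source queries per user request
        System.out.println(defaultSourceConcurrency(64, 20));
        // Raising max-threads to 128 while keeping max-active-plans at 20 -> 12
        System.out.println(defaultSourceConcurrency(128, 20));
    }
}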
4.257. qpid-cpp | 4.257. qpid-cpp 4.257.1. RHBA-2011:1670 - qpid-cpp bug fix and enhancement update Updated qpid-cpp packages that fix several bugs and add various enhancements are now available for Red Hat Enterprise Linux 6. The qpid-cpp packages provide a message broker daemon that receives, stores, and routes messages using the open AMQP (Advanced Message Queuing Protocol) messaging protocol along with runtime libraries for AMQP client applications developed using Qpid C++. Clients exchange messages with an AMQP message broker using the AMQP protocol. The qpid-cpp package has been upgraded to upstream version 0.12, which provides numerous bug fixes and enhancements over the previous version. (BZ# 706949 ) Bug Fixes BZ# 695777 In the previous version of Red Hat Enterprise Linux, when an attempt to convert a negative value of a Variant Qpid type into an unsigned short type value was made, an exception was issued. In Red Hat Enterprise Linux 6, no exception was issued and the value was converted, e.g. "-5" became "65531". This bug has been fixed and the exception is now properly issued in the described scenario. BZ# 735058 Previously, the non-static "isManagementMessage" class member was sometimes passed an uninitialized value. This bug has been fixed and only initialized values are now passed in the described scenario. BZ# 740912 The XML-Exchange library (as part of the qpid-cpp-server-xml package) is only available on x86, Intel 64, and AMD64 architectures. Previously, this caused additional dependencies on the xqilla and xerces-c packages to be added to the qpid-cpp RPM package. However, the functionality of these two packages is not needed for the Matahari agent infrastructure. This update removes the dependency on these two packages for the PowerPC and IBM System z architectures. Enhancement BZ# 663461 Previously, qpid-cpp was only built for x86, Intel 64, and AMD64 architectures. This update adds the support needed to provide the Matahari agent infrastructure on PowerPC and IBM System z architectures. Users of qpid-cpp are advised to upgrade to these updated packages, which fix these bugs and add these enhancements. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.2_technical_notes/qpid-cpp |
6.2. Distributing the Directory Data | 6.2. Distributing the Directory Data Distributing the data allows the directory service to be scaled across multiple servers without physically containing those directory entries on each server in the enterprise. A distributed directory can therefore hold a much larger number of entries than would be possible with a single server. In addition, the directory service can be configured to hide the distribution details from the user. As far as users and applications are concerned, there is only a single directory that answers their directory queries. The following sections describe the mechanics of data distribution in more detail: Section 6.2.1, "About Using Multiple Databases" Section 6.2.2, "About Suffixes" 6.2.1. About Using Multiple Databases Directory Server stores data in LDBM databases. This is a high-performance, disk-based database. Each database consists of a set of large files that contain all of the data assigned to it. Different portions of the directory tree can be stored in different databases. For example, Figure 6.1, "Storing Suffix Data in Separate Databases" shows three suffixes being stored in three separate databases. Figure 6.1. Storing Suffix Data in Separate Databases When the directory tree is divided between a number of databases, these databases can then be distributed across multiple servers. For example, if there are three databases, DB1, DB2, and DB3, to contain the three suffixes of the directory tree, they can be stored on two servers, Server A and Server B. Figure 6.2. Dividing Suffix Databases Between Separate Servers Server A contains DB1 and DB2, and Server B contains DB3. Distributing databases across multiple servers reduces the workload on each server. The directory service can therefore be scaled to a much larger number of entries than would be possible with a single server. In addition, Directory Server supports adding databases dynamically, which means that new databases can be added when the directory service needs them without taking the entire directory service off-line. 6.2.2. About Suffixes Each database contains the data within a specific suffix of the Directory Server. Both root and subsuffixes can be created to organize the contents of the directory tree. A root suffix is the entry at the top of a tree. It can be the root of the directory tree or part of a larger tree designed for the Directory Server. A subsuffix is a branch beneath a root suffix. The data for root and subsuffixes is contained in databases. For example, Example Corp. creates suffixes to represent the distribution of their directory data. Figure 6.3. Directory Tree for Example Corp. Example Corp. can spread their directory tree across five different databases, as in Figure 6.4, "Directory Tree Spread across Multiple Databases" . Figure 6.4. Directory Tree Spread across Multiple Databases The resulting suffixes would contain the following entries: Figure 6.5. Suffixes for a Distributed Directory Tree The dc=example,dc=com suffix is a root suffix. The ou=testing,dc=example,dc=com suffix, the ou=development,dc=example,dc=com suffix, and the ou=partners,ou=development,dc=example,dc=com suffix are all subsuffixes of the dc=example,dc=com root suffix. The root suffix dc=example,dc=com contains the data in the ou=marketing branch of the original directory tree. Using Multiple Root Suffixes The directory service might contain more than one root suffix.
For example, an ISP called "Example" might host several websites, one for example_a.com and one for example_b.com. The ISP would create two root suffixes, one corresponding to the o=example_a.com naming context and one corresponding to the o=example_b.com naming context. Figure 6.6. Directory Tree with Multiple Root Suffixes The dc=example,dc=com entry represents a root suffix. The entry for each hosted customer is also a root suffix ( o=example_a and o=example_b ). The ou=people and the ou=groups branches are subsuffixes under each root suffix. | null | https://docs.redhat.com/en/documentation/red_hat_directory_server/11/html/deployment_guide/designing_the_directory_topology-distributing_data |
Chapter 10. Configuring locking and concurrency | Chapter 10. Configuring locking and concurrency Data Grid uses multi-versioned concurrency control (MVCC) to improve access to shared data. Allowing concurrent readers and writers Readers and writers do not block one another Write skews can be detected and handled Internal locks can be striped 10.1. Locking and concurrency Multi-versioned concurrency control (MVCC) is a concurrency scheme popular with relational databases and other data stores. MVCC offers many advantages over coarse-grained Java synchronization and even JDK Locks for access to shared data. Data Grid's MVCC implementation makes use of minimal locks and synchronizations, leaning heavily towards lock-free techniques such as compare-and-swap and lock-free data structures wherever possible, which helps optimize for multi-CPU and multi-core environments. In particular, Data Grid's MVCC implementation is heavily optimized for readers. Reader threads do not acquire explicit locks for entries, and instead directly read the entry in question. Writers, on the other hand, need to acquire a write lock. This ensures only one concurrent writer per entry, causing concurrent writers to queue up to change an entry. To allow concurrent reads, writers make a copy of the entry they intend to modify, by wrapping the entry in an MVCCEntry . This copy isolates concurrent readers from seeing partially modified state. Once a write has completed, MVCCEntry.commit() will flush changes to the data container and subsequent readers will see the changes written. 10.1.1. Clustered caches and locks In Data Grid clusters, primary owner nodes are responsible for locking keys. For non-transactional caches, Data Grid forwards the write operation to the primary owner of the key so it can attempt to lock it. Data Grid either then forwards the write operation to the other owners or throws an exception if it cannot lock the key. Note If the operation is conditional and fails on the primary owner, Data Grid does not forward it to the other owners. For transactional caches, primary owners can lock keys with optimistic and pessimistic locking modes. Data Grid also supports different isolation levels to control concurrent reads between transactions. 10.1.2. The LockManager The LockManager is a component that is responsible for locking an entry for writing. The LockManager makes use of a LockContainer to locate/hold/create locks. LockContainers come in two broad flavours, with support for lock striping and with support for one lock per entry. 10.1.3. Lock striping Lock striping entails the use of a fixed-size, shared collection of locks for the entire cache, with locks being allocated to entries based on the entry's key's hash code. Similar to the way the JDK's ConcurrentHashMap allocates locks, this allows for a highly scalable, fixed-overhead locking mechanism in exchange for potentially unrelated entries being blocked by the same lock. The alternative is to disable lock striping - which would mean a new lock is created per entry. This approach may give you greater concurrent throughput, but it will be at the cost of additional memory usage, garbage collection churn, etc. Default lock striping settings lock striping is disabled by default, due to potential deadlocks that can happen if locks for different keys end up in the same lock stripe. The size of the shared lock collection used by lock striping can be tuned using the concurrencyLevel attribute of the <locking /> configuration element. 
Configuration example: <locking striping="false|true"/> Or new ConfigurationBuilder().locking().useLockStriping(false|true); 10.1.4. Concurrency levels In addition to determining the size of the striped lock container, this concurrency level is also used to tune any JDK ConcurrentHashMap based collections where related, such as internal to DataContainers. Please refer to the JDK ConcurrentHashMap Javadocs for a detailed discussion of concurrency levels, as this parameter is used in exactly the same way in Data Grid. Configuration example: <locking concurrency-level="32"/> Or new ConfigurationBuilder().locking().concurrencyLevel(32); 10.1.5. Lock timeout The lock timeout specifies the amount of time, in milliseconds, to wait for a contended lock. Configuration example: <locking acquire-timeout="10000"/> Or new ConfigurationBuilder().locking().lockAcquisitionTimeout(10000); //alternatively new ConfigurationBuilder().locking().lockAcquisitionTimeout(10, TimeUnit.SECONDS); 10.1.6. Consistency The fact that a single owner is locked (as opposed to all owners being locked) does not break the following consistency guarantee: if key K is hashed to nodes {A, B} and transaction TX1 acquires a lock for K , let's say on A . If another transaction, TX2 , is started on B (or any other node) and TX2 tries to lock K then it will fail with a timeout as the lock is already held by TX1 . The reason for this is that the lock for a key K is always, deterministically, acquired on the same node of the cluster, regardless of where the transaction originates. 10.1.7. Data Versioning Data Grid supports two forms of data versioning: simple and external. The simple versioning is used in transactional caches for write skew check. The external versioning is used to encapsulate an external source of data versioning within Data Grid, such as when using Data Grid with Hibernate, which in turn gets its data version information directly from a database. In this scheme, a mechanism to pass in the version becomes necessary, and overloaded versions of put() and putForExternalRead() will be provided in AdvancedCache to take in an external data version. This is then stored on the InvocationContext and applied to the entry at commit time. Note Write skew checks cannot and will not be performed in the case of external data versioning. | [
"<locking striping=\"false|true\"/>",
"new ConfigurationBuilder().locking().useLockStriping(false|true);",
"<locking concurrency-level=\"32\"/>",
"new ConfigurationBuilder().locking().concurrencyLevel(32);",
"<locking acquire-timeout=\"10000\"/>",
"new ConfigurationBuilder().locking().lockAcquisitionTimeout(10000); //alternatively new ConfigurationBuilder().locking().lockAcquisitionTimeout(10, TimeUnit.SECONDS);"
] | https://docs.redhat.com/en/documentation/red_hat_data_grid/8.4/html/configuring_data_grid_caches/locking |
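The snippets above configure each locking attribute in isolation. As a consolidated illustration, the following Java sketch combines them in one programmatic configuration; the values shown are arbitrary examples rather than recommendations, and the surrounding class is an assumption added for this sketch.

import java.util.concurrent.TimeUnit;

import org.infinispan.configuration.cache.Configuration;
import org.infinispan.configuration.cache.ConfigurationBuilder;

public class LockingConfigExample {
    public static void main(String[] args) {
        Configuration config = new ConfigurationBuilder()
              .locking()
                 .useLockStriping(false)                       // one lock per entry (the default)
                 .concurrencyLevel(32)                         // sizes striped locks and related internal structures
                 .lockAcquisitionTimeout(10, TimeUnit.SECONDS) // wait up to 10 seconds for a contended lock
              .build();
        System.out.println(config.locking());
    }
}

The three builder calls correspond to the striping, concurrency-level, and acquire-timeout attributes of the <locking/> element shown in the declarative examples above.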
Chapter 6. Upgrading Data Grid clusters | Chapter 6. Upgrading Data Grid clusters Data Grid Operator lets you upgrade Data Grid clusters from one version to another without downtime or data loss. Important Hot Rod rolling upgrades are available as a technology preview feature. 6.1. Technology preview features Technology preview features or capabilities are not supported with Red Hat production service-level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using technology preview features or capabilities for production. These features provide early access to upcoming product features, which enables you to test functionality and provide feedback during the development process. For more information, see Red Hat Technology Preview Features Support Scope . 6.2. Data Grid cluster upgrades The spec.upgrades.type field controls how Data Grid Operator upgrades your Data Grid cluster when new versions become available. There are two types of cluster upgrade: Shutdown Upgrades Data Grid clusters with service downtime. This is the default upgrade type. HotRodRolling Upgrades Data Grid clusters without service downtime. Shutdown upgrades To perform a shutdown upgrade, Data Grid Operator does the following: Gracefully shuts down the existing cluster. Removes the existing cluster. Creates a new cluster with the target version. Hot Rod rolling upgrades To perform a Hot Rod rolling upgrade, Data Grid Operator does the following: Creates a new Data Grid cluster with the target version that runs alongside your existing cluster. Creates a remote cache store to transfer data from the existing cluster to the new cluster. Redirects all clients to the new cluster. Removes the existing cluster when all data and client connections are transferred to the new cluster. Important You should not perform Hot Rod rolling upgrades with caches that enable passivation with persistent cache stores. In the event that the upgrade does not complete successfully, passivation can result in data loss when Data Grid Operator rolls back the target cluster. If your cache configuration enables passivation, you should perform a shutdown upgrade. 6.3. Upgrading Data Grid clusters with downtime Upgrading Data Grid clusters with downtime results in service disruption but does not require any additional capacity. Prerequisites The Data Grid Operator version you have installed supports the Data Grid target version. If required, configure a persistent cache store to preserve your data during the upgrade. Important At the start of the upgrade process Data Grid Operator shuts down your existing cluster. This results in data loss if you do not configure a persistent cache store. Procedure Specify the Data Grid version number in the spec.version field. Ensure that Shutdown is set as the value for the spec.upgrades.type field, which is the default. Apply your changes, if necessary. When a new Data Grid version becomes available, you must manually change the value in the spec.version field to trigger the upgrade. 6.4. Performing Hot Rod rolling upgrades for Data Grid clusters Performing Hot Rod rolling upgrades lets you move to a new Data Grid version without service disruption. However, this upgrade type requires additional capacity and temporarily results in two Data Grid clusters with different versions running concurrently. Prerequisite The Data Grid Operator version you have installed supports the Data Grid target version. Procedure Specify the Data Grid version number in the spec.version field.
Specify HotRodRolling as the value for the spec.upgrades.type field. Apply your changes. When a new Data Grid version becomes available, you must manually change the value in the spec.version field to trigger the upgrade. 6.4.1. Recovering from a failed Hot Rod rolling upgrade You can roll back a failed Hot Rod rolling upgrade to the previous version if the original cluster is still present. Prerequisites Hot Rod rolling upgrade is in progress and the initial Data Grid cluster is present. Procedure Ensure the Hot Rod rolling upgrade is in progress. The status.hotRodRollingUpgradeStatus field must be present. Update the spec.version field of your Infinispan CR to the original cluster version defined in the status.hotRodRollingUpgradeStatus . Data Grid Operator deletes the newly created cluster. | [
"spec: version: 8.4.6-1 upgrades: type: Shutdown",
"spec: version: 8.4.6-1 upgrades: type: HotRodRolling",
"get infinispan <cr_name> -o yaml"
] | https://docs.redhat.com/en/documentation/red_hat_data_grid/8.4/html/data_grid_operator_guide/upgrading-clusters |
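The upgrade procedures above come down to editing two fields of the Infinispan CR. A minimal command-line sketch of that edit follows; the CR name infinispan-cluster and the target version string are assumptions, while the field names (spec.version, spec.upgrades.type, status.hotRodRollingUpgradeStatus) are taken from the sections above.

```
# Check the version currently recorded in the Infinispan CR (CR name is a placeholder)
oc get infinispan infinispan-cluster -o jsonpath='{.spec.version}'

# Trigger a shutdown upgrade by setting spec.version to the target release (version string is hypothetical)
oc patch infinispan infinispan-cluster --type merge \
  -p '{"spec":{"version":"8.4.6-2","upgrades":{"type":"Shutdown"}}}'

# During a Hot Rod rolling upgrade, the rollback procedure reads the original version from this status field
oc get infinispan infinispan-cluster -o jsonpath='{.status.hotRodRollingUpgradeStatus}'
```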
Chapter 4. Ansible IPMI modules in RHEL | Chapter 4. Ansible IPMI modules in RHEL 4.1. The rhel_mgmt collection The Intelligent Platform Management Interface (IPMI) is a specification for a set of standard protocols to communicate with baseboard management controller (BMC) devices. The IPMI modules allow you to enable and support hardware management automation. The IPMI modules are available in: The rhel_mgmt Collection. The package name is ansible-collection-redhat-rhel_mgmt . The RHEL 7.9 AppStream, as part of the new ansible-collection-redhat-rhel_mgmt package. The following IPMI modules are available in the rhel_mgmt collection: ipmi_boot : Management of boot device order ipmi_power : Power management for the machine The mandatory parameters used for the IPMI Modules are: ipmi_boot parameters: Module name Description name Hostname or IP address of the BMC password Password to connect to the BMC bootdev Device to be used on next boot * network * floppy * hd * safe * optical * setup * default user Username to connect to the BMC ipmi_power parameters: Module name Description name BMC Hostname or IP address password Password to connect to the BMC user Username to connect to the BMC state Check whether the machine is in the desired state * on * off * shutdown * reset * boot 4.2. Installing the rhel_mgmt Collection using the CLI You can install the rhel_mgmt Collection using the command line. Prerequisites The ansible-core package is installed. Procedure Install the collection via RPM package: # yum install ansible-collection-redhat-rhel_mgmt After the installation is finished, the IPMI modules are available in the redhat.rhel_mgmt Ansible collection. Additional resources The ansible-playbook man page. 4.3. Example using the ipmi_boot module The following example shows how to use the ipmi_boot module in a playbook to set a boot device for the next boot. For simplicity, the examples use the same host as the Ansible control host and managed host, thus executing the modules on the same host where the playbook is executed. Prerequisites The rhel_mgmt collection is installed. The pyghmi library in the python3-pyghmi package is installed in one of the following locations: The host where you execute the playbook. The managed host. If you use localhost as the managed host, install the python3-pyghmi package on the host where you execute the playbook instead. The IPMI BMC that you want to control is accessible via network from the host where you execute the playbook, or the managed host (if not using localhost as the managed host). Note that the host whose BMC is being configured by the module is generally different from the host where the module is executing (the Ansible managed host), as the module contacts the BMC over the network using the IPMI protocol. You have credentials to access BMC with an appropriate level of access. Procedure Create a new playbook.yml file with the following content: --- - name: Sets which boot device will be used on next boot hosts: localhost tasks: - redhat.rhel_mgmt.ipmi_boot: name: bmc.host.example.com user: admin_user password: basics bootdev: hd Execute the playbook against localhost: As a result, the output returns the value "success". 4.4. Example using the ipmi_power module This example shows how to use the ipmi_power module in a playbook to check if the system is turned on. For simplicity, the examples use the same host as the Ansible control host and managed host, thus executing the modules on the same host where the playbook is executed. 
Prerequisites The rhel_mgmt collection is installed. The pyghmi library in the python3-pyghmi package is installed in one of the following locations: The host where you execute the playbook. The managed host. If you use localhost as the managed host, install the python3-pyghmi package on the host where you execute the playbook instead. The IPMI BMC that you want to control is accessible via network from the host where you execute the playbook, or the managed host (if not using localhost as the managed host). Note that the host whose BMC is being configured by the module is generally different from the host where the module is executing (the Ansible managed host), as the module contacts the BMC over the network using the IPMI protocol. You have credentials to access BMC with an appropriate level of access. Procedure Create a new playbook.yml file with the following content: --- - name: Turn the host on hosts: localhost tasks: - redhat.rhel_mgmt.ipmi_power: name: bmc.host.example.com user: admin_user password: basics state: on Execute the playbook: The output returns the value "true". | [
"yum install ansible-collection-redhat-rhel_mgmt",
"--- - name: Sets which boot device will be used on next boot hosts: localhost tasks: - redhat.rhel_mgmt.ipmi_boot: name: bmc.host.example.com user: admin_user password: basics bootdev: hd",
"ansible-playbook playbook.yml",
"--- - name: Turn the host on hosts: localhost tasks: - redhat.rhel_mgmt.ipmi_power: name: bmc.host.example.com user: admin_user password: basics state: on",
"ansible-playbook playbook.yml"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/automating_system_administration_by_using_rhel_system_roles_in_rhel_7.9/assembly_ansible-ipmi-modules-in-rhel_automating-system-administration-by-using-rhel-system-roles |
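As a rough illustration of how the playbooks above might be exercised from the control node, the following commands assume the playbook.yml created in the procedure; the inventory file name is a placeholder and is only needed when the managed host is not localhost.

```
# Confirm the collection is installed and visible to Ansible
ansible-galaxy collection list | grep redhat.rhel_mgmt

# Check the playbook for syntax errors before running it
ansible-playbook --syntax-check playbook.yml

# Run the playbook; add an inventory when targeting a remote managed host (inventory path is hypothetical)
ansible-playbook -i inventory.ini playbook.yml
```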
Chapter 9. Broker Clusters | Chapter 9. Broker Clusters You can connect brokers together to form a cluster. Broker clusters enable you to distribute message processing load and balance client connections. They also provide fault tolerance by increasing the number of brokers to which clients can connect. 9.1. Broker Clustering Changes In AMQ Broker 7, broker networks are called broker clusters. The brokers in the cluster are connected by cluster connections (which reference connector elements). Members of a cluster can be configured to discover each other dynamically (using UDP or JGroups), or statically (by manually specifying a list of cluster members). A cluster configuration is a required prerequisite for high-availability (HA). You must configure the cluster before you can configure HA, even if the cluster consists of only a single live broker. You can configure broker clusters in many different topologies, though symmetric and chain clusters are the most common. Regardless of the topology, you can scale clusters up and down without message loss (as long as you have configured the broker to send its messages to another broker in the cluster). Broker clusters distribute (and redistribute) messages differently than broker networks in AMQ 6. In AMQ 6, messages always arrived on a specific queue and were then pulled from one broker to another based on consumer interest. In AMQ Broker 7, queue definitions and consumers are shared across the cluster, and messages are routed across the cluster as they are received at the broker. Important Do not attempt to combine AMQ 6 brokers and AMQ Broker 7 brokers in the same cluster. 9.2. How Broker Clusters are Configured You configure a broker cluster by creating a broker instance for each member of the cluster, and then adding the cluster settings to each broker instance. Cluster settings consist of the following: Discovery groups For use with dynamic discovery, a discovery group defines how the broker instance discovers other members in the cluster. Discovery can use either UDP or JGroups. Broadcast groups For use with dynamic discovery, a broadcast group defines how the broker instance transmits cluster-related information to other members in the cluster. Broadcast can use either UDP or JGroups, but it must match its discovery groups counterpart. Cluster connections How the broker instance should connect to other members of the cluster. You can specify a discovery group or a static list of cluster members. You can also specify message redistribution and max hop properties. 9.2.1. Creating a Broker Cluster This procedure demonstrates how to create a basic, two-broker cluster with static discovery. Procedure Create the first broker instance by using the artemis create command. This example creates a new broker instance called broker1 . Create a second broker instance for the second member of the cluster. For each additional broker instance, you should use the --port-offset parameter to avoid port collisions with the broker instances. This example creates a second broker instance called broker2 . For the first broker instance, open the BROKER_INSTANCE_DIR /etc/broker.xml configuration file and add the cluster settings. For static discovery, you must add a connector and a static cluster connection. This example configures broker1 to connect to broker2 . 
<!-- Connectors --> <connectors> <connector name="netty-connector">tcp://localhost:61616</connector> <!-- connector to broker2 --> <connector name="broker2-connector">tcp://localhost:61617</connector> </connectors> <!-- Clustering configuration --> <cluster-connections> <cluster-connection name="my-cluster"> <connector-ref>netty-connector</connector-ref> <retry-interval>500</retry-interval> <use-duplicate-detection>true</use-duplicate-detection> <message-load-balancing>STRICT</message-load-balancing> <max-hops>1</max-hops> <static-connectors> <connector-ref>broker2-connector</connector-ref> </static-connectors> </cluster-connection> </cluster-connections> For the second broker instance, open the BROKER_INSTANCE_DIR /etc/broker.xml configuration file and add the cluster settings. This example configures broker2 to connect to broker1 . <!-- Connectors --> <connectors> <connector name="netty-connector">tcp://localhost:61617</connector> <!-- connector to broker1 --> <connector name="broker1-connector">tcp://localhost:61616</connector> </connectors> <!-- Clustering configuration --> <cluster-connections> <cluster-connection name="my-cluster"> <connector-ref>netty-connector</connector-ref> <retry-interval>500</retry-interval> <use-duplicate-detection>true</use-duplicate-detection> <message-load-balancing>STRICT</message-load-balancing> <max-hops>1</max-hops> <static-connectors> <connector-ref>broker1-connector</connector-ref> </static-connectors> </cluster-connection> </cluster-connections> Related Information For full details about creating broker clusters and configuring message redistribution and client load balancing, see Setting up a broker cluster in Configuring AMQ Broker . 9.2.2. Additional Broker Cluster Topologies Broker clusters can be connected in many different topologies. In AMQ Broker 7, symmetric and chain clusters are the most common. Example: Symmetric Cluster In a full mesh topology, each broker is connected to every other broker in the cluster. This means that every broker in the cluster is no more than one hop away from every other broker. This example uses dynamic discovery to enable the brokers in the cluster to discover each other. By setting max-hops to 1 , each broker will connect to every other broker: <!-- Clustering configuration --> <broadcast-groups> <broadcast-group name="my-broadcast-group"> <group-address>USD{udp-address:231.7.7.7}</group-address> <group-port>9876</group-port> <broadcast-period>100</broadcast-period> <connector-ref>netty-connector</connector-ref> </broadcast-group> </broadcast-groups> <discovery-groups> <discovery-group name="my-discovery-group"> <group-address>USD{udp-address:231.7.7.7}</group-address> <group-port>9876</group-port> <refresh-timeout>10000</refresh-timeout> </discovery-group> </discovery-groups> <cluster-connections> <cluster-connection name="my-cluster"> <connector-ref>netty-connector</connector-ref> <retry-interval>500</retry-interval> <use-duplicate-detection>true</use-duplicate-detection> <message-load-balancing>ON_DEMAND</message-load-balancing> <max-hops>1</max-hops> <discovery-group-ref discovery-group-name="my-discovery-group"/> </cluster-connection> </cluster-connections> Example: Chain Cluster In a chain cluster, the brokers form a linear "chain" with a broker on each end and all other brokers connecting to the next and previous brokers in the chain (for example, A->B->C). This example uses static discovery to connect three brokers into a chain cluster. 
Each broker connects to the broker in the chain, and max-hops is set to 2 to enable messages to flow through the full chain. The first broker is configured like this: <connectors> <connector name="netty-connector">tcp://localhost:61616</connector> <!-- connector to broker2 --> <connector name="broker2-connector">tcp://localhost:61716</connector> </connectors> <cluster-connections> <cluster-connection name="my-cluster"> <address>jms</address> <connector-ref>netty-connector</connector-ref> <retry-interval>500</retry-interval> <use-duplicate-detection>true</use-duplicate-detection> <message-load-balancing>STRICT</message-load-balancing> <max-hops>2</max-hops> <static-connectors allow-direct-connections-only="true"> <connector-ref>broker2-connector</connector-ref> </static-connectors> </cluster-connection> </cluster-connections> The second broker is configured like this: <connectors> <connector name="netty-connector">tcp://localhost:61716</connector> <!-- connector to broker3 --> <connector name="broker3-connector">tcp://localhost:61816</connector> </connectors> <cluster-connections> <cluster-connection name="my-cluster"> <address>jms</address> <connector-ref>netty-connector</connector-ref> <retry-interval>500</retry-interval> <use-duplicate-detection>true</use-duplicate-detection> <message-load-balancing>STRICT</message-load-balancing> <max-hops>1</max-hops> <static-connectors allow-direct-connections-only="true"> <connector-ref>broker3-connector</connector-ref> </static-connectors> </cluster-connection> </cluster-connections> Finally, the third broker is configured like this: <connectors> <connector name="netty-connector">tcp://localhost:61816</connector> </connectors> <cluster-connections> <cluster-connection name="my-cluster"> <address>jms</address> <connector-ref>netty-connector</connector-ref> <retry-interval>500</retry-interval> <use-duplicate-detection>true</use-duplicate-detection> <message-load-balancing>STRICT</message-load-balancing> <max-hops>0</max-hops> </cluster-connection> </cluster-connections> 9.3. Broker Cluster Configuration Properties The following table compares the broker network configuration properties in AMQ 6 to the equivalent cluster-connection properties in AMQ Broker 7: To set... In AMQ 6 In AMQ Broker 7 Excluded destinations excludedDestinations No equivalent. The number of hops that a message can make through the cluster networkTTL The default is 1 , which means that a message can make just one hop to a neighboring broker. <max-hops> Sets this broker instance to load balance messages to brokers which might be connected to it indirectly with other brokers are intermediaries in a chain. The default is 1 , which means that messages are distributed only to other brokers directly connected to this broker instance. Replay messages when there are no consumers replayWhenNoConsumers No equivalent. However, you can set <redistribution-delay> to define the amount of time with no consumers (in milliseconds) after which messages should be redelivered as though arriving for the first time. Whether to broadcast advisory messages for temporary destinations in the cluster bridgeTempDestinations The default is true . This property was typically used for temporary destinations created for request-reply messages. This would enable consumers of these messages to be connected to another broker in the network and still be able to send the reply to the temporary destination specified in the JMSReplyTo header. No equivalent. In AMQ Broker 7, temporary destinations are never clustered. 
The credentials to use to authenticate this broker with a remote broker userName password <cluster-user> <cluster-password> Set the route priority for a connector decreaseNetworkConsumerPriority The default is false . If set to true , local consumers have a priority of 0 , and network subscriptions have a priority of -5 . In addition, the priority of a network subscription is reduced by 1 for every network hop that it traverses. No equivalent. Whether and how messages should be distributed between other brokers in the cluster No equivalent. <message-load-balancing> This can be set to OFF (no load balancing), STRICT (forward messages to all brokers in the cluster that have a matching queue), or ON_DEMAND (forward messages only to brokers in the cluster that have active consumers or a matching selector). The default is ON_DEMAND . Enable a cluster network connection to both produce and consume messages duplex By default, network connectors are unidirectional. However, you could set them to duplex to enable messages to flow in both directions. This was typically used for hub-and-spoke networks in which the hub was behind a firewall. No equivalent. Cluster connections are unidirectional only. However, you can configure a pair of cluster connections between each broker, one from each end. For more information about setting up a broker cluster, see Setting up a broker cluster in Configuring AMQ Broker . | [
"sudo INSTALL_DIR /bin/artemis create broker1 --user user --password pass --role amq",
"sudo INSTALL_DIR /bin/artemis create broker2 --port-offset 100 --user user --password pass --role amq",
"<!-- Connectors --> <connectors> <connector name=\"netty-connector\">tcp://localhost:61616</connector> <!-- connector to broker2 --> <connector name=\"broker2-connector\">tcp://localhost:61617</connector> </connectors> <!-- Clustering configuration --> <cluster-connections> <cluster-connection name=\"my-cluster\"> <connector-ref>netty-connector</connector-ref> <retry-interval>500</retry-interval> <use-duplicate-detection>true</use-duplicate-detection> <message-load-balancing>STRICT</message-load-balancing> <max-hops>1</max-hops> <static-connectors> <connector-ref>broker2-connector</connector-ref> </static-connectors> </cluster-connection> </cluster-connections>",
"<!-- Connectors --> <connectors> <connector name=\"netty-connector\">tcp://localhost:61617</connector> <!-- connector to broker1 --> <connector name=\"broker1-connector\">tcp://localhost:61616</connector> </connectors> <!-- Clustering configuration --> <cluster-connections> <cluster-connection name=\"my-cluster\"> <connector-ref>netty-connector</connector-ref> <retry-interval>500</retry-interval> <use-duplicate-detection>true</use-duplicate-detection> <message-load-balancing>STRICT</message-load-balancing> <max-hops>1</max-hops> <static-connectors> <connector-ref>broker1-connector</connector-ref> </static-connectors> </cluster-connection> </cluster-connections>",
"<!-- Clustering configuration --> <broadcast-groups> <broadcast-group name=\"my-broadcast-group\"> <group-address>USD{udp-address:231.7.7.7}</group-address> <group-port>9876</group-port> <broadcast-period>100</broadcast-period> <connector-ref>netty-connector</connector-ref> </broadcast-group> </broadcast-groups> <discovery-groups> <discovery-group name=\"my-discovery-group\"> <group-address>USD{udp-address:231.7.7.7}</group-address> <group-port>9876</group-port> <refresh-timeout>10000</refresh-timeout> </discovery-group> </discovery-groups> <cluster-connections> <cluster-connection name=\"my-cluster\"> <connector-ref>netty-connector</connector-ref> <retry-interval>500</retry-interval> <use-duplicate-detection>true</use-duplicate-detection> <message-load-balancing>ON_DEMAND</message-load-balancing> <max-hops>1</max-hops> <discovery-group-ref discovery-group-name=\"my-discovery-group\"/> </cluster-connection> </cluster-connections>",
"<connectors> <connector name=\"netty-connector\">tcp://localhost:61616</connector> <!-- connector to broker2 --> <connector name=\"broker2-connector\">tcp://localhost:61716</connector> </connectors> <cluster-connections> <cluster-connection name=\"my-cluster\"> <address>jms</address> <connector-ref>netty-connector</connector-ref> <retry-interval>500</retry-interval> <use-duplicate-detection>true</use-duplicate-detection> <message-load-balancing>STRICT</message-load-balancing> <max-hops>2</max-hops> <static-connectors allow-direct-connections-only=\"true\"> <connector-ref>broker2-connector</connector-ref> </static-connectors> </cluster-connection> </cluster-connections>",
"<connectors> <connector name=\"netty-connector\">tcp://localhost:61716</connector> <!-- connector to broker3 --> <connector name=\"broker3-connector\">tcp://localhost:61816</connector> </connectors> <cluster-connections> <cluster-connection name=\"my-cluster\"> <address>jms</address> <connector-ref>netty-connector</connector-ref> <retry-interval>500</retry-interval> <use-duplicate-detection>true</use-duplicate-detection> <message-load-balancing>STRICT</message-load-balancing> <max-hops>1</max-hops> <static-connectors allow-direct-connections-only=\"true\"> <connector-ref>broker3-connector</connector-ref> </static-connectors> </cluster-connection> </cluster-connections>",
"<connectors> <connector name=\"netty-connector\">tcp://localhost:61816</connector> </connectors> <cluster-connections> <cluster-connection name=\"my-cluster\"> <address>jms</address> <connector-ref>netty-connector</connector-ref> <retry-interval>500</retry-interval> <use-duplicate-detection>true</use-duplicate-detection> <message-load-balancing>STRICT</message-load-balancing> <max-hops>0</max-hops> </cluster-connection> </cluster-connections>"
] | https://docs.redhat.com/en/documentation/red_hat_amq/2020.q4/html/migrating_to_red_hat_amq_7/broker_clusters |
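To make the two-broker example above easier to try out, one possible sequence for creating and starting both instances is sketched here; INSTALL_DIR and the credentials are the same placeholders used in the section, and the instance directories are assumed to be created in the current working directory.

```
# Create the two broker instances (credentials and role are illustrative)
sudo INSTALL_DIR/bin/artemis create broker1 --user user --password pass --role amq
sudo INSTALL_DIR/bin/artemis create broker2 --port-offset 100 --user user --password pass --role amq

# After adding the cluster settings to each instance's etc/broker.xml, start both brokers
broker1/bin/artemis run &
broker2/bin/artemis run &
```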
32.5. Package Selection | 32.5. Package Selection Warning You can use a kickstart file to install every available package by specifying * in the %packages section. Red Hat does not support this type of installation. In releases of Red Hat Enterprise Linux, this functionality was provided by @Everything , but this option is not included in Red Hat Enterprise Linux 6. Use the %packages command to begin a kickstart file section that lists the packages you would like to install (this is for installations only, as package selection during upgrades is not supported). You can specify packages by group or by their package names. The installation program defines several groups that contain related packages. Refer to the variant /repodata/comps-*.xml file on the Red Hat Enterprise Linux 6.9 Installation DVD for a list of groups. Each group has an id, user visibility value, name, description, and package list. If the group is selected for installation, the packages marked mandatory in the package list are always installed, the packages marked default are installed if they are not specifically excluded elsewhere, and the packages marked optional must be specifically included elsewhere even when the group is selected. Specify groups, one entry to a line, starting with an @ symbol, a space, and then the full group name or group id as given in the comps.xml file. For example: Note that the Core and Base groups are always selected by default, so it is not necessary to specify them in the %packages section. Warning When performing a minimal installation using the @Core group, the firewall ( iptables / ip6tables ) will not be configured on the installed system. This presents a security risk. To work around this issue, add the authconfig and system-config-firewall-base packages to your package selection as described below. The firewall will be configured properly if these packages are present. A minimal installation's %packages section which will also configure the firewall will look similar to the following: See the Red Hat Customer Portal for details. Specify individual packages by name, one entry to a line. You can use asterisks as wildcards to glob package names in entries. For example: The docbook* entry includes the packages docbook-dtds , docbook-simple , docbook-slides and others that match the pattern represented with the wildcard. Use a leading dash to specify packages or groups to exclude from the installation. For example: Important To install a 32-bit package on a 64-bit system, you will need to append the package name with the 32-bit architecture the package was built for. For example: Using a kickstart file to install every available package by specifying * will introduce package and file conflicts onto the installed system. Packages known to cause such problems are assigned to the @Conflicts ( variant ) group, where variant is Client , ComputeNode , Server or Workstation . If you specify * in a kickstart file, be sure to exclude @Conflicts ( variant ) or the installation will fail: Note that Red Hat does not support the use of * in a kickstart file, even if you exclude @Conflicts ( variant ) . The section must end with the %end command. The following options are available for the %packages option: --nobase Do not install the @Base group. Use this option to perform a minimal installation, for example, for a single-purpose server or desktop appliance. --nocore Disables installation of the @Core package group which is otherwise always installed by default. 
Disabling the @Core package group with --nocore should be only used for creating lightweight containers; installing a desktop or server system with --nocore will result in an unusable system. Note Using -@Core to exclude packages in the @Core package group does not work. The only way to exclude the @Core package group is with the --nocore option. The @Core package group is defined as a minimal set of packages needed for installing a working system. It is not related in any way to core packages as defined in the Package Manifest and Scope of Coverage Details . --ignoredeps The --ignoredeps option has been deprecated. Dependencies are resolved automatically every time now. --ignoremissing Ignore the missing packages and groups instead of halting the installation to ask if the installation should be aborted or continued. For example: | [
"%packages @X Window System @Desktop @Sound and Video",
"%packages @Core authconfig system-config-firewall-base",
"sqlite curl aspell docbook*",
"-@ Graphical Internet -autofs -ipa*fonts",
"glibc.i686",
"* -@Conflicts (Server)",
"%packages --ignoremissing"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/installation_guide/s1-kickstart2-packageselection |
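One way to sanity-check a %packages section such as the examples above is to run the Kickstart file through ksvalidator from the pykickstart package; the file path below is a placeholder for your own Kickstart file.

```
# Install the validation tool (provided by the pykickstart package)
yum install pykickstart

# Validate the Kickstart file, including its %packages section
ksvalidator /root/anaconda-ks.cfg
```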
Chapter 60. project | Chapter 60. project This chapter describes the commands under the project command. 60.1. project cleanup Clean resources associated with a project Usage: Table 60.1. Command arguments Value Summary -h, --help Show this help message and exit --dry-run List a project's resources --auth-project Delete resources of the project used to authenticate --project <project> Project to clean (name or id) --created-before <YYYY-MM-DDTHH24:MI:SS> Drop resources created before the given time --updated-before <YYYY-MM-DDTHH24:MI:SS> Drop resources updated before the given time --project-domain <project-domain> Domain the project belongs to (name or id). this can be used in case collisions between project names exist. 60.2. project create Create new project Usage: Table 60.2. Positional arguments Value Summary <project-name> New project name Table 60.3. Command arguments Value Summary -h, --help Show this help message and exit --domain <domain> Domain owning the project (name or id) --parent <project> Parent of the project (name or id) --description <description> Project description --enable Enable project --disable Disable project --property <key=value> Add a property to <name> (repeat option to set multiple properties) --or-show Return existing project --immutable Make resource immutable. an immutable project may not be deleted or modified except to remove the immutable flag --no-immutable Make resource mutable (default) --tag <tag> Tag to be added to the project (repeat option to set multiple tags) Table 60.4. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns Table 60.5. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 60.6. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 60.7. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 60.3. project delete Delete project(s) Usage: Table 60.8. Positional arguments Value Summary <project> Project(s) to delete (name or id) Table 60.9. Command arguments Value Summary -h, --help Show this help message and exit --domain <domain> Domain owning <project> (name or id) 60.4. project list List projects Usage: Table 60.10. Command arguments Value Summary -h, --help Show this help message and exit --domain <domain> Filter projects by <domain> (name or id) --parent <parent> Filter projects whose parent is <parent> (name or id) --user <user> Filter projects by <user> (name or id) --my-projects List projects for the authenticated user. supersedes other filters. --long List additional fields in output --sort <key>[:<direction>] Sort output by selected keys and directions (asc or desc) (default: asc), repeat this option to specify multiple keys and directions. --tags <tag>[,<tag>,... ] List projects which have all given tag(s) (comma- separated list of tags) --tags-any <tag>[,<tag>,... ] List projects which have any given tag(s) (comma- separated list of tags) --not-tags <tag>[,<tag>,... 
] Exclude projects which have all given tag(s) (comma- separated list of tags) --not-tags-any <tag>[,<tag>,... ] Exclude projects which have any given tag(s) (comma- separated list of tags) Table 60.11. Output formatter options Value Summary -f {csv,json,table,value,yaml}, --format {csv,json,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns --sort-column SORT_COLUMN Specify the column(s) to sort the data (columns specified first have a priority, non-existing columns are ignored), can be repeated --sort-ascending Sort the column(s) in ascending order --sort-descending Sort the column(s) in descending order Table 60.12. CSV formatter options Value Summary --quote {all,minimal,none,nonnumeric} When to include quotes, defaults to nonnumeric Table 60.13. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 60.14. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 60.5. project purge Clean resources associated with a project Usage: Table 60.15. Command arguments Value Summary -h, --help Show this help message and exit --dry-run List a project's resources --keep-project Clean project resources, but don't delete the project --auth-project Delete resources of the project used to authenticate --project <project> Project to clean (name or id) --project-domain <project-domain> Domain the project belongs to (name or id). this can be used in case collisions between project names exist. 60.6. project set Set project properties Usage: Table 60.16. Positional arguments Value Summary <project> Project to modify (name or id) Table 60.17. Command arguments Value Summary -h, --help Show this help message and exit --name <name> Set project name --domain <domain> Domain owning <project> (name or id) --description <description> Set project description --enable Enable project --disable Disable project --property <key=value> Set a property on <project> (repeat option to set multiple properties) --immutable Make resource immutable. an immutable project may not be deleted or modified except to remove the immutable flag --no-immutable Make resource mutable (default) --tag <tag> Tag to be added to the project (repeat option to set multiple tags) --clear-tags Clear tags associated with the project. specify both --tag and --clear-tags to overwrite current tags --remove-tag <tag> Tag to be deleted from the project (repeat option to delete multiple tags) 60.7. project show Display project details Usage: Table 60.18. Positional arguments Value Summary <project> Project to display (name or id) Table 60.19. Command arguments Value Summary -h, --help Show this help message and exit --domain <domain> Domain owning <project> (name or id) --parents Show the project's parents as a list --children Show project's subtree (children) as a list Table 60.20. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns Table 60.21. 
JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 60.22. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 60.23. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. | [
"openstack project cleanup [-h] [--dry-run] (--auth-project | --project <project>) [--created-before <YYYY-MM-DDTHH24:MI:SS>] [--updated-before <YYYY-MM-DDTHH24:MI:SS>] [--project-domain <project-domain>]",
"openstack project create [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] [--domain <domain>] [--parent <project>] [--description <description>] [--enable | --disable] [--property <key=value>] [--or-show] [--immutable | --no-immutable] [--tag <tag>] <project-name>",
"openstack project delete [-h] [--domain <domain>] <project> [<project> ...]",
"openstack project list [-h] [-f {csv,json,table,value,yaml}] [-c COLUMN] [--quote {all,minimal,none,nonnumeric}] [--noindent] [--max-width <integer>] [--fit-width] [--print-empty] [--sort-column SORT_COLUMN] [--sort-ascending | --sort-descending] [--domain <domain>] [--parent <parent>] [--user <user>] [--my-projects] [--long] [--sort <key>[:<direction>]] [--tags <tag>[,<tag>,...]] [--tags-any <tag>[,<tag>,...]] [--not-tags <tag>[,<tag>,...]] [--not-tags-any <tag>[,<tag>,...]]",
"openstack project purge [-h] [--dry-run] [--keep-project] (--auth-project | --project <project>) [--project-domain <project-domain>]",
"openstack project set [-h] [--name <name>] [--domain <domain>] [--description <description>] [--enable | --disable] [--property <key=value>] [--immutable | --no-immutable] [--tag <tag>] [--clear-tags] [--remove-tag <tag>] <project>",
"openstack project show [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] [--domain <domain>] [--parents] [--children] <project>"
] | https://docs.redhat.com/en/documentation/red_hat_openstack_platform/17.0/html/command_line_interface_reference/project |
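A brief walk-through of the subcommands documented above; the project name, domain, and tag are placeholders.

```
# Create a project in the default domain with a description and a tag
openstack project create --domain default --description "Demo tenant" --tag demo demo-project

# List projects with additional fields
openstack project list --long

# Disable the project, then delete it
openstack project set --disable demo-project
openstack project delete demo-project
```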
Chapter 4. Reviewing and resolving migration issues | Chapter 4. Reviewing and resolving migration issues You can review and resolve migration issues identified by the MTA plugin in the left pane. 4.1. Reviewing issues You can use the MTA plugin icons to prioritize issues based on their severity. You can see which issues have a Quick Fix automatic code replacement and which do not. The results of an analysis are displayed in a directory format, showing the hints and classifications for each application analyzed. A hint is a read-only snippet of code that contains a single issue that you should or must address before you can modernize or migrate an application. Often a Quick Fix is suggested, which you can accept or ignore. A classification is a file that has an issue but does not have any suggested Quick Fixes. You can edit a classification. Procedure In the Migration Toolkit for Applications view, select a run configuration directory in the left pane. Click Results . The modules and applications of the run configuration are displayed, with hints and classifications beneath each application. Prioritize issues based on the following icons, which are displayed to each hint: : You must fix this issue in order to migrate or modernize the application. : You might need to fix this issue in order to migrate or modernize the application Optional: To learn more about a hint, right-click it and select Show More Details . 4.2. Resolving issues You can resolve issues by doing one of the following: Using a Quick Fix to fix a code snippet that has a hint Editing the code of a file that appears in a classification 4.2.1. Using a Quick Fix You can use a Quick Fix automatic code replacement to save time and ensure consistency in resolving repetitive issues. Quick Fixes are available for many issues displayed in the Hints section of the Results directory. Procedure In the left pane, click a hint that has an error indicator. Any Quick Fixes are displayed as child folders with the Quick Fix icon ( ) on their left side. Right-click a Quick Fix and select Preview Quick Fix . The current code and the suggested change are displayed in the Preview Quick Fix window. To accept the suggested fix, click Apply Quick Fix . Optional: Right-click the issue and select Mark As Complete . A green check ( ) is displayed by the hint, replacing the error indicator. 4.2.2. Editing the code of a file You can directly edit a file displayed in the Classifications section of the Results directory. These files do not have any Quick Fixes. Procedure In the left pane, click the file you want to edit. Make any changes needed to the code and save the file. Optional: Right-click the issue and select Mark as Complete or Delete . If you select Mark as Complete , a green check ( ) is displayed by the hint, replacing the error indicator. | null | https://docs.redhat.com/en/documentation/migration_toolkit_for_applications/7.1/html/intellij_idea_plugin_guide/reviewing-and-resolving-migration-issues |
Chapter 4. Managing organizations in automation controller | Chapter 4. Managing organizations in automation controller An organization is a logical collection of users, teams, projects, and inventories. It is the highest level object in the controller object hierarchy. After you have created an organization, automation controller displays the organization details. You can then manage access and execution environments for the organization. 4.1. Reviewing the organization The Organizations page displays the existing organizations for your installation. Procedure From the navigation panel, select Access Organizations . Note Automation controller automatically creates a default organization. If you have a Self-support level license, you have only the default organization available and must not delete it. You can use the default organization as it is initially set up and edit it later. Note Only Enterprise or Premium licenses can add new organizations. Enterprise and Premium license users who want to add a new organization should see the Organizations section in the Automation controller User Guide . 4.2. Editing an organization During initial setup, you can leave the default organization as it is, but you can edit it later. Procedure Edit the organization by using one of these methods: From the organizations Details page, click Edit to the organization you want to modify. From the navigation panel, select Access Organizations . Select the organization you want to modify and edit the appropriate details. Save your changes. | null | https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.4/html/getting_started_with_automation_controller/assembly-controller-organizations |
4.2. Automatic Installation | 4.2. Automatic Installation This section describes a simple procedure on how to add a Kickstart file to the installation USB drive, which automatically installs and registers Red Hat Enterprise Linux. You can use this procedure to deploy Red Hat Enterprise Linux on multiple machines. Generating the USB Boot Media Record an installation in a Kickstart file: Manually install Red Hat Enterprise Linux once. For details see Section 4.1, "Interactive Installation" . Boot the installed system. During the installation, Anaconda created a Kickstart file with the settings in the /root/anaconda-ks.cfg file. Download the Red Hat Enterprise Linux installation DVD ISO file to the /tmp/ directory. Mount the installation ISO file to the /mnt/ directory. For example: Create a working directory and copy the DVD content to it. For example: Unmount the ISO file: Copy the Kickstart file generated during the installation to the working directory: To register Red Hat Enterprise Linux after the installation automatically and attach a subscription, append the following to the /root/rhel-install/anaconda-ks.cfg file: Display the installation DVD volume name: Add a new menu entry to the boot /root/rhel-install/isolinux/isolinux.cfg file that uses the Kickstart file. For example: Note Set the inst.stage2=hd:LABEL= and inst.ks=hd:LABEL= options to the DVD volume name retrieved in the step. Before you create the /root/rhel-ks.iso file from the working directory, execute the following steps for a USB UEFI boot or for a CDROM UEFI boot : For a USB UEFI boot , follow the steps: Mount the volume: Edit the file /mnt/EFI/BOOT/grub.cfg : Add a new menu entry: Unmount the volume: For a CDROM UEFI boot , follow the steps: Edit the file /root/rhel-install/EFI/BOOT/grub.cfg : Add a new menu entry to the file: Create the /root/rhel-ks.iso file from the working directory: Note Set the -V option to the DVD volume name retrieved in an earlier step and replace \x20 in the string with a space. Make the ISO image created by the command `mkisofs` bootable: Create an installation USB drive. For details, see Section 3.2.1, "Making Installation USB Media on Linux" . Install Red Hat Enterprise Linux Using the Kickstart File Boot the installation USB drive. See Chapter 7, Booting the Installation on 64-bit AMD, Intel, and ARM systems . Select the entry with the Kickstart configuration that you created in Section 4.2, "Automatic Installation" . | [
"mount -o loop /tmp/rhel-server-7.3-x86_64-dvd.iso /mnt/",
"mkdir /root/rhel-install/ shopt -s dotglob cp -avRf /mnt/* /root/rhel-install/",
"umount /mnt/",
"cp /root/anaconda-ks.cfg /root/rhel-install/",
"%post subscription-manager register --auto-attach --username= user_name --password= password %end",
"isoinfo -d -i rhel-server-7.3-x86_64-dvd.iso | grep \"Volume id\" | sed -e 's/Volume id: //' -e 's/ /\\\\x20/g' RHEL-7.3\\x20Server.x86_64",
"####################################### label kickstart menu label ^Kickstart Installation of RHEL7.3 kernel vmlinuz append initrd=initrd.img inst.stage2=hd:LABEL=RHEL-7.3\\x20Server.x86_64 inst.ks=hd:LABEL=RHEL-7.3\\x20Server.x86_64:/anaconda-ks.cfg #######################################",
"mount /root/rhel-install/images/efiboot.img /mnt/",
"####################################### 'Kickstart Installation of RHEL-7.3' --class fedora --class gnu-linux --class gnu --class os { linuxefi /images/pxeboot/vmlinuz inst.stage2=hd:LABEL=RHEL-7.3\\x20Server.x86_64 inst.ks=hd:LABEL=RHEL-7.3\\x20Server.x86_64:/anaconda-ks.cfg initrdefi /images/pxeboot/initrd.img } #######################################",
"umount /mnt",
"####################################### 'Kickstart Installation of RHEL-7.3' --class fedora --class gnu-linux --class gnu --class os { linuxefi /images/pxeboot/vmlinuz inst.stage2=hd:LABEL=RHEL-7.3\\x20Server.x86_64 inst.ks=hd:LABEL=RHEL-7.3\\x20Server.x86_64:/anaconda-ks.cfg initrdefi /images/pxeboot/initrd.img } #######################################",
"mkisofs -untranslated-filenames -volid \"RHEL-7.3 Server.x86_64\" -J -joliet-long -rational-rock -translation-table -input-charset utf-8 -b isolinux/isolinux.bin -c isolinux/boot.cat -no-emul-boot -boot-load-size 4 -boot-info-table -eltorito-alt-boot -e images/efiboot.img -no-emul-boot -o /root/rhel-ks.iso -graft-points /root/rhel-install/",
"isohybrid --uefi /root/rhel-ks.iso"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/installation_guide/sect-simple-install-kickstart |
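After building /root/rhel-ks.iso as described above, the image still has to be written to a USB drive (the step covered by Section 3.2.1). A typical approach on Linux is dd, where /dev/sdX is a placeholder for the actual USB device; everything on that device is overwritten.

```
# Identify the USB device first; double-check the device name before writing
lsblk

# Write the prepared ISO to the USB drive (replace /dev/sdX with the real device)
dd if=/root/rhel-ks.iso of=/dev/sdX bs=8M
sync
```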
3.7. Quotas and Service Level Agreement Policy | 3.7. Quotas and Service Level Agreement Policy 3.7.1. Introduction to Quota Quota is a resource limitation tool provided with Red Hat Virtualization. Quota may be thought of as a layer of limitations on top of the layer of limitations set by User Permissions. Quota is a data center object. Quota allows administrators of Red Hat Virtualization environments to limit user access to memory, CPU, and storage. Quota defines the memory resources and storage resources an administrator can assign users. As a result users may draw on only the resources assigned to them. When the quota resources are exhausted, Red Hat Virtualization does not permit further user actions. There are two different kinds of Quota: Table 3.3. The Two Different Kinds of Quota Quota type Definition Run-time Quota This quota limits the consumption of runtime resources, like CPU and memory. Storage Quota This quota limits the amount of storage available. Quota, like SELinux, has three modes: Table 3.4. Quota Modes Quota Mode Function Enforced This mode puts into effect the quota that you have set in Audit mode, limiting resources to the group or user affected by the quota. Audit This mode logs quota violations without blocking users and can be used to test quotas. In Audit mode, you can increase or decrease the amount of runtime quota and the amount of storage quota available to users affected by it. Disabled This mode turns off the runtime and storage limitations defined by the quota. When a user attempts to run a virtual machine, the specifications of the virtual machine are compared to the storage allowance and the runtime allowance set in the applicable quota. If starting a virtual machine causes the aggregated resources of all running virtual machines covered by a quota to exceed the allowance defined in the quota, then the Manager refuses to run the virtual machine. When a user creates a new disk, the requested disk size is added to the aggregated disk usage of all the other disks covered by the applicable quota. If the new disk takes the total aggregated disk usage above the amount allowed by the quota, disk creation fails. Quota allows for resource sharing of the same hardware. It supports hard and soft thresholds. Administrators can use a quota to set thresholds on resources. These thresholds appear, from the user's point of view, as 100% usage of that resource. To prevent failures when the customer unexpectedly exceeds this threshold, the interface supports a "grace" amount by which the threshold can be briefly exceeded. Exceeding the threshold results in a warning sent to the customer. Important Quota imposes limitations upon the running of virtual machines. Ignoring these limitations is likely to result in a situation in which you cannot use your virtual machines and virtual disks. When quota is running in enforced mode, virtual machines and disks that do not have quotas assigned cannot be used. To power on a virtual machine, a quota must be assigned to that virtual machine. To create a snapshot of a virtual machine, the disk associated with the virtual machine must have a quota assigned. When creating a template from a virtual machine, you are prompted to select the quota that you want the template to consume. This allows you to set the template (and all future machines created from the template) to consume a different quota than the virtual machine and disk from which the template is generated. 3.7.2. 
Shared Quota and Individually Defined Quota Users with SuperUser permissions can create quotas for individual users or quotas for groups. Group quotas can be set for Active Directory users. If a group of ten users are given a quota of 1 TB of storage and one of the ten users fills the entire terabyte, then the entire group will be in excess of the quota and none of the ten users will be able to use any of the storage associated with their group. An individual user's quota is set for only the individual. Once the individual user has used up all of his or her storage or runtime quota, the user will be in excess of the quota and the user will no longer be able to use the storage associated with his or her quota. 3.7.3. Quota Accounting When a quota is assigned to a consumer or a resource, each action by that consumer or on the resource involving storage, vCPU, or memory results in quota consumption or quota release. Since the quota acts as an upper bound that limits the user's access to resources, the quota calculations may differ from the actual current use of the user. The quota is calculated for the max growth potential and not the current usage. Example 3.15. Accounting example A user runs a virtual machine with 1 vCPU and 1024 MB memory. The action consumes 1 vCPU and 1024 MB of the quota assigned to that user. When the virtual machine is stopped 1 vCPU and 1024 MB of RAM are released back to the quota assigned to that user. Run-time quota consumption is accounted for only during the actual run-time of the consumer. A user creates a virtual thin provision disk of 10 GB. The actual disk usage may indicate only 3 GB of that disk are actually in use. The quota consumption, however, would be 10 GB, the max growth potential of that disk. 3.7.4. Enabling and Changing a Quota Mode in a Data Center This procedure enables or changes the quota mode in a data center. You must select a quota mode before you can define quotas. You must be logged in to the Administration Portal to follow the steps of this procedure. Use Audit mode to test your quota to verify that it works as you expect it to. You do not need to have your quota in Audit mode to create or change a quota. Procedure Click Compute Data Centers and select a data center. Click Edit . In the Quota Mode drop-down list, change the quota mode to Enforced . Click OK . If you set the quota mode to Audit during testing, then you must change it to Enforced in order for the quota settings to take effect. 3.7.5. Creating a New Quota Policy You have enabled quota mode, either in Audit or Enforcing mode. You want to define a quota policy to manage resource usage in your data center. Procedure Click Administration Quota . Click Add . Fill in the Name and Description fields. Select a Data Center . In the Memory & CPU section, use the green slider to set Cluster Threshold . In the Memory & CPU section, use the blue slider to set Cluster Grace . Click the All Clusters or the Specific Clusters radio button. If you select Specific Clusters , select the check box of the clusters that you want to add a quota policy to. Click Edit . This opens the Edit Quota window. Under the Memory field, select either the Unlimited radio button (to allow limitless use of Memory resources in the cluster), or select the limit to radio button to set the amount of memory set by this quota. If you select the limit to radio button, input a memory quota in megabytes (MB) in the MB field. 
Under the CPU field, select either the Unlimited radio button or the limit to radio button to set the amount of CPU set by this quota. If you select the limit to radio button, input a number of vCPUs in the vCpus field. Click OK in the Edit Quota window. In the Storage section, use the green slider to set Storage Threshold . In the Storage section, use the blue slider to set Storage Grace . Click the All Storage Domains or the Specific Storage Domains radio button. If you select Specific Storage Domains , select the check box of the storage domains that you want to add a quota policy to. Click Edit . This opens the Edit Quota window. Under the Storage Quota field, select either the Unlimited radio button (to allow limitless use of Storage) or the limit to radio button to set the amount of storage to which quota will limit users. If you select the limit to radio button, input a storage quota size in gigabytes (GB) in the GB field. Click OK in the Edit Quota window. Click OK in the New Quota window. 3.7.6. Explanation of Quota Threshold Settings Table 3.5. Quota thresholds and grace Setting Definition Cluster Threshold The amount of cluster resources available per data center. Cluster Grace The amount of the cluster available for the data center after exhausting the data center's Cluster Threshold. Storage Threshold The amount of storage resources available per data center. Storage Grace The amount of storage available for the data center after exhausting the data center's Storage Threshold. If a quota is set to 100 GB with 20% Grace, then consumers are blocked from using storage after they use 120 GB of storage. If the same quota has a Threshold set at 70%, then consumers receive a warning when they exceed 70 GB of storage consumption (but they remain able to consume storage until they reach 120 GB of storage consumption.) Both "Threshold" and "Grace" are set relative to the quota. "Threshold" may be thought of as the "soft limit", and exceeding it generates a warning. "Grace" may be thought of as the "hard limit", and exceeding it makes it impossible to consume any more storage resources. 3.7.7. Assigning a Quota to an Object Assigning a Quota to a Virtual Machine Click Compute Virtual Machines and select a virtual machine. Click Edit . Select the quota you want the virtual machine to consume from the Quota drop-down list. Click OK . Assigning a Quota to a Disk Click Compute Virtual Machines . Click a virtual machine's name. This opens the details view. Click the Disks tab and select the disk you plan to associate with a quota. Click Edit . Select the quota you want the virtual disk to consume from the Quota drop-down list. Click OK . Important Quota must be selected for all objects associated with a virtual machine, in order for that virtual machine to work. If you fail to select a quota for the objects associated with a virtual machine, the virtual machine will not work. The error that the Manager throws in this situation is generic, which makes it difficult to know if the error was thrown because you did not associate a quota with all of the objects associated with the virtual machine. It is not possible to take snapshots of virtual machines that do not have an assigned quota. It is not possible to create templates of virtual machines whose virtual disks do not have assigned quotas. 3.7.8. Using Quota to Limit Resources by User This procedure describes how to use quotas to limit the resources a user has access to. Procedure Click Administration Quota . 
Click the name of the target quota. This opens the details view. Click the Consumers tab. Click Add . In the Search field, type the name of the user you want to associate with the quota. Click GO . Select the check box to the user's name. Click OK . After a short time, the user will appear in the Consumers tab in the details view. 3.7.9. Editing Quotas This procedure describes how to change existing quotas. Procedure Click Administration Quota and select a quota. Click Edit . Edit the fields as required. Click OK . 3.7.10. Removing Quotas This procedure describes how to remove quotas. Procedure Click Administration Quota and select a quota. Click Remove . Click OK . 3.7.11. Service Level Agreement Policy Enforcement This procedure describes how to set service level agreement CPU features. Procedure Click Compute Virtual Machines . Click New , or select a virtual machine and click Edit . Click the Resource Allocation tab. Specify CPU Shares . Possible options are Low , Medium , High , Custom , and Disabled . Virtual machines set to High receive twice as many shares as Medium , and virtual machines set to Medium receive twice as many shares as virtual machines set to Low . Disabled instructs VDSM to use an older algorithm for determining share dispensation; usually the number of shares dispensed under these conditions is 1020. The CPU consumption of users is now governed by the policy you have set. | null | https://docs.redhat.com/en/documentation/red_hat_virtualization/4.4/html/administration_guide/chap-quotas_and_service_level_agreement_policy |
Chapter 7. About PicketLink Login Modules | Chapter 7. About PicketLink Login Modules A PicketLink login module is typically configured as part of the security setup to use a Security Token Service (STS) or browser-based SSO with SAML for authenticating users. The STS may be collocated on the same container as the login module or be accessed remotely through web service calls or another technology. PicketLink STS login modules support non-PicketLink STS implementations through standard WS-Trust calls. For more details on the concepts behind Security Token Services as well as browser-based SSO with SAML, see the JBoss EAP Security Architecture guide. 7.1. STSIssuingLoginModule Full name : org.picketlink.identity.federation.core.wstrust.auth.STSIssuingLoginModule The STSIssuingLoginModule uses a user name and password to authenticate the user against an STS by retrieving a token. The authentication happens as follows: Calls the configured STS and requests a security token. Upon successfully receiving the RequestedSecurityToken , it marks the authentication as successful. A call to the STS typically requires authentication. This login module uses credentials from one of the following sources: Its properties file, if the useOptionsCredentials module option is set to true . Login module credentials, if the password-stacking module option is set to useFirstPass . From the configured CallbackHandler by supplying a Name and Password Callback. Upon successful authentication, the security token is stored in the login module's shared map under the org.picketlink.identity.federation.core.wstrust.lm.stsToken key. Note This login module has no direct configurable attributes, but you may use module options to pass in configuration options. Example STSIssuingLoginModule <security-domain name="saml-issue-token"> <authentication> <login-module code="org.picketlink.identity.federation.core.wstrust.auth.STSIssuingLoginModule" flag="required"> <module-option name="configFile">./picketlink-sts-client.properties</module-option> <module-option name="endpointURI">http://security_saml/endpoint</module-option> </login-module> </authentication> <mapping> <mapping-module code="org.picketlink.identity.federation.bindings.jboss.auth.mapping.STSPrincipalMappingProvider" type="principal"/> <mapping-module code="org.picketlink.identity.federation.bindings.jboss.auth.mapping.STSGroupMappingProvider" type="role" /> </mapping> </security-domain> In the above example, the specified Principal mapping provider and the RoleGroup mapping provider result in an authenticated Subject being populated that enables coarse-grained and role-based authorization. After authentication, the Security Token is available and may be used to invoke other services by Single Sign-On. 7.2. STSValidatingLoginModule Full name : org.picketlink.identity.federation.core.wstrust.auth.STSValidatingLoginModule The STSValidatingLoginModule uses a TokenCallback to retrieve a security token from the STS. The authentication happens as follows: Calls the configured STS and validates an available security token. A call to the STS typically requires authentication. This login module uses credentials from one of the following sources: Its properties file, if the useOptionsCredentials module option is set to true . Login module credentials, if the password-stacking module option is set to useFirstPass . From the configured CallbackHandler by supplying a Name and Password Callback.
Upon successful authentication, the security token is stored in the login module's shared map under the org.picketlink.identity.federation.core.wstrust.lm.stsToken key. Note This login module has no direct configurable attributes, but you may use module options to pass in configuration options. Example STSValidatingLoginModule <security-domain name="saml-validate-token"> <authentication> <login-module code="org.picketlink.identity.federation.core.wstrust.auth.STSValidatingLoginModule" flag="required"> <module-option name="configFile">./picketlink-sts-client.properties</module-option> <module-option name="endpointURI">http://security_saml/endpoint</module-option> </login-module> </authentication> <mapping> <mapping-module code="org.picketlink.identity.federation.bindings.jboss.auth.mapping.STSPrincipalMappingProvider" type="principal"/> <mapping-module code="org.picketlink.identity.federation.bindings.jboss.auth.mapping.STSGroupMappingProvider" type="role"/> </mapping> </security-domain> The above example shows how an issued token, obtained either directly from the STS or through a token-issuing login module, can be validated and then used to authenticate against multiple applications and services. Providing a Principal mapping provider and a RoleGroup mapping provider results in an authenticated Subject being populated that enables coarse-grained and role-based authorization. After authentication, the Security Token is available and can be used to invoke other services by Single Sign-On. 7.3. SAML2STSLoginModule Full name : org.picketlink.identity.federation.bindings.jboss.auth.SAML2STSLoginModule The authentication happens as follows: This login module supplies an ObjectCallback to the configured CallbackHandler and expects a SamlCredential object back. The Assertion is validated against the configured STS. Upon successful authentication, the SamlCredential is inspected for a NameIDType . If a user ID and SAML token are shared, this login module bypasses validation when stacked on top of another login module that has authenticated successfully. Example SAML2STSLoginModule <security-domain name="saml-sts" cache-type="default"> <authentication> <login-module code="org.picketlink.identity.federation.bindings.jboss.auth.SAML2STSLoginModule" flag="required" module="org.picketlink"> <module-option name="configFile" value="USD{jboss.server.config.dir}/sts-config.properties"/> <module-option name="password-stacking" value="useFirstPass"/> </login-module> </authentication> </security-domain> Note This login module has no direct configurable attributes, but you may use module options to pass in configuration options. 7.4. SAML2LoginModule Full name : org.picketlink.identity.federation.bindings.jboss.auth.SAML2LoginModule The authentication happens as follows: This login module is used in conjunction with other components for SAML authentication and performs no authentication itself. The SAML authenticator, which is installed by the PicketLink Service Provider Undertow ServletExtension ( org.picketlink.identity.federation.bindings.wildfly.sp.SPServletExtension ), uses this login module to authenticate users based on a SAML assertion previously issued by an identity provider. If the user does not have a SAML assertion for the service provider, the user is redirected to the identity provider to obtain a SAML assertion. This login module is used to pass the user ID and roles to the security framework to be populated in the JAAS subject.
Example SAML2LoginModule <security-domain name="sp" cache-type="default"> <authentication> <login-module code="org.picketlink.identity.federation.bindings.jboss.auth.SAML2LoginModule" flag="required"/> </authentication> </security-domain> Note This login module has no direct configurable attributes. Warning The SAML2LoginModule is intended for use with applications using PicketLink with SAML and should not be used without the PicketLink Service Provider Undertow ServletExtension ( org.picketlink.identity.federation.bindings.wildfly.sp.SPServletExtension ). Doing so presents a possible security risk, since the SAML2LoginModule or SAML2CommonLoginModule will always accept the default password of EMPTY_STR . This can occur, for example, if the PicketLink Service Provider Undertow ServletExtension is not installed in the SP application. The PicketLink Service Provider Undertow ServletExtension is installed automatically when configuring the SP application for JBoss EAP. This can also occur if the SAML2LoginModule is stacked with other login modules: <security-domain name="sp" cache-type="default"> <authentication> <login-module code="org.picketlink.identity.federation.bindings.jboss.auth.SAML2LoginModule" flag="optional"> <module-option name="password-stacking" value="useFirstPass"/> </login-module> <login-module code="UsersRoles" flag="required"> <module-option name="usersProperties" value="users.properties"/> <module-option name="rolesProperties" value="roles.properties"/> <module-option name="password-stacking" value="useFirstPass"/> </login-module> </authentication> </security-domain> 7.5. RegExUserNameLoginModule Full name : org.picketlink.identity.federation.bindings.jboss.auth.RegExUserNameLoginModule This login module can be used after any Certificate Login Module to extract a username, UID, or other field from the principal name so that roles can be obtained from LDAP. The module has an option named regex , which specifies the regular expression to be applied to the principal name; the result is passed on to the subsequent login module. Example RegExUserNameLoginModule <login-module code="org.picketlink.identity.federation.bindings.jboss.auth.RegExUserNameLoginModule" flag="required"> <module-option name="password-stacking" value="useFirstPass"/> <module-option name="regex" value="UID=(.*?),"/> </login-module> For example, an input principal name of UID=007, EMAILADDRESS=something@something, CN=James Bond, O=SpyAgency would result in the output 007 using the above login module (a short demonstration of this regular expression follows the examples below). For more information on regular expressions, see the java.util.regex.Pattern class documentation . | [
"<security-domain name=\"saml-issue-token\"> <authentication> <login-module code=\"org.picketlink.identity.federation.core.wstrust.auth.STSIssuingLoginModule\" flag=\"required\"> <module-option name=\"configFile\">./picketlink-sts-client.properties</module-option> <module-option name=\"endpointURI\">http://security_saml/endpoint</module-option> </login-module> </authentication> <mapping> <mapping-module code=\"org.picketlink.identity.federation.bindings.jboss.auth.mapping.STSPrincipalMappingProvider\" type=\"principal\"/> <mapping-module code=\"org.picketlink.identity.federation.bindings.jboss.auth.mapping.STSGroupMappingProvider\" type=\"role\" /> </mapping> </security-domain>",
"<security-domain name=\"saml-validate-token\"> <authentication> <login-module code=\"org.picketlink.identity.federation.core.wstrust.auth.STSValidatingLoginModule\" flag=\"required\"> <module-option name=\"configFile\">./picketlink-sts-client.properties</module-option> <module-option name=\"endpointURI\">http://security_saml/endpoint</module-option> </login-module> </authentication> <mapping> <mapping-module code=\"org.picketlink.identity.federation.bindings.jboss.auth.mapping.STSPrincipalMappingProvider\" type=\"principal\"/> <mapping-module code=\"org.picketlink.identity.federation.bindings.jboss.auth.mapping.STSGroupMappingProvider\" type=\"role\"/> </mapping> </security-domain>",
"<security-domain name=\"saml-sts\" cache-type=\"default\"> <authentication> <login-module code=\"org.picketlink.identity.federation.bindings.jboss.auth.SAML2STSLoginModule\" flag=\"required\" module=\"org.picketlink\"> <module-option name=\"configFile\" value=\"USD{jboss.server.config.dir}/sts-config.properties\"/> <module-option name=\"password-stacking\" value=\"useFirstPass\"/> </login-module> </authentication> </security-domain>",
"<security-domain name=\"sp\" cache-type=\"default\"> <authentication> <login-module code=\"org.picketlink.identity.federation.bindings.jboss.auth.SAML2LoginModule\" flag=\"required\"/> </authentication> </security-domain>",
"<security-domain name=\"sp\" cache-type=\"default\"> <authentication> <login-module code=\"org.picketlink.identity.federation.bindings.jboss.auth.SAML2LoginModule\" flag=\"optional\"> <module-option name=\"password-stacking\" value=\"useFirstPass\"/> </login-module> <login-module code=\"UsersRoles\" flag=\"required\"> <module-option name=\"usersProperties\" value=\"users.properties\"/> <module-option name=\"rolesProperties\" value=\"roles.properties\"/> <module-option name=\"password-stacking\" value=\"useFirstPass\"/> </login-module> </authentication> </security-domain>",
"<login-module code=\"org.picketlink.identity.federation.bindings.jboss.auth.RegExUserNameLoginModule\" flag=\"required\"> <module-option name=\"password-stacking\" value=\"useFirstPass\"/> <module-option name=\"regex\" value=\"UID=(.*?),\"/> </login-module>"
] | https://docs.redhat.com/en/documentation/red_hat_jboss_enterprise_application_platform/7.4/html/login_module_reference/picketlink_login_modules |
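The following is a minimal, standalone Java sketch (not part of PicketLink or JBoss EAP; the class name is hypothetical) showing how the regex value UID=(.*?), from the RegExUserNameLoginModule example in Section 7.5 extracts 007 from the sample principal name:

import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class RegexPrincipalDemo {
    public static void main(String[] args) {
        // The same regular expression configured in the RegExUserNameLoginModule example.
        Pattern pattern = Pattern.compile("UID=(.*?),");
        String principal = "UID=007, EMAILADDRESS=something@something, CN=James Bond, O=SpyAgency";
        Matcher matcher = pattern.matcher(principal);
        if (matcher.find()) {
            // Prints "007", the value that would be passed on to the subsequent login module.
            System.out.println(matcher.group(1));
        }
    }
}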
Service Mesh | Service Mesh OpenShift Container Platform 4.15 Service Mesh installation, usage, and release notes Red Hat OpenShift Documentation Team | [
"apiVersion: maistra.io/v2 kind: ServiceMeshControlPlane metadata: name: basic spec: runtime: components: pilot: container: env: ENABLE_NATIVE_SIDECARS: \"true\"",
"apiVersion: maistra.io/v2 kind: ServiceMeshControlPlane metadata: name: basic spec: runtime: components: pilot: container: env: PILOT_ENABLE_GATEWAY_API: \"false\"",
"apiVersion: maistra.io/v2 kind: ServiceMeshControlPlane spec: gateways: openshiftRoute: enabled: true",
"spec: meshConfig discoverySelectors: - matchLabels: env: prod region: us-east1 - matchExpressions: - key: app operator: In values: - cassandra - spark",
"spec: meshConfig: extensionProviders: - name: prometheus prometheus: {} --- apiVersion: telemetry.istio.io/v1alpha1 kind: Telemetry metadata: name: enable-prometheus-metrics spec: metrics: - providers: - name: prometheus",
"spec: techPreview: gatewayAPI: enabled: true",
"spec: runtime: components: pilot: container: env: PILOT_ENABLE_GATEWAY_API: \"true\" PILOT_ENABLE_GATEWAY_API_STATUS: \"true\" PILOT_ENABLE_GATEWAY_API_DEPLOYMENT_CONTROLLER: \"true\"",
"apiVersion: maistra.io/v2 kind: ServiceMeshControlPlane metadata: name: cluster-wide namespace: istio-system spec: version: v2.3 techPreview: controlPlaneMode: ClusterScoped 1",
"apiVersion: maistra.io/v1 kind: ServiceMeshMemberRoll metadata: name: default spec: members: - '*' 1",
"kubectl get crd gateways.gateway.networking.k8s.io || { kubectl kustomize \"github.com/kubernetes-sigs/gateway-api/config/crd?ref=v0.4.0\" | kubectl apply -f -; }",
"spec: runtime: components: pilot: container: env: PILOT_ENABLE_GATEWAY_API: \"true\" PILOT_ENABLE_GATEWAY_API_STATUS: \"true\" # and optionally, for the deployment controller PILOT_ENABLE_GATEWAY_API_DEPLOYMENT_CONTROLLER: \"true\"",
"apiVersion: gateway.networking.k8s.io/v1alpha2 kind: Gateway metadata: name: gateway spec: addresses: - value: ingress.istio-gateways.svc.cluster.local type: Hostname",
"apiVersion: maistra.io/v2 kind: ServiceMeshControlPlane spec: security: trust: manageNetworkPolicy: false",
"apiVersion: maistra.io/v2 kind: ServiceMeshControlPlane metadata: name: basic spec: techPreview: meshConfig: defaultConfig: proxyMetadata: HTTP_STRIP_FRAGMENT_FROM_PATH_UNSAFE_IF_DISABLED: \"false\"",
"apiVersion: security.istio.io/v1beta1 kind: AuthorizationPolicy metadata: name: httpbin namespace: foo spec: action: DENY rules: - from: - source: namespaces: [\"dev\"] to: - operation: hosts: [\"httpbin.com\",\"httpbin.com:*\"]",
"apiVersion: security.istio.io/v1beta1 kind: AuthorizationPolicy metadata: name: httpbin namespace: default spec: action: DENY rules: - to: - operation: hosts: [\"httpbin.example.com:*\"]",
"spec: techPreview: global: pathNormalization: <option>",
"oc create -f <myEnvoyFilterFile>",
"apiVersion: networking.istio.io/v1alpha3 kind: EnvoyFilter metadata: name: ingress-case-insensitive namespace: istio-system spec: configPatches: - applyTo: HTTP_FILTER match: context: GATEWAY listener: filterChain: filter: name: \"envoy.filters.network.http_connection_manager\" subFilter: name: \"envoy.filters.http.router\" patch: operation: INSERT_BEFORE value: name: envoy.lua typed_config: \"@type\": \"type.googleapis.com/envoy.extensions.filters.http.lua.v3.Lua\" inlineCode: | function envoy_on_request(request_handle) local path = request_handle:headers():get(\":path\") request_handle:headers():replace(\":path\", string.lower(path)) end",
"apiVersion: maistra.io/v2 kind: ServiceMeshControlPlane metadata: name: basic namespace: istio-system spec: mode: ClusterWide meshConfig: discoverySelectors: - matchLabels: istio-discovery: enabled gateways: ingress: enabled: true",
"label namespace istio-system istio-discovery=enabled",
"2023-05-02T15:20:42.541034Z error watch error in cluster Kubernetes: failed to list *v1alpha2.TLSRoute: the server could not find the requested resource (get tlsroutes.gateway.networking.k8s.io) 2023-05-02T15:20:42.616450Z info kube controller \"gateway.networking.k8s.io/v1alpha2/TCPRoute\" is syncing",
"kubectl get crd gateways.gateway.networking.k8s.io || { kubectl kustomize \"github.com/kubernetes-sigs/gateway-api/config/crd/experimental?ref=v0.5.1\" | kubectl apply -f -; }",
"apiVersion: networking.istio.io/v1beta1 kind: ProxyConfig metadata: name: mesh-wide-concurrency namespace: <istiod-namespace> spec: concurrency: 0",
"api: namespaces: exclude: - \"^istio-operator\" - \"^kube-.*\" - \"^openshift.*\" - \"^ibm.*\" - \"^kiali-operator\"",
"spec: proxy: networking: trafficControl: inbound: excludedPorts: - 15020",
"spec: runtime: components: pilot: container: env: APPLY_WASM_PLUGINS_TO_INBOUND_ONLY: \"true\"",
"error Installer exits with open /host/etc/cni/multus/net.d/v2-2-istio-cni.kubeconfig.tmp.841118073: no such file or directory",
"oc label namespace istio-system maistra.io/ignore-namespace-",
"apiVersion: maistra.io/v2 kind: ServiceMeshControlPlane spec: gateways: openshiftRoute: enabled: true",
"An error occurred admission webhook smcp.validation.maistra.io denied the request: [support for policy.type \"Mixer\" and policy.Mixer options have been removed in v2.1, please use another alternative, support for telemetry.type \"Mixer\" and telemetry.Mixer options have been removed in v2.1, please use another alternative]\"",
"apiVersion: maistra.io/v2 kind: ServiceMeshControlPlane spec: policy: type: Istiod telemetry: type: Istiod version: v2.6",
"oc project istio-system",
"oc get smcp -o yaml",
"apiVersion: maistra.io/v2 kind: ServiceMeshControlPlane metadata: name: basic spec: version: v2.6",
"oc get smcp -o yaml",
"oc get smcp.v1.maistra.io <smcp_name> > smcp-resource.yaml #Edit the smcp-resource.yaml file. oc replace -f smcp-resource.yaml",
"oc patch smcp.v1.maistra.io <smcp_name> --type json --patch '[{\"op\": \"replace\",\"path\":\"/spec/path/to/bad/setting\",\"value\":\"corrected-value\"}]'",
"oc edit smcp.v1.maistra.io <smcp_name>",
"oc project istio-system",
"oc get servicemeshcontrolplanes.v1.maistra.io <smcp_name> -o yaml > <smcp_name>.v1.yaml",
"oc get smcp <smcp_name> -o yaml > <smcp_name>.v2.yaml",
"oc new-project istio-system-upgrade",
"oc create -n istio-system-upgrade -f <smcp_name>.v2.yaml",
"spec: policy: type: Mixer",
"spec: telemetry: type: Mixer",
"apiVersion: authentication.istio.io/v1alpha1 kind: Policy metadata: name: productpage-mTLS-disable namespace: <namespace> spec: targets: - name: productpage",
"apiVersion: security.istio.io/v1beta1 kind: PeerAuthentication metadata: name: productpage-mTLS-disable namespace: <namespace> spec: mtls: mode: DISABLE selector: matchLabels: # this should match the selector for the \"productpage\" service app: productpage",
"apiVersion: authentication.istio.io/v1alpha1 kind: Policy metadata: name: productpage-mTLS-with-JWT namespace: <namespace> spec: targets: - name: productpage ports: - number: 9000 peers: - mtls: origins: - jwt: issuer: \"https://securetoken.google.com\" audiences: - \"productpage\" jwksUri: \"https://www.googleapis.com/oauth2/v1/certs\" jwtHeaders: - \"x-goog-iap-jwt-assertion\" triggerRules: - excludedPaths: - exact: /health_check principalBinding: USE_ORIGIN",
"#require mtls for productpage:9000 apiVersion: security.istio.io/v1beta1 kind: PeerAuthentication metadata: name: productpage-mTLS-with-JWT namespace: <namespace> spec: selector: matchLabels: # this should match the selector for the \"productpage\" service app: productpage portLevelMtls: 9000: mode: STRICT --- #JWT authentication for productpage apiVersion: security.istio.io/v1beta1 kind: RequestAuthentication metadata: name: productpage-mTLS-with-JWT namespace: <namespace> spec: selector: matchLabels: # this should match the selector for the \"productpage\" service app: productpage jwtRules: - issuer: \"https://securetoken.google.com\" audiences: - \"productpage\" jwksUri: \"https://www.googleapis.com/oauth2/v1/certs\" fromHeaders: - name: \"x-goog-iap-jwt-assertion\" --- #Require JWT token to access product page service from #any client to all paths except /health_check apiVersion: security.istio.io/v1beta1 kind: AuthorizationPolicy metadata: name: productpage-mTLS-with-JWT namespace: <namespace> spec: action: ALLOW selector: matchLabels: # this should match the selector for the \"productpage\" service app: productpage rules: - to: # require JWT token to access all other paths - operation: notPaths: - /health_check from: - source: # if using principalBinding: USE_PEER in the Policy, # then use principals, e.g. # principals: # - \"*\" requestPrincipals: - \"*\" - to: # no JWT token required to access health_check - operation: paths: - /health_check",
"spec: tracing: sampling: 100 # 1% type: Jaeger",
"spec: addons: jaeger: name: jaeger install: storage: type: Memory # or Elasticsearch for production mode memory: maxTraces: 100000 elasticsearch: # the following values only apply if storage:type:=Elasticsearch storage: # specific storageclass configuration for the Jaeger Elasticsearch (optional) size: \"100G\" storageClassName: \"storageclass\" nodeCount: 3 redundancyPolicy: SingleRedundancy runtime: components: tracing.jaeger: {} # general Jaeger specific runtime configuration (optional) tracing.jaeger.elasticsearch: #runtime configuration for Jaeger Elasticsearch deployment (optional) container: resources: requests: memory: \"1Gi\" cpu: \"500m\" limits: memory: \"1Gi\"",
"spec: addons: grafana: enabled: true install: {} # customize install kiali: enabled: true name: kiali install: {} # customize install",
"oc rollout restart <deployment>",
"apiVersion: maistra.io/v2 kind: ServiceMeshControlPlane metadata: name: basic spec: mode: ClusterWide meshConfig: discoverySelectors: - matchLabels: istio-discovery: enabled 1 - matchExpressions: - key: kubernetes.io/metadata.name 2 operator: In values: - bookinfo - httpbin - istio-system",
"oc -n istio-system edit smcp <name> 1",
"apiVersion: maistra.io/v2 kind: ServiceMeshControlPlane metadata: name: basic spec: mode: ClusterWide meshConfig: discoverySelectors: - matchLabels: istio-discovery: enabled 1 - matchExpressions: - key: kubernetes.io/metadata.name 2 operator: In values: - bookinfo - httpbin - istio-system",
"apiVersion: maistra.io/v1 kind: ServiceMeshMemberRoll metadata: name: default spec: memberSelectors: - matchLabels: istio-injection: enabled 1",
"oc edit smmr -n <controlplane-namespace>",
"apiVersion: maistra.io/v1 kind: ServiceMeshMemberRoll metadata: name: default spec: memberSelectors: - matchLabels: istio-injection: enabled 1",
"apiVersion: apps/v1 kind: Deployment metadata: name: nginx spec: selector: matchLabels: app: nginx template: metadata: annotations: sidecar.istio.io/inject: 'true' 1 labels: app: nginx spec: containers: - name: nginx image: nginx:1.14.2 ports: - containerPort: 80 --- apiVersion: apps/v1 kind: Deployment metadata: name: nginx-without-sidecar spec: selector: matchLabels: app: nginx-without-sidecar template: metadata: labels: app: nginx-without-sidecar 2 spec: containers: - name: nginx image: nginx:1.14.2 ports: - containerPort: 80",
"oc edit deployment -n <namespace> <deploymentName>",
"apiVersion: apps/v1 kind: Deployment metadata: name: nginx spec: selector: matchLabels: app: nginx template: metadata: annotations: sidecar.istio.io/inject: 'true' 1 labels: app: nginx spec: containers: - name: nginx image: nginx:1.14.2 ports: - containerPort: 80 --- apiVersion: apps/v1 kind: Deployment metadata: name: nginx-without-sidecar spec: selector: matchLabels: app: nginx-without-sidecar template: metadata: labels: app: nginx-without-sidecar 2 spec: containers: - name: nginx image: nginx:1.14.2 ports: - containerPort: 80",
"apiVersion: security.istio.io/v1beta1 kind: AuthorizationPolicy metadata: name: httpbin-usernamepolicy spec: action: ALLOW rules: - when: - key: 'request.regex.headers[username]' values: - \"allowed.*\" selector: matchLabels: app: httpbin",
"oc -n openshift-operators get subscriptions",
"oc -n openshift-operators edit subscription <name> 1",
"apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: labels: operators.coreos.com/servicemeshoperator.openshift-operators: \"\" name: servicemeshoperator namespace: openshift-operators spec: config: nodeSelector: 1 node-role.kubernetes.io/infra: \"\" tolerations: 2 - effect: NoSchedule key: node-role.kubernetes.io/infra value: reserved - effect: NoExecute key: node-role.kubernetes.io/infra value: reserved",
"oc -n openshift-operators get po -l name=istio-operator -owide",
"oc new-project istio-system",
"apiVersion: maistra.io/v2 kind: ServiceMeshControlPlane metadata: name: basic namespace: istio-system spec: version: v2.6 tracing: type: None sampling: 10000 addons: kiali: enabled: true name: kiali grafana: enabled: true",
"oc create -n istio-system -f <istio_installation.yaml>",
"oc get pods -n istio-system -w",
"NAME READY STATUS RESTARTS AGE grafana-b4d59bd7-mrgbr 2/2 Running 0 65m istio-egressgateway-678dc97b4c-wrjkp 1/1 Running 0 108s istio-ingressgateway-b45c9d54d-4qg6n 1/1 Running 0 108s istiod-basic-55d78bbbcd-j5556 1/1 Running 0 108s kiali-6476c7656c-x5msp 1/1 Running 0 43m prometheus-58954b8d6b-m5std 2/2 Running 0 66m",
"oc get smcp -n istio-system",
"NAME READY STATUS PROFILES VERSION AGE basic 10/10 ComponentsReady [\"default\"] 2.6.6 66m",
"spec: runtime: defaults: pod: nodeSelector: 1 node-role.kubernetes.io/infra: \"\" tolerations: 2 - effect: NoSchedule key: node-role.kubernetes.io/infra value: reserved - effect: NoExecute key: node-role.kubernetes.io/infra value: reserved",
"spec: runtime: components: pilot: pod: nodeSelector: 1 node-role.kubernetes.io/infra: \"\" tolerations: 2 - effect: NoSchedule key: node-role.kubernetes.io/infra value: reserved - effect: NoExecute key: node-role.kubernetes.io/infra value: reserved",
"spec: gateways: ingress: runtime: pod: nodeSelector: 1 node-role.kubernetes.io/infra: \"\" tolerations: 2 - effect: NoSchedule key: node-role.kubernetes.io/infra value: reserved - effect: NoExecute key: node-role.kubernetes.io/infra value: reserved egress: runtime: pod: nodeSelector: 3 node-role.kubernetes.io/infra: \"\" tolerations: 4 - effect: NoSchedule key: node-role.kubernetes.io/infra value: reserved - effect: NoExecute key: node-role.kubernetes.io/infra value: reserved",
"oc -n istio-system edit smcp <name> 1",
"spec: runtime: defaults: pod: nodeSelector: 1 node-role.kubernetes.io/infra: \"\" tolerations: 2 - effect: NoSchedule key: node-role.kubernetes.io/infra value: reserved - effect: NoExecute key: node-role.kubernetes.io/infra value: reserved",
"oc -n istio-system edit smcp <name> 1",
"spec: runtime: components: pilot: pod: nodeSelector: 1 node-role.kubernetes.io/infra: \"\" tolerations: 2 - effect: NoSchedule key: node-role.kubernetes.io/infra value: reserved - effect: NoExecute key: node-role.kubernetes.io/infra value: reserved",
"spec: gateways: ingress: runtime: pod: nodeSelector: 1 node-role.kubernetes.io/infra: \"\" tolerations: 2 - effect: NoSchedule key: node-role.kubernetes.io/infra value: reserved - effect: NoExecute key: node-role.kubernetes.io/infra value: reserved egress: runtime: pod: nodeSelector: 3 node-role.kubernetes.io/infra: \"\" tolerations: 4 - effect: NoSchedule key: node-role.kubernetes.io/infra value: reserved - effect: NoExecute key: node-role.kubernetes.io/infra value: reserved",
"oc -n istio-system get pods -owide",
"apiVersion: maistra.io/v2 kind: ServiceMeshControlPlane metadata: name: basic namespace: istio-system spec: version: v2.6 mode: ClusterWide",
"oc new-project istio-system",
"apiVersion: maistra.io/v2 kind: ServiceMeshControlPlane metadata: name: basic namespace: istio-system spec: version: v2.6 mode: ClusterWide",
"oc create -n istio-system -f <istio_installation.yaml>",
"oc get pods -n istio-system -w",
"NAME READY STATUS RESTARTS AGE grafana-b4d59bd7-mrgbr 2/2 Running 0 65m istio-egressgateway-678dc97b4c-wrjkp 1/1 Running 0 108s istio-ingressgateway-b45c9d54d-4qg6n 1/1 Running 0 108s istiod-basic-55d78bbbcd-j5556 1/1 Running 0 108s jaeger-67c75bd6dc-jv6k6 2/2 Running 0 65m kiali-6476c7656c-x5msp 1/1 Running 0 43m prometheus-58954b8d6b-m5std 2/2 Running 0 66m",
"oc login --username=<NAMEOFUSER> https://<HOSTNAME>:6443",
"oc new-project <your-project>",
"apiVersion: maistra.io/v1 kind: ServiceMeshMemberRoll metadata: name: default namespace: istio-system spec: members: # a list of projects joined into the service mesh - your-project-name - another-project-name",
"oc create -n istio-system -f servicemeshmemberroll-default.yaml",
"oc get smmr -n istio-system default",
"apiVersion: maistra.io/v1 kind: ServiceMeshMemberRoll metadata: name: default namespace: istio-system #control plane project spec: members: # a list of projects joined into the service mesh - your-project-name - another-project-name",
"oc edit smmr -n <controlplane-namespace>",
"apiVersion: maistra.io/v1 kind: ServiceMeshMemberRoll metadata: name: default namespace: istio-system #control plane project spec: members: # a list of projects joined into the service mesh - your-project-name - another-project-name",
"apiVersion: maistra.io/v1 kind: ServiceMeshMember metadata: name: default namespace: my-application spec: controlPlaneRef: namespace: istio-system name: basic",
"oc apply -f <file-name>",
"oc get smm default -n my-application",
"NAME CONTROL PLANE READY AGE default istio-system/basic True 2m11s",
"oc describe smmr default -n istio-system",
"Name: default Namespace: istio-system Labels: <none> Status: Configured Members: default my-application Members: default my-application",
"oc edit smmr default -n istio-system",
"apiVersion: maistra.io/v1 kind: ServiceMeshMemberRoll metadata: name: default namespace: istio-system spec: memberSelectors: 1 - matchLabels: 2 mykey: myvalue 3 - matchLabels: 4 myotherkey: myothervalue 5",
"oc new-project bookinfo",
"apiVersion: maistra.io/v1 kind: ServiceMeshMemberRoll metadata: name: default spec: members: - bookinfo",
"oc create -n istio-system -f servicemeshmemberroll-default.yaml",
"oc get smmr -n istio-system -o wide",
"NAME READY STATUS AGE MEMBERS default 1/1 Configured 70s [\"bookinfo\"]",
"oc apply -n bookinfo -f https://raw.githubusercontent.com/Maistra/istio/maistra-2.6/samples/bookinfo/platform/kube/bookinfo.yaml",
"service/details created serviceaccount/bookinfo-details created deployment.apps/details-v1 created service/ratings created serviceaccount/bookinfo-ratings created deployment.apps/ratings-v1 created service/reviews created serviceaccount/bookinfo-reviews created deployment.apps/reviews-v1 created deployment.apps/reviews-v2 created deployment.apps/reviews-v3 created service/productpage created serviceaccount/bookinfo-productpage created deployment.apps/productpage-v1 created",
"oc apply -n bookinfo -f https://raw.githubusercontent.com/Maistra/istio/maistra-2.6/samples/bookinfo/networking/bookinfo-gateway.yaml",
"gateway.networking.istio.io/bookinfo-gateway created virtualservice.networking.istio.io/bookinfo created",
"export GATEWAY_URL=USD(oc -n istio-system get route istio-ingressgateway -o jsonpath='{.spec.host}')",
"oc apply -n bookinfo -f https://raw.githubusercontent.com/Maistra/istio/maistra-2.6/samples/bookinfo/networking/destination-rule-all.yaml",
"oc apply -n bookinfo -f https://raw.githubusercontent.com/Maistra/istio/maistra-2.6/samples/bookinfo/networking/destination-rule-all-mtls.yaml",
"destinationrule.networking.istio.io/productpage created destinationrule.networking.istio.io/reviews created destinationrule.networking.istio.io/ratings created destinationrule.networking.istio.io/details created",
"oc get pods -n bookinfo",
"NAME READY STATUS RESTARTS AGE details-v1-55b869668-jh7hb 2/2 Running 0 12m productpage-v1-6fc77ff794-nsl8r 2/2 Running 0 12m ratings-v1-7d7d8d8b56-55scn 2/2 Running 0 12m reviews-v1-868597db96-bdxgq 2/2 Running 0 12m reviews-v2-5b64f47978-cvssp 2/2 Running 0 12m reviews-v3-6dfd49b55b-vcwpf 2/2 Running 0 12m",
"echo \"http://USDGATEWAY_URL/productpage\"",
"oc delete project bookinfo",
"oc -n istio-system patch --type='json' smmr default -p '[{\"op\": \"remove\", \"path\": \"/spec/members\", \"value\":[\"'\"bookinfo\"'\"]}]'",
"oc get deployment -n <namespace>",
"get deployment -n bookinfo ratings-v1 -o yaml",
"apiVersion: apps/v1 kind: Deployment metadata: name: ratings-v1 namespace: bookinfo labels: app: ratings version: v1 spec: template: metadata: labels: sidecar.istio.io/inject: 'true'",
"oc apply -n <namespace> -f deployment.yaml",
"oc apply -n bookinfo -f deployment-ratings-v1.yaml",
"oc get deployment -n <namespace> <deploymentName> -o yaml",
"oc get deployment -n bookinfo ratings-v1 -o yaml",
"apiVersion: apps/v1 kind: Deployment metadata: name: resource spec: replicas: 7 selector: matchLabels: app: resource template: metadata: annotations: sidecar.maistra.io/proxyEnv: \"{ \\\"maistra_test_env\\\": \\\"env_value\\\", \\\"maistra_test_env_2\\\": \\\"env_value_2\\\" }\"",
"oc patch deployment/<deployment> -p '{\"spec\":{\"template\":{\"metadata\":{\"annotations\":{\"kubectl.kubernetes.io/restartedAt\": \"'`date -Iseconds`'\"}}}}}'",
"oc policy add-role-to-user -n istio-system --role-namespace istio-system mesh-user <user_name>",
"apiVersion: maistra.io/v1 kind: ServiceMeshMember metadata: name: default spec: controlPlaneRef: namespace: istio-system name: basic",
"oc policy add-role-to-user",
"apiVersion: rbac.authorization.k8s.io/v1 kind: RoleBinding metadata: namespace: istio-system name: mesh-users roleRef: apiGroup: rbac.authorization.k8s.io kind: Role name: mesh-user subjects: - apiGroup: rbac.authorization.k8s.io kind: User name: alice",
"oc create configmap --from-file=<profiles-directory> smcp-templates -n openshift-operators",
"apiVersion: maistra.io/v2 kind: ServiceMeshControlPlane metadata: name: basic spec: profiles: - default",
"apiVersion: maistra.io/v2 kind: ServiceMeshControlPlane spec: version: v2.6 security: dataPlane: mtls: true",
"apiVersion: security.istio.io/v1beta1 kind: PeerAuthentication metadata: name: default namespace: <namespace> spec: mtls: mode: STRICT",
"oc create -n <namespace> -f <policy.yaml>",
"apiVersion: networking.istio.io/v1alpha3 kind: DestinationRule metadata: name: default namespace: <namespace> spec: host: \"*.<namespace>.svc.cluster.local\" trafficPolicy: tls: mode: ISTIO_MUTUAL",
"oc create -n <namespace> -f <destination-rule.yaml>",
"kind: ServiceMeshControlPlane spec: security: controlPlane: tls: minProtocolVersion: TLSv1_2",
"apiVersion: security.istio.io/v1beta1 kind: AuthorizationPolicy metadata: name: ingress-policy namespace: istio-system spec: selector: matchLabels: app: istio-ingressgateway action: DENY rules: - from: - source: ipBlocks: [\"1.2.3.4\"]",
"oc create -n istio-system -f <filename>",
"apiVersion: security.istio.io/v1beta1 kind: AuthorizationPolicy metadata: name: httpbin-deny namespace: bookinfo spec: selector: matchLabels: app: httpbin version: v1 action: DENY rules: - from: - source: notNamespaces: [\"bookinfo\"]",
"apiVersion: security.istio.io/v1beta1 kind: AuthorizationPolicy metadata: name: allow-all namespace: bookinfo spec: action: ALLOW rules: - {}",
"apiVersion: security.istio.io/v1beta1 kind: AuthorizationPolicy metadata: name: deny-all namespace: bookinfo spec: {}",
"apiVersion: security.istio.io/v1beta1 kind: AuthorizationPolicy metadata: name: ingress-policy namespace: istio-system spec: selector: matchLabels: app: istio-ingressgateway action: ALLOW rules: - from: - source: ipBlocks: [\"1.2.3.4\", \"5.6.7.0/24\"]",
"apiVersion: \"security.istio.io/v1beta1\" kind: \"RequestAuthentication\" metadata: name: \"jwt-example\" namespace: bookinfo spec: selector: matchLabels: app: httpbin jwtRules: - issuer: \"http://localhost:8080/auth/realms/master\" jwksUri: \"http://keycloak.default.svc:8080/auth/realms/master/protocol/openid-connect/certs\"",
"apiVersion: \"security.istio.io/v1beta1\" kind: \"AuthorizationPolicy\" metadata: name: \"frontend-ingress\" namespace: bookinfo spec: selector: matchLabels: app: httpbin action: DENY rules: - from: - source: notRequestPrincipals: [\"*\"]",
"oc edit smcp <smcp-name>",
"spec: security: dataPlane: mtls: true # enable mtls for data plane # JWKSResolver extra CA # PEM-encoded certificate content to trust an additional CA jwksResolverCA: | -----BEGIN CERTIFICATE----- [...] [...] -----END CERTIFICATE-----",
"kind: ConfigMap apiVersion: v1 data: extra.pem: | -----BEGIN CERTIFICATE----- [...] [...] -----END CERTIFICATE-----",
"oc create secret generic cacerts -n istio-system --from-file=<path>/ca-cert.pem --from-file=<path>/ca-key.pem --from-file=<path>/root-cert.pem --from-file=<path>/cert-chain.pem",
"apiVersion: maistra.io/v2 kind: ServiceMeshControlPlane spec: security: dataPlane: mtls: true certificateAuthority: type: Istiod istiod: type: PrivateKey privateKey: rootCADir: /etc/cacerts",
"oc -n istio-system delete pods -l 'app in (istiod,istio-ingressgateway, istio-egressgateway)'",
"oc -n bookinfo delete pods --all",
"pod \"details-v1-6cd699df8c-j54nh\" deleted pod \"productpage-v1-5ddcb4b84f-mtmf2\" deleted pod \"ratings-v1-bdbcc68bc-kmng4\" deleted pod \"reviews-v1-754ddd7b6f-lqhsv\" deleted pod \"reviews-v2-675679877f-q67r2\" deleted pod \"reviews-v3-79d7549c7-c2gjs\" deleted",
"oc get pods -n bookinfo",
"sleep 60 oc -n bookinfo exec \"USD(oc -n bookinfo get pod -l app=productpage -o jsonpath={.items..metadata.name})\" -c istio-proxy -- openssl s_client -showcerts -connect details:9080 > bookinfo-proxy-cert.txt sed -n '/-----BEGIN CERTIFICATE-----/{:start /-----END CERTIFICATE-----/!{N;b start};/.*/p}' bookinfo-proxy-cert.txt > certs.pem awk 'BEGIN {counter=0;} /BEGIN CERT/{counter++} { print > \"proxy-cert-\" counter \".pem\"}' < certs.pem",
"openssl x509 -in <path>/root-cert.pem -text -noout > /tmp/root-cert.crt.txt",
"openssl x509 -in ./proxy-cert-3.pem -text -noout > /tmp/pod-root-cert.crt.txt",
"diff -s /tmp/root-cert.crt.txt /tmp/pod-root-cert.crt.txt",
"openssl x509 -in <path>/ca-cert.pem -text -noout > /tmp/ca-cert.crt.txt",
"openssl x509 -in ./proxy-cert-2.pem -text -noout > /tmp/pod-cert-chain-ca.crt.txt",
"diff -s /tmp/ca-cert.crt.txt /tmp/pod-cert-chain-ca.crt.txt",
"openssl verify -CAfile <(cat <path>/ca-cert.pem <path>/root-cert.pem) ./proxy-cert-1.pem",
"oc delete secret cacerts -n istio-system",
"apiVersion: maistra.io/v2 kind: ServiceMeshControlPlane spec: security: dataPlane: mtls: true",
"apiVersion: cert-manager.io/v1 kind: Issuer metadata: name: selfsigned-root-issuer namespace: cert-manager spec: selfSigned: {} --- apiVersion: cert-manager.io/v1 kind: Certificate metadata: name: root-ca namespace: cert-manager spec: isCA: true duration: 21600h # 900d secretName: root-ca commonName: root-ca.my-company.net subject: organizations: - my-company.net issuerRef: name: selfsigned-root-issuer kind: Issuer group: cert-manager.io --- apiVersion: cert-manager.io/v1 kind: ClusterIssuer metadata: name: root-ca spec: ca: secretName: root-ca",
"oc apply -f cluster-issuer.yaml",
"apiVersion: cert-manager.io/v1 kind: Certificate metadata: name: istio-ca namespace: istio-system spec: isCA: true duration: 21600h secretName: istio-ca commonName: istio-ca.my-company.net subject: organizations: - my-company.net issuerRef: name: root-ca kind: ClusterIssuer group: cert-manager.io --- apiVersion: cert-manager.io/v1 kind: Issuer metadata: name: istio-ca namespace: istio-system spec: ca: secretName: istio-ca",
"oc apply -n istio-system -f istio-ca.yaml",
"helm install istio-csr jetstack/cert-manager-istio-csr -n istio-system -f deploy/examples/cert-manager/istio-csr/istio-csr.yaml",
"replicaCount: 2 image: repository: quay.io/jetstack/cert-manager-istio-csr tag: v0.6.0 pullSecretName: \"\" app: certmanager: namespace: istio-system issuer: group: cert-manager.io kind: Issuer name: istio-ca controller: configmapNamespaceSelector: \"maistra.io/member-of=istio-system\" leaderElectionNamespace: istio-system istio: namespace: istio-system revisions: [\"basic\"] server: maxCertificateDuration: 5m tls: certificateDNSNames: # This DNS name must be set in the SMCP spec.security.certificateAuthority.cert-manager.address - cert-manager-istio-csr.istio-system.svc",
"oc apply -f mesh.yaml -n istio-system",
"apiVersion: maistra.io/v2 kind: ServiceMeshControlPlane metadata: name: basic spec: addons: grafana: enabled: false kiali: enabled: false prometheus: enabled: false proxy: accessLogging: file: name: /dev/stdout security: certificateAuthority: cert-manager: address: cert-manager-istio-csr.istio-system.svc:443 type: cert-manager dataPlane: mtls: true identity: type: ThirdParty tracing: type: None --- apiVersion: maistra.io/v1 kind: ServiceMeshMemberRoll metadata: name: default spec: members: - httpbin - sleep",
"oc new-project <namespace>",
"oc apply -f https://raw.githubusercontent.com/maistra/istio/maistra-2.4/samples/httpbin/httpbin.yaml",
"oc apply -f https://raw.githubusercontent.com/maistra/istio/maistra-2.4/samples/sleep/sleep.yaml",
"oc exec \"USD(oc get pod -l app=sleep -n <namespace> -o jsonpath={.items..metadata.name})\" -c sleep -n <namespace> -- curl http://httpbin.<namespace>:8000/ip -s -o /dev/null -w \"%{http_code}\\n\"",
"200",
"oc apply -n <namespace> -f https://raw.githubusercontent.com/maistra/istio/maistra-2.4/samples/httpbin/httpbin-gateway.yaml",
"INGRESS_HOST=USD(oc -n istio-system get routes istio-ingressgateway -o jsonpath='{.spec.host}')",
"curl -s -I http://USDINGRESS_HOST/headers -o /dev/null -w \"%{http_code}\" -s",
"apiVersion: networking.istio.io/v1alpha3 kind: Gateway metadata: name: ext-host-gwy spec: selector: istio: ingressgateway # use istio default controller servers: - port: number: 443 name: https protocol: HTTPS hosts: - ext-host.example.com tls: mode: SIMPLE serverCertificate: /tmp/tls.crt privateKey: /tmp/tls.key",
"apiVersion: networking.istio.io/v1alpha3 kind: VirtualService metadata: name: virtual-svc spec: hosts: - ext-host.example.com gateways: - ext-host-gwy",
"apiVersion: v1 kind: Service metadata: name: istio-ingressgateway namespace: istio-ingress spec: type: ClusterIP selector: istio: ingressgateway ports: - name: http2 port: 80 targetPort: 8080 - name: https port: 443 targetPort: 8443 --- apiVersion: apps/v1 kind: Deployment metadata: name: istio-ingressgateway namespace: istio-ingress spec: selector: matchLabels: istio: ingressgateway template: metadata: annotations: inject.istio.io/templates: gateway labels: istio: ingressgateway sidecar.istio.io/inject: \"true\" 1 spec: containers: - name: istio-proxy image: auto 2",
"apiVersion: rbac.authorization.k8s.io/v1 kind: Role metadata: name: istio-ingressgateway-sds namespace: istio-ingress rules: - apiGroups: [\"\"] resources: [\"secrets\"] verbs: [\"get\", \"watch\", \"list\"] --- apiVersion: rbac.authorization.k8s.io/v1 kind: RoleBinding metadata: name: istio-ingressgateway-sds namespace: istio-ingress roleRef: apiGroup: rbac.authorization.k8s.io kind: Role name: istio-ingressgateway-sds subjects: - kind: ServiceAccount name: default",
"apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: gatewayingress namespace: istio-ingress spec: podSelector: matchLabels: istio: ingressgateway ingress: - {} policyTypes: - Ingress",
"apiVersion: autoscaling/v2 kind: HorizontalPodAutoscaler metadata: labels: istio: ingressgateway release: istio name: ingressgatewayhpa namespace: istio-ingress spec: maxReplicas: 5 metrics: - resource: name: cpu target: averageUtilization: 80 type: Utilization type: Resource minReplicas: 2 scaleTargetRef: apiVersion: apps/v1 kind: Deployment name: istio-ingressgateway",
"apiVersion: policy/v1 kind: PodDisruptionBudget metadata: labels: istio: ingressgateway release: istio name: ingressgatewaypdb namespace: istio-ingress spec: minAvailable: 1 selector: matchLabels: istio: ingressgateway",
"oc get svc istio-ingressgateway -n istio-system",
"export INGRESS_HOST=USD(oc -n istio-system get service istio-ingressgateway -o jsonpath='{.status.loadBalancer.ingress[0].ip}')",
"export INGRESS_PORT=USD(oc -n istio-system get service istio-ingressgateway -o jsonpath='{.spec.ports[?(@.name==\"http2\")].port}')",
"export SECURE_INGRESS_PORT=USD(oc -n istio-system get service istio-ingressgateway -o jsonpath='{.spec.ports[?(@.name==\"https\")].port}')",
"export TCP_INGRESS_PORT=USD(kubectl -n istio-system get service istio-ingressgateway -o jsonpath='{.spec.ports[?(@.name==\"tcp\")].port}')",
"export INGRESS_HOST=USD(oc -n istio-system get service istio-ingressgateway -o jsonpath='{.status.loadBalancer.ingress[0].hostname}')",
"export INGRESS_PORT=USD(oc -n istio-system get service istio-ingressgateway -o jsonpath='{.spec.ports[?(@.name==\"http2\")].nodePort}')",
"export SECURE_INGRESS_PORT=USD(oc -n istio-system get service istio-ingressgateway -o jsonpath='{.spec.ports[?(@.name==\"https\")].nodePort}')",
"export TCP_INGRESS_PORT=USD(kubectl -n istio-system get service istio-ingressgateway -o jsonpath='{.spec.ports[?(@.name==\"tcp\")].nodePort}')",
"apiVersion: networking.istio.io/v1alpha3 kind: Gateway metadata: name: bookinfo-gateway spec: selector: istio: ingressgateway servers: - port: number: 80 name: http protocol: HTTP hosts: - \"*\"",
"oc apply -f gateway.yaml",
"apiVersion: networking.istio.io/v1alpha3 kind: VirtualService metadata: name: bookinfo spec: hosts: - \"*\" gateways: - bookinfo-gateway http: - match: - uri: exact: /productpage - uri: prefix: /static - uri: exact: /login - uri: exact: /logout - uri: prefix: /api/v1/products route: - destination: host: productpage port: number: 9080",
"oc apply -f vs.yaml",
"export GATEWAY_URL=USD(oc -n istio-system get route istio-ingressgateway -o jsonpath='{.spec.host}')",
"export TARGET_PORT=USD(oc -n istio-system get route istio-ingressgateway -o jsonpath='{.spec.port.targetPort}')",
"curl -s -I \"USDGATEWAY_URL/productpage\"",
"apiVersion: networking.istio.io/v1alpha3 kind: Gateway metadata: name: gateway1 spec: selector: istio: ingressgateway servers: - port: number: 80 name: http protocol: HTTP hosts: - www.bookinfo.com - bookinfo.example.com",
"oc -n istio-system get routes",
"NAME HOST/PORT PATH SERVICES PORT TERMINATION WILDCARD gateway1-lvlfn bookinfo.example.com istio-ingressgateway <all> None gateway1-scqhv www.bookinfo.com istio-ingressgateway <all> None",
"apiVersion: maistra.io/v1alpha1 kind: ServiceMeshControlPlane metadata: namespace: istio-system spec: gateways: openshiftRoute: enabled: false",
"apiVersion: networking.istio.io/v1alpha3 kind: ServiceEntry metadata: name: svc-entry spec: hosts: - ext-svc.example.com ports: - number: 443 name: https protocol: HTTPS location: MESH_EXTERNAL resolution: DNS",
"apiVersion: networking.istio.io/v1alpha3 kind: DestinationRule metadata: name: ext-res-dr spec: host: ext-svc.example.com trafficPolicy: tls: mode: MUTUAL clientCertificate: /etc/certs/myclientcert.pem privateKey: /etc/certs/client_private_key.pem caCertificates: /etc/certs/rootcacerts.pem",
"apiVersion: networking.istio.io/v1alpha3 kind: VirtualService metadata: name: reviews spec: hosts: - reviews http: - match: - headers: end-user: exact: jason route: - destination: host: reviews subset: v2 - route: - destination: host: reviews subset: v3",
"oc apply -f <VirtualService.yaml>",
"spec: hosts:",
"spec: http: - match:",
"spec: http: - match: - destination:",
"apiVersion: networking.istio.io/v1alpha3 kind: DestinationRule metadata: name: my-destination-rule spec: host: my-svc trafficPolicy: loadBalancer: simple: RANDOM subsets: - name: v1 labels: version: v1 - name: v2 labels: version: v2 trafficPolicy: loadBalancer: simple: ROUND_ROBIN - name: v3 labels: version: v3",
"apiVersion: maistra.io/v2 kind: ServiceMeshControlPlane spec: security: manageNetworkPolicy: false",
"apiVersion: networking.istio.io/v1alpha3 kind: Sidecar metadata: name: default namespace: bookinfo spec: egress: - hosts: - \"./*\" - \"istio-system/*\"",
"oc apply -f sidecar.yaml",
"oc get sidecar",
"oc apply -f https://raw.githubusercontent.com/Maistra/istio/maistra-2.6/samples/bookinfo/networking/virtual-service-all-v1.yaml",
"oc get virtualservices -o yaml",
"export GATEWAY_URL=USD(oc -n istio-system get route istio-ingressgateway -o jsonpath='{.spec.host}')",
"echo \"http://USDGATEWAY_URL/productpage\"",
"oc apply -f https://raw.githubusercontent.com/Maistra/istio/maistra-2.6/samples/bookinfo/networking/virtual-service-reviews-test-v2.yaml",
"oc get virtualservice reviews -o yaml",
"apiVersion: apps/v1 kind: Deployment metadata: name: istio-ingressgateway-canary namespace: istio-system 1 spec: selector: matchLabels: app: istio-ingressgateway istio: ingressgateway template: metadata: annotations: inject.istio.io/templates: gateway labels: 2 app: istio-ingressgateway istio: ingressgateway sidecar.istio.io/inject: \"true\" spec: containers: - name: istio-proxy image: auto serviceAccountName: istio-ingressgateway --- apiVersion: v1 kind: ServiceAccount metadata: name: istio-ingressgateway namespace: istio-system --- apiVersion: rbac.authorization.k8s.io/v1 kind: Role metadata: name: secret-reader namespace: istio-system rules: - apiGroups: [\"\"] resources: [\"secrets\"] verbs: [\"get\", \"watch\", \"list\"] --- apiVersion: rbac.authorization.k8s.io/v1 kind: RoleBinding metadata: name: istio-ingressgateway-secret-reader namespace: istio-system roleRef: apiGroup: rbac.authorization.k8s.io kind: Role name: secret-reader subjects: - kind: ServiceAccount name: istio-ingressgateway --- apiVersion: networking.k8s.io/v1 kind: NetworkPolicy 3 metadata: name: gatewayingress namespace: istio-system spec: podSelector: matchLabels: istio: ingressgateway ingress: - {} policyTypes: - Ingress",
"oc scale -n istio-system deployment/<new_gateway_deployment> --replicas <new_number_of_replicas>",
"oc scale -n istio-system deployment/<old_gateway_deployment> --replicas <new_number_of_replicas>",
"oc label service -n istio-system istio-ingressgateway app.kubernetes.io/managed-by-",
"oc patch service -n istio-system istio-ingressgateway --type='json' -p='[{\"op\": \"remove\", \"path\": \"/metadata/ownerReferences\"}]'",
"oc patch smcp -n istio-system <smcp_name> --type='json' -p='[{\"op\": \"replace\", \"path\": \"/spec/gateways/ingress/enabled\", \"value\": false}]'",
"apiVersion: maistra.io/v2 kind: ServiceMeshControlPlane spec: gateways: openshiftRoute: enabled: false",
"kind: Route apiVersion: route.openshift.io/v1 metadata: name: example-gateway namespace: istio-system 1 spec: host: www.example.com to: kind: Service name: istio-ingressgateway 2 weight: 100 port: targetPort: http2 wildcardPolicy: None",
"oc login --username=<NAMEOFUSER> https://<HOSTNAME>:6443",
"oc project istio-system",
"oc get routes",
"NAME HOST/PORT SERVICES PORT TERMINATION bookinfo-gateway bookinfo-gateway-yourcompany.com istio-ingressgateway http2 grafana grafana-yourcompany.com grafana <all> reencrypt/Redirect istio-ingressgateway istio-ingress-yourcompany.com istio-ingressgateway 8080 jaeger jaeger-yourcompany.com jaeger-query <all> reencrypt kiali kiali-yourcompany.com kiali 20001 reencrypt/Redirect prometheus prometheus-yourcompany.com prometheus <all> reencrypt/Redirect",
"curl \"http://USDGATEWAY_URL/productpage\"",
"apiVersion: opentelemetry.io/v1alpha1 kind: OpenTelemetryCollector metadata: name: otel namespace: bookinfo 1 spec: mode: deployment config: | receivers: otlp: protocols: grpc: endpoint: 0.0.0.0:4317 exporters: otlp: endpoint: \"tempo-sample-distributor.tracing-system.svc.cluster.local:4317\" 2 tls: insecure: true service: pipelines: traces: receivers: [otlp] processors: [] exporters: [otlp]",
"oc logs -n bookinfo -l app.kubernetes.io/name=otel-collector",
"kind: ServiceMeshControlPlane apiVersion: maistra.io/v2 metadata: name: basic namespace: istio-system spec: addons: grafana: enabled: false kiali: enabled: true prometheus: enabled: true meshConfig: extensionProviders: - name: otel opentelemetry: port: 4317 service: otel-collector.bookinfo.svc.cluster.local policy: type: Istiod telemetry: type: Istiod version: v2.6",
"spec: tracing: type: None",
"apiVersion: telemetry.istio.io/v1alpha1 kind: Telemetry metadata: name: mesh-default namespace: istio-system spec: tracing: - providers: - name: otel randomSamplingPercentage: 100",
"apiVersion: kiali.io/v1alpha1 kind: Kiali spec: external_services: tracing: query_timeout: 30 1 enabled: true in_cluster_url: 'http://tempo-sample-query-frontend.tracing-system.svc.cluster.local:16685' url: '[Tempo query frontend Route url]' use_grpc: true 2",
"apiVersion: networking.istio.io/v1alpha3 kind: DestinationRule metadata: name: otel-disable-tls spec: host: \"otel-collector.bookinfo.svc.cluster.local\" trafficPolicy: tls: mode: DISABLE",
"apiVersion: networking.istio.io/v1alpha3 kind: DestinationRule metadata: name: tempo namespace: tracing-system-mtls spec: host: \"*.tracing-system-mtls.svc.cluster.local\" trafficPolicy: tls: mode: DISABLE",
"apiVersion: networking.istio.io/v1alpha3 kind: DestinationRule metadata: name: kiali namespace: istio-system spec: host: kiali.istio-system.svc.cluster.local trafficPolicy: tls: mode: DISABLE",
"spec: addons: jaeger: name: distr-tracing-production",
"spec: tracing: sampling: 100",
"oc login --username=<NAMEOFUSER> https://<HOSTNAME>:6443",
"oc get route -n istio-system jaeger -o jsonpath='{.spec.host}'",
"apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: kiali-monitoring-rbac roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: cluster-monitoring-view subjects: - kind: ServiceAccount name: kiali-service-account namespace: istio-system",
"apiVersion: kiali.io/v1alpha1 kind: Kiali metadata: name: kiali-user-workload-monitoring namespace: istio-system spec: external_services: prometheus: auth: type: bearer use_kiali_token: true query_scope: mesh_id: \"basic-istio-system\" thanos_proxy: enabled: true url: https://thanos-querier.openshift-monitoring.svc.cluster.local:9091",
"apiVersion: kiali.io/v1alpha1 kind: Kiali metadata: name: kiali-user-workload-monitoring namespace: istio-system spec: external_services: istio: config_map_name: istio-<smcp-name> istio_sidecar_injector_config_map_name: istio-sidecar-injector-<smcp-name> istiod_deployment_name: istiod-<smcp-name> url_service_version: 'http://istiod-<smcp-name>.istio-system:15014/version' prometheus: auth: token: secret:thanos-querier-web-token:token type: bearer use_kiali_token: false query_scope: mesh_id: \"basic-istio-system\" thanos_proxy: enabled: true url: https://thanos-querier.openshift-monitoring.svc.cluster.local:9091 version: v1.65",
"apiVersion: maistra.io/v2 kind: ServiceMeshControlPlane metadata: name: basic namespace: istio-system spec: addons: prometheus: enabled: false 1 grafana: enabled: false 2 kiali: name: kiali-user-workload-monitoring meshConfig: extensionProviders: - name: prometheus prometheus: {}",
"apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: user-workload-access namespace: istio-system 1 spec: ingress: - from: - namespaceSelector: matchLabels: network.openshift.io/policy-group: monitoring podSelector: {} policyTypes: - Ingress",
"apiVersion: telemetry.istio.io/v1alpha1 kind: Telemetry metadata: name: enable-prometheus-metrics namespace: istio-system 1 spec: selector: 2 matchLabels: app: bookinfo metrics: - providers: - name: prometheus",
"apiVersion: monitoring.coreos.com/v1 kind: ServiceMonitor metadata: name: istiod-monitor namespace: istio-system 1 spec: targetLabels: - app selector: matchLabels: istio: pilot endpoints: - port: http-monitoring interval: 30s relabelings: - action: replace replacement: \"basic-istio-system\" 2 targetLabel: mesh_id",
"apiVersion: monitoring.coreos.com/v1 kind: PodMonitor metadata: name: istio-proxies-monitor namespace: istio-system 1 spec: selector: matchExpressions: - key: istio-prometheus-ignore operator: DoesNotExist podMetricsEndpoints: - path: /stats/prometheus interval: 30s relabelings: - action: keep sourceLabels: [__meta_kubernetes_pod_container_name] regex: \"istio-proxy\" - action: keep sourceLabels: [__meta_kubernetes_pod_annotationpresent_prometheus_io_scrape] - action: replace regex: (\\d+);(([A-Fa-f0-9]{1,4}::?){1,7}[A-Fa-f0-9]{1,4}) replacement: '[USD2]:USD1' sourceLabels: [__meta_kubernetes_pod_annotation_prometheus_io_port, __meta_kubernetes_pod_ip] targetLabel: __address__ - action: replace regex: (\\d+);((([0-9]+?)(\\.|USD)){4}) replacement: USD2:USD1 sourceLabels: [__meta_kubernetes_pod_annotation_prometheus_io_port, __meta_kubernetes_pod_ip] targetLabel: __address__ - action: labeldrop regex: \"__meta_kubernetes_pod_label_(.+)\" - sourceLabels: [__meta_kubernetes_namespace] action: replace targetLabel: namespace - sourceLabels: [__meta_kubernetes_pod_name] action: replace targetLabel: pod_name - action: replace replacement: \"basic-istio-system\" 2 targetLabel: mesh_id",
"apiVersion: maistra.io/v2 kind: ServiceMeshControlPlane metadata: name: basic namespace: istio-system spec: version: v2.6 proxy: runtime: container: resources: requests: cpu: 600m memory: 50Mi limits: {} runtime: components: pilot: container: resources: requests: cpu: 1000m memory: 1.6Gi limits: {} kiali: container: resources: limits: cpu: \"90m\" memory: \"245Mi\" requests: cpu: \"30m\" memory: \"108Mi\" global.oauthproxy: container: resources: requests: cpu: \"101m\" memory: \"256Mi\" limits: cpu: \"201m\" memory: \"512Mi\"",
"apiVersion: maistra.io/v2 kind: ServiceMeshControlPlane metadata: name: basic spec: version: v2.6 tracing: sampling: 100 type: Jaeger addons: jaeger: name: MyJaeger install: storage: type: Elasticsearch ingress: enabled: true runtime: components: tracing.jaeger.elasticsearch: # only supports resources and image name container: resources: {}",
"oc get smcp basic -o yaml",
"apiVersion: maistra.io/v2 kind: ServiceMeshControlPlane metadata: name: red-mesh namespace: red-mesh-system spec: version: v2.6 runtime: defaults: container: imagePullPolicy: Always gateways: additionalEgress: egress-green-mesh: enabled: true requestedNetworkView: - green-network service: metadata: labels: federation.maistra.io/egress-for: egress-green-mesh ports: - port: 15443 name: tls - port: 8188 name: http-discovery #note HTTP here egress-blue-mesh: enabled: true requestedNetworkView: - blue-network service: metadata: labels: federation.maistra.io/egress-for: egress-blue-mesh ports: - port: 15443 name: tls - port: 8188 name: http-discovery #note HTTP here additionalIngress: ingress-green-mesh: enabled: true service: type: LoadBalancer metadata: labels: federation.maistra.io/ingress-for: ingress-green-mesh ports: - port: 15443 name: tls - port: 8188 name: https-discovery #note HTTPS here ingress-blue-mesh: enabled: true service: type: LoadBalancer metadata: labels: federation.maistra.io/ingress-for: ingress-blue-mesh ports: - port: 15443 name: tls - port: 8188 name: https-discovery #note HTTPS here security: trust: domain: red-mesh.local",
"spec: cluster: name:",
"spec: cluster: network:",
"spec: gateways: additionalEgress: <egress_name>:",
"spec: gateways: additionalEgress: <egress_name>: enabled:",
"spec: gateways: additionalEgress: <egress_name>: requestedNetworkView:",
"spec: gateways: additionalEgress: <egress_name>: service: metadata: labels: federation.maistra.io/egress-for:",
"spec: gateways: additionalEgress: <egress_name>: service: ports:",
"spec: gateways: additionalIngress:",
"spec: gateways: additionalIgress: <ingress_name>: enabled:",
"spec: gateways: additionalIngress: <ingress_name>: service: type:",
"spec: gateways: additionalIngress: <ingress_name>: service: type:",
"spec: gateways: additionalIngress: <ingress_name>: service: metadata: labels: federation.maistra.io/ingress-for:",
"spec: gateways: additionalIngress: <ingress_name>: service: ports:",
"spec: gateways: additionalIngress: <ingress_name>: service: ports: nodePort:",
"apiVersion: maistra.io/v2 kind: ServiceMeshControlPlane metadata: name: green-mesh namespace: green-mesh-system spec: gateways: additionalIngress: ingress-green-mesh: enabled: true service: type: NodePort metadata: labels: federation.maistra.io/ingress-for: ingress-green-mesh ports: - port: 15443 nodePort: 30510 name: tls - port: 8188 nodePort: 32359 name: https-discovery",
"kind: ServiceMeshControlPlane metadata: name: red-mesh namespace: red-mesh-system spec: security: trust: domain: red-mesh.local",
"spec: security: trust: domain:",
"oc login --username=<NAMEOFUSER> https://<HOSTNAME>:6443",
"oc project red-mesh-system",
"oc edit -n red-mesh-system smcp red-mesh",
"oc get smcp -n red-mesh-system",
"NAME READY STATUS PROFILES VERSION AGE red-mesh 10/10 ComponentsReady [\"default\"] 2.1.0 4m25s",
"kind: ServiceMeshPeer apiVersion: federation.maistra.io/v1 metadata: name: green-mesh namespace: red-mesh-system spec: remote: addresses: - ingress-red-mesh.green-mesh-system.apps.domain.com gateways: ingress: name: ingress-green-mesh egress: name: egress-green-mesh security: trustDomain: green-mesh.local clientID: green-mesh.local/ns/green-mesh-system/sa/egress-red-mesh-service-account certificateChain: kind: ConfigMap name: green-mesh-ca-root-cert",
"metadata: name:",
"metadata: namespace:",
"spec: remote: addresses:",
"spec: remote: discoveryPort:",
"spec: remote: servicePort:",
"spec: gateways: ingress: name:",
"spec: gateways: egress: name:",
"spec: security: trustDomain:",
"spec: security: clientID:",
"spec: security: certificateChain: kind: ConfigMap name:",
"oc login --username=<NAMEOFUSER> <API token> https://<HOSTNAME>:6443",
"oc project red-mesh-system",
"kind: ServiceMeshPeer apiVersion: federation.maistra.io/v1 metadata: name: green-mesh namespace: red-mesh-system spec: remote: addresses: - ingress-red-mesh.green-mesh-system.apps.domain.com gateways: ingress: name: ingress-green-mesh egress: name: egress-green-mesh security: trustDomain: green-mesh.local clientID: green-mesh.local/ns/green-mesh-system/sa/egress-red-mesh-service-account certificateChain: kind: ConfigMap name: green-mesh-ca-root-cert",
"oc create -n red-mesh-system -f servicemeshpeer.yaml",
"oc -n red-mesh-system get servicemeshpeer green-mesh -o yaml",
"status: discoveryStatus: active: - pod: istiod-red-mesh-b65457658-9wq5j remotes: - connected: true lastConnected: \"2021-10-05T13:02:25Z\" lastFullSync: \"2021-10-05T13:02:25Z\" source: 10.128.2.149 watch: connected: true lastConnected: \"2021-10-05T13:02:55Z\" lastDisconnectStatus: 503 Service Unavailable lastFullSync: \"2021-10-05T13:05:43Z\"",
"kind: ExportedServiceSet apiVersion: federation.maistra.io/v1 metadata: name: green-mesh namespace: red-mesh-system spec: exportRules: # export ratings.mesh-x-bookinfo as ratings.bookinfo - type: NameSelector nameSelector: namespace: red-mesh-bookinfo name: red-ratings alias: namespace: bookinfo name: ratings # export any service in red-mesh-bookinfo namespace with label export-service=true - type: LabelSelector labelSelector: namespace: red-mesh-bookinfo selector: matchLabels: export-service: \"true\" aliases: # export all matching services as if they were in the bookinfo namespace - namespace: \"*\" name: \"*\" alias: namespace: bookinfo",
"metadata: name:",
"metadata: namespace:",
"spec: exportRules: - type:",
"spec: exportRules: - type: NameSelector nameSelector: namespace: name:",
"spec: exportRules: - type: NameSelector nameSelector: alias: namespace: name:",
"spec: exportRules: - type: LabelSelector labelSelector: namespace: <exportingMesh> selector: matchLabels: <labelKey>: <labelValue>",
"spec: exportRules: - type: LabelSelector labelSelector: namespace: <exportingMesh> selector: matchLabels: <labelKey>: <labelValue> aliases: - namespace: name: alias: namespace: name:",
"kind: ExportedServiceSet apiVersion: federation.maistra.io/v1 metadata: name: blue-mesh namespace: red-mesh-system spec: exportRules: - type: NameSelector nameSelector: namespace: \"*\" name: ratings",
"kind: ExportedServiceSet apiVersion: federation.maistra.io/v1 metadata: name: green-mesh namespace: red-mesh-system spec: exportRules: - type: NameSelector nameSelector: namespace: west-data-center name: \"*\"",
"oc login --username=<NAMEOFUSER> <API token> https://<HOSTNAME>:6443",
"oc project red-mesh-system",
"apiVersion: federation.maistra.io/v1 kind: ExportedServiceSet metadata: name: green-mesh namespace: red-mesh-system spec: exportRules: - type: NameSelector nameSelector: namespace: red-mesh-bookinfo name: ratings alias: namespace: bookinfo name: red-ratings - type: NameSelector nameSelector: namespace: red-mesh-bookinfo name: reviews",
"oc create -n <ControlPlaneNamespace> -f <ExportedServiceSet.yaml>",
"oc create -n red-mesh-system -f export-to-green-mesh.yaml",
"oc get exportedserviceset <PeerMeshExportedTo> -o yaml",
"oc -n red-mesh-system get exportedserviceset green-mesh -o yaml",
"status: exportedServices: - exportedName: red-ratings.bookinfo.svc.green-mesh-exports.local localService: hostname: ratings.red-mesh-bookinfo.svc.cluster.local name: ratings namespace: red-mesh-bookinfo - exportedName: reviews.red-mesh-bookinfo.svc.green-mesh-exports.local localService: hostname: reviews.red-mesh-bookinfo.svc.cluster.local name: reviews namespace: red-mesh-bookinfo",
"kind: ImportedServiceSet apiVersion: federation.maistra.io/v1 metadata: name: red-mesh #name of mesh that exported the service namespace: green-mesh-system #mesh namespace that service is being imported into spec: importRules: # first matching rule is used # import ratings.bookinfo as ratings.bookinfo - type: NameSelector importAsLocal: false nameSelector: namespace: bookinfo name: ratings alias: # service will be imported as ratings.bookinfo.svc.red-mesh-imports.local namespace: bookinfo name: ratings",
"metadata: name:",
"metadata: namespace:",
"spec: importRules: - type:",
"spec: importRules: - type: NameSelector nameSelector: namespace: name:",
"spec: importRules: - type: NameSelector importAsLocal:",
"spec: importRules: - type: NameSelector nameSelector: namespace: name: alias: namespace: name:",
"kind: ImportedServiceSet apiVersion: federation.maistra.io/v1 metadata: name: red-mesh namespace: blue-mesh-system spec: importRules: - type: NameSelector importAsLocal: false nameSelector: namespace: bookinfo name: ratings",
"kind: ImportedServiceSet apiVersion: federation.maistra.io/v1 metadata: name: red-mesh namespace: green-mesh-system spec: importRules: - type: NameSelector importAsLocal: false nameSelector: namespace: west-data-center name: \"*\"",
"oc login --username=<NAMEOFUSER> <API token> https://<HOSTNAME>:6443",
"oc project green-mesh-system",
"kind: ImportedServiceSet apiVersion: federation.maistra.io/v1 metadata: name: red-mesh namespace: green-mesh-system spec: importRules: - type: NameSelector importAsLocal: false nameSelector: namespace: bookinfo name: red-ratings alias: namespace: bookinfo name: ratings",
"oc create -n <ControlPlaneNamespace> -f <ImportedServiceSet.yaml>",
"oc create -n green-mesh-system -f import-from-red-mesh.yaml",
"oc get importedserviceset <PeerMeshImportedInto> -o yaml",
"oc -n green-mesh-system get importedserviceset/red-mesh -o yaml",
"status: importedServices: - exportedName: red-ratings.bookinfo.svc.green-mesh-exports.local localService: hostname: ratings.bookinfo.svc.red-mesh-imports.local name: ratings namespace: bookinfo - exportedName: reviews.red-mesh-bookinfo.svc.green-mesh-exports.local localService: hostname: \"\" name: \"\" namespace: \"\"",
"kind: ImportedServiceSet apiVersion: federation.maistra.io/v1 metadata: name: red-mesh #name of mesh that exported the service namespace: green-mesh-system #mesh namespace that service is being imported into spec: importRules: # first matching rule is used # import ratings.bookinfo as ratings.bookinfo - type: NameSelector importAsLocal: true nameSelector: namespace: bookinfo name: ratings alias: # service will be imported as ratings.bookinfo.svc.red-mesh-imports.local namespace: bookinfo name: ratings #Locality within which imported services should be associated. locality: region: us-west",
"oc login --username=<NAMEOFUSER> <API token> https://<HOSTNAME>:6443",
"oc project <smcp-system>",
"oc project green-mesh-system",
"oc edit -n <smcp-system> -f <ImportedServiceSet.yaml>",
"oc edit -n green-mesh-system -f import-from-red-mesh.yaml",
"oc login --username=<NAMEOFUSER> <API token> https://<HOSTNAME>:6443",
"oc project <smcp-system>",
"oc project green-mesh-system",
"apiVersion: networking.istio.io/v1beta1 kind: DestinationRule metadata: name: default-failover namespace: bookinfo spec: host: \"ratings.bookinfo.svc.cluster.local\" trafficPolicy: loadBalancer: localityLbSetting: enabled: true failover: - from: us-east to: us-west outlierDetection: consecutive5xxErrors: 3 interval: 10s baseEjectionTime: 1m",
"oc create -n <application namespace> -f <DestinationRule.yaml>",
"oc create -n bookinfo -f green-mesh-us-west-DestinationRule.yaml",
"apiVersion: extensions.istio.io/v1alpha1 kind: WasmPlugin metadata: name: openid-connect namespace: istio-ingress spec: selector: matchLabels: istio: ingressgateway url: file:///opt/filters/openid.wasm sha256: 1ef0c9a92b0420cf25f7fe5d481b231464bc88f486ca3b9c83ed5cc21d2f6210 phase: AUTHN pluginConfig: openid_server: authn openid_realm: ingress",
"apiVersion: extensions.istio.io/v1alpha1 kind: WasmPlugin metadata: name: openid-connect namespace: istio-system spec: selector: matchLabels: istio: ingressgateway url: oci://private-registry:5000/openid-connect/openid:latest imagePullPolicy: IfNotPresent imagePullSecret: private-registry-pull-secret phase: AUTHN pluginConfig: openid_server: authn openid_realm: ingress",
"apiVersion: extensions.istio.io/v1alpha1 kind: WasmPlugin metadata: name: openid-connect namespace: istio-system spec: selector: matchLabels: istio: ingressgateway url: oci://private-registry:5000/openid-connect/openid:latest imagePullPolicy: IfNotPresent imagePullSecret: private-registry-pull-secret phase: AUTHN pluginConfig: openid_server: authn openid_realm: ingress",
"oc apply -f plugin.yaml",
"schemaVersion: 1 name: <your-extension> description: <description> version: 1.0.0 phase: PreAuthZ priority: 100 module: extension.wasm",
"apiVersion: maistra.io/v1 kind: ServiceMeshExtension metadata: name: header-append namespace: istio-system spec: workloadSelector: labels: app: httpbin config: first-header: some-value another-header: another-value image: quay.io/maistra-dev/header-append-filter:2.1 phase: PostAuthZ priority: 100",
"oc apply -f <extension>.yaml",
"apiVersion: maistra.io/v1 kind: ServiceMeshExtension metadata: name: header-append namespace: istio-system spec: workloadSelector: labels: app: httpbin config: first-header: some-value another-header: another-value image: quay.io/maistra-dev/header-append-filter:2.2 phase: PostAuthZ priority: 100",
"apiVersion: extensions.istio.io/v1alpha1 kind: WasmPlugin metadata: name: header-append namespace: istio-system spec: selector: matchLabels: app: httpbin url: oci://quay.io/maistra-dev/header-append-filter:2.2 phase: STATS pluginConfig: first-header: some-value another-header: another-value",
"cat <<EOM | oc apply -f - apiVersion: kiali.io/v1alpha1 kind: OSSMConsole metadata: namespace: openshift-operators name: ossmconsole EOM",
"delete ossmconsoles <custom_resource_name> -n <custom_resource_namespace>",
"for r in USD(oc get ossmconsoles --ignore-not-found=true --all-namespaces -o custom-columns=NS:.metadata.namespace,N:.metadata.name --no-headers | sed 's/ */:/g'); do oc delete ossmconsoles -n USD(echo USDr|cut -d: -f1) USD(echo USDr|cut -d: -f2); done",
"apiVersion: extensions.istio.io/v1alpha1 kind: WasmPlugin metadata: name: <threescale_wasm_plugin_name> namespace: <bookinfo> 1 spec: selector: 2 labels: app: <product_page> pluginConfig: <yaml_configuration> url: oci://registry.redhat.io/3scale-amp2/3scale-auth-wasm-rhel8:0.0.3 phase: AUTHZ priority: 100",
"oc apply -f threescale-wasm-auth-bookinfo.yaml",
"apiVersion: networking.istio.io/v1beta1 kind: ServiceEntry metadata: name: service-entry-threescale-saas-backend spec: hosts: - su1.3scale.net ports: - number: 443 name: https protocol: HTTPS location: MESH_EXTERNAL resolution: DNS",
"apiVersion: networking.istio.io/v1beta1 kind: DestinationRule metadata: name: destination-rule-threescale-saas-backend spec: host: su1.3scale.net trafficPolicy: tls: mode: SIMPLE sni: su1.3scale.net",
"oc apply -f service-entry-threescale-saas-backend.yml",
"oc apply -f destination-rule-threescale-saas-backend.yml",
"apiVersion: networking.istio.io/v1beta1 kind: ServiceEntry metadata: name: service-entry-threescale-saas-system spec: hosts: - multitenant.3scale.net ports: - number: 443 name: https protocol: HTTPS location: MESH_EXTERNAL resolution: DNS",
"apiVersion: networking.istio.io/v1beta1 kind: DestinationRule metadata: name: destination-rule-threescale-saas-system spec: host: multitenant.3scale.net trafficPolicy: tls: mode: SIMPLE sni: multitenant.3scale.net",
"oc apply -f service-entry-threescale-saas-system.yml",
"oc apply -f <destination-rule-threescale-saas-system.yml>",
"apiVersion: extensions.istio.io/v1alpha1 kind: WasmPlugin metadata: name: <threescale_wasm_plugin_name> namespace: <bookinfo> spec: pluginConfig: api: v1",
"apiVersion: extensions.istio.io/v1alpha1 kind: WasmPlugin metadata: name: <threescale_wasm_plugin_name> spec: pluginConfig: system: name: <saas_porta> upstream: <object> token: <my_account_token> ttl: 300",
"apiVersion: maistra.io/v1 upstream: name: outbound|443||multitenant.3scale.net url: \"https://myaccount-admin.3scale.net/\" timeout: 5000",
"apiVersion: extensions.istio.io/v1alpha1 kind: WasmPlugin metadata: name: <threescale_wasm_plugin_name> spec: pluginConfig: backend: name: backend upstream: <object>",
"apiVersion: extensions.istio.io/v1alpha1 kind: WasmPlugin metadata: name: <threescale_wasm_plugin_name> spec: pluginConfig: services: - id: \"2555417834789\" token: service_token authorities: - \"*.app\" - 0.0.0.0 - \"0.0.0.0:8443\" credentials: <object> mapping_rules: <object>",
"apiVersion: extensions.istio.io/v1alpha1 kind: WasmPlugin metadata: name: <threescale_wasm_plugin_name> spec: pluginConfig: services: - credentials: user_key: <array_of_lookup_queries> app_id: <array_of_lookup_queries> app_key: <array_of_lookup_queries>",
"apiVersion: extensions.istio.io/v1alpha1 kind: WasmPlugin metadata: name: <threescale_wasm_plugin_name> spec: pluginConfig: services: - credentials: user_key: - <source_type>: <object> - <source_type>: <object> app_id: - <source_type>: <object> app_key: - <source_type>: <object>",
"apiVersion: extensions.istio.io/v1alpha1 kind: WasmPlugin metadata: name: <threescale_wasm_plugin_name> spec: pluginConfig: mapping_rules: - method: GET pattern: / usages: - name: hits delta: 1 - method: GET pattern: /products/ usages: - name: products delta: 1 - method: ANY pattern: /products/{id}/sold usages: - name: sales delta: 1 - name: products delta: 1",
"apiVersion: extensions.istio.io/v1alpha1 kind: WasmPlugin metadata: name: <threescale_wasm_plugin_name> spec: services: credentials: user_key: - query_string: keys: - <user_key> - header: keys: - <user_key>",
"apiVersion: extensions.istio.io/v1alpha1 kind: WasmPlugin metadata: name: <threescale_wasm_plugin_name> spec: services: credentials: app_id: - query_string: keys: - <app_id> - header: keys: - <app_id> app_key: - query_string: keys: - <app_key> - header: keys: - <app_key>",
"aladdin:opensesame: Authorization: Basic YWxhZGRpbjpvcGVuc2VzYW1l",
"apiVersion: extensions.istio.io/v1alpha1 kind: WasmPlugin metadata: name: <threescale_wasm_plugin_name> spec: services: credentials: app_id: - header: keys: - authorization ops: - split: separator: \" \" max: 2 - length: min: 2 - drop: head: 1 - base64_urlsafe - split: max: 2 app_key: - header: keys: - app_key",
"apiVersion: extensions.istio.io/v1alpha1 kind: WasmPlugin metadata: name: <threescale_wasm_plugin_name> spec: services: credentials: app_id: - header: keys: - authorization ops: - split: separator: \" \" max: 2 - length: min: 2 - reverse - glob: - Basic - drop: tail: 1 - base64_urlsafe - split: max: 2 - test: if: length: min: 2 then: - strlen: max: 63 - or: - strlen: min: 1 - drop: tail: 1 - assert: - and: - reverse - or: - strlen: min: 8 - glob: - aladdin - admin",
"apiVersion: security.istio.io/v1beta1 kind: RequestAuthentication metadata: name: jwt-example namespace: bookinfo spec: selector: matchLabels: app: productpage jwtRules: - issuer: >- http://keycloak-keycloak.34.242.107.254.nip.io/auth/realms/3scale-keycloak jwksUri: >- http://keycloak-keycloak.34.242.107.254.nip.io/auth/realms/3scale-keycloak/protocol/openid-connect/certs",
"apiVersion: extensions.istio.io/v1alpha1 kind: WasmPlugin metadata: name: <threescale_wasm_plugin_name> spec: services: credentials: app_id: - filter: path: - envoy.filters.http.jwt_authn - \"0\" keys: - azp - aud ops: - take: head: 1",
"apiVersion: extensions.istio.io/v1alpha1 kind: WasmPlugin metadata: name: <threescale_wasm_plugin_name> spec: services: credentials: app_id: - header: keys: - x-jwt-payload ops: - base64_urlsafe - json: - keys: - azp - aud - take: head: 1 ,,,",
"apiVersion: extensions.istio.io/v1alpha1 kind: WasmPlugin metadata: name: <threescale_wasm_plugin_name> spec: url: oci://registry.redhat.io/3scale-amp2/3scale-auth-wasm-rhel8:0.0.3 imagePullSecret: <optional_pull_secret_resource> phase: AUTHZ priority: 100 selector: labels: app: <product_page> pluginConfig: api: v1 system: name: <system_name> upstream: name: outbound|443||multitenant.3scale.net url: https://istiodevel-admin.3scale.net/ timeout: 5000 token: <token> backend: name: <backend_name> upstream: name: outbound|443||su1.3scale.net url: https://su1.3scale.net/ timeout: 5000 extensions: - no_body services: - id: '2555417834780' authorities: - \"*\" credentials: user_key: - query_string: keys: - <user_key> - header: keys: - <user_key> app_id: - query_string: keys: - <app_id> - header: keys: - <app_id> app_key: - query_string: keys: - <app_key> - header: keys: - <app_key>",
"apiVersion: \"config.istio.io/v1alpha2\" kind: handler metadata: name: threescale spec: adapter: threescale params: system_url: \"https://<organization>-admin.3scale.net/\" access_token: \"<ACCESS_TOKEN>\" connection: address: \"threescale-istio-adapter:3333\"",
"apiVersion: \"config.istio.io/v1alpha2\" kind: rule metadata: name: threescale spec: match: destination.labels[\"service-mesh.3scale.net\"] == \"true\" actions: - handler: threescale.handler instances: - threescale-authorization.instance",
"3scale-config-gen --name=admin-credentials --url=\"https://<organization>-admin.3scale.net:443\" --token=\"[redacted]\"",
"3scale-config-gen --url=\"https://<organization>-admin.3scale.net\" --name=\"my-unique-id\" --service=\"123456789\" --token=\"[redacted]\"",
"export NS=\"istio-system\" URL=\"https://replaceme-admin.3scale.net:443\" NAME=\"name\" TOKEN=\"token\" exec -n USD{NS} USD(oc get po -n USD{NS} -o jsonpath='{.items[?(@.metadata.labels.app==\"3scale-istio-adapter\")].metadata.name}') -it -- ./3scale-config-gen --url USD{URL} --name USD{NAME} --token USD{TOKEN} -n USD{NS}",
"export CREDENTIALS_NAME=\"replace-me\" export SERVICE_ID=\"replace-me\" export DEPLOYMENT=\"replace-me\" patch=\"USD(oc get deployment \"USD{DEPLOYMENT}\" patch=\"USD(oc get deployment \"USD{DEPLOYMENT}\" --template='{\"spec\":{\"template\":{\"metadata\":{\"labels\":{ {{ range USDk,USDv := .spec.template.metadata.labels }}\"{{ USDk }}\":\"{{ USDv }}\",{{ end }}\"service-mesh.3scale.net/service-id\":\"'\"USD{SERVICE_ID}\"'\",\"service-mesh.3scale.net/credentials\":\"'\"USD{CREDENTIALS_NAME}\"'\"}}}}}' )\" patch deployment \"USD{DEPLOYMENT}\" --patch ''\"USD{patch}\"''",
"apiVersion: \"config.istio.io/v1alpha2\" kind: instance metadata: name: threescale-authorization namespace: istio-system spec: template: authorization params: subject: user: request.query_params[\"user_key\"] | request.headers[\"user-key\"] | \"\" action: path: request.url_path method: request.method | \"get\"",
"apiVersion: \"config.istio.io/v1alpha2\" kind: instance metadata: name: threescale-authorization namespace: istio-system spec: template: authorization params: subject: app_id: request.query_params[\"app_id\"] | request.headers[\"app-id\"] | \"\" app_key: request.query_params[\"app_key\"] | request.headers[\"app-key\"] | \"\" action: path: request.url_path method: request.method | \"get\"",
"apiVersion: \"config.istio.io/v1alpha2\" kind: instance metadata: name: threescale-authorization spec: template: threescale-authorization params: subject: properties: app_key: request.query_params[\"app_key\"] | request.headers[\"app-key\"] | \"\" client_id: request.auth.claims[\"azp\"] | \"\" action: path: request.url_path method: request.method | \"get\" service: destination.labels[\"service-mesh.3scale.net/service-id\"] | \"\"",
"apiVersion: security.istio.io/v1beta1 kind: RequestAuthentication metadata: name: jwt-example namespace: bookinfo spec: selector: matchLabels: app: productpage jwtRules: - issuer: >- http://keycloak-keycloak.34.242.107.254.nip.io/auth/realms/3scale-keycloak jwksUri: >- http://keycloak-keycloak.34.242.107.254.nip.io/auth/realms/3scale-keycloak/protocol/openid-connect/certs",
"apiVersion: \"config.istio.io/v1alpha2\" kind: instance metadata: name: threescale-authorization spec: template: authorization params: subject: user: request.query_params[\"user_key\"] | request.headers[\"user-key\"] | properties: app_id: request.query_params[\"app_id\"] | request.headers[\"app-id\"] | \"\" app_key: request.query_params[\"app_key\"] | request.headers[\"app-key\"] | \"\" client_id: request.auth.claims[\"azp\"] | \"\" action: path: request.url_path method: request.method | \"get\" service: destination.labels[\"service-mesh.3scale.net/service-id\"] | \"\"",
"oc get pods -n istio-system",
"oc logs istio-system",
"oc get pods -n openshift-operators",
"NAME READY STATUS RESTARTS AGE istio-operator-bb49787db-zgr87 1/1 Running 0 15s jaeger-operator-7d5c4f57d8-9xphf 1/1 Running 0 2m42s kiali-operator-f9c8d84f4-7xh2v 1/1 Running 0 64s",
"oc get pods -n openshift-operators-redhat",
"NAME READY STATUS RESTARTS AGE elasticsearch-operator-d4f59b968-796vq 1/1 Running 0 15s",
"oc logs -n openshift-operators <podName>",
"oc logs -n openshift-operators istio-operator-bb49787db-zgr87",
"oc get pods -n istio-system",
"NAME READY STATUS RESTARTS AGE grafana-6776785cfc-6fz7t 2/2 Running 0 102s istio-egressgateway-5f49dd99-l9ppq 1/1 Running 0 103s istio-ingressgateway-6dc885c48-jjd8r 1/1 Running 0 103s istiod-basic-6c9cc55998-wg4zq 1/1 Running 0 2m14s jaeger-6865d5d8bf-zrfss 2/2 Running 0 100s kiali-579799fbb7-8mwc8 1/1 Running 0 46s prometheus-5c579dfb-6qhjk 2/2 Running 0 115s",
"oc get smcp -n istio-system",
"NAME READY STATUS PROFILES VERSION AGE basic 10/10 ComponentsReady [\"default\"] 2.1.3 4m2s",
"NAME READY STATUS TEMPLATE VERSION AGE basic-install 10/10 UpdateSuccessful default v1.1 3d16h",
"oc describe smcp <smcp-name> -n <controlplane-namespace>",
"oc describe smcp basic -n istio-system",
"oc get jaeger -n istio-system",
"NAME STATUS VERSION STRATEGY STORAGE AGE jaeger Running 1.30.0 allinone memory 15m",
"oc get kiali -n istio-system",
"NAME AGE kiali 15m",
"oc login --username=<NAMEOFUSER> https://<HOSTNAME>:6443",
"oc get route -n istio-system jaeger -o jsonpath='{.spec.host}'",
"oc login --username=<NAMEOFUSER> https://<HOSTNAME>:6443",
"oc project istio-system",
"oc edit smcp <smcp_name>",
"apiVersion: maistra.io/v2 kind: ServiceMeshControlPlane metadata: name: basic namespace: istio-system spec: proxy: accessLogging: file: name: /dev/stdout #file name",
"oc adm must-gather --image=registry.redhat.io/openshift-service-mesh/istio-must-gather-rhel8:2.6",
"oc adm must-gather --image=registry.redhat.io/openshift-service-mesh/istio-must-gather-rhel8:2.6 gather <namespace>",
"oc get clusterversion -o jsonpath='{.items[].spec.clusterID}{\"\\n\"}'",
"apiVersion: maistra.io/v2 kind: ServiceMeshControlPlane metadata: name: basic spec: version: v2.6 proxy: runtime: container: resources: requests: cpu: 100m memory: 128Mi limits: cpu: 500m memory: 128Mi tracing: type: Jaeger gateways: ingress: # istio-ingressgateway service: type: ClusterIP ports: - name: status-port port: 15020 - name: http2 port: 80 targetPort: 8080 - name: https port: 443 targetPort: 8443 meshExpansionPorts: [] egress: # istio-egressgateway service: type: ClusterIP ports: - name: status-port port: 15020 - name: http2 port: 80 targetPort: 8080 - name: https port: 443 targetPort: 8443 additionalIngress: some-other-ingress-gateway: {} additionalEgress: some-other-egress-gateway: {} policy: type: Mixer mixer: # only applies if policy.type: Mixer enableChecks: true failOpen: false telemetry: type: Istiod # or Mixer mixer: # only applies if telemetry.type: Mixer, for v1 telemetry sessionAffinity: false batching: maxEntries: 100 maxTime: 1s adapters: kubernetesenv: true stdio: enabled: true outputAsJSON: true addons: grafana: enabled: true install: config: env: {} envSecrets: {} persistence: enabled: true storageClassName: \"\" accessMode: ReadWriteOnce capacity: requests: storage: 5Gi service: ingress: contextPath: /grafana tls: termination: reencrypt kiali: name: kiali enabled: true install: # install kiali CR if not present dashboard: viewOnly: false enableGrafana: true enableTracing: true enablePrometheus: true service: ingress: contextPath: /kiali jaeger: name: jaeger install: storage: type: Elasticsearch # or Memory memory: maxTraces: 100000 elasticsearch: nodeCount: 3 storage: {} redundancyPolicy: SingleRedundancy indexCleaner: {} ingress: {} # jaeger ingress configuration runtime: components: pilot: deployment: replicas: 2 pod: affinity: {} container: resources: requests: cpu: 100m memory: 128Mi limits: cpu: 500m memory: 128Mi grafana: deployment: {} pod: {} kiali: deployment: {} pod: {}",
"apiVersion: maistra.io/v2 kind: ServiceMeshControlPlane metadata: name: basic spec: general: logging: componentLevels: {} # misc: error logAsJSON: false validationMessages: true",
"logging:",
"logging: componentLevels:",
"logging: logAsJSON:",
"validationMessages:",
"apiVersion: maistra.io/v2 kind: ServiceMeshControlPlane metadata: name: basic spec: profiles: - YourProfileName",
"apiVersion: maistra.io/v2 kind: ServiceMeshControlPlane metadata: name: basic spec: version: v2.6 tracing: sampling: 100 type: Jaeger",
"tracing: sampling:",
"tracing: type:",
"apiVersion: maistra.io/v2 kind: ServiceMeshControlPlane metadata: name: basic spec: addons: 3Scale: enabled: false PARAM_THREESCALE_LISTEN_ADDR: 3333 PARAM_THREESCALE_LOG_LEVEL: info PARAM_THREESCALE_LOG_JSON: true PARAM_THREESCALE_LOG_GRPC: false PARAM_THREESCALE_REPORT_METRICS: true PARAM_THREESCALE_METRICS_PORT: 8080 PARAM_THREESCALE_CACHE_TTL_SECONDS: 300 PARAM_THREESCALE_CACHE_REFRESH_SECONDS: 180 PARAM_THREESCALE_CACHE_ENTRIES_MAX: 1000 PARAM_THREESCALE_CACHE_REFRESH_RETRIES: 1 PARAM_THREESCALE_ALLOW_INSECURE_CONN: false PARAM_THREESCALE_CLIENT_TIMEOUT_SECONDS: 10 PARAM_THREESCALE_GRPC_CONN_MAX_SECONDS: 60 PARAM_USE_CACHED_BACKEND: false PARAM_BACKEND_CACHE_FLUSH_INTERVAL_SECONDS: 15 PARAM_BACKEND_CACHE_POLICY_FAIL_CLOSED: true",
"apiVersion: maistra.io/v2 kind: ServiceMeshControlPlane metadata: name: basic spec: addons: kiali: name: kiali enabled: true install: dashboard: viewOnly: false enableGrafana: true enableTracing: true enablePrometheus: true service: ingress: contextPath: /kiali",
"spec: addons: kiali: name:",
"kiali: enabled:",
"kiali: install:",
"kiali: install: dashboard:",
"kiali: install: dashboard: viewOnly:",
"kiali: install: dashboard: enableGrafana:",
"kiali: install: dashboard: enablePrometheus:",
"kiali: install: dashboard: enableTracing:",
"kiali: install: service:",
"kiali: install: service: metadata:",
"kiali: install: service: metadata: annotations:",
"kiali: install: service: metadata: labels:",
"kiali: install: service: ingress:",
"kiali: install: service: ingress: metadata: annotations:",
"kiali: install: service: ingress: metadata: labels:",
"kiali: install: service: ingress: enabled:",
"kiali: install: service: ingress: contextPath:",
"install: service: ingress: hosts:",
"install: service: ingress: tls:",
"kiali: install: service: nodePort:",
"apiVersion: maistra.io/v2 kind: ServiceMeshControlPlane metadata: name: basic spec: version: v2.6 tracing: sampling: 100 type: Jaeger",
"apiVersion: maistra.io/v2 kind: ServiceMeshControlPlane metadata: name: basic spec: version: v2.6 tracing: sampling: 10000 type: Jaeger addons: jaeger: name: jaeger install: storage: type: Memory",
"apiVersion: maistra.io/v2 kind: ServiceMeshControlPlane metadata: name: basic spec: version: v2.6 tracing: sampling: 10000 type: Jaeger addons: jaeger: name: jaeger #name of Jaeger CR install: storage: type: Elasticsearch ingress: enabled: true runtime: components: tracing.jaeger.elasticsearch: # only supports resources and image name container: resources: {}",
"apiVersion: maistra.io/v2 kind: ServiceMeshControlPlane metadata: name: basic spec: version: v2.6 tracing: sampling: 1000 type: Jaeger addons: jaeger: name: MyJaegerInstance #name of Jaeger CR install: storage: type: Elasticsearch ingress: enabled: true",
"apiVersion: maistra.io/v2 kind: ServiceMeshControlPlane metadata: name: basic spec: version: v2.6 tracing: sampling: 1000 type: Jaeger addons: jaeger: name: MyJaegerInstance #name of Jaeger CR",
"apiVersion: jaegertracing.io/v1 kind: Jaeger spec: ingress: enabled: true openshift: htpasswdFile: /etc/proxy/htpasswd/auth sar: '{\"namespace\": \"istio-system\", \"resource\": \"pods\", \"verb\": \"get\"}' options: {} resources: {} security: oauth-proxy volumes: - name: secret-htpasswd secret: secretName: htpasswd - configMap: defaultMode: 420 items: - key: ca-bundle.crt path: tls-ca-bundle.pem name: trusted-ca-bundle optional: true name: trusted-ca-bundle volumeMounts: - mountPath: /etc/proxy/htpasswd name: secret-htpasswd - mountPath: /etc/pki/ca-trust/extracted/pem/ name: trusted-ca-bundle readOnly: true",
"oc login https://<HOSTNAME>:6443",
"oc project istio-system",
"oc edit -n openshift-distributed-tracing -f jaeger.yaml",
"apiVersion: jaegertracing.io/v1 kind: Jaeger spec: ingress: enabled: true openshift: htpasswdFile: /etc/proxy/htpasswd/auth sar: '{\"namespace\": \"istio-system\", \"resource\": \"pods\", \"verb\": \"get\"}' options: {} resources: {} security: oauth-proxy volumes: - name: secret-htpasswd secret: secretName: htpasswd - configMap: defaultMode: 420 items: - key: ca-bundle.crt path: tls-ca-bundle.pem name: trusted-ca-bundle optional: true name: trusted-ca-bundle volumeMounts: - mountPath: /etc/proxy/htpasswd name: secret-htpasswd - mountPath: /etc/pki/ca-trust/extracted/pem/ name: trusted-ca-bundle readOnly: true",
"oc get pods -n openshift-distributed-tracing",
"apiVersion: jaegertracing.io/v1 kind: Jaeger metadata: name: name spec: strategy: <deployment_strategy> allInOne: options: {} resources: {} agent: options: {} resources: {} collector: options: {} resources: {} sampling: options: {} storage: type: options: {} query: options: {} resources: {} ingester: options: {} resources: {} options: {}",
"apiVersion: jaegertracing.io/v1 kind: Jaeger metadata: name: jaeger-all-in-one-inmemory",
"collector: replicas:",
"spec: collector: options: {}",
"options: collector: num-workers:",
"options: collector: queue-size:",
"options: kafka: producer: topic: jaeger-spans",
"options: kafka: producer: brokers: my-cluster-kafka-brokers.kafka:9092",
"options: log-level:",
"options: otlp: enabled: true grpc: host-port: 4317 max-connection-age: 0s max-connection-age-grace: 0s max-message-size: 4194304 tls: enabled: false cert: /path/to/cert.crt cipher-suites: \"TLS_AES_256_GCM_SHA384,TLS_CHACHA20_POLY1305_SHA256\" client-ca: /path/to/cert.ca reload-interval: 0s min-version: 1.2 max-version: 1.3",
"options: otlp: enabled: true http: cors: allowed-headers: [<header-name>[, <header-name>]*] allowed-origins: * host-port: 4318 max-connection-age: 0s max-connection-age-grace: 0s max-message-size: 4194304 read-timeout: 0s read-header-timeout: 2s idle-timeout: 0s tls: enabled: false cert: /path/to/cert.crt cipher-suites: \"TLS_AES_256_GCM_SHA384,TLS_CHACHA20_POLY1305_SHA256\" client-ca: /path/to/cert.ca reload-interval: 0s min-version: 1.2 max-version: 1.3",
"spec: sampling: options: {} default_strategy: service_strategy:",
"default_strategy: type: service_strategy: type:",
"default_strategy: param: service_strategy: param:",
"apiVersion: jaegertracing.io/v1 kind: Jaeger metadata: name: with-sampling spec: sampling: options: default_strategy: type: probabilistic param: 0.5 service_strategies: - service: alpha type: probabilistic param: 0.8 operation_strategies: - operation: op1 type: probabilistic param: 0.2 - operation: op2 type: probabilistic param: 0.4 - service: beta type: ratelimiting param: 5",
"spec: sampling: options: default_strategy: type: probabilistic param: 1",
"spec: storage: type:",
"storage: secretname:",
"storage: options: {}",
"storage: esIndexCleaner: enabled:",
"storage: esIndexCleaner: numberOfDays:",
"storage: esIndexCleaner: schedule:",
"elasticsearch: properties: doNotProvision:",
"elasticsearch: properties: name:",
"elasticsearch: nodeCount:",
"elasticsearch: resources: requests: cpu:",
"elasticsearch: resources: requests: memory:",
"elasticsearch: resources: limits: cpu:",
"elasticsearch: resources: limits: memory:",
"elasticsearch: redundancyPolicy:",
"elasticsearch: useCertManagement:",
"apiVersion: jaegertracing.io/v1 kind: Jaeger metadata: name: simple-prod spec: strategy: production storage: type: elasticsearch elasticsearch: nodeCount: 3 resources: requests: cpu: 1 memory: 16Gi limits: memory: 16Gi",
"apiVersion: jaegertracing.io/v1 kind: Jaeger metadata: name: simple-prod spec: strategy: production storage: type: elasticsearch elasticsearch: nodeCount: 1 storage: 1 storageClassName: gp2 size: 5Gi resources: requests: cpu: 200m memory: 4Gi limits: memory: 4Gi redundancyPolicy: ZeroRedundancy",
"es: server-urls:",
"es: max-doc-count:",
"es: max-num-spans:",
"es: max-span-age:",
"es: sniffer:",
"es: sniffer-tls-enabled:",
"es: timeout:",
"es: username:",
"es: password:",
"es: version:",
"es: num-replicas:",
"es: num-shards:",
"es: create-index-templates:",
"es: index-prefix:",
"es: bulk: actions:",
"es: bulk: flush-interval:",
"es: bulk: size:",
"es: bulk: workers:",
"es: tls: ca:",
"es: tls: cert:",
"es: tls: enabled:",
"es: tls: key:",
"es: tls: server-name:",
"es: token-file:",
"es-archive: bulk: actions:",
"es-archive: bulk: flush-interval:",
"es-archive: bulk: size:",
"es-archive: bulk: workers:",
"es-archive: create-index-templates:",
"es-archive: enabled:",
"es-archive: index-prefix:",
"es-archive: max-doc-count:",
"es-archive: max-num-spans:",
"es-archive: max-span-age:",
"es-archive: num-replicas:",
"es-archive: num-shards:",
"es-archive: password:",
"es-archive: server-urls:",
"es-archive: sniffer:",
"es-archive: sniffer-tls-enabled:",
"es-archive: timeout:",
"es-archive: tls: ca:",
"es-archive: tls: cert:",
"es-archive: tls: enabled:",
"es-archive: tls: key:",
"es-archive: tls: server-name:",
"es-archive: token-file:",
"es-archive: username:",
"es-archive: version:",
"apiVersion: jaegertracing.io/v1 kind: Jaeger metadata: name: simple-prod spec: strategy: production storage: type: elasticsearch options: es: server-urls: https://quickstart-es-http.default.svc:9200 index-prefix: my-prefix tls: ca: /es/certificates/ca.crt secretName: tracing-secret volumeMounts: - name: certificates mountPath: /es/certificates/ readOnly: true volumes: - name: certificates secret: secretName: quickstart-es-http-certs-public",
"apiVersion: jaegertracing.io/v1 kind: Jaeger metadata: name: simple-prod spec: strategy: production storage: type: elasticsearch options: es: server-urls: https://quickstart-es-http.default.svc:9200 1 index-prefix: my-prefix tls: 2 ca: /es/certificates/ca.crt secretName: tracing-secret 3 volumeMounts: 4 - name: certificates mountPath: /es/certificates/ readOnly: true volumes: - name: certificates secret: secretName: quickstart-es-http-certs-public",
"apiVersion: logging.openshift.io/v1 kind: Elasticsearch metadata: annotations: logging.openshift.io/elasticsearch-cert-management: \"true\" logging.openshift.io/elasticsearch-cert.jaeger-custom-es: \"user.jaeger\" logging.openshift.io/elasticsearch-cert.curator-custom-es: \"system.logging.curator\" name: custom-es spec: managementState: Managed nodeSpec: resources: limits: memory: 16Gi requests: cpu: 1 memory: 16Gi nodes: - nodeCount: 3 proxyResources: {} resources: {} roles: - master - client - data storage: {} redundancyPolicy: ZeroRedundancy",
"apiVersion: jaegertracing.io/v1 kind: Jaeger metadata: name: jaeger-prod spec: strategy: production storage: type: elasticsearch elasticsearch: name: custom-es doNotProvision: true useCertManagement: true",
"spec: query: replicas:",
"spec: query: options: {}",
"options: log-level:",
"options: query: base-path:",
"apiVersion: jaegertracing.io/v1 kind: \"Jaeger\" metadata: name: \"my-jaeger\" spec: strategy: allInOne allInOne: options: log-level: debug query: base-path: /jaeger",
"spec: ingester: options: {}",
"options: deadlockInterval:",
"options: kafka: consumer: topic:",
"options: kafka: consumer: brokers:",
"options: log-level:",
"apiVersion: jaegertracing.io/v1 kind: Jaeger metadata: name: simple-streaming spec: strategy: streaming collector: options: kafka: producer: topic: jaeger-spans brokers: my-cluster-kafka-brokers.kafka:9092 ingester: options: kafka: consumer: topic: jaeger-spans brokers: my-cluster-kafka-brokers.kafka:9092 ingester: deadlockInterval: 5 storage: type: elasticsearch options: es: server-urls: http://elasticsearch:9200",
"oc delete smmr -n istio-system default",
"oc get smcp -n istio-system",
"oc delete smcp -n istio-system <name_of_custom_resource>",
"oc -n openshift-operators delete ds -lmaistra-version",
"oc delete clusterrole/istio-admin clusterrole/istio-cni clusterrolebinding/istio-cni clusterrole/ossm-cni clusterrolebinding/ossm-cni",
"oc delete clusterrole istio-view istio-edit",
"oc delete clusterrole jaegers.jaegertracing.io-v1-admin jaegers.jaegertracing.io-v1-crdview jaegers.jaegertracing.io-v1-edit jaegers.jaegertracing.io-v1-view",
"oc get crds -o name | grep '.*\\.istio\\.io' | xargs -r -n 1 oc delete",
"oc get crds -o name | grep '.*\\.maistra\\.io' | xargs -r -n 1 oc delete",
"oc get crds -o name | grep '.*\\.kiali\\.io' | xargs -r -n 1 oc delete",
"oc delete crds jaegers.jaegertracing.io",
"oc delete cm -n openshift-operators -lmaistra-version",
"oc delete sa -n openshift-operators -lmaistra-version",
"oc adm must-gather --image=registry.redhat.io/container-native-virtualization/cnv-must-gather-rhel9:v4.15.9",
"oc adm must-gather -- /usr/bin/gather_audit_logs",
"NAMESPACE NAME READY STATUS RESTARTS AGE openshift-must-gather-5drcj must-gather-bklx4 2/2 Running 0 72s openshift-must-gather-5drcj must-gather-s8sdh 2/2 Running 0 72s",
"oc adm must-gather --run-namespace <namespace> --image=registry.redhat.io/container-native-virtualization/cnv-must-gather-rhel9:v4.15.9",
"oc adm must-gather --image=registry.redhat.io/openshift-service-mesh/istio-must-gather-rhel8:2.6",
"oc adm must-gather --image=registry.redhat.io/openshift-service-mesh/istio-must-gather-rhel8:2.6 gather <namespace>",
"apiVersion: security.istio.io/v1beta1 kind: AuthorizationPolicy metadata: name: httpbin namespace: foo spec: action: DENY rules: - from: - source: namespaces: [\"dev\"] to: - operation: hosts: [\"httpbin.com\",\"httpbin.com:*\"]",
"apiVersion: security.istio.io/v1beta1 kind: AuthorizationPolicy metadata: name: httpbin namespace: default spec: action: DENY rules: - to: - operation: hosts: [\"httpbin.example.com:*\"]",
"spec: global: pathNormalization: <option>",
"{ \"runtime\": { \"symlink_root\": \"/var/lib/istio/envoy/runtime\" } }",
"oc create secret generic -n <SMCPnamespace> gateway-bootstrap --from-file=bootstrap-override.json",
"apiVersion: maistra.io/v1 kind: ServiceMeshControlPlane spec: istio: gateways: istio-ingressgateway: env: ISTIO_BOOTSTRAP_OVERRIDE: /var/lib/istio/envoy/custom-bootstrap/bootstrap-override.json secretVolumes: - mountPath: /var/lib/istio/envoy/custom-bootstrap name: custom-bootstrap secretName: gateway-bootstrap",
"oc create secret generic -n <SMCPnamespace> gateway-settings --from-literal=overload.global_downstream_max_connections=10000",
"apiVersion: maistra.io/v1 kind: ServiceMeshControlPlane spec: template: default #Change the version to \"v1.0\" if you are on the 1.0 stream. version: v1.1 istio: gateways: istio-ingressgateway: env: ISTIO_BOOTSTRAP_OVERRIDE: /var/lib/istio/envoy/custom-bootstrap/bootstrap-override.json secretVolumes: - mountPath: /var/lib/istio/envoy/custom-bootstrap name: custom-bootstrap secretName: gateway-bootstrap # below is the new secret mount - mountPath: /var/lib/istio/envoy/runtime name: gateway-settings secretName: gateway-settings",
"oc get jaeger -n istio-system",
"NAME AGE jaeger 3d21h",
"oc get jaeger jaeger -oyaml -n istio-system > /tmp/jaeger-cr.yaml",
"oc delete jaeger jaeger -n istio-system",
"oc create -f /tmp/jaeger-cr.yaml -n istio-system",
"rm /tmp/jaeger-cr.yaml",
"oc delete -f <jaeger-cr-file>",
"oc delete -f jaeger-prod-elasticsearch.yaml",
"oc create -f <jaeger-cr-file>",
"oc get pods -n jaeger-system -w",
"spec: version: v1.1",
"apiVersion: \"rbac.istio.io/v1alpha1\" kind: ServiceRoleBinding metadata: name: httpbin-client-binding namespace: httpbin spec: subjects: - user: \"cluster.local/ns/istio-system/sa/istio-ingressgateway-service-account\" properties: request.headers[<header>]: \"value\"",
"apiVersion: \"rbac.istio.io/v1alpha1\" kind: ServiceRoleBinding metadata: name: httpbin-client-binding namespace: httpbin spec: subjects: - user: \"cluster.local/ns/istio-system/sa/istio-ingressgateway-service-account\" properties: request.regex.headers[<header>]: \"<regular expression>\"",
"oc login --username=<NAMEOFUSER> https://<HOSTNAME>:6443",
"oc new-project istio-system",
"oc create -n istio-system -f istio-installation.yaml",
"oc get smcp -n istio-system",
"NAME READY STATUS PROFILES VERSION AGE basic-install 11/11 ComponentsReady [\"default\"] v1.1.18 4m25s",
"oc get pods -n istio-system -w",
"NAME READY STATUS RESTARTS AGE grafana-7bf5764d9d-2b2f6 2/2 Running 0 28h istio-citadel-576b9c5bbd-z84z4 1/1 Running 0 28h istio-egressgateway-5476bc4656-r4zdv 1/1 Running 0 28h istio-galley-7d57b47bb7-lqdxv 1/1 Running 0 28h istio-ingressgateway-dbb8f7f46-ct6n5 1/1 Running 0 28h istio-pilot-546bf69578-ccg5x 2/2 Running 0 28h istio-policy-77fd498655-7pvjw 2/2 Running 0 28h istio-sidecar-injector-df45bd899-ctxdt 1/1 Running 0 28h istio-telemetry-66f697d6d5-cj28l 2/2 Running 0 28h jaeger-896945cbc-7lqrr 2/2 Running 0 11h kiali-78d9c5b87c-snjzh 1/1 Running 0 22h prometheus-6dff867c97-gr2n5 2/2 Running 0 28h",
"oc login --username=<NAMEOFUSER> https://<HOSTNAME>:6443",
"oc new-project <your-project>",
"apiVersion: maistra.io/v1 kind: ServiceMeshMemberRoll metadata: name: default namespace: istio-system spec: members: # a list of projects joined into the service mesh - your-project-name - another-project-name",
"oc create -n istio-system -f servicemeshmemberroll-default.yaml",
"oc get smmr -n istio-system default",
"oc edit smmr -n <controlplane-namespace>",
"apiVersion: maistra.io/v1 kind: ServiceMeshMemberRoll metadata: name: default namespace: istio-system #control plane project spec: members: # a list of projects joined into the service mesh - your-project-name - another-project-name",
"oc patch deployment/<deployment> -p '{\"spec\":{\"template\":{\"metadata\":{\"annotations\":{\"kubectl.kubernetes.io/restartedAt\": \"'`date -Iseconds`'\"}}}}}'",
"apiVersion: maistra.io/v1 kind: ServiceMeshControlPlane spec: istio: global: mtls: enabled: true",
"apiVersion: \"authentication.istio.io/v1alpha1\" kind: \"Policy\" metadata: name: default namespace: <NAMESPACE> spec: peers: - mtls: {}",
"apiVersion: \"networking.istio.io/v1alpha3\" kind: \"DestinationRule\" metadata: name: \"default\" namespace: <CONTROL_PLANE_NAMESPACE>> spec: host: \"*.local\" trafficPolicy: tls: mode: ISTIO_MUTUAL",
"apiVersion: maistra.io/v1 kind: ServiceMeshControlPlane spec: istio: global: tls: minProtocolVersion: TLSv1_2 maxProtocolVersion: TLSv1_3",
"oc create secret generic cacerts -n istio-system --from-file=<path>/ca-cert.pem --from-file=<path>/ca-key.pem --from-file=<path>/root-cert.pem --from-file=<path>/cert-chain.pem",
"apiVersion: maistra.io/v1 kind: ServiceMeshControlPlane spec: istio: global: mtls: enabled: true security: selfSigned: false",
"oc delete secret istio.default",
"RATINGSPOD=`oc get pods -l app=ratings -o jsonpath='{.items[0].metadata.name}'`",
"oc exec -it USDRATINGSPOD -c istio-proxy -- /bin/cat /etc/certs/root-cert.pem > /tmp/pod-root-cert.pem",
"oc exec -it USDRATINGSPOD -c istio-proxy -- /bin/cat /etc/certs/cert-chain.pem > /tmp/pod-cert-chain.pem",
"openssl x509 -in <path>/root-cert.pem -text -noout > /tmp/root-cert.crt.txt",
"openssl x509 -in /tmp/pod-root-cert.pem -text -noout > /tmp/pod-root-cert.crt.txt",
"diff /tmp/root-cert.crt.txt /tmp/pod-root-cert.crt.txt",
"sed '0,/^-----END CERTIFICATE-----/d' /tmp/pod-cert-chain.pem > /tmp/pod-cert-chain-ca.pem",
"openssl x509 -in <path>/ca-cert.pem -text -noout > /tmp/ca-cert.crt.txt",
"openssl x509 -in /tmp/pod-cert-chain-ca.pem -text -noout > /tmp/pod-cert-chain-ca.crt.txt",
"diff /tmp/ca-cert.crt.txt /tmp/pod-cert-chain-ca.crt.txt",
"head -n 21 /tmp/pod-cert-chain.pem > /tmp/pod-cert-chain-workload.pem",
"openssl verify -CAfile <(cat <path>/ca-cert.pem <path>/root-cert.pem) /tmp/pod-cert-chain-workload.pem",
"/tmp/pod-cert-chain-workload.pem: OK",
"oc delete secret cacerts -n istio-system",
"apiVersion: maistra.io/v1 kind: ServiceMeshControlPlane spec: istio: global: mtls: enabled: true security: selfSigned: true",
"apiVersion: networking.istio.io/v1alpha3 kind: Gateway metadata: name: ext-host-gwy spec: selector: istio: ingressgateway # use istio default controller servers: - port: number: 443 name: https protocol: HTTPS hosts: - ext-host.example.com tls: mode: SIMPLE serverCertificate: /tmp/tls.crt privateKey: /tmp/tls.key",
"apiVersion: networking.istio.io/v1alpha3 kind: VirtualService metadata: name: virtual-svc spec: hosts: - ext-host.example.com gateways: - ext-host-gwy",
"apiVersion: networking.istio.io/v1alpha3 kind: Gateway metadata: name: bookinfo-gateway spec: selector: istio: ingressgateway servers: - port: number: 80 name: http protocol: HTTP hosts: - \"*\"",
"oc apply -f gateway.yaml",
"apiVersion: networking.istio.io/v1alpha3 kind: VirtualService metadata: name: bookinfo spec: hosts: - \"*\" gateways: - bookinfo-gateway http: - match: - uri: exact: /productpage - uri: prefix: /static - uri: exact: /login - uri: exact: /logout - uri: prefix: /api/v1/products route: - destination: host: productpage port: number: 9080",
"oc apply -f vs.yaml",
"export GATEWAY_URL=USD(oc -n istio-system get route istio-ingressgateway -o jsonpath='{.spec.host}')",
"export TARGET_PORT=USD(oc -n istio-system get route istio-ingressgateway -o jsonpath='{.spec.port.targetPort}')",
"curl -s -I \"USDGATEWAY_URL/productpage\"",
"oc get svc istio-ingressgateway -n istio-system",
"export INGRESS_HOST=USD(oc -n istio-system get service istio-ingressgateway -o jsonpath='{.status.loadBalancer.ingress[0].ip}')",
"export INGRESS_PORT=USD(oc -n istio-system get service istio-ingressgateway -o jsonpath='{.spec.ports[?(@.name==\"http2\")].port}')",
"export SECURE_INGRESS_PORT=USD(oc -n istio-system get service istio-ingressgateway -o jsonpath='{.spec.ports[?(@.name==\"https\")].port}')",
"export TCP_INGRESS_PORT=USD(kubectl -n istio-system get service istio-ingressgateway -o jsonpath='{.spec.ports[?(@.name==\"tcp\")].port}')",
"export INGRESS_HOST=USD(oc -n istio-system get service istio-ingressgateway -o jsonpath='{.status.loadBalancer.ingress[0].hostname}')",
"export INGRESS_PORT=USD(oc -n istio-system get service istio-ingressgateway -o jsonpath='{.spec.ports[?(@.name==\"http2\")].nodePort}')",
"export SECURE_INGRESS_PORT=USD(oc -n istio-system get service istio-ingressgateway -o jsonpath='{.spec.ports[?(@.name==\"https\")].nodePort}')",
"export TCP_INGRESS_PORT=USD(kubectl -n istio-system get service istio-ingressgateway -o jsonpath='{.spec.ports[?(@.name==\"tcp\")].nodePort}')",
"spec: istio: gateways: istio-egressgateway: autoscaleEnabled: false autoscaleMin: 1 autoscaleMax: 5 istio-ingressgateway: autoscaleEnabled: false autoscaleMin: 1 autoscaleMax: 5 ior_enabled: true",
"apiVersion: networking.istio.io/v1alpha3 kind: Gateway metadata: name: gateway1 spec: selector: istio: ingressgateway servers: - port: number: 80 name: http protocol: HTTP hosts: - www.bookinfo.com - bookinfo.example.com",
"oc -n <control_plane_namespace> get routes",
"NAME HOST/PORT PATH SERVICES PORT TERMINATION WILDCARD gateway1-lvlfn bookinfo.example.com istio-ingressgateway <all> None gateway1-scqhv www.bookinfo.com istio-ingressgateway <all> None",
"apiVersion: networking.istio.io/v1alpha3 kind: ServiceEntry metadata: name: svc-entry spec: hosts: - ext-svc.example.com ports: - number: 443 name: https protocol: HTTPS location: MESH_EXTERNAL resolution: DNS",
"apiVersion: networking.istio.io/v1alpha3 kind: DestinationRule metadata: name: ext-res-dr spec: host: ext-svc.example.com trafficPolicy: tls: mode: MUTUAL clientCertificate: /etc/certs/myclientcert.pem privateKey: /etc/certs/client_private_key.pem caCertificates: /etc/certs/rootcacerts.pem",
"apiVersion: networking.istio.io/v1alpha3 kind: VirtualService metadata: name: reviews spec: hosts: - reviews http: - match: - headers: end-user: exact: jason route: - destination: host: reviews subset: v2 - route: - destination: host: reviews subset: v3",
"oc apply -f <VirtualService.yaml>",
"spec: hosts:",
"spec: http: - match:",
"spec: http: - match: - destination:",
"apiVersion: networking.istio.io/v1alpha3 kind: DestinationRule metadata: name: my-destination-rule spec: host: my-svc trafficPolicy: loadBalancer: simple: RANDOM subsets: - name: v1 labels: version: v1 - name: v2 labels: version: v2 trafficPolicy: loadBalancer: simple: ROUND_ROBIN - name: v3 labels: version: v3",
"oc apply -f https://raw.githubusercontent.com/Maistra/istio/maistra-2.6/samples/bookinfo/networking/virtual-service-all-v1.yaml",
"oc get virtualservices -o yaml",
"export GATEWAY_URL=USD(oc -n istio-system get route istio-ingressgateway -o jsonpath='{.spec.host}')",
"echo \"http://USDGATEWAY_URL/productpage\"",
"oc apply -f https://raw.githubusercontent.com/Maistra/istio/maistra-2.6/samples/bookinfo/networking/virtual-service-reviews-test-v2.yaml",
"oc get virtualservice reviews -o yaml",
"oc create configmap --from-file=<templates-directory> smcp-templates -n openshift-operators",
"oc get clusterserviceversion -n openshift-operators | grep 'Service Mesh'",
"maistra.v1.0.0 Red Hat OpenShift Service Mesh 1.0.0 Succeeded",
"oc edit clusterserviceversion -n openshift-operators maistra.v1.0.0",
"deployments: - name: istio-operator spec: template: spec: containers: volumeMounts: - name: discovery-cache mountPath: /home/istio-operator/.kube/cache/discovery - name: smcp-templates mountPath: /usr/local/share/istio-operator/templates/ volumes: - name: discovery-cache emptyDir: medium: Memory - name: smcp-templates configMap: name: smcp-templates",
"apiVersion: maistra.io/v1 kind: ServiceMeshControlPlane metadata: name: minimal-install spec: template: default",
"oc get deployment -n <namespace>",
"get deployment -n bookinfo ratings-v1 -o yaml",
"apiVersion: apps/v1 kind: Deployment metadata: name: ratings-v1 namespace: bookinfo labels: app: ratings version: v1 spec: template: metadata: labels: sidecar.istio.io/inject: 'true'",
"oc apply -n <namespace> -f deployment.yaml",
"oc apply -n bookinfo -f deployment-ratings-v1.yaml",
"oc get deployment -n <namespace> <deploymentName> -o yaml",
"oc get deployment -n bookinfo ratings-v1 -o yaml",
"apiVersion: apps/v1 kind: Deployment metadata: name: resource spec: replicas: 7 selector: matchLabels: app: resource template: metadata: annotations: sidecar.maistra.io/proxyEnv: \"{ \\\"maistra_test_env\\\": \\\"env_value\\\", \\\"maistra_test_env_2\\\": \\\"env_value_2\\\" }\"",
"oc get cm -n istio-system istio -o jsonpath='{.data.mesh}' | grep disablePolicyChecks",
"oc edit cm -n istio-system istio",
"oc new-project bookinfo",
"apiVersion: maistra.io/v1 kind: ServiceMeshMemberRoll metadata: name: default spec: members: - bookinfo",
"oc create -n istio-system -f servicemeshmemberroll-default.yaml",
"oc get smmr -n istio-system -o wide",
"NAME READY STATUS AGE MEMBERS default 1/1 Configured 70s [\"bookinfo\"]",
"oc apply -n bookinfo -f https://raw.githubusercontent.com/Maistra/istio/maistra-2.6/samples/bookinfo/platform/kube/bookinfo.yaml",
"service/details created serviceaccount/bookinfo-details created deployment.apps/details-v1 created service/ratings created serviceaccount/bookinfo-ratings created deployment.apps/ratings-v1 created service/reviews created serviceaccount/bookinfo-reviews created deployment.apps/reviews-v1 created deployment.apps/reviews-v2 created deployment.apps/reviews-v3 created service/productpage created serviceaccount/bookinfo-productpage created deployment.apps/productpage-v1 created",
"oc apply -n bookinfo -f https://raw.githubusercontent.com/Maistra/istio/maistra-2.6/samples/bookinfo/networking/bookinfo-gateway.yaml",
"gateway.networking.istio.io/bookinfo-gateway created virtualservice.networking.istio.io/bookinfo created",
"export GATEWAY_URL=USD(oc -n istio-system get route istio-ingressgateway -o jsonpath='{.spec.host}')",
"oc apply -n bookinfo -f https://raw.githubusercontent.com/Maistra/istio/maistra-2.6/samples/bookinfo/networking/destination-rule-all.yaml",
"oc apply -n bookinfo -f https://raw.githubusercontent.com/Maistra/istio/maistra-2.6/samples/bookinfo/networking/destination-rule-all-mtls.yaml",
"destinationrule.networking.istio.io/productpage created destinationrule.networking.istio.io/reviews created destinationrule.networking.istio.io/ratings created destinationrule.networking.istio.io/details created",
"oc get pods -n bookinfo",
"NAME READY STATUS RESTARTS AGE details-v1-55b869668-jh7hb 2/2 Running 0 12m productpage-v1-6fc77ff794-nsl8r 2/2 Running 0 12m ratings-v1-7d7d8d8b56-55scn 2/2 Running 0 12m reviews-v1-868597db96-bdxgq 2/2 Running 0 12m reviews-v2-5b64f47978-cvssp 2/2 Running 0 12m reviews-v3-6dfd49b55b-vcwpf 2/2 Running 0 12m",
"echo \"http://USDGATEWAY_URL/productpage\"",
"oc delete project bookinfo",
"oc -n istio-system patch --type='json' smmr default -p '[{\"op\": \"remove\", \"path\": \"/spec/members\", \"value\":[\"'\"bookinfo\"'\"]}]'",
"curl \"http://USDGATEWAY_URL/productpage\"",
"export JAEGER_URL=USD(oc get route -n istio-system jaeger -o jsonpath='{.spec.host}')",
"echo USDJAEGER_URL",
"curl \"http://USDGATEWAY_URL/productpage\"",
"apiVersion: maistra.io/v1 kind: ServiceMeshControlPlane metadata: name: basic-install spec: istio: global: proxy: resources: requests: cpu: 100m memory: 128Mi limits: cpu: 500m memory: 128Mi gateways: istio-egressgateway: autoscaleEnabled: false istio-ingressgateway: autoscaleEnabled: false ior_enabled: false mixer: policy: autoscaleEnabled: false telemetry: autoscaleEnabled: false resources: requests: cpu: 100m memory: 1G limits: cpu: 500m memory: 4G pilot: autoscaleEnabled: false traceSampling: 100 kiali: enabled: true grafana: enabled: true tracing: enabled: true jaeger: template: all-in-one",
"istio: global: tag: 1.1.0 hub: registry.redhat.io/openshift-service-mesh/ proxy: resources: requests: cpu: 10m memory: 128Mi limits: mtls: enabled: false disablePolicyChecks: true policyCheckFailOpen: false imagePullSecrets: - MyPullSecret",
"gateways: egress: enabled: true runtime: deployment: autoScaling: enabled: true maxReplicas: 5 minReplicas: 1 enabled: true ingress: enabled: true runtime: deployment: autoScaling: enabled: true maxReplicas: 5 minReplicas: 1",
"mixer: enabled: true policy: autoscaleEnabled: false telemetry: autoscaleEnabled: false resources: requests: cpu: 10m memory: 128Mi limits:",
"spec: runtime: components: pilot: deployment: autoScaling: enabled: true minReplicas: 1 maxReplicas: 5 targetCPUUtilizationPercentage: 85 pod: tolerations: - key: node.kubernetes.io/unreachable operator: Exists effect: NoExecute tolerationSeconds: 60 affinity: podAntiAffinity: requiredDuringScheduling: - key: istio topologyKey: kubernetes.io/hostname operator: In values: - pilot container: resources: limits: cpu: 100m memory: 128M",
"apiVersion: maistra.io/v1 kind: ServiceMeshControlPlane spec: kiali: enabled: true dashboard: viewOnlyMode: false ingress: enabled: true",
"enabled",
"dashboard viewOnlyMode",
"ingress enabled",
"spec: kiali: enabled: true dashboard: viewOnlyMode: false grafanaURL: \"https://grafana-istio-system.127.0.0.1.nip.io\" ingress: enabled: true",
"spec: kiali: enabled: true dashboard: viewOnlyMode: false jaegerURL: \"http://jaeger-query-istio-system.127.0.0.1.nip.io\" ingress: enabled: true",
"apiVersion: maistra.io/v1 kind: ServiceMeshControlPlane spec: version: v1.1 istio: tracing: enabled: true jaeger: template: all-in-one",
"tracing: enabled:",
"jaeger: template:",
"apiVersion: maistra.io/v1 kind: ServiceMeshControlPlane spec: istio: tracing: enabled: true ingress: enabled: true jaeger: template: production-elasticsearch elasticsearch: nodeCount: 3 redundancyPolicy: resources: requests: cpu: \"1\" memory: \"16Gi\" limits: cpu: \"1\" memory: \"16Gi\"",
"tracing: enabled:",
"ingress: enabled:",
"jaeger: template:",
"elasticsearch: nodeCount:",
"requests: cpu:",
"requests: memory:",
"limits: cpu:",
"limits: memory:",
"oc get route -n istio-system external-jaeger",
"NAME HOST/PORT PATH SERVICES [...] external-jaeger external-jaeger-istio-system.apps.test external-jaeger-query [...]",
"apiVersion: jaegertracing.io/v1 kind: \"Jaeger\" metadata: name: \"external-jaeger\" # Deploy to the Control Plane Namespace namespace: istio-system spec: # Set Up Authentication ingress: enabled: true security: oauth-proxy openshift: # This limits user access to the Jaeger instance to users who have access # to the control plane namespace. Make sure to set the correct namespace here sar: '{\"namespace\": \"istio-system\", \"resource\": \"pods\", \"verb\": \"get\"}' htpasswdFile: /etc/proxy/htpasswd/auth volumeMounts: - name: secret-htpasswd mountPath: /etc/proxy/htpasswd volumes: - name: secret-htpasswd secret: secretName: htpasswd",
"apiVersion: maistra.io/v1 kind: ServiceMeshControlPlane metadata: name: external-jaeger namespace: istio-system spec: version: v1.1 istio: tracing: # Disable Jaeger deployment by service mesh operator enabled: false global: tracer: zipkin: # Set Endpoint for Trace Collection address: external-jaeger-collector.istio-system.svc.cluster.local:9411 kiali: # Set Jaeger dashboard URL dashboard: jaegerURL: https://external-jaeger-istio-system.apps.test # Set Endpoint for Trace Querying jaegerInClusterURL: external-jaeger-query.istio-system.svc.cluster.local",
"apiVersion: maistra.io/v1 kind: ServiceMeshControlPlane spec: istio: tracing: enabled: true ingress: enabled: true jaeger: template: production-elasticsearch elasticsearch: nodeCount: 3 redundancyPolicy: resources: requests: cpu: \"1\" memory: \"16Gi\" limits: cpu: \"1\" memory: \"16Gi\"",
"tracing: enabled:",
"ingress: enabled:",
"jaeger: template:",
"elasticsearch: nodeCount:",
"requests: cpu:",
"requests: memory:",
"limits: cpu:",
"limits: memory:",
"apiVersion: jaegertracing.io/v1 kind: Jaeger spec: strategy: production storage: type: elasticsearch esIndexCleaner: enabled: false numberOfDays: 7 schedule: \"55 23 * * *\"",
"apiVersion: maistra.io/v2 kind: ServiceMeshControlPlane metadata: name: basic spec: addons: 3Scale: enabled: false PARAM_THREESCALE_LISTEN_ADDR: 3333 PARAM_THREESCALE_LOG_LEVEL: info PARAM_THREESCALE_LOG_JSON: true PARAM_THREESCALE_LOG_GRPC: false PARAM_THREESCALE_REPORT_METRICS: true PARAM_THREESCALE_METRICS_PORT: 8080 PARAM_THREESCALE_CACHE_TTL_SECONDS: 300 PARAM_THREESCALE_CACHE_REFRESH_SECONDS: 180 PARAM_THREESCALE_CACHE_ENTRIES_MAX: 1000 PARAM_THREESCALE_CACHE_REFRESH_RETRIES: 1 PARAM_THREESCALE_ALLOW_INSECURE_CONN: false PARAM_THREESCALE_CLIENT_TIMEOUT_SECONDS: 10 PARAM_THREESCALE_GRPC_CONN_MAX_SECONDS: 60 PARAM_USE_CACHED_BACKEND: false PARAM_BACKEND_CACHE_FLUSH_INTERVAL_SECONDS: 15 PARAM_BACKEND_CACHE_POLICY_FAIL_CLOSED: true",
"apiVersion: \"config.istio.io/v1alpha2\" kind: handler metadata: name: threescale spec: adapter: threescale params: system_url: \"https://<organization>-admin.3scale.net/\" access_token: \"<ACCESS_TOKEN>\" connection: address: \"threescale-istio-adapter:3333\"",
"apiVersion: \"config.istio.io/v1alpha2\" kind: rule metadata: name: threescale spec: match: destination.labels[\"service-mesh.3scale.net\"] == \"true\" actions: - handler: threescale.handler instances: - threescale-authorization.instance",
"3scale-config-gen --name=admin-credentials --url=\"https://<organization>-admin.3scale.net:443\" --token=\"[redacted]\"",
"3scale-config-gen --url=\"https://<organization>-admin.3scale.net\" --name=\"my-unique-id\" --service=\"123456789\" --token=\"[redacted]\"",
"export NS=\"istio-system\" URL=\"https://replaceme-admin.3scale.net:443\" NAME=\"name\" TOKEN=\"token\" exec -n USD{NS} USD(oc get po -n USD{NS} -o jsonpath='{.items[?(@.metadata.labels.app==\"3scale-istio-adapter\")].metadata.name}') -it -- ./3scale-config-gen --url USD{URL} --name USD{NAME} --token USD{TOKEN} -n USD{NS}",
"export CREDENTIALS_NAME=\"replace-me\" export SERVICE_ID=\"replace-me\" export DEPLOYMENT=\"replace-me\" patch=\"USD(oc get deployment \"USD{DEPLOYMENT}\" patch=\"USD(oc get deployment \"USD{DEPLOYMENT}\" --template='{\"spec\":{\"template\":{\"metadata\":{\"labels\":{ {{ range USDk,USDv := .spec.template.metadata.labels }}\"{{ USDk }}\":\"{{ USDv }}\",{{ end }}\"service-mesh.3scale.net/service-id\":\"'\"USD{SERVICE_ID}\"'\",\"service-mesh.3scale.net/credentials\":\"'\"USD{CREDENTIALS_NAME}\"'\"}}}}}' )\" patch deployment \"USD{DEPLOYMENT}\" --patch ''\"USD{patch}\"''",
"apiVersion: \"config.istio.io/v1alpha2\" kind: instance metadata: name: threescale-authorization namespace: istio-system spec: template: authorization params: subject: user: request.query_params[\"user_key\"] | request.headers[\"user-key\"] | \"\" action: path: request.url_path method: request.method | \"get\"",
"apiVersion: \"config.istio.io/v1alpha2\" kind: instance metadata: name: threescale-authorization namespace: istio-system spec: template: authorization params: subject: app_id: request.query_params[\"app_id\"] | request.headers[\"app-id\"] | \"\" app_key: request.query_params[\"app_key\"] | request.headers[\"app-key\"] | \"\" action: path: request.url_path method: request.method | \"get\"",
"apiVersion: \"config.istio.io/v1alpha2\" kind: instance metadata: name: threescale-authorization spec: template: threescale-authorization params: subject: properties: app_key: request.query_params[\"app_key\"] | request.headers[\"app-key\"] | \"\" client_id: request.auth.claims[\"azp\"] | \"\" action: path: request.url_path method: request.method | \"get\" service: destination.labels[\"service-mesh.3scale.net/service-id\"] | \"\"",
"apiVersion: security.istio.io/v1beta1 kind: RequestAuthentication metadata: name: jwt-example namespace: bookinfo spec: selector: matchLabels: app: productpage jwtRules: - issuer: >- http://keycloak-keycloak.34.242.107.254.nip.io/auth/realms/3scale-keycloak jwksUri: >- http://keycloak-keycloak.34.242.107.254.nip.io/auth/realms/3scale-keycloak/protocol/openid-connect/certs",
"apiVersion: \"config.istio.io/v1alpha2\" kind: instance metadata: name: threescale-authorization spec: template: authorization params: subject: user: request.query_params[\"user_key\"] | request.headers[\"user-key\"] | properties: app_id: request.query_params[\"app_id\"] | request.headers[\"app-id\"] | \"\" app_key: request.query_params[\"app_key\"] | request.headers[\"app-key\"] | \"\" client_id: request.auth.claims[\"azp\"] | \"\" action: path: request.url_path method: request.method | \"get\" service: destination.labels[\"service-mesh.3scale.net/service-id\"] | \"\"",
"oc get pods -n istio-system",
"oc logs istio-system",
"oc delete smmr -n istio-system default",
"oc get smcp -n istio-system",
"oc delete smcp -n istio-system <name_of_custom_resource>",
"oc delete validatingwebhookconfiguration/openshift-operators.servicemesh-resources.maistra.io",
"oc delete mutatingwebhookconfiguration/openshift-operators.servicemesh-resources.maistra.io",
"oc delete -n openshift-operators daemonset/istio-node",
"oc delete clusterrole/istio-admin clusterrole/istio-cni clusterrolebinding/istio-cni",
"oc delete clusterrole istio-view istio-edit",
"oc delete clusterrole jaegers.jaegertracing.io-v1-admin jaegers.jaegertracing.io-v1-crdview jaegers.jaegertracing.io-v1-edit jaegers.jaegertracing.io-v1-view",
"oc get crds -o name | grep '.*\\.istio\\.io' | xargs -r -n 1 oc delete",
"oc get crds -o name | grep '.*\\.maistra\\.io' | xargs -r -n 1 oc delete",
"oc get crds -o name | grep '.*\\.kiali\\.io' | xargs -r -n 1 oc delete",
"oc delete crds jaegers.jaegertracing.io",
"oc delete svc admission-controller -n <operator-project>",
"oc delete project <istio-system-project>"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.15/html-single/service_mesh/index |
Chapter 5. Limited Availability features | Chapter 5. Limited Availability features Important This section describes Limited Availability features in Red Hat OpenShift AI 2.18. Limited Availability means that you can install and receive support for the feature only with specific approval from Red Hat. Without such approval, the feature is unsupported. This applies to all features described in this section. Tuning in OpenShift AI Tuning in OpenShift AI is available as a Limited Availability feature. The Kubeflow Training Operator and the Hugging Face Supervised Fine-tuning Trainer (SFT Trainer) enable users to fine-tune and train their models easily in a distributed environment. In this release, you can use this feature for models that are based on the PyTorch machine-learning framework. | null | https://docs.redhat.com/en/documentation/red_hat_openshift_ai_self-managed/2.18/html/release_notes/limited-availability-features_relnotes |
Preface | Preface Red Hat Quay container image registries serve as centralized hubs for storing container images. Users of Red Hat Quay can create repositories to effectively manage images and grant specific read (pull) and write (push) permissions to the repositories as deemed necessary. Administrative privileges expand these capabilities, allowing users to perform a broader set of tasks, like the ability to add users and control default settings. This guide offers an overview of Red Hat Quay's users and organizations, its tenancy model, and basic operations like creating and deleting users, organizations, and repositories, handling access, and interacting with tags. It includes both UI and API operations. Note The following API endpoints are linked to their associated entry in the Red Hat Quay API guide . The Red Hat Quay API guide provides more information about each endpoint, such as response codes and optional query parameters. | null | https://docs.redhat.com/en/documentation/red_hat_quay/3/html/use_red_hat_quay/pr01 |
Chapter 1. About JBoss EAP on Microsoft Azure | Chapter 1. About JBoss EAP on Microsoft Azure JBoss EAP 8.0 can be used with the Microsoft Azure platform, as long as you use it within the specific supported configurations for running JBoss EAP in Azure. If you are configuring a clustered JBoss EAP environment, you must apply the specific configurations necessary to use JBoss EAP clustering features in Azure. | null | https://docs.redhat.com/en/documentation/red_hat_jboss_enterprise_application_platform/8.0/html/using_red_hat_jboss_enterprise_application_platform_in_microsoft_azure/about-server-on-microsoft-azure_default |
Web console | Web console Red Hat Advanced Cluster Management for Kubernetes 2.11 Console | null | https://docs.redhat.com/en/documentation/red_hat_advanced_cluster_management_for_kubernetes/2.11/html-single/web_console/index |
Providing feedback on Red Hat documentation | Providing feedback on Red Hat documentation We appreciate your feedback on our documentation. Provide as much detail as possible so that your request can be addressed. Prerequisites You have a Red Hat account. You are logged in to your Red Hat account. Procedure To provide your feedback, click the following link: Create Issue Describe the issue or enhancement in the Summary text box. Provide more details about the issue or enhancement in the Description text box. If your Red Hat user name does not automatically appear in the Reporter text box, enter it. Scroll to the bottom of the page and then click the Create button. A documentation issue is created and routed to the appropriate documentation team. Thank you for taking the time to provide feedback. | null | https://docs.redhat.com/en/documentation/red_hat_insights/1-latest/html/integrating_the_red_hat_hybrid_cloud_console_with_third-party_applications/proc-providing-feedback-on-redhat-documentation |
5.115. ipvsadm | 5.115. ipvsadm 5.115.1. RHBA-2012:0865 - ipvsadm bug fix update Updated ipvsadm packages that fix one bug are now available for Red Hat Enterprise Linux 6. The ipvsadm package provides the ipvsadm tool to administer the IP Virtual Server services offered by the Linux kernel. Bug Fix BZ# 788529 Prior to this update, the ipvsadm utility did not correctly handle out-of-order messages from the kernel concerning the sync daemon. As a consequence, the "ipvsadm --list --daemon" command did not always output the status of the sync daemon. With this update, the ordering of messages from the kernel no longer influences the output, and the command always returns the sync daemon status. All users of ipvsadm are advised to upgrade to these updated packages, which fix this bug. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.3_technical_notes/ipvsadm |
5.322. system-config-keyboard | 5.322. system-config-keyboard 5.322.1. RHEA-2012:0852 - system-config-keyboard enhancement update Updated system-config-keyboard packages that add one enhancement are now available for Red Hat Enterprise Linux 6. The system-config-keyboard packages provide a graphical user interface that allows the user to change the default keyboard of the system. Enhancement BZ# 771389 Prior to this update, the Red Hat Enterprise Virtualization Hypervisor pulled too many dependencies from the system-config-keyboard package to support keyboard selection capability for non-US keyboards. This update adds the system-config-keyboard-base package that contains the core python libraries. All users of system-config-keyboard are advised to upgrade to these updated packages, which add this enhancement. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.3_technical_notes/system-config-keyboard |
Chapter 10. availability | Chapter 10. availability This chapter describes the commands under the availability command. 10.1. availability zone list List availability zones and their status Usage: Table 10.1. Command arguments Value Summary -h, --help Show this help message and exit --compute List compute availability zones --network List network availability zones --volume List volume availability zones --long List additional fields in output Table 10.2. Output formatter options Value Summary -f {csv,json,table,value,yaml}, --format {csv,json,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns --sort-column SORT_COLUMN Specify the column(s) to sort the data (columns specified first have a priority, non-existing columns are ignored), can be repeated --sort-ascending Sort the column(s) in ascending order --sort-descending Sort the column(s) in descending order Table 10.3. CSV formatter options Value Summary --quote {all,minimal,none,nonnumeric} When to include quotes, defaults to nonnumeric Table 10.4. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 10.5. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. | [
"openstack availability zone list [-h] [-f {csv,json,table,value,yaml}] [-c COLUMN] [--quote {all,minimal,none,nonnumeric}] [--noindent] [--max-width <integer>] [--fit-width] [--print-empty] [--sort-column SORT_COLUMN] [--sort-ascending | --sort-descending] [--compute] [--network] [--volume] [--long]"
] | https://docs.redhat.com/en/documentation/red_hat_openstack_platform/17.0/html/command_line_interface_reference/availability |
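The option tables above can be combined on one command line. The following is a minimal sketch and is not part of the original reference; the cloud name in OS_CLOUD and the sort column are illustrative assumptions.
# Assumes a clouds.yaml entry named "mycloud" (hypothetical).
export OS_CLOUD=mycloud
# Only compute availability zones, with the additional fields, rendered as JSON.
openstack availability zone list --compute --long -f json
# Tabular output fitted to the terminal and sorted by the zone name column.
openstack availability zone list --fit-width --sort-column "Zone Name"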
6.11. Cluster Resources Cleanup | 6.11. Cluster Resources Cleanup If a resource has failed, a failure message appears when you display the cluster status. If you resolve that resource, you can clear that failure status with the pcs resource cleanup command. This command resets the resource status and failcount , telling the cluster to forget the operation history of a resource and re-detect its current state. The following command cleans up the resource specified by resource_id . If you do not specify a resource_id , this command resets the resource status and failcount for all resources. As of Red Hat Enterprise Linux 7.5, the pcs resource cleanup command probes only the resources that display as a failed action. To probe all resources on all nodes you can enter the following command: By default, the pcs resource refresh command probes only the nodes where a resource's state is known. To probe all resources even if the state is not known, enter the following command: | [
"pcs resource cleanup resource_id",
"pcs resource refresh",
"pcs resource refresh --full"
] | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/high_availability_add-on_reference/s1-resource_cleanup-haar |
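A minimal sketch of the commands described above, assuming a hypothetical resource named my_webserver; the resource name is not part of the original text.
pcs resource cleanup my_webserver   # clear the failure status of a single resource
pcs resource cleanup                # reset status and failcount for all resources
pcs resource refresh --full         # re-probe all resources on all nodes, even when their state is unknown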
Chapter 42. Standalone perspectives in Business Central | Chapter 42. Standalone perspectives in Business Central Business Central provides specialized editors for authoring assets based on the asset's format. Business Central has a feature that enables you to use these editors individually. This feature is known as the standalone perspective mode of the editor or simply the standalone perspectives . As a business rules developer, you can embed a standalone perspective in your web application and then use it to edit rules, processes, decision tables, and other assets. After embedding a perspective you can edit an asset in your own application without switching to Business Central. You can use this feature to customize your web application. In addition to standalone perspectives you can also embed standalone custom pages (dashboards) in your applications. You can access a standalone perspective by using a specific web address in a browser with the standalone and perspective parameters. A standalone perspective's web address may also contain additional parameters. | null | https://docs.redhat.com/en/documentation/red_hat_decision_manager/7.13/html/managing_red_hat_decision_manager_and_kie_server_settings/using-standalone-perspectives-intro-con |
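As a rough illustration of the standalone perspective web address described above, the sketch below builds a URL from the standalone and perspective parameters named in the text; the host, context path, and perspective name are assumptions, so check the product documentation for the exact values supported by your version.
# Hypothetical host and perspective name -- only the parameter names come from the text above.
BC_HOST="https://business-central.example.com"
PERSPECTIVE="LibraryPerspective"
xdg-open "${BC_HOST}/business-central?standalone=true&perspective=${PERSPECTIVE}"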
Chapter 30. Installation and Booting | Chapter 30. Installation and Booting The installer no longer crashes when you select an incomplete IMSM RAID array during manual partitioning Previously, if the system being installed had a storage drive which was previously part of an Intel Matrix (IMSM) RAID array which was broken at the time of the installation, the disk was displayed as Unknown in the Installation Destination screen in the graphical installer. If you attempted to select this drive as an installation target, the installer crashed with the An unknown error has occurred message. This update adds proper handling for such drives, and allows you to use them as standard installation targets. (BZ#1465944) Installer now accepts additional time zone definitions in Kickstart files Starting with Red Hat Enterprise Linux 7.0, Anaconda switched to a different, more restrictive method of validating time zone selections. This caused some time zone definitions, such as Japan , to be no longer valid despite being acceptable in previous versions, and legacy Kickstart files with these definitions had to be updated or they would default to the America/New_York time zone. The list of valid time zones was previously taken from pytz.common_timezones in the pytz Python library. This update changes the validation settings for the timezone Kickstart command to use pytz.all_timezones , which is a superset of the common_timezones list and which allows significantly more time zones to be specified. This change ensures that old Kickstart files made for Red Hat Enterprise Linux 6 still specify valid time zones. Note that this change only applies to the timezone Kickstart command. The time zone selection in the graphical and text-based interactive interfaces remains unchanged. Existing Kickstart files for Red Hat Enterprise Linux 7 that had valid time zone selections do not require any updates. (BZ# 1452873 ) Proxy configuration set up using a boot option now works correctly in Anaconda Previously, proxy configuration made in the boot menu command line using the proxy= option was not correctly applied when probing remote package repositories. This was caused by an attempt to avoid a refresh of the Installation Source screen if network settings were changed. This update improves the installer logic so that proxy configuration now applies at all times but still avoids blocking the user interface on settings changes. (BZ#1478970) FIPS mode now supports loading files over HTTPS during installation Previously, installation images did not support FIPS mode ( fips=1 ) during installation where a Kickstart file is being loaded from an HTTPS source ( inst.ks=https://<location>/ks.cfg ). This release implements support for this previously missing functionality, and loading files over HTTPS in FIPS mode works as expected. (BZ# 1341280 ) Network scripts now correctly update /etc/resolv.conf Network scripts have been enhanced to update the /etc/resolv.conf file correctly.
Notably: The scripts now update the nameserver and search entries in the /etc/resolv.conf file after the DNS* and DOMAIN options, respectively, have been updated in the ifcfg-* files in the /etc/sysconfig/network-scripts/ directory The scripts now also update the order of nameserver entries after it has been updated in the ifcfg-* files in /etc/sysconfig/network-scripts/ Support for the DNS3 option has been added The scripts now correctly process duplicate and randomly omitted DNS* options (BZ# 1364895 ) Files with the .old extension are now ignored by network scripts Network scripts in Red Hat Enterprise Linux contain a regular expression which causes them to ignore ifcfg-* configuration files with certain extensions, such as .bak , .rpmnew or .rpmold . However, the .old extension was missing from this set, despite being used in documentation and in common practice. This update adds the .old extension into the list, which ensures that script files which use it will be ignored by network scripts as expected. (BZ# 1455419 ) Bridge devices no longer fail to obtain an IP address Previously, bridge devices sometimes failed to obtain an IP address from the DHCP server immediately after system startup. This was caused by a race condition where the ifup-eth script did not wait for the Spanning Tree Protocol (STP) to complete its startup. This bug has been fixed by adding a delay that causes ifup-eth to wait long enough for STP to finish starting. (BZ# 1380496 ) The rhel-dmesg service can now be disabled correctly Previously, even if the rhel-dmesg.service was explicitly disabled using systemd , it continued to run anyway. This bug has been fixed, and the service can now be disabled correctly. (BZ# 1395391 ) | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/7.5_release_notes/bug_fixes_installation_and_booting |
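The network-scripts fixes above revolve around the DNS* and DOMAIN options in ifcfg-* files. The following is a minimal sketch only; the interface name, addresses, and search domain are illustrative assumptions.
cat > /etc/sysconfig/network-scripts/ifcfg-eth0 <<'EOF'
DEVICE=eth0
ONBOOT=yes
BOOTPROTO=none
IPADDR=192.0.2.20
PREFIX=24
DNS1=192.0.2.10
DNS2=192.0.2.11
DNS3=192.0.2.12
DOMAIN=example.com
EOF
# Restarting the interface lets the network scripts rewrite /etc/resolv.conf
# with the nameserver and search entries in the order given above.
ifdown eth0 && ifup eth0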
17.3. Network Address Translation | 17.3. Network Address Translation By default, virtual network switches operate in NAT mode. They use IP masquerading rather than Source-NAT (SNAT) or Destination-NAT (DNAT). IP masquerading enables connected guests to use the host physical machine IP address for communication to any external network. By default, computers that are placed externally to the host physical machine cannot communicate to the guests inside when the virtual network switch is operating in NAT mode, as shown in the following diagram: Figure 17.3. Virtual network switch using NAT with two guests Warning Virtual network switches use NAT configured by iptables rules. Editing these rules while the switch is running is not recommended, as incorrect rules may result in the switch being unable to communicate. If the switch is not running, you can set the public IP range for forward mode NAT in order to create a port masquerading range by running: | [
"iptables -j SNAT --to-source [start]-[end]"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/virtualization_deployment_and_administration_guide/sect-virtual_networking-network_address_translation |
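To make the SNAT rule above concrete, the following sketch fills in example values while the virtual network switch is stopped; the chain, guest subnet, and public address range are illustrative assumptions rather than values from the original text.
iptables -t nat -A POSTROUTING -s 192.168.122.0/24 ! -d 192.168.122.0/24 \
    -j SNAT --to-source 203.0.113.10-203.0.113.20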
Chapter 9. How to use dedicated worker nodes for Red Hat OpenShift Data Foundation | Chapter 9. How to use dedicated worker nodes for Red Hat OpenShift Data Foundation Any Red Hat OpenShift Container Platform subscription requires an OpenShift Data Foundation subscription. However, you can save on the OpenShift Container Platform subscription costs if you are using infrastructure nodes to schedule OpenShift Data Foundation resources. It is important to maintain consistency across environments with or without Machine API support. Because of this, it is highly recommended in all cases to have a special category of nodes labeled as either worker or infra or have both roles. See the Section 9.3, "Manual creation of infrastructure nodes" section for more information. 9.1. Anatomy of an Infrastructure node Infrastructure nodes for use with OpenShift Data Foundation have a few attributes. The infra node-role label is required to ensure the node does not consume RHOCP entitlements. The infra node-role label is responsible for ensuring only OpenShift Data Foundation entitlements are necessary for the nodes running OpenShift Data Foundation. Labeled with node-role.kubernetes.io/infra Adding an OpenShift Data Foundation taint with a NoSchedule effect is also required so that the infra node will only schedule OpenShift Data Foundation resources. Tainted with node.ocs.openshift.io/storage="true" The label identifies the RHOCP node as an infra node so that RHOCP subscription cost is not applied. The taint prevents non OpenShift Data Foundation resources to be scheduled on the tainted nodes. Note Adding storage taint on nodes might require toleration handling for the other daemonset pods such as openshift-dns daemonset . For information about how to manage the tolerations, see Knowledgebase article: Openshift-dns daemonsets doesn't include toleration to run on nodes with taints . Example of the taint and labels required on infrastructure node that will be used to run OpenShift Data Foundation services: 9.2. Machine sets for creating Infrastructure nodes If the Machine API is supported in the environment, then labels should be added to the templates for the Machine Sets that will be provisioning the infrastructure nodes. Avoid the anti-pattern of adding labels manually to nodes created by the machine API. Doing so is analogous to adding labels to pods created by a deployment. In both cases, when the pod/node fails, the replacement pod/node will not have the appropriate labels. Note In EC2 environments, you will need three machine sets, each configured to provision infrastructure nodes in a distinct availability zone (such as us-east-2a, us-east-2b, us-east-2c). Currently, OpenShift Data Foundation does not support deploying in more than three availability zones. The following Machine Set template example creates nodes with the appropriate taint and labels required for infrastructure nodes. This will be used to run OpenShift Data Foundation services. Important If you add a taint to the infrastructure nodes, you also need to add tolerations to the taint for other workloads, for example, the fluentd pods. For more information, see the Red Hat Knowledgebase solution Infrastructure Nodes in OpenShift 4 . 9.3. Manual creation of infrastructure nodes Only when the Machine API is not supported in the environment should labels be directly applied to nodes. 
Manual creation requires that at least 3 RHOCP worker nodes are available to schedule OpenShift Data Foundation services, and that these nodes have sufficient CPU and memory resources. To avoid the RHOCP subscription cost, the following is required: Adding a NoSchedule OpenShift Data Foundation taint is also required so that the infra node will only schedule OpenShift Data Foundation resources and repel any other non-OpenShift Data Foundation workloads. Warning Do not remove the node-role node-role.kubernetes.io/worker="" The removal of the node-role.kubernetes.io/worker="" can cause issues unless changes are made both to the OpenShift scheduler and to MachineConfig resources. If already removed, it should be added again to each infra node. Adding node-role node-role.kubernetes.io/infra="" and OpenShift Data Foundation taint is sufficient to conform to entitlement exemption requirements. 9.4. Taint a node from the user interface This section explains the procedure to taint nodes after the OpenShift Data Foundation deployment. Procedure In the OpenShift Web Console, click Compute Nodes , and then select the node which has to be tainted. In the Details page click on Edit taints . Enter the values in the Key <nodes.openshift.ocs.io/storage>, Value <true> and in the Effect <Noschedule> field. Click Save. Verification steps Follow the steps to verify that the node has tainted successfully: Navigate to Compute Nodes . Select the node to verify its status, and then click on the YAML tab. In the specs section check the values of the following parameters: Additional resources For more information, refer to Creating the OpenShift Data Foundation cluster on VMware vSphere . | [
"spec: taints: - effect: NoSchedule key: node.ocs.openshift.io/storage value: \"true\" metadata: creationTimestamp: null labels: node-role.kubernetes.io/worker: \"\" node-role.kubernetes.io/infra: \"\" cluster.ocs.openshift.io/openshift-storage: \"\"",
"template: metadata: creationTimestamp: null labels: machine.openshift.io/cluster-api-cluster: kb-s25vf machine.openshift.io/cluster-api-machine-role: worker machine.openshift.io/cluster-api-machine-type: worker machine.openshift.io/cluster-api-machineset: kb-s25vf-infra-us-west-2a spec: taints: - effect: NoSchedule key: node.ocs.openshift.io/storage value: \"true\" metadata: creationTimestamp: null labels: node-role.kubernetes.io/infra: \"\" cluster.ocs.openshift.io/openshift-storage: \"\"",
"label node <node> node-role.kubernetes.io/infra=\"\" label node <node> cluster.ocs.openshift.io/openshift-storage=\"\"",
"adm taint node <node> node.ocs.openshift.io/storage=\"true\":NoSchedule",
"Taints: Key: node.openshift.ocs.io/storage Value: true Effect: Noschedule"
] | https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.18/html/deploying_and_managing_openshift_data_foundation_using_red_hat_openstack_platform/how-to-use-dedicated-worker-nodes-for-openshift-data-foundation_osp |
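Putting the manual-creation commands above together for a single node might look like the following sketch; the node name worker-2.example.com is a placeholder.
NODE="worker-2.example.com"
oc label node "${NODE}" node-role.kubernetes.io/infra=""
oc label node "${NODE}" cluster.ocs.openshift.io/openshift-storage=""
oc adm taint node "${NODE}" node.ocs.openshift.io/storage="true":NoSchedule
# Confirm that the taint and labels were applied.
oc describe node "${NODE}" | grep -E 'node-role|openshift-storage|Taints' -A1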
9.2. Using Add/Remove Software | 9.2. Using Add/Remove Software To find and install a new package, on the GNOME panel click on System Administration Add/Remove Software , or run the gpk-application command at the shell prompt. Figure 9.4. PackageKit's Add/Remove Software window 9.2.1. Refreshing Software Sources (Yum Repositories) PackageKit refers to Yum repositories as software sources. It obtains all packages from enabled software sources. You can view the list of all configured and unfiltered (see below) Yum repositories by opening Add/Remove Software and clicking System Software sources . The Software Sources dialog shows the repository name, as written on the name= <My Repository Name> field of all [ repository ] sections in the /etc/yum.conf configuration file, and in all repository .repo files in the /etc/yum.repos.d/ directory. Entries which are checked in the Enabled column indicate that the corresponding repository will be used to locate packages to satisfy all update and installation requests (including dependency resolution). You can enable or disable any of the listed Yum repositories by selecting or clearing the check box. Note that doing so causes PolicyKit to prompt you for superuser authentication. The Enabled column corresponds to the enabled= <1 or 0> field in [ repository ] sections. When you click the check box, PackageKit inserts the enabled= <1 or 0> line into the correct [ repository ] section if it does not exist, or changes the value if it does. This means that enabling or disabling a repository through the Software Sources window causes that change to persist after closing the window or rebooting the system. Note that it is not possible to add or remove Yum repositories through PackageKit. Note Checking the box at the bottom of the Software Sources window causes PackageKit to display source RPM, testing and debuginfo repositories as well. This box is unchecked by default. After making a change to the available Yum repositories, click on System Refresh package lists to make sure your package list is up-to-date. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/deployment_guide/sec-Using_Add_Remove_Software |
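The Software Sources dialog described above simply toggles the enabled= field of a Yum repository definition. A hedged example of such a definition follows; the repository ID, name, and baseurl are illustrative assumptions.
cat > /etc/yum.repos.d/example.repo <<'EOF'
[example-repo]
name=My Repository Name
baseurl=http://repo.example.com/rhel6/x86_64/
enabled=1
gpgcheck=1
EOF
# Clearing the repository's check box in Software Sources changes enabled=1 to enabled=0.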
8.3. Create a New Relational View Model | 8.3. Create a New Relational View Model 8.3.1. Create a New Relational View Model To create a new empty relational view model: Launch the New Model Wizard . Specify a unique model name. Select Relational option from Model Class drop-down menu. Select View Model from Model Type drop-down menu. Click Finish . Note You can change the target location (i.e. project or folder) by selecting the Browse... button and selecting a project or folder within your workspace. In addition to creating a new empty relational view model, the following builder options are available: Copy from existing model of the same model class. Transform from existing model. 8.3.2. Copy an Existing Relational View Model This builder option performs a structural copy of the contents of an existing model to a newly defined model. You can choose a full copy or select individual model components for copy. To create a new relational model by copying contents from another relational view model, complete first 4 steps from the Create a New Relational View Model section and continue with these additional steps: Select the model builder labeled Copy from existing model of the same model class and click > . The Copy Existing Model dialog will be displayed. Select an existing relational model from the workspace using the browse button. Select the Copy all descriptions option if desired. Click Finish . 8.3.3. Transform from an Existing Relational View Model This option is only applicable for creating a relational view model from a relational source model with the added feature of creating default transformations ( SELECT * FROM SourceModel.Table_X ) for each source table. The steps are the same as for the Copy from Relational View Model section described above. There is an additional option in the second dialog window of the wizard which can automatically set the relational table's supports update property to false. If this is not selected, the default value will be true. | null | https://docs.redhat.com/en/documentation/red_hat_jboss_data_virtualization/6.4/html/user_guide_volume_1_teiid_designer/sect-create_a_new_relational_view_model |
Chapter 7. Image scanning by using the roxctl CLI | Chapter 7. Image scanning by using the roxctl CLI You can scan images stored in image registries, including cluster local registries such as the OpenShift Container Platform integrated image registry by using the roxctl CLI. 7.1. Scanning images by using a remote cluster By specifying the appropriate cluster in the delegated scanning configuration or through the cluster parameter described in the following procedure, you can scan images from cluster local registries by using a remote cluster. Important For more information about how to configure delegated image scanning, see Configuring delegated image scanning . Procedure Run the following command to scan the specified image in a remote cluster: USD roxctl image scan \ --image= <image_registry> / <image_name> \ 1 --cluster= <cluster_detail> \ 2 [flags] 3 1 For <image_registry> , specify the registry where the image is located, for example, image-registry.openshift-image-registry.svc:5000/ . For <image_name> , specify the name of the image you want to scan, for example, default/image-stream:latest . 2 For <cluster_detail> , specify the name or ID of the remote cluster. For example, specify the name remote . 3 Optional: For [flags] , you can specify parameters to modify the behavior of the command. For more information about optional parameters, see roxctl image scan command options . Example output { "Id": "sha256:3f439d7d71adb0a0c8e05257c091236ab00c6343bc44388d091450ff58664bf9", 1 "name": { 2 "registry": "image-registry.openshift-image-registry.svc:5000", 3 "remote": "default/image-stream", 4 "tag": "latest", 5 "fullName": "image-registry.openshift-image-registry.svc:5000/default/image-stream:latest" 6 }, [...] 1 A unique identifier for the image that serves as a fingerprint for the image. It helps ensure the integrity and authenticity of the image. 2 Contains specific details about the image. 3 The location of the image registry where the image is stored. 4 The remote path to the image. 5 The version or tag associated with this image. 6 The complete name of the image, combining the registry, remote path, and tag. 7.2. roxctl image scan command options The roxctl image scan command supports the following options: Option Description --cluster string Delegate image scanning to a specific cluster. --compact-output Print the JSON output in a compact format. The default value is false . -f , --force Ignore Central's cache for the scan and force a fresh re-pull from Scanner. The default value is false . --headers strings Print the headers in a tabular format. Default values include COMPONENT , VERSION , CVE , SEVERITY , and LINK . --headers-as-comments Print the headers as comments in a CSV tabular output. The default value is false . -h , --help View the help text for the roxctl image scan command. -i , --image string Specify the image name and reference you want to scan. -a , --include-snoozed Return both snoozed and unsnoozed common vulnerabilities and exposures (CVEs). The default value is false . --merge-output Merge duplicate cells in a tabular output. The default value is true . --no-header Do not print headers for tabular format. The default value is false . -o , --output string Specify the output format. You can select a format to customize the display of results. Formats include table , CSV , JSON , and SARIF . -r , --retries int Set the number of retries before the operation is aborted with an error. The default value is 3 . 
-d , --retry-delay int Set the time in seconds to wait between retries. The default value is 3 . --row-jsonpath-expressions string Use the JSON path expressions to create rows from the JSON object. For more details, run the roxctl image scan --help command. | [
"roxctl image scan --image= <image_registry> / <image_name> \\ 1 --cluster= <cluster_detail> \\ 2 [flags] 3",
"{ \"Id\": \"sha256:3f439d7d71adb0a0c8e05257c091236ab00c6343bc44388d091450ff58664bf9\", 1 \"name\": { 2 \"registry\": \"image-registry.openshift-image-registry.svc:5000\", 3 \"remote\": \"default/image-stream\", 4 \"tag\": \"latest\", 5 \"fullName\": \"image-registry.openshift-image-registry.svc:5000/default/image-stream:latest\" 6 }, [...]"
] | https://docs.redhat.com/en/documentation/red_hat_advanced_cluster_security_for_kubernetes/4.5/html/roxctl_cli/image-scanning-by-using-the-roxctl-cli-1 |
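The flags from the options table can be combined with the basic scan command shown earlier; the registry, image, and cluster names below are placeholders.
roxctl image scan \
  --image=image-registry.openshift-image-registry.svc:5000/myproject/myapp:latest \
  --cluster=remote \
  --force \
  --include-snoozed \
  -o table \
  --headers CVE,SEVERITY,COMPONENT,VERSION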
Chapter 1. Introduction | Chapter 1. Introduction Migration Toolkit for Runtimes product will be End of Life on September 30th, 2024 All customers using this product should start their transition to Migration Toolkit for Applications . Migration Toolkit for Applications is fully backwards compatible with all features and rulesets available in Migration Toolkit for Runtimes and will be maintained in the long term. 1.1. About the Introduction to the Migration Toolkit for Runtimes This guide is for engineers, consultants, and others who want to use the Migration Toolkit for Runtimes (MTR) to migrate Java applications or other components. It provides an overview of the Migration Toolkit for Runtimes and how to get started using the tools to plan and run your migration. | null | https://docs.redhat.com/en/documentation/migration_toolkit_for_runtimes/1.2/html/introduction_to_the_migration_toolkit_for_runtimes/introduction |
Chapter 12. Scaling Multicloud Object Gateway performance | Chapter 12. Scaling Multicloud Object Gateway performance The Multicloud Object Gateway (MCG) performance may vary from one environment to another. In some cases, specific applications require faster performance which can be easily addressed by scaling S3 endpoints. The MCG resource pool is a group of NooBaa daemon containers that provide two types of services enabled by default: Storage service S3 endpoint service S3 endpoint service The S3 endpoint is a service that every Multicloud Object Gateway (MCG) provides by default that handles the heavy lifting data digestion in the MCG. The endpoint service handles the inline data chunking, deduplication, compression, and encryption, and it accepts data placement instructions from the MCG. 12.1. Automatic scaling of MultiCloud Object Gateway endpoints The number of MultiCloud Object Gateway (MCG) endpoints scale automatically when the load on the MCG S3 service increases or decreases. OpenShift Data Foundation clusters are deployed with one active MCG endpoint. Each MCG endpoint pod is configured by default with 1 CPU and 2Gi memory request, with limits matching the request. When the CPU load on the endpoint crosses over an 80% usage threshold for a consistent period of time, a second endpoint is deployed lowering the load on the first endpoint. When the average CPU load on both endpoints falls below the 80% threshold for a consistent period of time, one of the endpoints is deleted. This feature improves performance and serviceability of the MCG. You can scale the Horizontal Pod Autoscaler (HPA) for noobaa-endpoint using the following oc patch command, for example: The example above sets the minCount to 3 and the maxCount to 10 . 12.2. Increasing CPU and memory for PV pool resources MCG default configuration supports low resource consumption. However, when you need to increase CPU and memory to accommodate specific workloads and to increase MCG performance for the workloads, you can configure the required values for CPU and memory in the OpenShift Web Console. Procedure In the OpenShift Web Console, navigate to Storage Object Storage Backing Store . Select the relevant backing store and click on YAML. Scroll down until you find spec: and update pvPool with CPU and memory. Add a new property of limits and then add cpu and memory. Example reference: Click Save . Verification steps To verify, you can check the resource values of the PV pool pods. | [
"oc patch -n openshift-storage storagecluster ocs-storagecluster --type merge --patch '{\"spec\": {\"multiCloudGateway\": {\"endpoints\": {\"minCount\": 3,\"maxCount\": 10}}}}'",
"spec: pvPool: resources: limits: cpu: 1000m memory: 4000Mi requests: cpu: 800m memory: 800Mi storage: 50Gi"
] | https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.17/html/managing_hybrid_and_multicloud_resources/scaling-multicloud-object-gateway-performance-by-adding-endpoints__rhodf |
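A hedged sketch of the verification step above; the grep patterns assume the default NooBaa pod naming and are not taken from the original text.
# Watch the autoscaled S3 endpoint pods come and go as load changes.
oc get pods -n openshift-storage | grep noobaa-endpoint
# Print the resources configured on each PV pool pod.
for p in $(oc get pods -n openshift-storage -o name | grep noobaa-pod); do
  oc get "${p}" -n openshift-storage -o jsonpath='{.metadata.name}{"\t"}{.spec.containers[0].resources}{"\n"}'
done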
Chapter 12. Secret handling and connection security | Chapter 12. Secret handling and connection security Automation controller handles secrets and connections securely. 12.1. Secret handling Automation controller manages three sets of secrets: User passwords for local automation controller users. Secrets for automation controller operational use, such as database password or message bus password. Secrets for automation use, such as SSH keys, cloud credentials, or external password vault credentials. Note You must have 'local' user access for the following users: postgres awx redis receptor nginx 12.1.1. User passwords for local users Automation controller hashes local automation controller user passwords with the PBKDF2 algorithm using a SHA256 hash. Users who authenticate by external account mechanisms, such as LDAP, SAML, and OAuth, do not have any password or secret stored. 12.1.2. Secret handling for operational use The operational secrets found in automation controller are as follows: /etc/tower/SECRET_KEY : A secret key used for encrypting automation secrets in the database. If the SECRET_KEY changes or is unknown, you cannot access encrypted fields in the database. /etc/tower/tower.{cert,key} : An SSL certificate and key for the automation controller web service. A self-signed certificate or key is installed by default; you can provide a locally appropriate certificate and key. A database password in /etc/tower/conf.d/postgres.py and a message bus password in /etc/tower/conf.d/channels.py . These secrets are stored unencrypted on the automation controller server, because they are all needed to be read by the automation controller service at startup in an automated fashion. All secrets are protected by UNIX permissions, and restricted to root and the automation controller awx service user. If you need to hide these secrets, the files that these secrets are read from are interpreted by Python. You can adjust these files to retrieve these secrets by some other mechanism anytime a service restarts. This is a customer provided modification that might need to be reapplied after every upgrade. Red Hat Support and Red Hat Consulting have examples of such modifications. Note If the secrets system is down, automation controller cannot get the information and can fail in a way that is recoverable once the service is restored. Using some redundancy on that system is highly recommended. If you believe the SECRET_KEY that automation controller generated for you has been compromised and needs to be regenerated, you can run a tool from the installer that behaves much like the automation controller backup and restore tool. Important Ensure that you backup your automation controller database before you generate a new secret key. To generate a new secret key: Follow the procedure described in the Backing up and Restoring section. Use the inventory from your install (the same inventory with which you run backups and restores), and run the following command: setup.sh -k. A backup copy of the key is saved in /etc/tower/ . 12.1.3. Secret handling for automation use Automation controller stores a variety of secrets in the database that are either used for automation or are a result of automation. These secrets include the following: All secret fields of all credential types, including passwords, secret keys, authentication tokens, and secret cloud credentials. Secret tokens and passwords for external services defined automation controller settings. "password" type survey field entries. 
To encrypt secret fields, automation controller uses AES in CBC mode with a 256-bit key for encryption, PKCS7 padding, and HMAC using SHA256 for authentication. The encryption or decryption process derives the AES-256 bit encryption key from the SECRET_KEY , the field name of the model field and the database assigned auto-incremented record ID. Therefore, if any attribute used in the key generation process changes, the automation controller fails to correctly decrypt the secret. Automation controller is designed so that: The SECRET_KEY is never readable in playbooks that automation controller launches. These secrets are never readable by automation controller users. No secret field values are ever made available by the automation controller REST API. If a secret value is used in a playbook, it is recommended that you use no_log on the task so that it is not accidentally logged. 12.2. Connection security Automation controller allows for connections to internal services, external access, and managed nodes. Note You must have 'local' user access for the following users: postgres awx redis receptor nginx 12.2.1. Internal services Automation controller connects to the following services as part of internal operation: PostgreSQL database The connection to the PostgreSQL database is done by password authentication over TCP, either through localhost or remotely (external database). This connection can use PostgreSQL's built-in support for SSL/TLS, as natively configured by the installer support. SSL/TLS protocols are configured by the default OpenSSL configuration. A Redis key or value store The connection to Redis is over a local UNIX socket, restricted to the awx service user. 12.2.2. External access Automation controller is accessed via standard HTTP/HTTPS on standard ports, provided by Nginx. A self-signed certificate or key is installed by default; you can provide a locally appropriate certificate and key. SSL/TLS algorithm support is configured in the /etc/nginx/nginx.conf configuration file. An "intermediate" profile is used by default, that you can configure. You must reapply changes after each update. 12.2.3. Managed nodes Automation controller connects to managed machines and services as part of automation. All connections to managed machines are done by standard secure mechanisms, such as SSH, WinRM, or SSL/TLS. Each of these inherits configuration from the system configuration for the feature in question, such as the system OpenSSL configuration. | [
"setup.sh -k."
] | https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.5/html/configuring_automation_execution/controller-secret-handling-and-connection-security |
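As an illustration of the no_log recommendation above, the following playbook fragment is a hypothetical sketch; the host group, module arguments, and variable name are assumptions.
cat > use-secret-demo.yml <<'EOF'
- hosts: all
  tasks:
    - name: Use a vaulted secret without exposing it in job output
      ansible.builtin.uri:
        url: "https://api.example.com/login"
        method: POST
        body_format: json
        body:
          token: "{{ api_token }}"
      no_log: true   # keeps the secret value out of logged task results
EOF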
Chapter 22. Performing health checks on Red Hat Quay deployments | Chapter 22. Performing health checks on Red Hat Quay deployments Health check mechanisms are designed to assess the health and functionality of a system, service, or component. Health checks help ensure that everything is working correctly, and can be used to identify potential issues before they become critical problems. By monitoring the health of a system, Red Hat Quay administrators can address abnormalities or potential failures for things like geo-replication deployments, Operator deployments, standalone Red Hat Quay deployments, object storage issues, and so on. Performing health checks can also help reduce the likelihood of encountering troubleshooting scenarios. Health check mechanisms can play a role in diagnosing issues by providing valuable information about the system's current state. By comparing health check results with expected benchmarks or predefined thresholds, deviations or anomalies can be identified quicker. 22.1. Red Hat Quay health check endpoints Important Links contained herein to any external website(s) are provided for convenience only. Red Hat has not reviewed the links and is not responsible for the content or its availability. The inclusion of any link to an external website does not imply endorsement by Red Hat of the website or its entities, products, or services. You agree that Red Hat is not responsible or liable for any loss or expenses that may result due to your use of (or reliance on) the external site or content. Red Hat Quay has several health check endpoints. The following table shows you the health check, a description, an endpoint, and an example output. Table 22.1. Health check endpoints Health check Description Endpoint Example output instance The instance endpoint acquires the entire status of the specific Red Hat Quay instance. Returns a dict with key-value pairs for the following: auth , database , disk_space , registry_gunicorn , service_key , and web_gunicorn. Returns a number indicating the health check response of either 200 , which indicates that the instance is healthy, or 503 , which indicates an issue with your deployment. https://{quay-ip-endpoint}/health/instance or https://{quay-ip-endpoint}/health {"data":{"services":{"auth":true,"database":true,"disk_space":true,"registry_gunicorn":true,"service_key":true,"web_gunicorn":true}},"status_code":200} endtoend The endtoend endpoint conducts checks on all services of your Red Hat Quay instance. Returns a dict with key-value pairs for the following: auth , database , redis , storage . Returns a number indicating the health check response of either 200 , which indicates that the instance is healthy, or 503 , which indicates an issue with your deployment. https://{quay-ip-endpoint}/health/endtoend {"data":{"services":{"auth":true,"database":true,"redis":true,"storage":true}},"status_code":200} warning The warning endpoint conducts a check on the warnings. Returns a dict with key-value pairs for the following: disk_space_warning . Returns a number indicating the health check response of either 200 , which indicates that the instance is healthy, or 503 , which indicates an issue with your deployment. https://{quay-ip-endpoint}/health/warning {"data":{"services":{"disk_space_warning":true}},"status_code":503} 22.2. Navigating to a Red Hat Quay health check endpoint Use the following procedure to navigate to the instance endpoint. This procedure can be repeated for endtoend and warning endpoints. 
Procedure On your web browser, navigate to https://{quay-ip-endpoint}/health/instance . You are taken to the health instance page, which returns information like the following: {"data":{"services":{"auth":true,"database":true,"disk_space":true,"registry_gunicorn":true,"service_key":true,"web_gunicorn":true}},"status_code":200} For Red Hat Quay, "status_code": 200 means that the instance is healthy. Conversely, if you receive "status_code": 503 , there is an issue with your deployment. | [
"{\"data\":{\"services\":{\"auth\":true,\"database\":true,\"disk_space\":true,\"registry_gunicorn\":true,\"service_key\":true,\"web_gunicorn\":true}},\"status_code\":200}"
] | https://docs.redhat.com/en/documentation/red_hat_quay/3.12/html/manage_red_hat_quay/health-check-quay |
Chapter 11. Managing user-provisioned infrastructure manually | Chapter 11. Managing user-provisioned infrastructure manually 11.1. Adding compute machines to clusters with user-provisioned infrastructure manually You can add compute machines to a cluster on user-provisioned infrastructure either as part of the installation process or after installation. The postinstallation process requires some of the same configuration files and parameters that were used during installation. 11.1.1. Adding compute machines to Amazon Web Services To add more compute machines to your OpenShift Container Platform cluster on Amazon Web Services (AWS), see Adding compute machines to AWS by using CloudFormation templates . 11.1.2. Adding compute machines to Microsoft Azure To add more compute machines to your OpenShift Container Platform cluster on Microsoft Azure, see Creating additional worker machines in Azure . 11.1.3. Adding compute machines to Azure Stack Hub To add more compute machines to your OpenShift Container Platform cluster on Azure Stack Hub, see Creating additional worker machines in Azure Stack Hub . 11.1.4. Adding compute machines to Google Cloud Platform To add more compute machines to your OpenShift Container Platform cluster on Google Cloud Platform (GCP), see Creating additional worker machines in GCP . 11.1.5. Adding compute machines to vSphere You can use compute machine sets to automate the creation of additional compute machines for your OpenShift Container Platform cluster on vSphere. To manually add more compute machines to your cluster, see Adding compute machines to vSphere manually . 11.1.6. Adding compute machines to RHV To add more compute machines to your OpenShift Container Platform cluster on RHV, see Adding compute machines to RHV . 11.1.7. Adding compute machines to bare metal To add more compute machines to your OpenShift Container Platform cluster on bare metal, see Adding compute machines to bare metal . 11.2. Adding compute machines to AWS by using CloudFormation templates You can add more compute machines to your OpenShift Container Platform cluster on Amazon Web Services (AWS) that you created by using the sample CloudFormation templates. 11.2.1. Prerequisites You installed your cluster on AWS by using the provided AWS CloudFormation templates . You have the JSON file and CloudFormation template that you used to create the compute machines during cluster installation. If you do not have these files, you must recreate them by following the instructions in the installation procedure . 11.2.2. Adding more compute machines to your AWS cluster by using CloudFormation templates You can add more compute machines to your OpenShift Container Platform cluster on Amazon Web Services (AWS) that you created by using the sample CloudFormation templates. Important The CloudFormation template creates a stack that represents one compute machine. You must create a stack for each compute machine. Note If you do not use the provided CloudFormation template to create your compute nodes, you must review the provided information and manually create the infrastructure. If your cluster does not initialize correctly, you might have to contact Red Hat support with your installation logs. Prerequisites You installed an OpenShift Container Platform cluster by using CloudFormation templates and have access to the JSON file and CloudFormation template that you used to create the compute machines during cluster installation. You installed the AWS CLI. Procedure Create another compute stack. 
Launch the template: USD aws cloudformation create-stack --stack-name <name> \ 1 --template-body file://<template>.yaml \ 2 --parameters file://<parameters>.json 3 1 <name> is the name for the CloudFormation stack, such as cluster-workers . You must provide the name of this stack if you remove the cluster. 2 <template> is the relative path to and name of the CloudFormation template YAML file that you saved. 3 <parameters> is the relative path to and name of the CloudFormation parameters JSON file. Confirm that the template components exist: USD aws cloudformation describe-stacks --stack-name <name> Continue to create compute stacks until you have created enough compute machines for your cluster. 11.2.3. Approving the certificate signing requests for your machines When you add machines to a cluster, two pending certificate signing requests (CSRs) are generated for each machine that you added. You must confirm that these CSRs are approved or, if necessary, approve them yourself. The client requests must be approved first, followed by the server requests. Prerequisites You added machines to your cluster. Procedure Confirm that the cluster recognizes the machines: USD oc get nodes Example output NAME STATUS ROLES AGE VERSION master-0 Ready master 63m v1.25.0 master-1 Ready master 63m v1.25.0 master-2 Ready master 64m v1.25.0 The output lists all of the machines that you created. Note The preceding output might not include the compute nodes, also known as worker nodes, until some CSRs are approved. Review the pending CSRs and ensure that you see the client requests with the Pending or Approved status for each machine that you added to the cluster: USD oc get csr Example output NAME AGE REQUESTOR CONDITION csr-8b2br 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending csr-8vnps 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending ... In this example, two machines are joining the cluster. You might see more approved CSRs in the list. If the CSRs were not approved, after all of the pending CSRs for the machines you added are in Pending status, approve the CSRs for your cluster machines: Note Because the CSRs rotate automatically, approve your CSRs within an hour of adding the machines to the cluster. If you do not approve them within an hour, the certificates will rotate, and more than two certificates will be present for each node. You must approve all of these certificates. After the client CSR is approved, the Kubelet creates a secondary CSR for the serving certificate, which requires manual approval. Then, subsequent serving certificate renewal requests are automatically approved by the machine-approver if the Kubelet requests a new certificate with identical parameters. Note For clusters running on platforms that are not machine API enabled, such as bare metal and other user-provisioned infrastructure, you must implement a method of automatically approving the kubelet serving certificate requests (CSRs). If a request is not approved, then the oc exec , oc rsh , and oc logs commands cannot succeed, because a serving certificate is required when the API server connects to the kubelet. Any operation that contacts the Kubelet endpoint requires this certificate approval to be in place. The method must watch for new CSRs, confirm that the CSR was submitted by the node-bootstrapper service account in the system:node or system:admin groups, and confirm the identity of the node. 
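One possible shape for such a method is a small polling script run from a bastion host or CI job. The following is only an illustrative sketch, not a supported implementation: it assumes oc is logged in with sufficient privileges and that jq is installed, and it checks only the requestor names described above, so you would still need to add your own node-identity verification before relying on anything like it.
#!/bin/bash
# Approve pending CSRs submitted by the node bootstrapper service account
# or by the nodes themselves. Extend with node-identity checks before use.
while true; do
  oc get csr -o json | \
    jq -r '.items[] | select(.status == {}) |
      select(.spec.username == "system:serviceaccount:openshift-machine-config-operator:node-bootstrapper"
        or (.spec.username | startswith("system:node:"))) | .metadata.name' | \
    xargs --no-run-if-empty oc adm certificate approve
  sleep 60
done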
To approve them individually, run the following command for each valid CSR: USD oc adm certificate approve <csr_name> 1 1 <csr_name> is the name of a CSR from the list of current CSRs. To approve all pending CSRs, run the following command: USD oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve Note Some Operators might not become available until some CSRs are approved. Now that your client requests are approved, you must review the server requests for each machine that you added to the cluster: USD oc get csr Example output NAME AGE REQUESTOR CONDITION csr-bfd72 5m26s system:node:ip-10-0-50-126.us-east-2.compute.internal Pending csr-c57lv 5m26s system:node:ip-10-0-95-157.us-east-2.compute.internal Pending ... If the remaining CSRs are not approved, and are in the Pending status, approve the CSRs for your cluster machines: To approve them individually, run the following command for each valid CSR: USD oc adm certificate approve <csr_name> 1 1 <csr_name> is the name of a CSR from the list of current CSRs. To approve all pending CSRs, run the following command: USD oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs oc adm certificate approve After all client and server CSRs have been approved, the machines have the Ready status. Verify this by running the following command: USD oc get nodes Example output NAME STATUS ROLES AGE VERSION master-0 Ready master 73m v1.25.0 master-1 Ready master 73m v1.25.0 master-2 Ready master 74m v1.25.0 worker-0 Ready worker 11m v1.25.0 worker-1 Ready worker 11m v1.25.0 Note It can take a few minutes after approval of the server CSRs for the machines to transition to the Ready status. Additional information For more information on CSRs, see Certificate Signing Requests . 11.3. Adding compute machines to vSphere manually You can add more compute machines to your OpenShift Container Platform cluster on VMware vSphere manually. Note You can also use compute machine sets to automate the creation of additional VMware vSphere compute machines for your cluster. 11.3.1. Prerequisites You installed a cluster on vSphere . You have installation media and Red Hat Enterprise Linux CoreOS (RHCOS) images that you used to create your cluster. If you do not have these files, you must obtain them by following the instructions in the installation procedure . Important If you do not have access to the Red Hat Enterprise Linux CoreOS (RHCOS) images that were used to create your cluster, you can add more compute machines to your OpenShift Container Platform cluster with newer versions of Red Hat Enterprise Linux CoreOS (RHCOS) images. For instructions, see Adding new nodes to UPI cluster fails after upgrading to OpenShift 4.6+ . 11.3.2. Adding more compute machines to a cluster in vSphere You can add more compute machines to a user-provisioned OpenShift Container Platform cluster on VMware vSphere. After your vSphere template deploys in your OpenShift Container Platform cluster, you can deploy a virtual machine (VM) for a machine in that cluster. Prerequisites Obtain the base64-encoded Ignition file for your compute machines. You have access to the vSphere template that you created for your cluster. Procedure Right-click the template's name and click Clone Clone to Virtual Machine . On the Select a name and folder tab, specify a name for the VM. You might include the machine type in the name, such as compute-1 . 
Note Ensure that all virtual machine names across a vSphere installation are unique. On the Select a name and folder tab, select the name of the folder that you created for the cluster. On the Select a compute resource tab, select the name of a host in your datacenter. On the Select storage tab, select storage for your configuration and disk files. On the Select clone options , select Customize this virtual machine's hardware . On the Customize hardware tab, click Advanced . Click Edit Configuration , and on the Configuration Parameters window, click Add Configuration Params . Define the following parameter names and values: guestinfo.ignition.config.data : Paste the contents of the base64-encoded compute Ignition config file for this machine type. guestinfo.ignition.config.data.encoding : Specify base64 . disk.EnableUUID : Specify TRUE . In the Virtual Hardware panel of the Customize hardware tab, modify the specified values as required. Ensure that the amount of RAM, CPU, and disk storage meets the minimum requirements for the machine type. If many networks exist, select Add New Device > Network Adapter , and then enter your network information in the fields provided by the New Network menu item. Complete the remaining configuration steps. On clicking the Finish button, you have completed the cloning operation. From the Virtual Machines tab, right-click on your VM and then select Power Power On . Next steps Continue to create more compute machines for your cluster. 11.3.3. Approving the certificate signing requests for your machines When you add machines to a cluster, two pending certificate signing requests (CSRs) are generated for each machine that you added. You must confirm that these CSRs are approved or, if necessary, approve them yourself. The client requests must be approved first, followed by the server requests. Prerequisites You added machines to your cluster. Procedure Confirm that the cluster recognizes the machines: USD oc get nodes Example output NAME STATUS ROLES AGE VERSION master-0 Ready master 63m v1.25.0 master-1 Ready master 63m v1.25.0 master-2 Ready master 64m v1.25.0 The output lists all of the machines that you created. Note The preceding output might not include the compute nodes, also known as worker nodes, until some CSRs are approved. Review the pending CSRs and ensure that you see the client requests with the Pending or Approved status for each machine that you added to the cluster: USD oc get csr Example output NAME AGE REQUESTOR CONDITION csr-8b2br 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending csr-8vnps 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending ... In this example, two machines are joining the cluster. You might see more approved CSRs in the list. If the CSRs were not approved, after all of the pending CSRs for the machines you added are in Pending status, approve the CSRs for your cluster machines: Note Because the CSRs rotate automatically, approve your CSRs within an hour of adding the machines to the cluster. If you do not approve them within an hour, the certificates will rotate, and more than two certificates will be present for each node. You must approve all of these certificates. After the client CSR is approved, the Kubelet creates a secondary CSR for the serving certificate, which requires manual approval.
Then, subsequent serving certificate renewal requests are automatically approved by the machine-approver if the Kubelet requests a new certificate with identical parameters. Note For clusters running on platforms that are not machine API enabled, such as bare metal and other user-provisioned infrastructure, you must implement a method of automatically approving the kubelet serving certificate requests (CSRs). If a request is not approved, then the oc exec , oc rsh , and oc logs commands cannot succeed, because a serving certificate is required when the API server connects to the kubelet. Any operation that contacts the Kubelet endpoint requires this certificate approval to be in place. The method must watch for new CSRs, confirm that the CSR was submitted by the node-bootstrapper service account in the system:node or system:admin groups, and confirm the identity of the node. To approve them individually, run the following command for each valid CSR: USD oc adm certificate approve <csr_name> 1 1 <csr_name> is the name of a CSR from the list of current CSRs. To approve all pending CSRs, run the following command: USD oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve Note Some Operators might not become available until some CSRs are approved. Now that your client requests are approved, you must review the server requests for each machine that you added to the cluster: USD oc get csr Example output NAME AGE REQUESTOR CONDITION csr-bfd72 5m26s system:node:ip-10-0-50-126.us-east-2.compute.internal Pending csr-c57lv 5m26s system:node:ip-10-0-95-157.us-east-2.compute.internal Pending ... If the remaining CSRs are not approved, and are in the Pending status, approve the CSRs for your cluster machines: To approve them individually, run the following command for each valid CSR: USD oc adm certificate approve <csr_name> 1 1 <csr_name> is the name of a CSR from the list of current CSRs. To approve all pending CSRs, run the following command: USD oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs oc adm certificate approve After all client and server CSRs have been approved, the machines have the Ready status. Verify this by running the following command: USD oc get nodes Example output NAME STATUS ROLES AGE VERSION master-0 Ready master 73m v1.25.0 master-1 Ready master 73m v1.25.0 master-2 Ready master 74m v1.25.0 worker-0 Ready worker 11m v1.25.0 worker-1 Ready worker 11m v1.25.0 Note It can take a few minutes after approval of the server CSRs for the machines to transition to the Ready status. Additional information For more information on CSRs, see Certificate Signing Requests . 11.4. Adding compute machines to a cluster on RHV In OpenShift Container Platform version 4.12, you can add more compute machines to a user-provisioned OpenShift Container Platform cluster on RHV. Prerequisites You installed a cluster on RHV with user-provisioned infrastructure. 11.4.1. Adding more compute machines to a cluster on RHV Procedure Modify the inventory.yml file to include the new workers. Run the create-templates-and-vms Ansible playbook to create the disks and virtual machines: USD ansible-playbook -i inventory.yml create-templates-and-vms.yml Run the workers.yml Ansible playbook to start the virtual machines: USD ansible-playbook -i inventory.yml workers.yml CSRs for new workers joining the cluster must be approved by the administrator. 
The following command helps to approve all pending requests: USD oc get csr -ojson | jq -r '.items[] | select(.status == {} ) | .metadata.name' | xargs oc adm certificate approve 11.5. Adding compute machines to bare metal You can add more compute machines to your OpenShift Container Platform cluster on bare metal. 11.5.1. Prerequisites You installed a cluster on bare metal . You have installation media and Red Hat Enterprise Linux CoreOS (RHCOS) images that you used to create your cluster. If you do not have these files, you must obtain them by following the instructions in the installation procedure . If a DHCP server is available for your user-provisioned infrastructure, you have added the details for the additional compute machines to your DHCP server configuration. This includes a persistent IP address, DNS server information, and a hostname for each machine. You have updated your DNS configuration to include the record name and IP address of each compute machine that you are adding. You have validated that DNS lookup and reverse DNS lookup resolve correctly. Important If you do not have access to the Red Hat Enterprise Linux CoreOS (RHCOS) images that were used to create your cluster, you can add more compute machines to your OpenShift Container Platform cluster with newer versions of Red Hat Enterprise Linux CoreOS (RHCOS) images. For instructions, see Adding new nodes to UPI cluster fails after upgrading to OpenShift 4.6+ . 11.5.2. Creating Red Hat Enterprise Linux CoreOS (RHCOS) machines Before you add more compute machines to a cluster that you installed on bare metal infrastructure, you must create RHCOS machines for it to use. You can either use an ISO image or network PXE booting to create the machines. Note You must use the same ISO image that you used to install a cluster to deploy all new nodes in a cluster. It is recommended to use the same Ignition config file. The nodes automatically upgrade themselves on the first boot before running the workloads. You can add the nodes before or after the upgrade. 11.5.2.1. Creating more RHCOS machines using an ISO image You can create more Red Hat Enterprise Linux CoreOS (RHCOS) compute machines for your bare metal cluster by using an ISO image to create the machines. Prerequisites Obtain the URL of the Ignition config file for the compute machines for your cluster. You uploaded this file to your HTTP server during installation. Procedure Use the ISO file to install RHCOS on more compute machines. Use the same method that you used when you created machines before you installed the cluster: Burn the ISO image to a disk and boot it directly. Use ISO redirection with a LOM interface. Boot the RHCOS ISO image without specifying any options, or interrupting the live boot sequence. Wait for the installer to boot into a shell prompt in the RHCOS live environment. Note You can interrupt the RHCOS installation boot process to add kernel arguments. However, for this ISO procedure you must use the coreos-installer command as outlined in the following steps, instead of adding kernel arguments. Run the coreos-installer command and specify the options that meet your installation requirements. 
At a minimum, you must specify the URL that points to the Ignition config file for the node type, and the device that you are installing to: USD sudo coreos-installer install --ignition-url=http://<HTTP_server>/<node_type>.ign <device> --ignition-hash=sha512-<digest> 1 2 1 You must run the coreos-installer command by using sudo , because the core user does not have the required root privileges to perform the installation. 2 The --ignition-hash option is required when the Ignition config file is obtained through an HTTP URL to validate the authenticity of the Ignition config file on the cluster node. <digest> is the Ignition config file SHA512 digest obtained in a preceding step. Note If you want to provide your Ignition config files through an HTTPS server that uses TLS, you can add the internal certificate authority (CA) to the system trust store before running coreos-installer . The following example initializes a bootstrap node installation to the /dev/sda device. The Ignition config file for the bootstrap node is obtained from an HTTP web server with the IP address 192.168.1.2: USD sudo coreos-installer install --ignition-url=http://192.168.1.2:80/installation_directory/bootstrap.ign /dev/sda --ignition-hash=sha512-a5a2d43879223273c9b60af66b44202a1d1248fc01cf156c46d4a79f552b6bad47bc8cc78ddf0116e80c59d2ea9e32ba53bc807afbca581aa059311def2c3e3b Monitor the progress of the RHCOS installation on the console of the machine. Important Ensure that the installation is successful on each node before commencing with the OpenShift Container Platform installation. Observing the installation process can also help to determine the cause of RHCOS installation issues that might arise. Continue to create more compute machines for your cluster. 11.5.2.2. Creating more RHCOS machines by PXE or iPXE booting You can create more Red Hat Enterprise Linux CoreOS (RHCOS) compute machines for your bare metal cluster by using PXE or iPXE booting. Prerequisites Obtain the URL of the Ignition config file for the compute machines for your cluster. You uploaded this file to your HTTP server during installation. Obtain the URLs of the RHCOS ISO image, compressed metal BIOS, kernel , and initramfs files that you uploaded to your HTTP server during cluster installation. You have access to the PXE booting infrastructure that you used to create the machines for your OpenShift Container Platform cluster during installation. The machines must boot from their local disks after RHCOS is installed on them. If you use UEFI, you have access to the grub.conf file that you modified during OpenShift Container Platform installation. Procedure Confirm that your PXE or iPXE installation for the RHCOS images is correct. For PXE: 1 Specify the location of the live kernel file that you uploaded to your HTTP server. 2 Specify locations of the RHCOS files that you uploaded to your HTTP server. The initrd parameter value is the location of the live initramfs file, the coreos.inst.ignition_url parameter value is the location of the worker Ignition config file, and the coreos.live.rootfs_url parameter value is the location of the live rootfs file. The coreos.inst.ignition_url and coreos.live.rootfs_url parameters only support HTTP and HTTPS. This configuration does not enable serial console access on machines with a graphical console. To configure a different console, add one or more console= arguments to the APPEND line. 
For example, add console=tty0 console=ttyS0 to set the first PC serial port as the primary console and the graphical console as a secondary console. For more information, see How does one set up a serial terminal and/or console in Red Hat Enterprise Linux? . For iPXE: 1 Specify locations of the RHCOS files that you uploaded to your HTTP server. The kernel parameter value is the location of the kernel file, the initrd=main argument is needed for booting on UEFI systems, the coreos.inst.ignition_url parameter value is the location of the worker Ignition config file, and the coreos.live.rootfs_url parameter value is the location of the live rootfs file. The coreos.inst.ignition_url and coreos.live.rootfs_url parameters only support HTTP and HTTPS. 2 Specify the location of the initramfs file that you uploaded to your HTTP server. This configuration does not enable serial console access on machines with a graphical console. To configure a different console, add one or more console= arguments to the kernel line. For example, add console=tty0 console=ttyS0 to set the first PC serial port as the primary console and the graphical console as a secondary console. For more information, see How does one set up a serial terminal and/or console in Red Hat Enterprise Linux? . Use the PXE or iPXE infrastructure to create the required compute machines for your cluster. 11.5.3. Approving the certificate signing requests for your machines When you add machines to a cluster, two pending certificate signing requests (CSRs) are generated for each machine that you added. You must confirm that these CSRs are approved or, if necessary, approve them yourself. The client requests must be approved first, followed by the server requests. Prerequisites You added machines to your cluster. Procedure Confirm that the cluster recognizes the machines: USD oc get nodes Example output NAME STATUS ROLES AGE VERSION master-0 Ready master 63m v1.25.0 master-1 Ready master 63m v1.25.0 master-2 Ready master 64m v1.25.0 The output lists all of the machines that you created. Note The preceding output might not include the compute nodes, also known as worker nodes, until some CSRs are approved. Review the pending CSRs and ensure that you see the client requests with the Pending or Approved status for each machine that you added to the cluster: USD oc get csr Example output NAME AGE REQUESTOR CONDITION csr-8b2br 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending csr-8vnps 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending ... In this example, two machines are joining the cluster. You might see more approved CSRs in the list. If the CSRs were not approved, after all of the pending CSRs for the machines you added are in Pending status, approve the CSRs for your cluster machines: Note Because the CSRs rotate automatically, approve your CSRs within an hour of adding the machines to the cluster. If you do not approve them within an hour, the certificates will rotate, and more than two certificates will be present for each node. You must approve all of these certificates. After the client CSR is approved, the Kubelet creates a secondary CSR for the serving certificate, which requires manual approval. Then, subsequent serving certificate renewal requests are automatically approved by the machine-approver if the Kubelet requests a new certificate with identical parameters. 
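Because client requests come from the node-bootstrapper service account while serving requests come from the nodes themselves, you can separate the two kinds of pending requests on the command line. The following one-liners are a sketch that assumes jq is installed; they only list the pending requests of each kind and do not approve anything:
$ oc get csr -o json | jq -r '.items[] | select(.status == {}) | select(.spec.username | startswith("system:node:")) | .metadata.name'
$ oc get csr -o json | jq -r '.items[] | select(.status == {}) | select(.spec.username | endswith("node-bootstrapper")) | .metadata.name'
The first command lists pending serving-certificate requests and the second lists pending client requests, which can help confirm that you approve them in the required order.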
Note For clusters running on platforms that are not machine API enabled, such as bare metal and other user-provisioned infrastructure, you must implement a method of automatically approving the kubelet serving certificate requests (CSRs). If a request is not approved, then the oc exec , oc rsh , and oc logs commands cannot succeed, because a serving certificate is required when the API server connects to the kubelet. Any operation that contacts the Kubelet endpoint requires this certificate approval to be in place. The method must watch for new CSRs, confirm that the CSR was submitted by the node-bootstrapper service account in the system:node or system:admin groups, and confirm the identity of the node. To approve them individually, run the following command for each valid CSR: USD oc adm certificate approve <csr_name> 1 1 <csr_name> is the name of a CSR from the list of current CSRs. To approve all pending CSRs, run the following command: USD oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve Note Some Operators might not become available until some CSRs are approved. Now that your client requests are approved, you must review the server requests for each machine that you added to the cluster: USD oc get csr Example output NAME AGE REQUESTOR CONDITION csr-bfd72 5m26s system:node:ip-10-0-50-126.us-east-2.compute.internal Pending csr-c57lv 5m26s system:node:ip-10-0-95-157.us-east-2.compute.internal Pending ... If the remaining CSRs are not approved, and are in the Pending status, approve the CSRs for your cluster machines: To approve them individually, run the following command for each valid CSR: USD oc adm certificate approve <csr_name> 1 1 <csr_name> is the name of a CSR from the list of current CSRs. To approve all pending CSRs, run the following command: USD oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs oc adm certificate approve After all client and server CSRs have been approved, the machines have the Ready status. Verify this by running the following command: USD oc get nodes Example output NAME STATUS ROLES AGE VERSION master-0 Ready master 73m v1.25.0 master-1 Ready master 73m v1.25.0 master-2 Ready master 74m v1.25.0 worker-0 Ready worker 11m v1.25.0 worker-1 Ready worker 11m v1.25.0 Note It can take a few minutes after approval of the server CSRs for the machines to transition to the Ready status. Additional information For more information on CSRs, see Certificate Signing Requests . | [
"aws cloudformation create-stack --stack-name <name> \\ 1 --template-body file://<template>.yaml \\ 2 --parameters file://<parameters>.json 3",
"aws cloudformation describe-stacks --stack-name <name>",
"oc get nodes",
"NAME STATUS ROLES AGE VERSION master-0 Ready master 63m v1.25.0 master-1 Ready master 63m v1.25.0 master-2 Ready master 64m v1.25.0",
"oc get csr",
"NAME AGE REQUESTOR CONDITION csr-8b2br 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending csr-8vnps 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending",
"oc adm certificate approve <csr_name> 1",
"oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve",
"oc get csr",
"NAME AGE REQUESTOR CONDITION csr-bfd72 5m26s system:node:ip-10-0-50-126.us-east-2.compute.internal Pending csr-c57lv 5m26s system:node:ip-10-0-95-157.us-east-2.compute.internal Pending",
"oc adm certificate approve <csr_name> 1",
"oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs oc adm certificate approve",
"oc get nodes",
"NAME STATUS ROLES AGE VERSION master-0 Ready master 73m v1.25.0 master-1 Ready master 73m v1.25.0 master-2 Ready master 74m v1.25.0 worker-0 Ready worker 11m v1.25.0 worker-1 Ready worker 11m v1.25.0",
"oc get nodes",
"NAME STATUS ROLES AGE VERSION master-0 Ready master 63m v1.25.0 master-1 Ready master 63m v1.25.0 master-2 Ready master 64m v1.25.0",
"oc get csr",
"NAME AGE REQUESTOR CONDITION csr-8b2br 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending csr-8vnps 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending",
"oc adm certificate approve <csr_name> 1",
"oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve",
"oc get csr",
"NAME AGE REQUESTOR CONDITION csr-bfd72 5m26s system:node:ip-10-0-50-126.us-east-2.compute.internal Pending csr-c57lv 5m26s system:node:ip-10-0-95-157.us-east-2.compute.internal Pending",
"oc adm certificate approve <csr_name> 1",
"oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs oc adm certificate approve",
"oc get nodes",
"NAME STATUS ROLES AGE VERSION master-0 Ready master 73m v1.25.0 master-1 Ready master 73m v1.25.0 master-2 Ready master 74m v1.25.0 worker-0 Ready worker 11m v1.25.0 worker-1 Ready worker 11m v1.25.0",
"ansible-playbook -i inventory.yml create-templates-and-vms.yml",
"ansible-playbook -i inventory.yml workers.yml",
"oc get csr -ojson | jq -r '.items[] | select(.status == {} ) | .metadata.name' | xargs oc adm certificate approve",
"sudo coreos-installer install --ignition-url=http://<HTTP_server>/<node_type>.ign <device> --ignition-hash=sha512-<digest> 1 2",
"sudo coreos-installer install --ignition-url=http://192.168.1.2:80/installation_directory/bootstrap.ign /dev/sda --ignition-hash=sha512-a5a2d43879223273c9b60af66b44202a1d1248fc01cf156c46d4a79f552b6bad47bc8cc78ddf0116e80c59d2ea9e32ba53bc807afbca581aa059311def2c3e3b",
"DEFAULT pxeboot TIMEOUT 20 PROMPT 0 LABEL pxeboot KERNEL http://<HTTP_server>/rhcos-<version>-live-kernel-<architecture> 1 APPEND initrd=http://<HTTP_server>/rhcos-<version>-live-initramfs.<architecture>.img coreos.inst.install_dev=/dev/sda coreos.inst.ignition_url=http://<HTTP_server>/worker.ign coreos.live.rootfs_url=http://<HTTP_server>/rhcos-<version>-live-rootfs.<architecture>.img 2",
"kernel http://<HTTP_server>/rhcos-<version>-live-kernel-<architecture> initrd=main coreos.inst.install_dev=/dev/sda coreos.inst.ignition_url=http://<HTTP_server>/worker.ign coreos.live.rootfs_url=http://<HTTP_server>/rhcos-<version>-live-rootfs.<architecture>.img 1 initrd --name main http://<HTTP_server>/rhcos-<version>-live-initramfs.<architecture>.img 2",
"oc get nodes",
"NAME STATUS ROLES AGE VERSION master-0 Ready master 63m v1.25.0 master-1 Ready master 63m v1.25.0 master-2 Ready master 64m v1.25.0",
"oc get csr",
"NAME AGE REQUESTOR CONDITION csr-8b2br 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending csr-8vnps 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending",
"oc adm certificate approve <csr_name> 1",
"oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve",
"oc get csr",
"NAME AGE REQUESTOR CONDITION csr-bfd72 5m26s system:node:ip-10-0-50-126.us-east-2.compute.internal Pending csr-c57lv 5m26s system:node:ip-10-0-95-157.us-east-2.compute.internal Pending",
"oc adm certificate approve <csr_name> 1",
"oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs oc adm certificate approve",
"oc get nodes",
"NAME STATUS ROLES AGE VERSION master-0 Ready master 73m v1.25.0 master-1 Ready master 73m v1.25.0 master-2 Ready master 74m v1.25.0 worker-0 Ready worker 11m v1.25.0 worker-1 Ready worker 11m v1.25.0"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.12/html/machine_management/managing-user-provisioned-infrastructure-manually |
20.16.15. Guest Virtual Machine Interfaces | 20.16.15. Guest Virtual Machine Interfaces A character device presents itself to the guest virtual machine as one of the following types. To set the parallel port, use a management tool to make the following change to the domain XML ... <devices> <parallel type='pty'> <source path='/dev/pts/2'/> <target port='0'/> </parallel> </devices> ... Figure 20.60. Guest virtual machine interface Parallel Port <target> can have a port attribute, which specifies the port number. Ports are numbered starting from 0. There are usually 0, 1 or 2 parallel ports. To set the serial port use a management tool to make the following change to the domain XML: ... <devices> <serial type='pty'> <source path='/dev/pts/3'/> <target port='0'/> </serial> </devices> ... Figure 20.61. Guest virtual machine interface serial port <target> can have a port attribute, which specifies the port number. Ports are numbered starting from 0. There are usually 0, 1 or 2 serial ports. There is also an optional type attribute, which has two choices for its value, one is isa-serial , the other is usb-serial . If type is missing, isa-serial will be used by default. For usb-serial an optional sub-element <address> with type='usb' can tie the device to a particular controller, documented above. The <console> element is used to represent interactive consoles. Depending on the type of guest virtual machine in use, the consoles might be paravirtualized devices, or they might be a clone of a serial device, according to the following rules: If no targetType attribute is set, then the default device type is according to the hypervisor's rules. The default type will be added when re-querying the XML fed into libvirt. For fully virtualized guest virtual machines, the default device type will usually be a serial port. If the targetType attribute is serial , and if no <serial> element exists, the console element will be copied to the <serial> element. If a <serial> element does already exist, the console element will be ignored. If the targetType attribute is not serial , it will be treated normally. Only the first <console> element may use a targetType of serial . Secondary consoles must all be paravirtualized. On s390, the console element may use a targetType of sclp or sclplm (line mode). SCLP is the native console type for s390. There's no controller associated to SCLP consoles. In the example below, a virtio console device is exposed in the guest virtual machine as /dev/hvc[0-7] (for more information, see http://fedoraproject.org/wiki/Features/VirtioSerial): ... <devices> <console type='pty'> <source path='/dev/pts/4'/> <target port='0'/> </console> <!-- KVM virtio console --> <console type='pty'> <source path='/dev/pts/5'/> <target type='virtio' port='0'/> </console> </devices> ... ... <devices> <!-- KVM s390 sclp console --> <console type='pty'> <source path='/dev/pts/1'/> <target type='sclp' port='0'/> </console> </devices> ... Figure 20.62. Guest virtual machine interface - virtio console device If the console is presented as a serial port, the <target> element has the same attributes as for a serial port. There is usually only one console. | [
"<devices> <parallel type='pty'> <source path='/dev/pts/2'/> <target port='0'/> </parallel> </devices>",
"<devices> <serial type='pty'> <source path='/dev/pts/3'/> <target port='0'/> </serial> </devices>",
"<devices> <console type='pty'> <source path='/dev/pts/4'/> <target port='0'/> </console> <!-- KVM virtio console --> <console type='pty'> <source path='/dev/pts/5'/> <target type='virtio' port='0'/> </console> </devices> <devices> <!-- KVM s390 sclp console --> <console type='pty'> <source path='/dev/pts/1'/> <target type='sclp' port='0'/> </console> </devices>"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/virtualization_administration_guide/sub-section-libvirt-dom-xml-devices-guest-interface |
6.13. Exporting and Importing Virtual Machines and Templates | 6.13. Exporting and Importing Virtual Machines and Templates Note The export storage domain is deprecated. Storage data domains can be unattached from a data center and imported to another data center in the same environment, or in a different environment. Virtual machines, floating virtual disks, and templates can then be uploaded from the imported storage domain to the attached data center. See the Importing Existing Storage Domains section in the Red Hat Virtualization Administration Guide for information on importing storage domains. You can export virtual machines and templates from, and import them to, data centers in the same or different Red Hat Virtualization environment. You can export or import virtual machines by using an export domain, a data domain, or by using a Red Hat Virtualization host. When you export or import a virtual machine or template, properties including basic details such as the name and description, resource allocation, and high availability settings of that virtual machine or template are preserved. The permissions and user roles of virtual machines and templates are included in the OVF files, so that when a storage domain is detached from one data center and attached to another, the virtual machines and templates can be imported with their original permissions and user roles. In order for permissions to be registered successfully, the users and roles related to the permissions of the virtual machines or templates must exist in the data center before the registration process. You can also use the V2V feature to import virtual machines from other virtualization providers, such as RHEL 5 Xen or VMware, or import Windows virtual machines. V2V converts virtual machines so that they can be hosted by Red Hat Virtualization. For more information on installing and using V2V, see Converting Virtual Machines from Other Hypervisors to KVM with virt-v2v . Important Virtual machines must be shut down before being imported. 6.13.1. Exporting a Virtual Machine to the Export Domain Export a virtual machine to the export domain so that it can be imported into a different data center. Before you begin, the export domain must be attached to the data center that contains the virtual machine to be exported. Exporting a Virtual Machine to the Export Domain Click Compute Virtual Machines and select a virtual machine. Click More Actions ( ), then click Export to Export Domain . Optionally, select the following check boxes in the Export Virtual Machine window: Force Override : overrides existing images of the virtual machine on the export domain. Collapse Snapshots : creates a single export volume per disk. This option removes snapshot restore points and includes the template in a template-based virtual machine, and removes any dependencies a virtual machine has on a template. For a virtual machine that is dependent on a template, either select this option, export the template with the virtual machine, or make sure the template exists in the destination data center. Note When you create a virtual machine from a template by clicking Compute Templates and clicking New VM , you will see two storage allocation options in the Storage Allocation section in the Resource Allocation tab: If Clone is selected, the virtual machine is not dependent on the template. The template does not have to exist in the destination data center.
If Thin is selected, the virtual machine is dependent on the template, so the template must exist in the destination data center or be exported with the virtual machine. Alternatively, select the Collapse Snapshots check box to collapse the template disk and virtual disk into a single disk. To check which option was selected, click a virtual machine's name and click the General tab in the details view. Click OK . The export of the virtual machine begins. The virtual machine displays in Compute Virtual Machines with an Image Locked status while it is exported. Depending on the size of your virtual machine hard disk images, and your storage hardware, this can take up to an hour. Click the Events tab to view progress. When complete, the virtual machine has been exported to the export domain and displays in the VM Import tab of the export domain's details view. 6.13.2. Exporting a Virtual Machine to a Data Domain You can export a virtual machine to a data domain to store a clone of the virtual machine as a backup. When you export a virtual machine that is dependent on a template, the target storage domain should include that template. Note When you create a virtual machine from a template, you can choose from either of two storage allocation options: Clone : The virtual machine is not dependent on the template. The template does not have to exist in the destination storage domain. Thin : The virtual machine is dependent on the template, so the template must exist in the destination storage domain. To check which option is selected, click a virtual machine's name and click the General tab in the details view. Prerequisites The data domain is attached to a data center. The virtual machine is powered off. Procedure Click Compute Virtual Machines and select a virtual machine. Click Export . Specify a name for the exported virtual machine. Select a target storage domain from the Storage domain pop-up menu. (Optional) Check Collapse snapshots to export the virtual machine without any snapshots. Click OK . The Manager clones the virtual machine, including all its disks, to the target domain. Note When you move a disk from one type of data domain to another, the disk format changes accordingly. For example, if the disk is on an NFS data domain, and it is in sparse format, then if you move the disk to an iSCSI domain its format changes to preallocated. This is different from using an export domain, because an export domain is NFS. The virtual machine appears with an Image Locked status while it is exported. Depending on the size of your virtual machine hard disk images, and your storage hardware, this can take up to an hour. Click the Events tab to view progress. When complete, the virtual machine has been exported to the data domain and appears in the list of virtual machines. Additional resources Creating a Virtual Machine Based on a Template in the Virtual Machine Management Guide 6.13.3. Importing a Virtual Machine from the Export Domain You have a virtual machine on an export domain. Before the virtual machine can be imported to a new data center, the export domain must be attached to the destination data center. Importing a Virtual Machine into the Destination Data Center Click Storage Domains and select the export domain. The export domain must have a status of Active . Click the export domain's name to go to the details view. Click the VM Import tab to list the available virtual machines to import. Select one or more virtual machines to import and click Import . Select the Target Cluster .
Select the Collapse Snapshots check box to remove snapshot restore points and include templates in template-based virtual machines. Click the virtual machine to be imported and click the Disks sub-tab. From this tab, you can use the Allocation Policy and Storage Domain drop-down lists to select whether the disk used by the virtual machine will be thinly provisioned or preallocated, and can also select the storage domain on which the disk will be stored. An icon is also displayed to indicate which of the disks to be imported acts as the boot disk for that virtual machine. Click OK to import the virtual machines. The Import Virtual Machine Conflict window opens if the virtual machine exists in the virtualized environment. Choose one of the following radio buttons: Don't import Import as cloned and enter a unique name for the virtual machine in the New Name field. Optionally select the Apply to all check box to import all duplicated virtual machines with the same suffix, and then enter a suffix in the Suffix to add to the cloned VMs field. Click OK . Important During a single import operation, you can only import virtual machines that share the same architecture. If any of the virtual machines to be imported have a different architecture to that of the other virtual machines to be imported, a warning will display and you will be prompted to change your selection so that only virtual machines with the same architecture will be imported. 6.13.4. Importing a Virtual Machine from a Data Domain You can import a virtual machine into one or more clusters from a data storage domain. Prerequisite If you are importing a virtual machine from an imported data storage domain, the imported storage domain must be attached to a data center and activated. Procedure Click Storage Domains . Click the imported storage domain's name. This opens the details view. Click the VM Import tab. Select one or more virtual machines to import. Click Import . For each virtual machine in the Import Virtual Machine(s) window, ensure the correct target cluster is selected in the Cluster list. Map external virtual machine vNIC profiles to profiles that are present on the target cluster(s): Click vNic Profiles Mapping . Select the vNIC profile to use from the Target vNic Profile drop-down list. If multiple target clusters are selected in the Import Virtual Machine(s) window, select each target cluster in the Target Cluster drop-down list and ensure the mappings are correct. Click OK . If a MAC address conflict is detected, an exclamation mark appears to the name of the virtual machine. Mouse over the icon to view a tooltip displaying the type of error that occurred. Select the Reassign Bad MACs check box to reassign new MAC addresses to all problematic virtual machines. Alternatively, you can select the Reassign check box per virtual machine. Note If there are no available addresses to assign, the import operation will fail. However, in the case of MAC addresses that are outside the cluster's MAC address pool range, it is possible to import the virtual machine without reassigning a new MAC address. Click OK . The imported virtual machines no longer appear in the list under the VM Import tab. 6.13.5. Importing a Virtual Machine from a VMware Provider Import virtual machines from a VMware vCenter provider to your Red Hat Virtualization environment. 
You can import from a VMware provider by entering its details in the Import Virtual Machine(s) window during each import operation, or you can add the VMware provider as an external provider, and select the preconfigured provider during import operations. To add an external provider, see Adding a VMware Instance as a Virtual Machine Provider . Red Hat Virtualization uses V2V to import VMware virtual machines. For OVA files, the only disk format Red Hat Virtualization supports is VMDK. Note The virt-v2v package is not available on the ppc64le architecture and these hosts cannot be used as proxy hosts. Note If the import fails, refer to the relevant log file in /var/log/vdsm/import/ and to /var/log/vdsm/vdsm.log on the proxy host for details. Prerequisites The virt-v2v package must be installed on at least one host, referred to in this procedure as the proxy host. The virt-v2v package is available by default on Red Hat Virtualization Hosts and is installed on Red Hat Enterprise Linux hosts as a dependency of VDSM when added to the Red Hat Virtualization environment. Red Hat Enterprise Linux hosts must be Red Hat Enterprise Linux 7.2 or later. At least one data and one ISO storage domain are connected to the data center. Note You can only migrate to shared storage, such as NFS, iSCSI, or FCP. Local storage is not supported. Although the ISO storage domain has been deprecated, it is required for migration. The virtio-win _version .iso image file for Windows virtual machines is uploaded to the ISO storage domain. This image includes the guest tools that are required for migrating Windows virtual machines. The virtual machine must be shut down before being imported. Starting the virtual machine through VMware during the import process can result in data corruption. An import operation can only include virtual machines that share the same architecture. If any virtual machine to be imported has a different architecture, a warning appears and you are prompted to change your selection to include only virtual machines with the same architecture. Procedure Click Compute Virtual Machines . Click More Actions ( ) and select Import . This opens the Import Virtual Machine(s) window. Select VMware from the Source list. If you have configured a VMware provider as an external provider, select it from the External Provider list. Verify that the provider credentials are correct. If you did not specify a destination data center or proxy host when configuring the external provider, select those options now. If you have not configured a VMware provider, or want to import from a new VMware provider, provide the following details: Select from the list the Data Center in which the virtual machine will be available. Enter the IP address or fully qualified domain name of the VMware vCenter instance in the vCenter field. Enter the IP address or fully qualified domain name of the host from which the virtual machines will be imported in the ESXi field. Enter the name of the data center and the cluster in which the specified ESXi host resides in the Data Center field. If you have exchanged the SSL certificate between the ESXi host and the Manager, leave Verify server's SSL certificate checked to verify the ESXi host's certificate. If not, clear the option. Enter the Username and Password for the VMware vCenter instance. The user must have access to the VMware data center and ESXi host on which the virtual machines reside. 
Select a host in the chosen data center with virt-v2v installed to serve as the Proxy Host during virtual machine import operations. This host must also be able to connect to the network of the VMware vCenter external provider. Click Load to list the virtual machines on the VMware provider that can be imported. Select one or more virtual machines from the Virtual Machines on Source list, and use the arrows to move them to the Virtual Machines to Import list. Click . Note If a virtual machine's network device uses the driver type e1000 or rtl8139, the virtual machine will use the same driver type after it has been imported to Red Hat Virtualization. If required, you can change the driver type to VirtIO manually after the import. To change the driver type after a virtual machine has been imported, see Editing network interfaces . If the network device uses driver types other than e1000 or rtl8139, the driver type is changed to VirtIO automatically during the import. The Attach VirtIO-drivers option allows the VirtIO drivers to be injected to the imported virtual machine files so that when the driver is changed to VirtIO, the device will be properly detected by the operating system. Select the Cluster in which the virtual machines will reside. Select a CPU Profile for the virtual machines. Select the Collapse Snapshots check box to remove snapshot restore points and include templates in template-based virtual machines. Select the Clone check box to change the virtual machine name and MAC addresses, and clone all disks, removing all snapshots. If a virtual machine appears with a warning symbol beside its name or has a tick in the VM in System column, you must clone the virtual machine and change its name. Click each virtual machine to be imported and click the Disks sub-tab. Use the Allocation Policy and Storage Domain lists to select whether the disk used by the virtual machine will be thinly provisioned or preallocated, and select the storage domain on which the disk will be stored. An icon is also displayed to indicate which of the disks to be imported acts as the boot disk for that virtual machine. If you selected the Clone check box, change the name of the virtual machine in the General sub-tab. Click OK to import the virtual machines. The CPU type of the virtual machine must be the same as the CPU type of the cluster into which it is being imported. To view the cluster's CPU Type in the Administration Portal: Click Compute Clusters . Select a cluster. Click Edit . Click the General tab. If the CPU type of the virtual machine is different, configure the imported virtual machine's CPU type: Click Compute Virtual Machines . Select the virtual machine. Click Edit . Click the System tab. Click the Advanced Parameters arrow. Specify the Custom CPU Type and click OK . 6.13.6. Exporting a Virtual Machine to a Host You can export a virtual machine to a specific path or mounted NFS shared storage on a host in the Red Hat Virtualization data center. The export will produce an Open Virtual Appliance (OVA) package. Exporting a Virtual Machine to a Host Click Compute Virtual Machines and select a virtual machine. Click More Actions ( ), then click Export to OVA . Select the host from the Host drop-down list. Enter the absolute path to the export directory in the Directory field, including the trailing slash. For example: /images2/ova/ Optionally change the default name of the file in the Name field. Click OK The status of the export can be viewed in the Events tab. 6.13.7. 
Importing a Virtual Machine from a Host Import an Open Virtual Appliance (OVA) file into your Red Hat Virtualization environment. You can import the file from any Red Hat Virtualization Host in the data center. Important Currently, only Red Hat Virtualization and OVAs created by VMware can be imported. KVM and Xen are not supported. The import process uses virt-v2v . Only virtual machines running operating systems compatible with virt-v2v can be successfully imported. See Converting virtual machines from other hypervisors to KVM with virt-v2v in RHEL 7 and RHEL 8 for a current list of compatible operating systems. Importing an OVA File Copy the OVA file to a host in your cluster, in a file system location such as /var/tmp . Note The location can be a local directory or a remote NFS mount, as long as it is not in the /root directory or subdirectories. Ensure that it has sufficient space. Ensure that the OVA file has permissions allowing read/write access to the qemu user (UID 36) and the kvm group (GID 36): # chown 36:36 path_to_OVA_file/file.OVA Click Compute Virtual Machines . Click More Actions ( ) and select Import . This opens the Import Virtual Machine(s) window. Select Virtual Appliance (OVA) from the Source list. Select a host from the Host list. In the Path field, specify the absolute path of the OVA file. Click Load to list the virtual machine to be imported. Select the virtual machine from the Virtual Machines on Source list, and use the arrows to move it to the Virtual Machines to Import list. Click . Select the Storage Domain for the virtual machine. Select the Target Cluster where the virtual machines will reside. Select the CPU Profile for the virtual machines. Select the Allocation Policy for the virtual machines. Optionally, select the Attach VirtIO-Drivers check box and select the appropriate image on the list to add VirtIO drivers. Select the virtual machine, and on the General tab select the Operating System . On the Network Interfaces tab, select the Network Name and Profile Name . Click the Disks tab to view the Alias , Virtual Size , and Actual Size of the virtual machine. Click OK to import the virtual machines. 6.13.8. Importing a virtual machine from a RHEL 5 Xen host Import virtual machines from Xen on Red Hat Enterprise Linux 5 to your Red Hat Virtualization environment. Red Hat Virtualization uses V2V to import QCOW2 or raw virtual machine disk formats. The virt-v2v package must be installed on at least one host (referred to in this procedure as the proxy host). The virt-v2v package is available by default on Red Hat Virtualization Hosts (RHVH) and is installed on Red Hat Enterprise Linux hosts as a dependency of VDSM when added to the Red Hat Virtualization environment. Red Hat Enterprise Linux hosts must be Red Hat Enterprise Linux 7.2 or later. Warning If you are importing a Windows virtual machine from a RHEL 5 Xen host and you are using VirtIO devices, install the VirtIO drivers before importing the virtual machine. If the drivers are not installed, the virtual machine may not boot after import. The VirtIO drivers can be installed from the virtio-win _version .iso or the RHV-toolsSetup _version .iso . See Installing the Guest Agents and Drivers on Windows for details. If you are not using VirtIO drivers, review the configuration of the virtual machine before the first boot to ensure that VirtIO devices are not being used.
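One way to carry out that review is to inspect the guest definition on the RHEL 5 Xen host before you import it. The following is a minimal sketch, not part of the official procedure; it assumes the guest is defined in libvirt, or has a configuration file under /etc/xen/, and it uses windowsvm as a placeholder for the virtual machine name:
# virsh dumpxml windowsvm | grep -i virtio
# grep -i virtio /etc/xen/windowsvm
If either command prints entries such as bus='virtio' or model type='virtio', install the VirtIO drivers first, or reconfigure the guest to use emulated IDE and e1000 devices before you import it.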
Note The virt-v2v package is not available on the ppc64le architecture and these hosts cannot be used as proxy hosts. Important An import operation can only include virtual machines that share the same architecture. If any virtual machine to be imported has a different architecture, a warning appears and you are prompted to change your selection to include only virtual machines with the same architecture. Note If the import fails, refer to the relevant log file in /var/log/vdsm/import/ and to /var/log/vdsm/vdsm.log on the proxy host for details. Procedure To import a virtual machine from RHEL 5 Xen, follow these steps: Shut down the virtual machine. Starting the virtual machine through Xen during the import process can result in data corruption. Enable public key authentication between the proxy host and the RHEL 5 Xen host: Log in to the proxy host and generate SSH keys for the vdsm user. # sudo -u vdsm ssh-keygen Copy the vdsm user's public key to the RHEL 5 Xen host. # sudo -u vdsm ssh-copy-id root@ xenhost.example.com Log in to the RHEL 5 Xen host to verify that the login works correctly. # sudo -u vdsm ssh root@ xenhost.example.com Log in to the Administration Portal. Click Compute Virtual Machines . Click More Actions ( ) and select Import . This opens the Import Virtual Machine(s) window. Select the Data Center that contains the proxy host. Select XEN (via RHEL) from the Source drop-down list. Optionally, select a RHEL 5 Xen External Provider from the drop-down list. The URI will be pre-filled with the correct URI. See Adding a RHEL 5 Xen Host as a Virtual Machine Provider in the Administration Guide for more information. Enter the URI of the RHEL 5 Xen host. The required format is pre-filled; you must replace <hostname> with the host name of the RHEL 5 Xen host. Select the proxy host from the Proxy Host drop-down list. Click Load to list the virtual machines on the RHEL 5 Xen host that can be imported. Select one or more virtual machines from the Virtual Machines on Source list, and use the arrows to move them to the Virtual Machines to Import list. Note Due to current limitations, Xen virtual machines with block devices do not appear in the Virtual Machines on Source list. They must be imported manually. See Importing Block Based Virtual Machine from Xen host . Click . Select the Cluster in which the virtual machines will reside. Select a CPU Profile for the virtual machines. Use the Allocation Policy and Storage Domain lists to select whether the disk used by the virtual machine will be thinly provisioned or preallocated, and select the storage domain on which the disk will be stored. Note The target storage domain must be a file-based domain. Due to current limitations, specifying a block-based domain causes the V2V operation to fail. If a virtual machine appears with a warning symbol beside its name, or has a tick in the VM in System column, select the Clone check box to clone the virtual machine. Note Cloning a virtual machine changes its name and MAC addresses and clones all of its disks, removing all snapshots. Click OK to import the virtual machines. The CPU type of the virtual machine must be the same as the CPU type of the cluster into which it is being imported. To view the cluster's CPU Type in the Administration Portal: Click Compute Clusters . Select a cluster. Click Edit . Click the General tab. If the CPU type of the virtual machine is different, configure the imported virtual machine's CPU type: Click Compute Virtual Machines . Select the virtual machine. 
Click Edit . Click the System tab. Click the Advanced Parameters arrow. Specify the Custom CPU Type and click OK . Importing a Block-Based Virtual Machine from a RHEL 5 Xen Host Enable public key authentication between the proxy host and the RHEL 5 Xen host: Log in to the proxy host and generate SSH keys for the vdsm user. # sudo -u vdsm ssh-keygen Copy the vdsm user's public key to the RHEL 5 Xen host. # sudo -u vdsm ssh-copy-id root@ xenhost.example.com Log in to the RHEL 5 Xen host to verify that the login works correctly. # sudo -u vdsm ssh root@ xenhost.example.com Attach an export domain. See Attaching an Existing Export Domain to a Data Center in the Administration Guide for details. On the proxy host, copy the virtual machine from the RHEL 5 Xen host: # virt-v2v-copy-to-local -ic xen+ssh://root@ xenhost.example.com vmname Convert the virtual machine to libvirt XML and move the file to your export domain: # virt-v2v -i libvirtxml vmname .xml -o rhev -of raw -os storage.example.com:/exportdomain In the Administration Portal, click Storage Domains , click the export domain's name, and click the VM Import tab in the details view to verify that the virtual machine is in your export domain. Import the virtual machine into the destination data domain. See Importing the virtual machine from the export domain for details. 6.13.9. Importing a Virtual Machine from a KVM Host Import virtual machines from KVM to your Red Hat Virtualization environment. Red Hat Virtualization converts KVM virtual machines to the correct format before they are imported. You must enable public key authentication between the KVM host and at least one host in the destination data center (this host is referred to in the following procedure as the proxy host). Warning The virtual machine must be shut down before being imported. Starting the virtual machine through KVM during the import process can result in data corruption. Important An import operation can only include virtual machines that share the same architecture. If any virtual machine to be imported has a different architecture, a warning appears and you are prompted to change your selection to include only virtual machines with the same architecture. Note If the import fails, refer to the relevant log file in /var/log/vdsm/import/ and to /var/log/vdsm/vdsm.log on the proxy host for details. Importing a Virtual Machine from KVM Enable public key authentication between the proxy host and the KVM host: Log in to the proxy host and generate SSH keys for the vdsm user. # sudo -u vdsm ssh-keygen Copy the vdsm user's public key to the KVM host. The proxy host's known_hosts file will also be updated to include the host key of the KVM host. # sudo -u vdsm ssh-copy-id root@ kvmhost.example.com Log in to the KVM host to verify that the login works correctly. # sudo -u vdsm ssh root@ kvmhost.example.com Log in to the Administration Portal. Click Compute Virtual Machines . Click More Actions ( ) and select Import . This opens the Import Virtual Machine(s) window. Select the Data Center that contains the proxy host. Select KVM (via Libvirt) from the Source drop-down list. Optionally, select a KVM provider External Provider from the drop-down list. The URI will be pre-filled with the correct URI. See Adding a KVM Host as a Virtual Machine Provider in the Administration Guide for more information. Enter the URI of the KVM host in the following format: qemu+ssh://root@ kvmhost.example.com /system Keep the Requires Authentication check box selected. 
Enter root in the Username field. Enter the Password of the KVM host's root user. Select the Proxy Host from the drop-down list. Click Load to list the virtual machines on the KVM host that can be imported. Select one or more virtual machines from the Virtual Machines on Source list, and use the arrows to move them to the Virtual Machines to Import list. Click . Select the Cluster in which the virtual machines will reside. Select a CPU Profile for the virtual machines. Optionally, select the Collapse Snapshots check box to remove snapshot restore points and include templates in template-based virtual machines. Optionally, select the Clone check box to change the virtual machine name and MAC addresses, and clone all disks, removing all snapshots. If a virtual machine appears with a warning symbol beside its name or has a tick in the VM in System column, you must clone the virtual machine and change its name. Click each virtual machine to be imported and click the Disks sub-tab. Use the Allocation Policy and Storage Domain lists to select whether the disk used by the virtual machine will be thin provisioned or preallocated, and select the storage domain on which the disk will be stored. An icon is also displayed to indicate which of the disks to be imported acts as the boot disk for that virtual machine. See Virtual Disk Storage Allocation Policies in the Technical Reference for more information. If you selected the Clone check box, change the name of the virtual machine in the General tab. Click OK to import the virtual machines. The CPU type of the virtual machine must be the same as the CPU type of the cluster into which it is being imported. To view the cluster's CPU Type in the Administration Portal: Click Compute Clusters . Select a cluster. Click Edit . Click the General tab. If the CPU type of the virtual machine is different, configure the imported virtual machine's CPU type: Click Compute Virtual Machines . Select the virtual machine. Click Edit . Click the System tab. Click the Advanced Parameters arrow. Specify the Custom CPU Type and click OK . 6.13.10. Importing a Red Hat KVM Guest Image You can import a Red Hat-provided KVM virtual machine image. This image is a virtual machine snapshot with a preconfigured instance of Red Hat Enterprise Linux installed. You can configure this image with the cloud-init tool, and use it to provision new virtual machines. This eliminates the need to install and configure the operating system and provides virtual machines that are ready for use. Procedure Download the most recent KVM virtual machine image from the Download Red Hat Enterprise Linux list, in the Product Software tab. Upload the virtual machine image using the Manager or the REST API. See Uploading Images to a Data Storage Domain in the Administration Guide . Create a new virtual machine and attach the uploaded disk image to it. See Creating a Linux virtual machine . Optionally, use cloud-init to configure the virtual machine. See Using Cloud-Init to Automate the Configuration of Virtual Machines for details. Optionally, create a template from the virtual machine. You can generate new virtual machines from this template. See Templates for information about creating templates and generating virtual machines from templates. | [
"chown 36:36 path_to_OVA_file/file.OVA",
"sudo -u vdsm ssh-keygen",
"sudo -u vdsm ssh-copy-id root@ xenhost.example.com",
"sudo -u vdsm ssh root@ xenhost.example.com",
"sudo -u vdsm ssh-keygen",
"sudo -u vdsm ssh-copy-id root@ xenhost.example.com",
"sudo -u vdsm ssh root@ xenhost.example.com",
"virt-v2v-copy-to-local -ic xen+ssh://root@ xenhost.example.com vmname",
"virt-v2v -i libvirtxml vmname .xml -o rhev -of raw -os storage.example.com:/exportdomain",
"sudo -u vdsm ssh-keygen",
"sudo -u vdsm ssh-copy-id root@ kvmhost.example.com",
"sudo -u vdsm ssh root@ kvmhost.example.com",
"qemu+ssh://root@ kvmhost.example.com /system"
] | https://docs.redhat.com/en/documentation/red_hat_virtualization/4.4/html/virtual_machine_management_guide/sect-Exporting_and_Importing_Virtual_Machines_and_Templates |
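The Manager drives the VMware conversion described above through VDSM and virt-v2v on the proxy host, so you do not normally run virt-v2v yourself. For troubleshooting, a roughly equivalent standalone conversion from vCenter into an export storage domain looks like the following sketch. The vpx:// path (data center, cluster, and ESXi host), the credentials, and the export path are placeholders, and virt-v2v prompts for the vCenter password unless you supply one with the --password-file option:
# virt-v2v -ic 'vpx://administrator@vcenter.example.com/Datacenter/Cluster/esxi1.example.com?no_verify=1' vmname -o rhev -of raw -os storage.example.com:/exportdomain
The no_verify=1 parameter skips SSL certificate verification, which corresponds to clearing the Verify server's SSL certificate option in the Import Virtual Machine(s) window.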
13.2. Creating Standard Indexes | 13.2. Creating Standard Indexes This section describes how to create presence, equality, approximate, substring, and international indexes for specific attributes using the command line and the web console. Note When you create a new index type, Directory Server uses this default index as a template for each new database that will be created in future. If you update the default index, the updated settings are not applied to existing databases. To apply a new index to an existing database, use the dsctl db2index command or a cn=index,cn=tasks task, as described in Section 13.3, "Creating New Indexes to Existing Databases" . Section 13.2.2, "Creating Indexes Using the Web Console" Section 13.2.1, "Creating Indexes Using the Command Line" 13.2.1. Creating Indexes Using the Command Line Note You cannot create new system indexes because system indexes are hard-coded in Directory Server. Use ldapmodify to add the new index attributes to your directory. To create a new index that will become one of the default indexes, add the new index attributes to the cn=default indexes,cn=config,cn=ldbm database,cn=plugins,cn=config entry. To create a new index for a particular database, add it to the cn=index,cn= database_name ,cn=ldbm database,cn=plugins,cn=config entry, where cn= database_name corresponds to the name of the database. Note Avoid creating entries under cn=config in the dse.ldif file. The cn=config entry in the dse.ldif configuration file is not stored in the same highly scalable database as regular entries. As a result, if many entries, particularly entries that are likely to be updated frequently, are stored under cn=config , performance will probably suffer. Although we recommend you do not store simple user entries under cn=config for performance reasons, it can be useful to store special user entries such as the Directory Manager entry or replication manager (supplier bind DN) entry under cn=config since this centralizes configuration information. For information on the LDIF update statements required to add entries, see Section 3.1.4, "Updating a Directory Entry" . For example, to create presence, equality, and substring indexes for the sn (surname) attribute in the Example1 database: Run ldapmodify and add the LDIF entry for the new indexes: The cn attribute contains the name of the attribute to index, in this example the sn attribute. The entry is a member of the nsIndex object class. The nsSystemIndex attribute is false , indicating that the index is not essential to Directory Server operations. The multi-valued nsIndexType attribute specifies the presence ( pres ), equality ( eq ) and substring ( sub ) indexes. Each keyword has to be entered on a separate line. The nsMatchingRule attribute in the example specifies the OID of the Bulgarian collation order; the matching rule can indicate any possible value match, such as languages or other formats like date or integer. You can use the keyword none in the nsIndexType attribute to specify that no indexes are to be maintained for the attribute. This example temporarily disables the sn indexes on the Example1 database by changing the nsIndexType to none : For a complete list of matching rules and their OIDs, see Section 14.3.4, "Using Matching Rules" , and for the index configuration attributes, see the Red Hat Directory Server Configuration, Command, and File Reference . Note Always use the attribute's primary name (not the attribute's alias) when creating indexes. 
The primary name of the attribute is the first name listed for the attribute in the schema; for example, uid for the user ID attribute. 13.2.2. Creating Indexes Using the Web Console To create presence, equality, approximate, substring, or international indexes: Open the Directory Server user interface in the web console. See Section 1.4, "Logging Into Directory Server Using the Web Console" . Select the instance. Open the Database menu. Select the suffix entry. Open the Indexes tab. Click the Add Index button. Select the attribute to index, the type of index, and optionally a matching rule. Click Create Index . | [
"ldapmodify -a -D \"cn=Directory Manager\" -W -p 389 -h server.example.com -x dn: cn=sn,cn=index,cn=Example1,cn=ldbm database,cn=plugins,cn=config changetype: add objectClass:top objectClass:nsIndex cn:sn nsSystemIndex:false nsIndexType:pres nsIndexType:eq nsIndexType:sub nsMatchingRule:2.16.840.1.113730.3.3.2.3.1",
"dn: cn=sn,cn=index,cn=Example1,cn=ldbm database,cn=plugins,cn=config objectClass:top objectClass:nsIndex cn:sn nsSystemIndex:false nsIndexType:none"
] | https://docs.redhat.com/en/documentation/red_hat_directory_server/11/html/administration_guide/Managing_Indexes-Creating_Indexes |
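After you add an index entry such as the sn example above, you can read it back to confirm that the index types and matching rule were applied. This is a small verification sketch that reuses the connection details from the example; adjust the host, port, and backend name for your environment:
# ldapsearch -D "cn=Directory Manager" -W -p 389 -h server.example.com -x -s base -b "cn=sn,cn=index,cn=Example1,cn=ldbm database,cn=plugins,cn=config" "(objectclass=*)" nsIndexType nsMatchingRule
Remember that the new index only covers entries that are added or modified afterwards; reindex the existing entries as described in Section 13.3, "Creating New Indexes to Existing Databases".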
Chapter 45. Understanding the eBPF networking features in RHEL 8 | Chapter 45. Understanding the eBPF networking features in RHEL 8 The extended Berkeley Packet Filter (eBPF) is an in-kernel virtual machine that allows code execution in the kernel space. This code runs in a restricted sandbox environment with access only to a limited set of functions. In networking, you can use eBPF to complement or replace kernel packet processing. Depending on the hook you use, eBPF programs have, for example: Read and write access to packet data and metadata Can look up sockets and routes Can set socket options Can redirect packets 45.1. Overview of networking eBPF features in RHEL 8 You can attach extended Berkeley Packet Filter (eBPF) networking programs to the following hooks in RHEL: eXpress Data Path (XDP): Provides early access to received packets before the kernel networking stack processes them. tc eBPF classifier with direct-action flag: Provides powerful packet processing on ingress and egress. Control Groups version 2 (cgroup v2): Enables filtering and overriding socket-based operations performed by programs in a control group. Socket filtering: Enables filtering of packets received from sockets. This feature was also available in the classic Berkeley Packet Filter (cBPF), but has been extended to support eBPF programs. Stream parser: Enables splitting up streams to individual messages, filtering, and redirecting them to sockets. SO_REUSEPORT socket selection: Provides a programmable selection of a receiving socket from a reuseport socket group. Flow dissector: Enables overriding the way the kernel parses packet headers in certain situations. TCP congestion control callbacks: Enables implementing a custom TCP congestion control algorithm. Routes with encapsulation: Enables creating custom tunnel encapsulation. Note that Red Hat does not support all of the eBPF functionality that is available in RHEL and described here. For further details and the support status of the individual hooks, see the RHEL 8 Release Notes and the following overview. XDP You can attach programs of the BPF_PROG_TYPE_XDP type to a network interface. The kernel then executes the program on received packets before the kernel network stack starts processing them. This allows fast packet forwarding in certain situations, such as fast packet dropping to prevent distributed denial of service (DDoS) attacks and fast packet redirects for load balancing scenarios. You can also use XDP for different forms of packet monitoring and sampling. The kernel allows XDP programs to modify packets and to pass them for further processing to the kernel network stack. The following XDP modes are available: Native (driver) XDP: The kernel executes the program from the earliest possible point during packet reception. At this moment, the kernel did not parse the packet and, therefore, no metadata provided by the kernel is available. This mode requires that the network interface driver supports XDP but not all drivers support this native mode. Generic XDP: The kernel network stack executes the XDP program early in the processing. At that time, kernel data structures have been allocated, and the packet has been pre-processed. If a packet should be dropped or redirected, it requires a significant overhead compared to the native mode. However, the generic mode does not require network interface driver support and works with all network interfaces. Offloaded XDP: The kernel executes the XDP program on the network interface instead of on the host CPU. 
Note that this requires specific hardware, and only certain eBPF features are available in this mode. On RHEL, load all XDP programs using the libxdp library. This library enables system-controlled usage of XDP. Note Currently, there are some system configuration limitations for XDP programs. For example, you must disable certain hardware offload features on the receiving interface. Additionally, not all features are available with all drivers that support the native mode. In RHEL 8.7, Red Hat supports the XDP feature only if all of the following conditions apply: You load the XDP program on an AMD or Intel 64-bit architecture. You use the libxdp library to load the program into the kernel. The XDP program does not use the XDP hardware offloading. Additionally, Red Hat provides the following usage of XDP features as unsupported Technology Preview: Loading XDP programs on architectures other than AMD and Intel 64-bit. Note that the libxdp library is not available for architectures other than AMD and Intel 64-bit. The XDP hardware offloading. AF_XDP Using an XDP program that filters and redirects packets to a given AF_XDP socket, you can use one or more sockets from the AF_XDP protocol family to quickly copy packets from the kernel to the user space. In RHEL 8.7, Red Hat provides this feature as an unsupported Technology Preview. Traffic Control The Traffic Control ( tc ) subsystem offers the following types of eBPF programs: BPF_PROG_TYPE_SCHED_CLS BPF_PROG_TYPE_SCHED_ACT These types enable you to write custom tc classifiers and tc actions in eBPF. Together with the parts of the tc ecosystem, this provides the ability for powerful packet processing and is the core part of several container networking orchestration solutions. In most cases, only the classifier is used, as with the direct-action flag, the eBPF classifier can execute actions directly from the same eBPF program. The clsact Queueing Discipline ( qdisc ) has been designed to enable this on the ingress side. Note that using a flow dissector eBPF program can influence operation of some other qdiscs and tc classifiers, such as flower . The eBPF for tc feature is fully supported in RHEL 8.2 and later. Socket filter Several utilities use or have used the classic Berkeley Packet Filter (cBPF) for filtering packets received on a socket. For example, the tcpdump utility enables the user to specify expressions, which tcpdump then translates into cBPF code. As an alternative to cBPF, the kernel allows eBPF programs of the BPF_PROG_TYPE_SOCKET_FILTER type for the same purpose. In RHEL 8.7, Red Hat provides this feature as an unsupported Technology Preview. Control Groups In RHEL, you can use multiple types of eBPF programs that you can attach to a cgroup. The kernel executes these programs when a program in the given cgroup performs an operation. Note that you can use only cgroups version 2. The following networking-related cgroup eBPF programs are available in RHEL: BPF_PROG_TYPE_SOCK_OPS : The kernel calls this program on various TCP events. The program can adjust the behavior of the kernel TCP stack, including custom TCP header options, and so on. BPF_PROG_TYPE_CGROUP_SOCK_ADDR : The kernel calls this program during connect , bind , sendto , recvmsg , getpeername , and getsockname operations. This program allows changing IP addresses and ports. This is useful when you implement socket-based network address translation (NAT) in eBPF. 
BPF_PROG_TYPE_CGROUP_SOCKOPT : The kernel calls this program during setsockopt and getsockopt operations and allows changing the options. BPF_PROG_TYPE_CGROUP_SOCK : The kernel calls this program during socket creation, socket releasing, and binding to addresses. You can use these programs to allow or deny the operation, or only to inspect socket creation for statistics. BPF_PROG_TYPE_CGROUP_SKB : This program filters individual packets on ingress and egress, and can accept or reject packets. BPF_CGROUP_INET4_GETPEERNAME , BPF_CGROUP_INET6_GETPEERNAME , BPF_CGROUP_INET4_GETSOCKNAME , and BPF_CGROUP_INET6_GETSOCKNAME : Using these programs, you can override the result of getsockname and getpeername system calls. This is useful when you implement socket-based network address translation (NAT) in eBPF. In RHEL 8.7, Red Hat provides this feature as an unsupported Technology Preview. Stream Parser A stream parser operates on a group of sockets that are added to a special eBPF map. The eBPF program then processes packets that the kernel receives or sends on those sockets. The following stream parser eBPF programs are available in RHEL: BPF_PROG_TYPE_SK_SKB : An eBPF program parses packets received from the socket into individual messages, and instructs the kernel to drop those messages or send them to another socket in the group. BPF_PROG_TYPE_SK_MSG : This program filters egress messages. An eBPF program parses the packets into individual messages and either approves or rejects them. In RHEL 8.7, Red Hat provides this feature as an unsupported Technology Preview. SO_REUSEPORT socket selection Using this socket option, you can bind multiple sockets to the same IP address and port. Without eBPF, the kernel selects the receiving socket based on a connection hash. With the BPF_PROG_TYPE_SK_REUSEPORT program, the selection of the receiving socket is fully programmable. In RHEL 8.7, Red Hat provides this feature as an unsupported Technology Preview. Flow dissector When the kernel needs to process packet headers without going through the full protocol decode, they are dissected . For example, this happens in the tc subsystem, in multipath routing, in bonding, or when calculating a packet hash. In this situation the kernel parses the packet headers and fills internal structures with the information from the packet headers. You can replace this internal parsing using the BPF_PROG_TYPE_FLOW_DISSECTOR program. Note that you can only dissect TCP and UDP over IPv4 and IPv6 in eBPF in RHEL. In RHEL 8.7, Red Hat provides this feature as an unsupported Technology Preview. TCP Congestion Control You can write a custom TCP congestion control algorithm using a group of BPF_PROG_TYPE_STRUCT_OPS programs that implement struct tcp_congestion_ops callbacks. An algorithm that is implemented this way is available to the system alongside the built-in kernel algorithms. In RHEL 8.7, Red Hat provides this feature as an unsupported Technology Preview. Routes with encapsulation You can attach one of the following eBPF program types to routes in the routing table as a tunnel encapsulation attribute: BPF_PROG_TYPE_LWT_IN BPF_PROG_TYPE_LWT_OUT BPF_PROG_TYPE_LWT_XMIT The functionality of such an eBPF program is limited to specific tunnel configurations and does not allow creating a generic encapsulation or decapsulation solution. In RHEL 8.7, Red Hat provides this feature as an unsupported Technology Preview. Socket lookup To bypass limitations of the bind system call, use an eBPF program of the BPF_PROG_TYPE_SK_LOOKUP type.
Such programs can select a listening socket for new incoming TCP connections or an unconnected socket for UDP packets. In RHEL 8.7, Red Hat provides this feature as an unsupported Technology Preview. 45.2. Overview of XDP features in RHEL 8 by network cards The following is an overview of XDP-enabled network cards and the XDP features you can use with them: Network card Driver Basic Redirect Target HW offload Zero-copy Amazon Elastic Network Adapter ena yes yes yes [a] no no Broadcom NetXtreme-C/E 10/25/40/50 gigabit Ethernet bnxt_en yes yes yes [a] no no Cavium Thunder Virtual function nicvf yes no no no no Google Virtual NIC (gVNIC) support gve yes yes yes no yes Intel(R) 10GbE PCI Express Virtual Function Ethernet ixgbevf yes no no no no Intel(R) 10GbE PCI Express adapters ixgbe yes yes yes [a] no yes Intel(R) Ethernet Connection E800 Series ice yes yes yes [a] no yes Intel(R) Ethernet Controller I225-LM/I225-V family igc yes yes yes no yes Intel(R) Ethernet Controller XL710 Family i40e yes yes yes [a] [b] no yes Intel(R) PCI Express Gigabit adapters igb yes yes yes [a] no no Mellanox 5th generation network adapters (ConnectX series) mlx5_core yes yes yes [b] no yes Mellanox Technologies 1/10/40Gbit Ethernet mlx4_en yes yes no no no Microsoft Azure Network Adapter mana yes yes yes no no Microsoft Hyper-V virtual network hv_netvsc yes yes yes no no Netronome(R) NFP4000/NFP6000 NIC nfp yes no no yes no QEMU Virtio network virtio_net yes yes yes [a] no no QLogic QED 25/40/100Gb Ethernet NIC qede yes yes yes no no Solarflare SFC9000/SFC9100/EF100-family sfc yes yes yes [b] no no Universal TUN/TAP device tun yes yes yes no no Virtual Ethernet pair device veth yes yes yes no no [a] Only if an XDP program is loaded on the interface. [b] Requires several XDP TX queues allocated that are larger or equal to the largest CPU index. Legend: Basic: Supports basic return codes: DROP , PASS , ABORTED , and TX . Redirect: Supports the REDIRECT return code. Target: Can be a target of a REDIRECT return code. HW offload: Supports XDP hardware offload. Zero-copy: Supports the zero-copy mode for the AF_XDP protocol family. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/configuring_and_managing_networking/assembly_understanding-the-ebpf-features-in-rhel-8_configuring-and-managing-networking |
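As a brief illustration of the supported XDP workflow, the xdp-loader utility from the xdp-tools package, which is built on the libxdp library mentioned above, can attach a precompiled BPF object file and report what is attached. This is a sketch only; xdp_pass.o is a placeholder for your compiled program and enp1s0 for the network interface:
# yum install xdp-tools
# xdp-loader load --mode skb enp1s0 xdp_pass.o
# xdp-loader status enp1s0
# bpftool net show dev enp1s0
The --mode skb option selects generic XDP; use --mode native on drivers that support it. To detach the program again, run xdp-loader unload --all enp1s0.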
Deploying OpenShift Data Foundation using IBM Power | Deploying OpenShift Data Foundation using IBM Power Red Hat OpenShift Data Foundation 4.18 Instructions on deploying Red Hat OpenShift Data Foundation on IBM Power Red Hat Storage Documentation Team | [
"oc label nodes <NodeNames> cluster.ocs.openshift.io/openshift-storage=''",
"oc annotate namespace openshift-storage openshift.io/node-selector=",
"vault secrets enable -path=odf kv",
"vault secrets enable -path=odf kv-v2",
"echo ' path \"odf/*\" { capabilities = [\"create\", \"read\", \"update\", \"delete\", \"list\"] } path \"sys/mounts\" { capabilities = [\"read\"] }'| vault policy write odf -",
"vault token create -policy=odf -format json",
"oc -n openshift-storage create serviceaccount <serviceaccount_name>",
"oc -n openshift-storage create serviceaccount odf-vault-auth",
"oc -n openshift-storage create clusterrolebinding vault-tokenreview-binding --clusterrole=system:auth-delegator --serviceaccount=openshift-storage:_<serviceaccount_name>_",
"oc -n openshift-storage create clusterrolebinding vault-tokenreview-binding --clusterrole=system:auth-delegator --serviceaccount=openshift-storage:odf-vault-auth",
"cat <<EOF | oc create -f - apiVersion: v1 kind: Secret metadata: name: odf-vault-auth-token namespace: openshift-storage annotations: kubernetes.io/service-account.name: <serviceaccount_name> type: kubernetes.io/service-account-token data: {} EOF",
"SA_JWT_TOKEN=USD(oc -n openshift-storage get secret odf-vault-auth-token -o jsonpath=\"{.data['token']}\" | base64 --decode; echo) SA_CA_CRT=USD(oc -n openshift-storage get secret odf-vault-auth-token -o jsonpath=\"{.data['ca\\.crt']}\" | base64 --decode; echo)",
"OCP_HOST=USD(oc config view --minify --flatten -o jsonpath=\"{.clusters[0].cluster.server}\")",
"oc proxy & proxy_pid=USD! issuer=\"USD( curl --silent http://127.0.0.1:8001/.well-known/openid-configuration | jq -r .issuer)\" kill USDproxy_pid",
"vault auth enable kubernetes",
"vault write auth/kubernetes/config token_reviewer_jwt=\"USDSA_JWT_TOKEN\" kubernetes_host=\"USDOCP_HOST\" kubernetes_ca_cert=\"USDSA_CA_CRT\" issuer=\"USDissuer\"",
"vault write auth/kubernetes/config token_reviewer_jwt=\"USDSA_JWT_TOKEN\" kubernetes_host=\"USDOCP_HOST\" kubernetes_ca_cert=\"USDSA_CA_CRT\"",
"vault secrets enable -path=odf kv",
"vault secrets enable -path=odf kv-v2",
"echo ' path \"odf/*\" { capabilities = [\"create\", \"read\", \"update\", \"delete\", \"list\"] } path \"sys/mounts\" { capabilities = [\"read\"] }'| vault policy write odf -",
"vault write auth/kubernetes/role/odf-rook-ceph-op bound_service_account_names=rook-ceph-system,rook-ceph-osd,noobaa bound_service_account_namespaces=openshift-storage policies=odf ttl=1440h",
"vault write auth/kubernetes/role/odf-rook-ceph-osd bound_service_account_names=rook-ceph-osd bound_service_account_namespaces=openshift-storage policies=odf ttl=1440h",
"oc get namespace default NAME STATUS AGE default Active 5d2h",
"oc annotate namespace default \"keyrotation.csiaddons.openshift.io/schedule=@weekly\" namespace/default annotated",
"oc get storageclass rbd-sc NAME PROVISIONER RECLAIMPOLICY VOLUMEBINDINGMODE ALLOWVOLUMEEXPANSION AGE rbd-sc rbd.csi.ceph.com Delete Immediate true 5d2h",
"oc annotate storageclass rbd-sc \"keyrotation.csiaddons.openshift.io/schedule=@weekly\" storageclass.storage.k8s.io/rbd-sc annotated",
"oc get pvc data-pvc NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE data-pvc Bound pvc-f37b8582-4b04-4676-88dd-e1b95c6abf74 1Gi RWO default 20h",
"oc annotate pvc data-pvc \"keyrotation.csiaddons.openshift.io/schedule=@weekly\" persistentvolumeclaim/data-pvc annotated",
"oc get encryptionkeyrotationcronjobs.csiaddons.openshift.io NAME SCHEDULE SUSPEND ACTIVE LASTSCHEDULE AGE data-pvc-1642663516 @weekly 3s",
"oc annotate pvc data-pvc \"keyrotation.csiaddons.openshift.io/schedule=*/1 * * * *\" --overwrite=true persistentvolumeclaim/data-pvc annotated",
"oc get encryptionkeyrotationcronjobs.csiaddons.openshift.io NAME SCHEDULE SUSPEND ACTIVE LASTSCHEDULE AGE data-pvc-1642664617 */1 * * * * 3s",
"oc get storageclass rbd-sc NAME PROVISIONER RECLAIMPOLICY VOLUMEBINDINGMODE ALLOWVOLUMEEXPANSION AGE rbd-sc rbd.csi.ceph.com Delete Immediate true 5d2h",
"oc annotate storageclass rbd-sc \"keyrotation.csiaddons.openshift.io/enable: false\" storageclass.storage.k8s.io/rbd-sc annotated",
"oc get encryptionkeyrotationcronjob -o jsonpath='{range .items[?(@.spec.jobTemplate.spec.target.persistentVolumeClaim==\"<PVC_NAME>\")]}{.metadata.name}{\"\\n\"}{end}'",
"oc annotate encryptionkeyrotationcronjob <encryptionkeyrotationcronjob_name> \"csiaddons.openshift.io/state=unmanaged\" --overwrite=true",
"oc patch encryptionkeyrotationcronjob <encryptionkeyrotationcronjob_name> -p '{\"spec\": {\"suspend\": true}}' --type=merge.",
"oc get nodes -l cluster.ocs.openshift.io/openshift-storage=",
"NAME STATUS ROLES AGE VERSION worker-0 Ready worker 2d11h v1.23.3+e419edf worker-1 Ready worker 2d11h v1.23.3+e419edf worker-2 Ready worker 2d11h v1.23.3+e419edf",
"oc debug node/<node name>",
"oc debug node/worker-0 Starting pod/worker-0-debug To use host binaries, run `chroot /host` Pod IP: 192.168.0.63 If you don't see a command prompt, try pressing enter. sh-4.4# sh-4.4# chroot /host sh-4.4# lsblk NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT loop1 7:1 0 500G 0 loop sda 8:0 0 500G 0 disk sdb 8:16 0 120G 0 disk |-sdb1 8:17 0 4M 0 part |-sdb3 8:19 0 384M 0 part `-sdb4 8:20 0 119.6G 0 part sdc 8:32 0 500G 0 disk sdd 8:48 0 120G 0 disk |-sdd1 8:49 0 4M 0 part |-sdd3 8:51 0 384M 0 part `-sdd4 8:52 0 119.6G 0 part sde 8:64 0 500G 0 disk sdf 8:80 0 120G 0 disk |-sdf1 8:81 0 4M 0 part |-sdf3 8:83 0 384M 0 part `-sdf4 8:84 0 119.6G 0 part sdg 8:96 0 500G 0 disk sdh 8:112 0 120G 0 disk |-sdh1 8:113 0 4M 0 part |-sdh3 8:115 0 384M 0 part `-sdh4 8:116 0 119.6G 0 part sdi 8:128 0 500G 0 disk sdj 8:144 0 120G 0 disk |-sdj1 8:145 0 4M 0 part |-sdj3 8:147 0 384M 0 part `-sdj4 8:148 0 119.6G 0 part sdk 8:160 0 500G 0 disk sdl 8:176 0 120G 0 disk |-sdl1 8:177 0 4M 0 part |-sdl3 8:179 0 384M 0 part `-sdl4 8:180 0 119.6G 0 part /sysroot sdm 8:192 0 500G 0 disk sdn 8:208 0 120G 0 disk |-sdn1 8:209 0 4M 0 part |-sdn3 8:211 0 384M 0 part /boot `-sdn4 8:212 0 119.6G 0 part sdo 8:224 0 500G 0 disk sdp 8:240 0 120G 0 disk |-sdp1 8:241 0 4M 0 part |-sdp3 8:243 0 384M 0 part `-sdp4 8:244 0 119.6G 0 part",
"get nodes -l cluster.ocs.openshift.io/openshift-storage -o jsonpath='{range .items[*]}{.metadata.name}{\"\\n\"}'",
"apiVersion: local.storage.openshift.io/v1 kind: LocalVolume metadata: name: localblock namespace: openshift-local-storage spec: logLevel: Normal managementState: Managed nodeSelector: nodeSelectorTerms: - matchExpressions: - key: kubernetes.io/hostname operator: In values: - worker-0 - worker-1 - worker-2 storageClassDevices: - devicePaths: - /dev/sda storageClassName: localblock volumeMode: Block",
"spec: flexibleScaling: true [...] status: failureDomain: host",
"oc annotate namespace openshift-storage openshift.io/node-selector=",
"apiVersion: local.storage.openshift.io/v1 kind: LocalVolume metadata: name: localblock namespace: openshift-local-storage spec: logLevel: Normal managementState: Managed nodeSelector: nodeSelectorTerms: - matchExpressions: - key: kubernetes.io/hostname operator: In values: - worker-0 - worker-1 - worker-2 storageClassDevices: - devicePaths: - /dev/sda storageClassName: localblock volumeMode: Filesystem"
] | https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.18/html-single/deploying_openshift_data_foundation_using_ibm_power/index |
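The key rotation commands in the list above show how to take an EncryptionKeyRotationCronJob out of the operator's control by annotating it as unmanaged and suspending it. To return it to normal operation later, a reasonable sketch is to reverse both steps; the job name is a placeholder, and the managed value is assumed to be the state that hands control back to the CSI add-ons controller:
oc patch encryptionkeyrotationcronjob <encryptionkeyrotationcronjob_name> -p '{"spec": {"suspend": false}}' --type=merge
oc annotate encryptionkeyrotationcronjob <encryptionkeyrotationcronjob_name> "csiaddons.openshift.io/state=managed" --overwrite=true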
Chapter 3. Feature enhancements | Chapter 3. Feature enhancements Cryostat 2.2 includes feature enhancements that build upon the Cryostat 2.1 offerings. Automated rule behavior after JMX credential change Before Cryostat 2.2, if you created an automated rule to monitor a target JVM application before you enter the application's JMX credentials on the Security menu, the automated rule would fail without any warning. Cryostat 2.2 resolves this issue. With this update, you can enter JMX credentials after you created the automated rule without experiencing issues. You do not need to re-create the automated rule, because Cryostat retries the automated rule after you enter the correct JMX credentials for the application. Note Cryostat 2.2 encrypts and stores JMX credentials for a target JVM application in a database that is stored on a persistent volume claim (PVC) on Red Hat OpenShift. The Cryostat Operator stores JMX credentials in memory for the duration of establishing a connection between Cryostat and the target JVM application. Cryostat supports SSL/TLS on the HTTP request that adds JMX credentials to the database and on the JMX connection that uses those credentials to connect to the target application. Cryostat also encrypts the credentials within the database by using a passphrase that is either provided by the user or that is generated by the Cryostat Operator. Deletion prompt on the Cryostat web console Cryostat 2.2 updates the Delete function on the Cryostat web console, so that after you click Delete , the following prompt opens on your web console: Figure 3.1. The Delete prompt that opens on the Cryostat web console You can access this prompt when you complete any of the following delete operations on the web console: An automated rule from the Automated Rules menu. An active or an archived recording from either the Recordings menu or the Archives menu. An event template, an event type, or a custom target from numerous locations on the Cryostat web console, such as the Events menu. JMX credentials from the Security menu. Important The prompt does not open when you attempt to delete recording labels for either your active recording or archived recording. The prompt informs you that after you delete the recording, Cryostat removes all data associated with the recording. The prompt provides you with the following options: To proceed with deleting the recording, click Delete . To keep the recording, click Cancel . After you click either option, Cryostat returns to the menu. Edit Recording Labels pane Cryostat 2.2 removes the Edit Metadata option from the Recordings menu, and adds an Edit Labels button under the Active Recordings tab and the Archived Recordings tab. Figure 3.2. The Edit Recordings Labels pane on the Cryostat web console After you click the Edit Labels button, an Editing Recordings Labels pane opens on your Cryostat web console. Enhancement to the Archives menu The Archives menu separates recordings into three nested tables: All Targets , All Archives , and Uploads . Each table lists results in a chronological order. After you archive a recording, Cryostat lists the associated target JVM in the All Targets table. You can click the Hide targets with zero recordings checkbox to remove any target JVM entry that does not have an archived recording. After you click on the twistie icon ( v ) beside the JVM target entry, you can access a filter function, where you can edit labels to enhance your filter or click the Delete button to remove the filter. 
The All Archives table looks similar to the All Targets table, but the All Archives table lists target JVM applications from files that Cryostat archived. From the Uploads table, you can view all your uploaded JFR recordings. The Uploads table also includes a filtering mechanism similar to the All Targets table. You can also use the filtering mechanism on the Archives menu to find an archived file that might have no recognizable target JVM application. Fixed issue with Archive Recordings table Cryostat 2.2 fixes the Cryostat 2.1 issue, where Cryostat would delete one of the duplicate files from under the Archive Recordings table. This issue is caused when you complete the following steps: Archived a JFR recording that belongs to a specific target. As an example, Cryostat names the archived file as my_recording_9093_20220322T172832Z.jfr . Archived the same JFR recording again or uploaded the same file to Cryostat's archives location. Cryostat might remove one of the files in error. You would notice the incorrect deleted file when you complete one of the following actions: View a generated automated analysis report. View application metrics on Grafana. Edit a recording label for the existing JFR recording. If Cryostat 2.2 detects JFR recordings files with the same name, but each file has a different target, Cryostat does not delete one of the files. This behavior also applies when you re-upload a file with the same name as the archived file belonging to a target JVM application. For more information about the source of the issue, see Duplicate file name displays under the Archived Recordings table (Release notes for the Red Hat build of Cryostat 2.1). Filter recordings From either the Active Recordings tab or the Archived Recordings tab in the Recordings menu, you can filter listed JFR recordings by selecting checkboxes that open beside each JFR recording entry. After you click a checkbox, Cryostat enables buttons, such as Create , Archive , Edit Labels , and so on. With Cryostat's filtering functionality, you can create a filter that accurately finds your target JFR recordings. The following image shows an Active Recordings table with three listed active recordings: Figure 3.3. Example of an Active Recordings table that shows three listed active recordings The following example shows a filter with defined template.type:TARGET label and the DurationSeconds: continuous label criteria. After the filter query completes, two results show that match the filter's label criteria. Figure 3.4. Example of a completed filter under the Active Recordings tab OpenJDK 17 and Eclipse Vert.x support Cryostat 2.2 is built with the Vert.x 4 framework. This framework improves performance, fixes legacy bugs, and builds new features for Cryostat. Additionally, Cryostat 2.2 is built with OpenJDK 17. This OpenJDK implementation improves performance and reduces memory requirements for Cryostat. Note If you run your Cryostat application on OpenJDK 17, Cryostat can still interact with a target JVM application that was built with a different release of Cryostat that supports the JFR technology, such as OpenJDK 11.0.17. The initialDelay automated rule property Before the Cryostat 2.2 release, if you created an automated rule that copies recordings into Cryostat's archives, you would need to create a schedule by specifying a value for the archivalPeriodSeconds property. This configuration limits an automated rule to only move a recording copy to archives based at specific time intervals. 
You cannot stagger the archival period with this archivalPeriodSeconds property. Cryostat 2.2 includes the initialDelay property, which you can define in rule definition. Your automated rule can then stagger the archival schedule to meet your needs. Consider a situation where you would like to immediately archive a recording during application startup. Thereafter, the archival interval could be scheduled to occur every 30 seconds. Updated Archived Recordings table Cryostat 2.2 updates the Archived Recording table that opens on the Recordings menu. The table now includes a Size column, where you can view an archived file size in kilobytes (KB) units. Additionally, with this release, you can use a scroll bar on the Archived Recording table to quickly locate an archived file. The scroll bar visibility is unique to each table type. For example, from the Active Recordings table, you can access the scroll bar when the table lists 5 or more active JFR recording files. | null | https://docs.redhat.com/en/documentation/red_hat_build_of_cryostat/2/html/release_notes_for_the_red_hat_build_of_cryostat_2.2/cryostat-feature-enhancements._cryostat |
Chapter 3. Enabling Linux control group version 1 (cgroup v1) | Chapter 3. Enabling Linux control group version 1 (cgroup v1) As of OpenShift Container Platform 4.14, OpenShift Container Platform uses Linux control group version 2 (cgroup v2) in your cluster. If you are using cgroup v1 on OpenShift Container Platform 4.13 or earlier, migrating to OpenShift Container Platform 4.14 will not automatically update your cgroup configuration to version 2. A fresh installation of OpenShift Container Platform 4.14 will use cgroup v2 by default. However, you can enable Linux control group version 1 (cgroup v1) upon installation. Enabling cgroup v1 in OpenShift Container Platform disables all cgroup v2 controllers and hierarchies in your cluster. cgroup v2 is the current version of the Linux cgroup API. cgroup v2 offers several improvements over cgroup v1, including a unified hierarchy, safer sub-tree delegation, new features such as Pressure Stall Information , and enhanced resource management and isolation. However, cgroup v2 has different CPU, memory, and I/O management characteristics than cgroup v1. Therefore, some workloads might experience slight differences in memory or CPU usage on clusters that run cgroup v2. You can switch between cgroup v1 and cgroup v2, as needed, by editing the node.config object. For more information, see "Configuring the Linux cgroup on your nodes" in the "Additional resources" of this section. 3.1. Enabling Linux cgroup v1 during installation You can enable Linux control group version 1 (cgroup v1) when you install a cluster by creating installation manifests. Procedure Create or edit the node.config object to specify the v1 cgroup: apiVersion: config.openshift.io/v1 kind: Node metadata: name: cluster spec: cgroupMode: "v1" Proceed with the installation as usual. Additional resources OpenShift Container Platform installation overview Configuring the Linux cgroup on your nodes | [
"apiVersion: config.openshift.io/v1 kind: Node metadata: name: cluster spec: cgroupMode: \"v2\""
] | https://docs.redhat.com/en/documentation/openshift_container_platform_installation/4.14/html/installation_configuration/enabling-cgroup-v1 |
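To confirm which cgroup version a node actually runs after installation, you can check the file system type that is mounted at /sys/fs/cgroup from a debug pod. This is a quick verification sketch; replace <node_name> with one of your nodes:
oc debug node/<node_name> -- chroot /host stat -fc %T /sys/fs/cgroup/
The command prints cgroup2fs for cgroup v2 and tmpfs for cgroup v1.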
Chapter 16. Concepts to automate Data Grid CLI commands | Chapter 16. Concepts to automate Data Grid CLI commands When interacting with an external Data Grid in Kubernetes, the Batch CR allows you to automate this using standard oc commands. 16.1. When to use it Use this when automating interactions on Kubernetes. This avoids providing usernames and passwords and checking shell script outputs and their status. For human interactions, the CLI shell might still be a better fit. 16.2. Example The following Batch CR takes a site offline as described in the operational procedure Switch over to the secondary site . apiVersion: infinispan.org/v2alpha1 kind: Batch metadata: name: take-offline namespace: keycloak 1 spec: cluster: infinispan 2 config: | 3 site take-offline --all-caches --site=site-a site status --all-caches --site=site-a 1 The Batch CR must be created in the same namespace as the Data Grid deployment. 2 The name of the Infinispan CR. 3 A multiline string containing one or more Data Grid CLI commands. Once the CR has been created, wait for the status to show the completion. oc -n keycloak wait --for=jsonpath='{.status.phase}'=Succeeded Batch/take-offline Note Modifying a Batch CR instance has no effect. Batch operations are "one-time" events that modify Infinispan resources. To update .spec fields for the CR, or when a batch operation fails, you must create a new instance of the Batch CR. 16.3. Further reading For more information, see the Data Grid Operator Batch CR documentation . | [
"apiVersion: infinispan.org/v2alpha1 kind: Batch metadata: name: take-offline namespace: keycloak 1 spec: cluster: infinispan 2 config: | 3 site take-offline --all-caches --site=site-a site status --all-caches --site=site-a",
"-n keycloak wait --for=jsonpath='{.status.phase}'=Succeeded Batch/take-offline"
] | https://docs.redhat.com/en/documentation/red_hat_build_of_keycloak/24.0/html/high_availability_guide/concepts-infinispan-cli-batch- |
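The same pattern works for the reverse operation when you later switch back to the primary site. The following sketch assumes the namespace, cluster name, and site name used in the example above and uses the Data Grid CLI command site bring-online; because Batch operations are one-time events, it is a new CR rather than an edit of the existing one:
apiVersion: infinispan.org/v2alpha1
kind: Batch
metadata:
  name: bring-online
  namespace: keycloak
spec:
  cluster: infinispan
  config: |
    site bring-online --all-caches --site=site-a
    site status --all-caches --site=site-a
As before, wait for completion with oc -n keycloak wait --for=jsonpath='{.status.phase}'=Succeeded Batch/bring-online.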
Chapter 18. Azure Storage Queue Source | Chapter 18. Azure Storage Queue Source Receive Messages from Azure Storage queues. Important The Azure Storage Queue Source Kamelet is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see https://access.redhat.com/support/offerings/techpreview . 18.1. Configuration Options The following table summarizes the configuration options available for the azure-storage-queue-source Kamelet: Property Name Description Type Default Example accessKey * Access Key The Azure Storage Queue access Key. string accountName * Account Name The Azure Storage Queue account name. string queueName * Queue Name The Azure Storage Queue container name. string maxMessages Maximum Messages Maximum number of messages to get, if there are less messages exist in the queue than requested all the messages will be returned. By default it will consider 1 message to be retrieved, the allowed range is 1 to 32 messages. int 1 Note Fields marked with an asterisk (*) are mandatory. 18.2. Dependencies At runtime, the azure-storage-queue-source Kamelet relies upon the presence of the following dependencies: camel:azure-storage-queue camel:kamelet 18.3. Usage This section describes how you can use the azure-storage-queue-source . 18.3.1. Knative Source You can use the azure-storage-queue-source Kamelet as a Knative source by binding it to a Knative object. azure-storage-queue-source-binding.yaml apiVersion: camel.apache.org/v1alpha1 kind: KameletBinding metadata: name: azure-storage-queue-source-binding spec: source: ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: azure-storage-queue-source properties: accessKey: "The Access Key" accountName: "The Account Name" queueName: "The Queue Name" sink: ref: kind: Channel apiVersion: messaging.knative.dev/v1 name: mychannel 18.3.1.1. Prerequisite Make sure you have "Red Hat Integration - Camel K" installed into the OpenShift cluster you're connected to. 18.3.1.2. Procedure for using the cluster CLI Save the azure-storage-queue-source-binding.yaml file to your local drive, and then edit it as needed for your configuration. Run the source by using the following command: oc apply -f azure-storage-queue-source-binding.yaml 18.3.1.3. Procedure for using the Kamel CLI Configure and run the source by using the following command: kamel bind azure-storage-queue-source -p "source.accessKey=The Access Key" -p "source.accountName=The Account Name" -p "source.queueName=The Queue Name" channel:mychannel This command creates the KameletBinding in the current namespace on the cluster. 18.3.2. Kafka Source You can use the azure-storage-queue-source Kamelet as a Kafka source by binding it to a Kafka topic. 
azure-storage-queue-source-binding.yaml apiVersion: camel.apache.org/v1alpha1 kind: KameletBinding metadata: name: azure-storage-queue-source-binding spec: source: ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: azure-storage-queue-source properties: accessKey: "The Access Key" accountName: "The Account Name" queueName: "The Queue Name" sink: ref: kind: KafkaTopic apiVersion: kafka.strimzi.io/v1beta1 name: my-topic 18.3.2.1. Prerequisites Ensure that you've installed the AMQ Streams operator in your OpenShift cluster and created a topic named my-topic in the current namespace. Make also sure you have "Red Hat Integration - Camel K" installed into the OpenShift cluster you're connected to. 18.3.2.2. Procedure for using the cluster CLI Save the azure-storage-queue-source-binding.yaml file to your local drive, and then edit it as needed for your configuration. Run the source by using the following command: oc apply -f azure-storage-queue-source-binding.yaml 18.3.2.3. Procedure for using the Kamel CLI Configure and run the source by using the following command: kamel bind azure-storage-queue-source -p "source.accessKey=The Access Key" -p "source.accountName=The Account Name" -p "source.queueName=The Queue Name" kafka.strimzi.io/v1beta1:KafkaTopic:my-topic This command creates the KameletBinding in the current namespace on the cluster. 18.4. Kamelet source file https://github.com/openshift-integration/kamelet-catalog/azure-storage-queue-source.kamelet.yaml | [
"apiVersion: camel.apache.org/v1alpha1 kind: KameletBinding metadata: name: azure-storage-queue-source-binding spec: source: ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: azure-storage-queue-source properties: accessKey: \"The Access Key\" accountName: \"The Account Name\" queueName: \"The Queue Name\" sink: ref: kind: Channel apiVersion: messaging.knative.dev/v1 name: mychannel",
"apply -f azure-storage-queue-source-binding.yaml",
"kamel bind azure-storage-queue-source -p \"source.accessKey=The Access Key\" -p \"source.accountName=The Account Name\" -p \"source.queueName=The Queue Name\" channel:mychannel",
"apiVersion: camel.apache.org/v1alpha1 kind: KameletBinding metadata: name: azure-storage-queue-source-binding spec: source: ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: azure-storage-queue-source properties: accessKey: \"The Access Key\" accountName: \"The Account Name\" queueName: \"The Queue Name\" sink: ref: kind: KafkaTopic apiVersion: kafka.strimzi.io/v1beta1 name: my-topic",
"apply -f azure-storage-queue-source-binding.yaml",
"kamel bind azure-storage-queue-source -p \"source.accessKey=The Access Key\" -p \"source.accountName=The Account Name\" -p \"source.queueName=The Queue Name\" kafka.strimzi.io/v1beta1:KafkaTopic:my-topic"
] | https://docs.redhat.com/en/documentation/red_hat_build_of_apache_camel_k/1.10.5/html/kamelets_reference/azure_storage_queue_source |
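For reference, here is a minimal sketch of a Knative binding that also sets the optional maxMessages property from the configuration table above. The property name comes from that table; the value of 10 and the reuse of the mychannel sink are illustrative assumptions, not part of the original examples.

apiVersion: camel.apache.org/v1alpha1
kind: KameletBinding
metadata:
  name: azure-storage-queue-source-binding
spec:
  source:
    ref:
      kind: Kamelet
      apiVersion: camel.apache.org/v1alpha1
      name: azure-storage-queue-source
    properties:
      accessKey: "The Access Key"
      accountName: "The Account Name"
      queueName: "The Queue Name"
      # Optional: retrieve up to 10 messages per poll (allowed range is 1 to 32)
      maxMessages: 10
  sink:
    ref:
      kind: Channel
      apiVersion: messaging.knative.dev/v1
      name: mychannel

You apply this file in the same way as the earlier examples, for example with oc apply -f azure-storage-queue-source-binding.yaml.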
26.5. Allowing IdM to Start with Expired Certificates | 26.5. Allowing IdM to Start with Expired Certificates After the IdM administrative server certificates expire, most IdM services become inaccessible. You can configure the underlying Apache and LDAP services to allow SSL access to the services even if the certificates are expired. If you allow limited access with expired certificates: Apache, Kerberos, DNS, and LDAP services will continue working. With these services active, users will be able to log in to the IdM domain. Client services that require SSL for access will still fail. For example, sudo will fail because it requires SSSD on IdM clients, and SSSD needs SSL to contact IdM. Important This procedure is intended only as a temporary workaround. Renew the required certificates as quickly as possible, and then revert the described changes. Configure the mod_nss module for the Apache server to not enforce valid certificates. Open the /etc/httpd/conf.d/nss.conf file. Set the NSSEnforceValidCerts parameter to off : Restart Apache. Make sure that validity checks are disabled for the LDAP directory server. To do this, verify that the nsslapd-validate-cert attribute is set to warn : If the attribute is not set to warn , change it: Restart the directory server. | [
"NSSEnforceValidCerts off",
"systemctl restart httpd.service",
"ldapsearch -h server.example.com -p 389 -D \"cn=directory manager\" -w secret -LLL -b cn=config -s base \"(objectclass=*)\" nsslapd-validate-cert dn: cn=config nsslapd-validate-cert: warn",
"ldapmodify -D \"cn=directory manager\" -w secret -p 389 -h server.example.com dn: cn=config changetype: modify replace: nsslapd-validate-cert nsslapd-validate-cert: warn",
"systemctl restart dirsrv.target"
] | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/linux_domain_identity_authentication_and_policy_guide/expired-certs |
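Before applying the workaround in the expired-certificate procedure above, it can help to confirm which IdM certificates have actually expired. The following is a minimal sketch that assumes the certificates are tracked by certmonger, which is the default on IdM servers; the grep pattern only trims the output and can be adjusted freely.

# List tracked certificates and show their tracking status and expiry dates
getcert list | grep -E "Request ID|status:|expires:"

Any certificate whose expires date is in the past is a candidate for renewal once the services are reachable again.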
Chapter 2. Differences from upstream OpenJDK 11 | Chapter 2. Differences from upstream OpenJDK 11 Red Hat build of OpenJDK in Red Hat Enterprise Linux (RHEL) contains a number of structural changes from the upstream distribution of OpenJDK. The Microsoft Windows version of Red Hat build of OpenJDK attempts to follow RHEL updates as closely as possible. The following list details the most notable Red Hat build of OpenJDK 11 changes: FIPS support. Red Hat build of OpenJDK 11 automatically detects whether RHEL is in FIPS mode and automatically configures Red Hat build of OpenJDK 11 to operate in that mode. This change does not apply to Red Hat build of OpenJDK builds for Microsoft Windows. Cryptographic policy support. Red Hat build of OpenJDK 11 obtains the list of enabled cryptographic algorithms and key size constraints from RHEL. These configuration components are used by the Transport Layer Security (TLS) encryption protocol, the certificate path validation, and any signed JARs. You can set different security profiles to balance safety and compatibility. This change does not apply to Red Hat build of OpenJDK builds for Microsoft Windows. Red Hat build of OpenJDK on RHEL dynamically links against native libraries such as zlib for archive format support and libjpeg-turbo , libpng , and giflib for image support. Red Hat build of OpenJDK on RHEL also dynamically links against Harfbuzz and Freetype for font rendering and management. The src.zip file includes the source for all the JAR libraries shipped with Red Hat build of OpenJDK. Red Hat build of OpenJDK on RHEL uses system-wide timezone data files as a source for timezone information. Red Hat build of OpenJDK on RHEL uses system-wide CA certificates. Red Hat build of OpenJDK on Microsoft Windows includes the latest available timezone data from RHEL. Red Hat build of OpenJDK on Microsoft Windows uses the latest available CA certificate from RHEL. Additional resources For more information about detecting whether a system is in FIPS mode, see the Improve system FIPS detection example on the Red Hat RHEL Planning Jira. For more information about cryptographic policies, see Using system-wide cryptographic policies . | null | https://docs.redhat.com/en/documentation/red_hat_build_of_openjdk/11/html/release_notes_for_red_hat_build_of_openjdk_11.0.10/rn-openjdk-diff-from-upstream
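As a quick, hedged illustration of the RHEL side of the FIPS and cryptographic policy integration described above, the following commands use standard RHEL tools to show what the JVM detects from the host; this sketch assumes a RHEL 8 host and does not show any OpenJDK-specific configuration.

# Report whether the host is running in FIPS mode
fips-mode-setup --check

# Print the active system-wide cryptographic policy (for example, DEFAULT or FIPS)
update-crypto-policies --show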
Chapter 3. Configuring networking | Chapter 3. Configuring networking Each provisioning type requires some network configuration. Use this chapter to configure network services in your integrated Capsule on Satellite Server. New hosts must have access to your Capsule Server. Capsule Server can be either your integrated Capsule on Satellite Server or an external Capsule Server. You might want to provision hosts from an external Capsule Server when the hosts are on isolated networks and cannot connect to Satellite Server directly, or when the content is synchronized with Capsule Server. Provisioning by using Capsule Servers can save on network bandwidth. Configuring Capsule Server has two basic requirements: Configuring network services. This includes: Content delivery services Network services (DHCP, DNS, and TFTP) Puppet configuration Defining network resource data in Satellite Server to help configure network interfaces on new hosts. The following instructions apply similarly to configuring standalone Capsules that manage a specific network. To configure Satellite to use external DHCP, DNS, and TFTP services, see Configuring External Services in Installing Satellite Server in a connected network environment . 3.1. Facts and NIC filtering Facts describe aspects such as total memory, operating system version, or architecture as reported by the host. You can find facts in Monitor > Facts and search hosts through facts or use facts in templates. Satellite collects facts from multiple sources: subscription manager ansible puppet Satellite is an inventory system for hosts and network interfaces. For hypervisors or container hosts, adding thousands of interfaces per host and updating the inventory every few minutes is inadequate. For each individual NIC reported, Satellite creates a NIC entry and those entries are never removed from the database. Parsing all the facts and comparing all records in the database makes Satellite extremely slow and unusable. To optimize the performance of various actions, most importantly fact import, you can use the options available on the Facts tab under Administer > Settings . 3.2. Optimizing performance by removing NICs from the database Filter and exclude connections by using the Exclude pattern for facts stored in Satellite and Ignore interfaces with matching identifier options. By default, these options are set for the most common hypervisors. If you name your virtual interfaces differently, you can update these filters to match your requirements. Procedure In the Satellite web UI, navigate to Administer > Settings and select the Facts tab. To filter out all interfaces starting with specific names, for example, blu , add blu* to the Ignore interfaces with matching identifier option. To prevent the database from storing facts related to interfaces starting with specific names, for example, blu , add blu* to the Exclude pattern for facts stored in Satellite option. By default, it contains the same list as the Ignore interfaces with matching identifier option. You can override it based on your requirements. This filters out facts completely without storing them. To remove facts from the database, enter the following command: This command removes all facts that match the filter added in Administer > Settings > Facts > the Exclude pattern for facts stored in Satellite option.
To remove interfaces from the database, enter the following command: This command removes all interfaces that match the filter added in Administer > Settings > Facts > the Ignore interfaces with matching identifier option. 3.3. Network resources Satellite contains networking resources that you must set up and configure to create a host. It includes the following networking resources: Domain You must assign every host that is managed by Satellite to a domain. Using the domain, Satellite can manage A, AAAA, and PTR records. Even if you do not want Satellite to manage your DNS servers, you still must create and associate at least one domain. Domains are included in the naming conventions for Satellite hosts, for example, a host with the name test123 in the example.com domain has the fully qualified domain name test123.example.com . Subnet You must assign every host managed by Satellite to a subnet. Using subnets, Satellite can then manage IPv4 reservations. If there are no reservation integrations, you still must create and associate at least one subnet. When you manage a subnet in Satellite, you cannot create DHCP records for that subnet outside of Satellite. In Satellite, you can use IP Address Management (IPAM) to manage IP addresses with one of the following options: DHCP : DHCP Capsule manages the assignment of IP addresses by finding the available IP address starting from the first address of the range and skipping all addresses that are reserved. Before assigning an IP address, Capsule sends ICMP and TCP pings to check whether the IP address is in use. Note that if a host is powered off, or has a firewall configured to disable connections, Satellite makes a false assumption that the IP address is available. This check does not work for hosts that are turned off, therefore, the DHCP option can only be used with subnets that Satellite controls and that do not have any hosts created externally. The Capsule DHCP module retains the offered IP addresses for a short period of time to prevent collisions during concurrent access, so some IP addresses in the IP range might remain temporarily unused. Internal DB : Satellite finds the available IP address from the Subnet range by excluding all IP addresses from the Satellite database in sequence. The primary source of data is the database, not DHCP reservations. This IPAM is not safe when multiple hosts are being created in parallel; in that case, use DHCP or Random DB IPAM instead. Random DB : Satellite finds the available IP address from the Subnet range by excluding all IP addresses from the Satellite database randomly. The primary source of data is the database, not DHCP reservations. This IPAM is safe to use with concurrent host creation as IP addresses are returned in random order, minimizing the chance of a conflict. EUI-64 : Extended Unique Identifier (EUI) 64-bit IPv6 address generation, as per RFC 2373, is obtained through the 48-bit MAC address. External IPAM : Delegates IPAM to an external system through the Capsule feature. Satellite currently does not ship with any external IPAM implementations, but several plugins are in development. None : The IP address for each host must be entered manually. Options DHCP, Internal DB, and Random DB can lead to DHCP conflicts on subnets with records created externally. These subnets must be under exclusive Satellite control. For more information about adding a subnet, see Section 3.9, "Adding a subnet to Satellite Server" .
DHCP Ranges You can define the same DHCP range in Satellite Server for both discovered and provisioned systems, but use a separate range for each service within the same subnet. 3.4. Satellite and DHCP options Satellite manages DHCP reservations through a DHCP Capsule. Satellite also sets the next-server and filename DHCP options. The next-server option The next-server option provides the IP address of the TFTP server to boot from. This option is not set by default and must be set for each TFTP Capsule. You can use the satellite-installer command with the --foreman-proxy-tftp-servername option to set the TFTP server in the /etc/foreman-proxy/settings.d/tftp.yml file: Each TFTP Capsule then reports this setting through the API and Satellite can retrieve the configuration information when it creates the DHCP record. When the PXE loader is set to none , Satellite does not populate the next-server option into the DHCP record. If the next-server option remains undefined, Satellite calls the Capsule API to retrieve the server name as specified by the --foreman-proxy-tftp-servername argument in a satellite-installer run. If the Capsule API call does not return a server name, Satellite uses the hostname of the Capsule. The filename option The filename option contains the full path to the file that downloads and executes during provisioning. The PXE loader that you select for the host or host group defines which filename option to use. When the PXE loader is set to none , Satellite does not populate the filename option into the DHCP record. Depending on the PXE loader option, the filename changes as follows: PXE loader option filename entry Notes PXELinux BIOS pxelinux.0 PXELinux UEFI pxelinux.efi iPXE Chain BIOS undionly.kpxe PXEGrub2 UEFI grub2/grubx64.efi x64 can differ depending on architecture iPXE UEFI HTTP http:// capsule.example.com :8000/httpboot/ipxe-x64.efi Requires the httpboot feature and renders the filename as a full URL where capsule.example.com is a known host name of Capsule in Satellite. Grub2 UEFI HTTP http:// capsule.example.com :8000/httpboot/grub2/grubx64.efi Requires the httpboot feature and renders the filename as a full URL where capsule.example.com is a known host name of Capsule in Satellite. 3.5. Troubleshooting DHCP problems in Satellite Satellite can manage an ISC DHCP server on an internal or external DHCP Capsule. Satellite can list, create, and delete DHCP reservations and leases. However, there are a number of problems that you might encounter on occasion. Out of sync DHCP records When an error occurs during DHCP orchestration, DHCP records in the Satellite database and the DHCP server might not match. To fix this, you must add missing DHCP records from the Satellite database to the DHCP server and then remove unwanted records from the DHCP server as per the following steps: Procedure To preview the DHCP records that are going to be added to the DHCP server, enter the following command: If you are satisfied by the preview changes in the previous step, apply them by entering the above command with the perform=1 argument: To keep DHCP records in Satellite and in the DHCP server synchronized, you can remove unwanted DHCP records from the DHCP server. Note that Satellite assumes that all managed DHCP servers do not contain third-party records, therefore, this step might delete those unexpected records.
To preview what records are going to be removed from the DHCP server, enter the following command: If you are satisfied by the preview changes in the previous step, apply them by entering the above command with the perform=1 argument: PXE loader option change When the PXE loader option is changed for an existing host, this causes a DHCP conflict. The only workaround is to overwrite the DHCP entry. Incorrect permissions on DHCP files An operating system update can update the dhcpd package. This causes the permissions of important directories and files to reset so that the DHCP Capsule cannot read the required information. For more information, see DHCP error while provisioning host from Satellite server Error ERF12-6899 ProxyAPI::ProxyException: Unable to set DHCP entry RestClient::ResourceNotFound 404 Resource Not Found on Red Hat Knowledgebase. Changing the DHCP Capsule entry Satellite manages DHCP records only for hosts that are assigned to subnets with a DHCP Capsule set. If you create a host and then clear or change the DHCP Capsule, when you attempt to delete the host, the action fails. If you create a host without setting the DHCP Capsule and then try to set the DHCP Capsule, this causes DHCP conflicts. Deleted host entries in the dhcpd.leases file Any changes to a DHCP lease are appended to the end of the dhcpd.leases file. Because entries are appended to the file, it is possible that two or more entries of the same lease can exist in the dhcpd.leases file at the same time. When there are two or more entries of the same lease, the last entry in the file takes precedence. Group, subgroup, and host declarations in the lease file are processed in the same manner. If a lease is deleted, { deleted; } is appended to the declaration. 3.6. Prerequisites for image-based provisioning Post-boot configuration method Images that use the finish post-boot configuration scripts require a managed DHCP server, such as Satellite's integrated Capsule or an external Capsule. The host must be created with a subnet associated with a DHCP Capsule, and the IP address of the host must be a valid IP address from the DHCP range. It is possible to use an external DHCP service, but IP addresses must be entered manually. The SSH credentials corresponding to the configuration in the image must be configured in Satellite to enable the post-boot configuration to be applied. Check the following items when troubleshooting a virtual machine booted from an image that depends on post-configuration scripts: The host has a subnet assigned in Satellite Server. The subnet has a DHCP Capsule assigned in Satellite Server. The host has a valid IP address assigned in Satellite Server. The IP address acquired by the virtual machine by using DHCP matches the address configured in Satellite Server. The virtual machine created from an image responds to SSH requests. The virtual machine created from an image authorizes the user and password, over SSH, that are associated with the image being deployed. Satellite Server has access to the virtual machine via SSH keys. This is required for the virtual machine to receive post-configuration scripts from Satellite Server. Pre-boot initialization configuration method Images that use the cloud-init scripts require a DHCP server to avoid having to include the IP address in the image. A managed DHCP Capsule is preferred. The image must have the cloud-init service configured to start when the system boots and fetch a script or configuration data to use in completing the configuration.
Check the following items when troubleshooting a virtual machine booted from an image that depends on initialization scripts included in the image: There is a DHCP server on the subnet. The virtual machine has the cloud-init service installed and enabled. For information about the differing levels of support for finish and cloud-init scripts in virtual-machine images, see the Red Hat Knowledgebase Solution What are the supported compute resources for the finish and cloud-init scripts on the Red Hat Customer Portal. 3.7. Configuring network services Some provisioning methods use Capsule Server services. For example, a network might require Capsule Server to act as a DHCP server. A network can also use PXE boot services to install the operating system on new hosts. This requires configuring Capsule Server to use the main PXE boot services: DHCP, DNS, and TFTP. Use the satellite-installer command with the options to configure these services on Satellite Server. To configure these services on an external Capsule Server, run satellite-installer . Procedure Enter the satellite-installer command to configure the required network services: Find Capsule Server that you configure: Refresh features of Capsule Server to view the changes: Verify the services configured on Capsule Server: 3.7.1. Multiple subnets or domains using installer The satellite-installer options allow only for a single DHCP subnet or DNS domain. One way to define more than one subnet is by using a custom configuration file. For every additional subnet or domain, create an entry in /etc/foreman-installer/custom-hiera.yaml file: Execute satellite-installer to perform the changes and verify that the /etc/dhcp/dhcpd.conf contains appropriate entries. Subnets must be then defined in Satellite database. 3.7.2. DHCP options for network configuration --foreman-proxy-dhcp Enables the DHCP service. You can set this option to true or false . --foreman-proxy-dhcp-managed Enables Foreman to manage the DHCP service. You can set this option to true or false . --foreman-proxy-dhcp-gateway The DHCP pool gateway. Set this to the address of the external gateway for hosts on your private network. --foreman-proxy-dhcp-interface Sets the interface for the DHCP service to listen for requests. Set this to eth1 . --foreman-proxy-dhcp-nameservers Sets the addresses of the nameservers provided to clients through DHCP. Set this to the address for Satellite Server on eth1 . --foreman-proxy-dhcp-range A space-separated DHCP pool range for Discovered and Unmanaged services. --foreman-proxy-dhcp-server Sets the address of the DHCP server to manage. Run satellite-installer --help to view more options related to DHCP and other Capsule services. 3.7.3. DNS options for network configuration --foreman-proxy-dns Enables the DNS feature. You can set this option to true or false . --foreman-proxy-dns-provider Selects the provider to be used. --foreman-proxy-dns-managed Let the installer manage ISC BIND. This is only relevant when using the nsupdate and nsupdate_gss providers. You can set this option to true or false . --foreman-proxy-dns-forwarders Sets the DNS forwarders. Only used when ISC BIND is managed by the installer. Set this to your DNS recursors. --foreman-proxy-dns-interface Sets the interface to listen for DNS requests. Only used when ISC BIND is managed by the installer. Set this to eth1 . --foreman-proxy-dns-reverse The DNS reverse zone name. Only used when ISC BIND is managed by the installer. 
--foreman-proxy-dns-server Sets the address of the DNS server. Only used by the nsupdate , nsupdate_gss , and infoblox providers. --foreman-proxy-dns-zone Sets the DNS zone name. Only used when ISC BIND is managed by the installer. Run satellite-installer --help to view more options related to DNS and other Capsule services. 3.7.4. TFTP options for network configuration --foreman-proxy-tftp Enables TFTP service. You can set this option to true or false . --foreman-proxy-tftp-managed Enables Foreman to manage the TFTP service. You can set this option to true or false . --foreman-proxy-tftp-servername Sets the TFTP server to use. Ensure that you use Capsule's IP address. Run satellite-installer --help to view more options related to TFTP and other Capsule services. 3.7.5. Using TFTP services through NAT You can use Satellite TFTP services through NAT. To do this, on all NAT routers or firewalls, you must enable a TFTP service on UDP port 69 and enable the TFTP state tracking feature. For more information, see the documentation for your NAT device. Using NAT on Red Hat Enterprise Linux 7: Allow the TFTP service in the firewall configuration: Make the changes persistent: Using NAT on Red Hat Enterprise Linux 6: Configure the firewall to allow TFTP service UDP on port 69: Load the ip_conntrack_tftp kernel TFTP state module. In the /etc/sysconfig/iptables-config file, locate IPTABLES_MODULES and add ip_conntrack_tftp as follows: 3.8. Adding a domain to Satellite Server Satellite Server defines domain names for each host on the network. Satellite Server must have information about the domain and Capsule Server responsible for domain name assignment. Checking for existing domains Satellite Server might already have the relevant domain created as part of Satellite Server installation. Switch the context to Any Organization and Any Location then check the domain list to see if it exists. DNS server configuration considerations During the DNS record creation, Satellite performs conflict DNS lookups to verify that the host name is not in active use. This check runs against one of the following DNS servers: The system-wide resolver if Administer > Settings > Query local nameservers is set to true . The nameservers that are defined in the subnet associated with the host. The authoritative NS-Records that are queried from the SOA from the domain name associated with the host. If you experience timeouts during DNS conflict resolution, check the following settings: The subnet nameservers must be reachable from Satellite Server. The domain name must have a Start of Authority (SOA) record available from Satellite Server. The system resolver in the /etc/resolv.conf file must have a valid and working configuration. To use the CLI instead of the Satellite web UI, see the CLI procedure . Procedure In the Satellite web UI, navigate to Infrastructure > Domains and click Create Domain . In the DNS Domain field, enter the full DNS domain name. In the Fullname field, enter the plain text name of the domain. Click the Parameters tab and configure any domain level parameters to apply to hosts attached to this domain. For example, user defined Boolean or string parameters to use in templates. Click Add Parameter and fill in the Name and Value fields. Click the Locations tab, and add the location where the domain resides. Click the Organizations tab, and add the organization that the domain belongs to. Click Submit to save the changes. 
CLI procedure Use the hammer domain create command to create a domain: In this example, the --dns-id option uses 1 , which is the ID of your integrated Capsule on Satellite Server. 3.9. Adding a subnet to Satellite Server You must add information for each of your subnets to Satellite Server because Satellite configures interfaces for new hosts. To configure interfaces, Satellite Server must have all the information about the network that connects these interfaces. To use the CLI instead of the Satellite web UI, see the CLI procedure . Procedure In the Satellite web UI, navigate to Infrastructure > Subnets , and in the Subnets window, click Create Subnet . In the Name field, enter a name for the subnet. In the Description field, enter a description for the subnet. In the Network address field, enter the network address for the subnet. In the Network prefix field, enter the network prefix for the subnet. In the Network mask field, enter the network mask for the subnet. In the Gateway address field, enter the external gateway for the subnet. In the Primary DNS server field, enter a primary DNS for the subnet. In the Secondary DNS server , enter a secondary DNS for the subnet. From the IPAM list, select the method that you want to use for IP address management (IPAM). For more information about IPAM, see Chapter 3, Configuring networking . Enter the information for the IPAM method that you select. Click the Remote Execution tab and select the Capsule that controls the remote execution. Click the Domains tab and select the domains that apply to this subnet. Click the Capsules tab and select the Capsule that applies to each service in the subnet, including DHCP, TFTP, and reverse DNS services. Click the Parameters tab and configure any subnet level parameters to apply to hosts attached to this subnet. For example, user defined Boolean or string parameters to use in templates. Click the Locations tab and select the locations that use this Capsule. Click the Organizations tab and select the organizations that use this Capsule. Click Submit to save the subnet information. CLI procedure Create the subnet with the following command: Note In this example, the --dhcp-id , --dns-id , and --tftp-id options use 1, which is the ID of the integrated Capsule in Satellite Server. | [
"foreman-rake facts:clean",
"foreman-rake interfaces:clean",
"satellite-installer --foreman-proxy-tftp-servername 1.2.3.4",
"foreman-rake orchestration:dhcp:add_missing subnet_name=NAME",
"foreman-rake orchestration:dhcp:add_missing subnet_name=NAME perform=1",
"foreman-rake orchestration:dhcp:remove_offending subnet_name=NAME",
"foreman-rake orchestration:dhcp:remove_offending subnet_name=NAME perform=1",
"satellite-installer --foreman-proxy-dhcp true --foreman-proxy-dhcp-gateway \" 192.168.140.1 \" --foreman-proxy-dhcp-managed true --foreman-proxy-dhcp-nameservers \" 192.168.140.2 \" --foreman-proxy-dhcp-range \" 192.168.140.10 192.168.140.110 \" --foreman-proxy-dhcp-server \" 192.168.140.2 \" --foreman-proxy-dns true --foreman-proxy-dns-forwarders \" 8.8.8.8 \" --foreman-proxy-dns-forwarders \" 8.8.4.4 \" --foreman-proxy-dns-managed true --foreman-proxy-dns-reverse \" 140.168.192.in-addr.arpa \" --foreman-proxy-dns-server \" 127.0.0.1 \" --foreman-proxy-dns-zone \" example.com \" --foreman-proxy-tftp true --foreman-proxy-tftp-managed true",
"hammer capsule list",
"hammer capsule refresh-features --name \" satellite.example.com \"",
"hammer capsule info --name \" satellite.example.com \"",
"dhcp::pools: isolated.lan: network: 192.168.99.0 mask: 255.255.255.0 gateway: 192.168.99.1 range: 192.168.99.5 192.168.99.49 dns::zones: # creates @ SOA USD::fqdn root.example.com. # creates USD::fqdn A USD::ipaddress example.com: {} # creates @ SOA test.example.net. hostmaster.example.com. # creates test.example.net A 192.0.2.100 example.net: soa: test.example.net soaip: 192.0.2.100 contact: hostmaster.example.com. # creates @ SOA USD::fqdn root.example.org. # does NOT create an A record example.org: reverse: true # creates @ SOA USD::fqdn hostmaster.example.com. 2.0.192.in-addr.arpa: reverse: true contact: hostmaster.example.com.",
"firewall-cmd --add-service=tftp",
"firewall-cmd --runtime-to-permanent",
"iptables --sport 69 --state ESTABLISHED -A OUTPUT -i eth0 -j ACCEPT -m state -p udp service iptables save",
"IPTABLES_MODULES=\"ip_conntrack_tftp\"",
"hammer domain create --description \" My_Domain \" --dns-id My_DNS_ID --locations \" My_Location \" --name \" my-domain.tld \" --organizations \" My_Organization \"",
"hammer subnet create --boot-mode \"DHCP\" --description \" My_Description \" --dhcp-id My_DHCP_ID --dns-id My_DNS_ID --dns-primary \"192.168.140.2\" --dns-secondary \"8.8.8.8\" --domains \" my-domain.tld\" \\ --from \"192.168.140.111\" \\ --gateway \"192.168.140.1\" \\ --ipam \"DHCP\" \\ --locations \"_My_Location \" --mask \"255.255.255.0\" --name \" My_Network \" --network \"192.168.140.0\" --organizations \" My_Organization \" --tftp-id My_TFTP_ID --to \"192.168.140.250\""
] | https://docs.redhat.com/en/documentation/red_hat_satellite/6.16/html/provisioning_hosts/configuring_networking_provisioning |
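For the image-based provisioning prerequisites earlier in this chapter, a minimal verification sketch that you can run inside the guest image is shown below; it assumes a RHEL-based image that uses systemd, so adapt the commands to your image as needed.

# Confirm that cloud-init is installed and enabled so it runs at first boot
systemctl is-enabled cloud-init

# Review the service state and the result of its last run
systemctl status cloud-init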
Chapter 28. Defining Password Policies | Chapter 28. Defining Password Policies This chapter describes what password policies in Identity Management (IdM) are and how to manage them. 28.1. What Are Password Policies and Why Are They Useful A password policy is a set of rules that passwords must meet. For example, a password policy can define minimum password length and maximum password lifetime. All users affected by such a policy are required to set a sufficiently long password and change it frequently enough. Password policies help reduce the risk of someone discovering and misusing a user's password. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/linux_domain_identity_authentication_and_policy_guide/pwd-policies |
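As an illustration of the kind of rules described above, the following sketch modifies the IdM global password policy with the ipa command-line tool; the specific values, an eight-character minimum length and a 90-day maximum lifetime, are example choices rather than recommendations from this guide.

# Require at least 8 characters and force a password change every 90 days
ipa pwpolicy-mod --minlength=8 --maxlife=90

You can verify the resulting policy with ipa pwpolicy-show.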
Chapter 4. HorizontalPodAutoscaler [autoscaling/v2] | Chapter 4. HorizontalPodAutoscaler [autoscaling/v2] Description HorizontalPodAutoscaler is the configuration for a horizontal pod autoscaler, which automatically manages the replica count of any resource implementing the scale subresource based on the metrics specified. Type object 4.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta metadata is the standard object metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec object HorizontalPodAutoscalerSpec describes the desired functionality of the HorizontalPodAutoscaler. status object HorizontalPodAutoscalerStatus describes the current status of a horizontal pod autoscaler. 4.1.1. .spec Description HorizontalPodAutoscalerSpec describes the desired functionality of the HorizontalPodAutoscaler. Type object Required scaleTargetRef maxReplicas Property Type Description behavior object HorizontalPodAutoscalerBehavior configures the scaling behavior of the target in both Up and Down directions (scaleUp and scaleDown fields respectively). maxReplicas integer maxReplicas is the upper limit for the number of replicas to which the autoscaler can scale up. It cannot be less that minReplicas. metrics array metrics contains the specifications for which to use to calculate the desired replica count (the maximum replica count across all metrics will be used). The desired replica count is calculated multiplying the ratio between the target value and the current value by the current number of pods. Ergo, metrics used must decrease as the pod count is increased, and vice-versa. See the individual metric source types for more information about how each type of metric must respond. If not set, the default metric will be set to 80% average CPU utilization. metrics[] object MetricSpec specifies how to scale based on a single metric (only type and one other matching field should be set at once). minReplicas integer minReplicas is the lower limit for the number of replicas to which the autoscaler can scale down. It defaults to 1 pod. minReplicas is allowed to be 0 if the alpha feature gate HPAScaleToZero is enabled and at least one Object or External metric is configured. Scaling is active as long as at least one metric value is available. scaleTargetRef object CrossVersionObjectReference contains enough information to let you identify the referred resource. 4.1.2. .spec.behavior Description HorizontalPodAutoscalerBehavior configures the scaling behavior of the target in both Up and Down directions (scaleUp and scaleDown fields respectively). Type object Property Type Description scaleDown object HPAScalingRules configures the scaling behavior for one direction. These Rules are applied after calculating DesiredReplicas from metrics for the HPA. They can limit the scaling velocity by specifying scaling policies. 
They can prevent flapping by specifying the stabilization window, so that the number of replicas is not set instantly, instead, the safest value from the stabilization window is chosen. scaleUp object HPAScalingRules configures the scaling behavior for one direction. These Rules are applied after calculating DesiredReplicas from metrics for the HPA. They can limit the scaling velocity by specifying scaling policies. They can prevent flapping by specifying the stabilization window, so that the number of replicas is not set instantly, instead, the safest value from the stabilization window is chosen. 4.1.3. .spec.behavior.scaleDown Description HPAScalingRules configures the scaling behavior for one direction. These Rules are applied after calculating DesiredReplicas from metrics for the HPA. They can limit the scaling velocity by specifying scaling policies. They can prevent flapping by specifying the stabilization window, so that the number of replicas is not set instantly, instead, the safest value from the stabilization window is chosen. Type object Property Type Description policies array policies is a list of potential scaling polices which can be used during scaling. At least one policy must be specified, otherwise the HPAScalingRules will be discarded as invalid policies[] object HPAScalingPolicy is a single policy which must hold true for a specified past interval. selectPolicy string selectPolicy is used to specify which policy should be used. If not set, the default value Max is used. stabilizationWindowSeconds integer StabilizationWindowSeconds is the number of seconds for which past recommendations should be considered while scaling up or scaling down. StabilizationWindowSeconds must be greater than or equal to zero and less than or equal to 3600 (one hour). If not set, use the default values: - For scale up: 0 (i.e. no stabilization is done). - For scale down: 300 (i.e. the stabilization window is 300 seconds long). 4.1.4. .spec.behavior.scaleDown.policies Description policies is a list of potential scaling polices which can be used during scaling. At least one policy must be specified, otherwise the HPAScalingRules will be discarded as invalid Type array 4.1.5. .spec.behavior.scaleDown.policies[] Description HPAScalingPolicy is a single policy which must hold true for a specified past interval. Type object Required type value periodSeconds Property Type Description periodSeconds integer PeriodSeconds specifies the window of time for which the policy should hold true. PeriodSeconds must be greater than zero and less than or equal to 1800 (30 min). type string Type is used to specify the scaling policy. value integer Value contains the amount of change which is permitted by the policy. It must be greater than zero 4.1.6. .spec.behavior.scaleUp Description HPAScalingRules configures the scaling behavior for one direction. These Rules are applied after calculating DesiredReplicas from metrics for the HPA. They can limit the scaling velocity by specifying scaling policies. They can prevent flapping by specifying the stabilization window, so that the number of replicas is not set instantly, instead, the safest value from the stabilization window is chosen. Type object Property Type Description policies array policies is a list of potential scaling polices which can be used during scaling. 
At least one policy must be specified, otherwise the HPAScalingRules will be discarded as invalid policies[] object HPAScalingPolicy is a single policy which must hold true for a specified past interval. selectPolicy string selectPolicy is used to specify which policy should be used. If not set, the default value Max is used. stabilizationWindowSeconds integer StabilizationWindowSeconds is the number of seconds for which past recommendations should be considered while scaling up or scaling down. StabilizationWindowSeconds must be greater than or equal to zero and less than or equal to 3600 (one hour). If not set, use the default values: - For scale up: 0 (i.e. no stabilization is done). - For scale down: 300 (i.e. the stabilization window is 300 seconds long). 4.1.7. .spec.behavior.scaleUp.policies Description policies is a list of potential scaling polices which can be used during scaling. At least one policy must be specified, otherwise the HPAScalingRules will be discarded as invalid Type array 4.1.8. .spec.behavior.scaleUp.policies[] Description HPAScalingPolicy is a single policy which must hold true for a specified past interval. Type object Required type value periodSeconds Property Type Description periodSeconds integer PeriodSeconds specifies the window of time for which the policy should hold true. PeriodSeconds must be greater than zero and less than or equal to 1800 (30 min). type string Type is used to specify the scaling policy. value integer Value contains the amount of change which is permitted by the policy. It must be greater than zero 4.1.9. .spec.metrics Description metrics contains the specifications for which to use to calculate the desired replica count (the maximum replica count across all metrics will be used). The desired replica count is calculated multiplying the ratio between the target value and the current value by the current number of pods. Ergo, metrics used must decrease as the pod count is increased, and vice-versa. See the individual metric source types for more information about how each type of metric must respond. If not set, the default metric will be set to 80% average CPU utilization. Type array 4.1.10. .spec.metrics[] Description MetricSpec specifies how to scale based on a single metric (only type and one other matching field should be set at once). Type object Required type Property Type Description containerResource object ContainerResourceMetricSource indicates how to scale on a resource metric known to Kubernetes, as specified in requests and limits, describing each pod in the current scale target (e.g. CPU or memory). The values will be averaged together before being compared to the target. Such metrics are built in to Kubernetes, and have special scaling options on top of those available to normal per-pod metrics using the "pods" source. Only one "target" type should be set. external object ExternalMetricSource indicates how to scale on a metric not associated with any Kubernetes object (for example length of queue in cloud messaging service, or QPS from loadbalancer running outside of cluster). object object ObjectMetricSource indicates how to scale on a metric describing a kubernetes object (for example, hits-per-second on an Ingress object). pods object PodsMetricSource indicates how to scale on a metric describing each pod in the current scale target (for example, transactions-processed-per-second). The values will be averaged together before being compared to the target value. 
resource object ResourceMetricSource indicates how to scale on a resource metric known to Kubernetes, as specified in requests and limits, describing each pod in the current scale target (e.g. CPU or memory). The values will be averaged together before being compared to the target. Such metrics are built in to Kubernetes, and have special scaling options on top of those available to normal per-pod metrics using the "pods" source. Only one "target" type should be set. type string type is the type of metric source. It should be one of "ContainerResource", "External", "Object", "Pods" or "Resource", each mapping to a matching field in the object. Note: "ContainerResource" type is available on when the feature-gate HPAContainerMetrics is enabled 4.1.11. .spec.metrics[].containerResource Description ContainerResourceMetricSource indicates how to scale on a resource metric known to Kubernetes, as specified in requests and limits, describing each pod in the current scale target (e.g. CPU or memory). The values will be averaged together before being compared to the target. Such metrics are built in to Kubernetes, and have special scaling options on top of those available to normal per-pod metrics using the "pods" source. Only one "target" type should be set. Type object Required name target container Property Type Description container string container is the name of the container in the pods of the scaling target name string name is the name of the resource in question. target object MetricTarget defines the target value, average value, or average utilization of a specific metric 4.1.12. .spec.metrics[].containerResource.target Description MetricTarget defines the target value, average value, or average utilization of a specific metric Type object Required type Property Type Description averageUtilization integer averageUtilization is the target value of the average of the resource metric across all relevant pods, represented as a percentage of the requested value of the resource for the pods. Currently only valid for Resource metric source type averageValue Quantity averageValue is the target value of the average of the metric across all relevant pods (as a quantity) type string type represents whether the metric type is Utilization, Value, or AverageValue value Quantity value is the target value of the metric (as a quantity). 4.1.13. .spec.metrics[].external Description ExternalMetricSource indicates how to scale on a metric not associated with any Kubernetes object (for example length of queue in cloud messaging service, or QPS from loadbalancer running outside of cluster). Type object Required metric target Property Type Description metric object MetricIdentifier defines the name and optionally selector for a metric target object MetricTarget defines the target value, average value, or average utilization of a specific metric 4.1.14. .spec.metrics[].external.metric Description MetricIdentifier defines the name and optionally selector for a metric Type object Required name Property Type Description name string name is the name of the given metric selector LabelSelector selector is the string-encoded form of a standard kubernetes label selector for the given metric When set, it is passed as an additional parameter to the metrics server for more specific metrics scoping. When unset, just the metricName will be used to gather metrics. 4.1.15. 
.spec.metrics[].external.target Description MetricTarget defines the target value, average value, or average utilization of a specific metric Type object Required type Property Type Description averageUtilization integer averageUtilization is the target value of the average of the resource metric across all relevant pods, represented as a percentage of the requested value of the resource for the pods. Currently only valid for Resource metric source type averageValue Quantity averageValue is the target value of the average of the metric across all relevant pods (as a quantity) type string type represents whether the metric type is Utilization, Value, or AverageValue value Quantity value is the target value of the metric (as a quantity). 4.1.16. .spec.metrics[].object Description ObjectMetricSource indicates how to scale on a metric describing a kubernetes object (for example, hits-per-second on an Ingress object). Type object Required describedObject target metric Property Type Description describedObject object CrossVersionObjectReference contains enough information to let you identify the referred resource. metric object MetricIdentifier defines the name and optionally selector for a metric target object MetricTarget defines the target value, average value, or average utilization of a specific metric 4.1.17. .spec.metrics[].object.describedObject Description CrossVersionObjectReference contains enough information to let you identify the referred resource. Type object Required kind name Property Type Description apiVersion string API version of the referent kind string Kind of the referent; More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds name string Name of the referent; More info: http://kubernetes.io/docs/user-guide/identifiers#names 4.1.18. .spec.metrics[].object.metric Description MetricIdentifier defines the name and optionally selector for a metric Type object Required name Property Type Description name string name is the name of the given metric selector LabelSelector selector is the string-encoded form of a standard kubernetes label selector for the given metric When set, it is passed as an additional parameter to the metrics server for more specific metrics scoping. When unset, just the metricName will be used to gather metrics. 4.1.19. .spec.metrics[].object.target Description MetricTarget defines the target value, average value, or average utilization of a specific metric Type object Required type Property Type Description averageUtilization integer averageUtilization is the target value of the average of the resource metric across all relevant pods, represented as a percentage of the requested value of the resource for the pods. Currently only valid for Resource metric source type averageValue Quantity averageValue is the target value of the average of the metric across all relevant pods (as a quantity) type string type represents whether the metric type is Utilization, Value, or AverageValue value Quantity value is the target value of the metric (as a quantity). 4.1.20. .spec.metrics[].pods Description PodsMetricSource indicates how to scale on a metric describing each pod in the current scale target (for example, transactions-processed-per-second). The values will be averaged together before being compared to the target value. 
Type object Required metric target Property Type Description metric object MetricIdentifier defines the name and optionally selector for a metric target object MetricTarget defines the target value, average value, or average utilization of a specific metric 4.1.21. .spec.metrics[].pods.metric Description MetricIdentifier defines the name and optionally selector for a metric Type object Required name Property Type Description name string name is the name of the given metric selector LabelSelector selector is the string-encoded form of a standard kubernetes label selector for the given metric When set, it is passed as an additional parameter to the metrics server for more specific metrics scoping. When unset, just the metricName will be used to gather metrics. 4.1.22. .spec.metrics[].pods.target Description MetricTarget defines the target value, average value, or average utilization of a specific metric Type object Required type Property Type Description averageUtilization integer averageUtilization is the target value of the average of the resource metric across all relevant pods, represented as a percentage of the requested value of the resource for the pods. Currently only valid for Resource metric source type averageValue Quantity averageValue is the target value of the average of the metric across all relevant pods (as a quantity) type string type represents whether the metric type is Utilization, Value, or AverageValue value Quantity value is the target value of the metric (as a quantity). 4.1.23. .spec.metrics[].resource Description ResourceMetricSource indicates how to scale on a resource metric known to Kubernetes, as specified in requests and limits, describing each pod in the current scale target (e.g. CPU or memory). The values will be averaged together before being compared to the target. Such metrics are built in to Kubernetes, and have special scaling options on top of those available to normal per-pod metrics using the "pods" source. Only one "target" type should be set. Type object Required name target Property Type Description name string name is the name of the resource in question. target object MetricTarget defines the target value, average value, or average utilization of a specific metric 4.1.24. .spec.metrics[].resource.target Description MetricTarget defines the target value, average value, or average utilization of a specific metric Type object Required type Property Type Description averageUtilization integer averageUtilization is the target value of the average of the resource metric across all relevant pods, represented as a percentage of the requested value of the resource for the pods. Currently only valid for Resource metric source type averageValue Quantity averageValue is the target value of the average of the metric across all relevant pods (as a quantity) type string type represents whether the metric type is Utilization, Value, or AverageValue value Quantity value is the target value of the metric (as a quantity). 4.1.25. .spec.scaleTargetRef Description CrossVersionObjectReference contains enough information to let you identify the referred resource. Type object Required kind name Property Type Description apiVersion string API version of the referent kind string Kind of the referent; More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds name string Name of the referent; More info: http://kubernetes.io/docs/user-guide/identifiers#names 4.1.26. 
.status Description HorizontalPodAutoscalerStatus describes the current status of a horizontal pod autoscaler. Type object Required desiredReplicas Property Type Description conditions array conditions is the set of conditions required for this autoscaler to scale its target, and indicates whether or not those conditions are met. conditions[] object HorizontalPodAutoscalerCondition describes the state of a HorizontalPodAutoscaler at a certain point. currentMetrics array currentMetrics is the last read state of the metrics used by this autoscaler. currentMetrics[] object MetricStatus describes the last-read state of a single metric. currentReplicas integer currentReplicas is current number of replicas of pods managed by this autoscaler, as last seen by the autoscaler. desiredReplicas integer desiredReplicas is the desired number of replicas of pods managed by this autoscaler, as last calculated by the autoscaler. lastScaleTime Time lastScaleTime is the last time the HorizontalPodAutoscaler scaled the number of pods, used by the autoscaler to control how often the number of pods is changed. observedGeneration integer observedGeneration is the most recent generation observed by this autoscaler. 4.1.27. .status.conditions Description conditions is the set of conditions required for this autoscaler to scale its target, and indicates whether or not those conditions are met. Type array 4.1.28. .status.conditions[] Description HorizontalPodAutoscalerCondition describes the state of a HorizontalPodAutoscaler at a certain point. Type object Required type status Property Type Description lastTransitionTime Time lastTransitionTime is the last time the condition transitioned from one status to another message string message is a human-readable explanation containing details about the transition reason string reason is the reason for the condition's last transition. status string status is the status of the condition (True, False, Unknown) type string type describes the current condition 4.1.29. .status.currentMetrics Description currentMetrics is the last read state of the metrics used by this autoscaler. Type array 4.1.30. .status.currentMetrics[] Description MetricStatus describes the last-read state of a single metric. Type object Required type Property Type Description containerResource object ContainerResourceMetricStatus indicates the current value of a resource metric known to Kubernetes, as specified in requests and limits, describing a single container in each pod in the current scale target (e.g. CPU or memory). Such metrics are built in to Kubernetes, and have special scaling options on top of those available to normal per-pod metrics using the "pods" source. external object ExternalMetricStatus indicates the current value of a global metric not associated with any Kubernetes object. object object ObjectMetricStatus indicates the current value of a metric describing a kubernetes object (for example, hits-per-second on an Ingress object). pods object PodsMetricStatus indicates the current value of a metric describing each pod in the current scale target (for example, transactions-processed-per-second). resource object ResourceMetricStatus indicates the current value of a resource metric known to Kubernetes, as specified in requests and limits, describing each pod in the current scale target (e.g. CPU or memory). Such metrics are built in to Kubernetes, and have special scaling options on top of those available to normal per-pod metrics using the "pods" source. 
type string type is the type of metric source. It will be one of "ContainerResource", "External", "Object", "Pods" or "Resource", each corresponds to a matching field in the object. Note: "ContainerResource" type is available on when the feature-gate HPAContainerMetrics is enabled 4.1.31. .status.currentMetrics[].containerResource Description ContainerResourceMetricStatus indicates the current value of a resource metric known to Kubernetes, as specified in requests and limits, describing a single container in each pod in the current scale target (e.g. CPU or memory). Such metrics are built in to Kubernetes, and have special scaling options on top of those available to normal per-pod metrics using the "pods" source. Type object Required name current container Property Type Description container string Container is the name of the container in the pods of the scaling target current object MetricValueStatus holds the current value for a metric name string Name is the name of the resource in question. 4.1.32. .status.currentMetrics[].containerResource.current Description MetricValueStatus holds the current value for a metric Type object Property Type Description averageUtilization integer currentAverageUtilization is the current value of the average of the resource metric across all relevant pods, represented as a percentage of the requested value of the resource for the pods. averageValue Quantity averageValue is the current value of the average of the metric across all relevant pods (as a quantity) value Quantity value is the current value of the metric (as a quantity). 4.1.33. .status.currentMetrics[].external Description ExternalMetricStatus indicates the current value of a global metric not associated with any Kubernetes object. Type object Required metric current Property Type Description current object MetricValueStatus holds the current value for a metric metric object MetricIdentifier defines the name and optionally selector for a metric 4.1.34. .status.currentMetrics[].external.current Description MetricValueStatus holds the current value for a metric Type object Property Type Description averageUtilization integer currentAverageUtilization is the current value of the average of the resource metric across all relevant pods, represented as a percentage of the requested value of the resource for the pods. averageValue Quantity averageValue is the current value of the average of the metric across all relevant pods (as a quantity) value Quantity value is the current value of the metric (as a quantity). 4.1.35. .status.currentMetrics[].external.metric Description MetricIdentifier defines the name and optionally selector for a metric Type object Required name Property Type Description name string name is the name of the given metric selector LabelSelector selector is the string-encoded form of a standard kubernetes label selector for the given metric When set, it is passed as an additional parameter to the metrics server for more specific metrics scoping. When unset, just the metricName will be used to gather metrics. 4.1.36. .status.currentMetrics[].object Description ObjectMetricStatus indicates the current value of a metric describing a kubernetes object (for example, hits-per-second on an Ingress object). Type object Required metric current describedObject Property Type Description current object MetricValueStatus holds the current value for a metric describedObject object CrossVersionObjectReference contains enough information to let you identify the referred resource. 
metric object MetricIdentifier defines the name and optionally selector for a metric 4.1.37. .status.currentMetrics[].object.current Description MetricValueStatus holds the current value for a metric Type object Property Type Description averageUtilization integer currentAverageUtilization is the current value of the average of the resource metric across all relevant pods, represented as a percentage of the requested value of the resource for the pods. averageValue Quantity averageValue is the current value of the average of the metric across all relevant pods (as a quantity) value Quantity value is the current value of the metric (as a quantity). 4.1.38. .status.currentMetrics[].object.describedObject Description CrossVersionObjectReference contains enough information to let you identify the referred resource. Type object Required kind name Property Type Description apiVersion string API version of the referent kind string Kind of the referent; More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds name string Name of the referent; More info: http://kubernetes.io/docs/user-guide/identifiers#names 4.1.39. .status.currentMetrics[].object.metric Description MetricIdentifier defines the name and optionally selector for a metric Type object Required name Property Type Description name string name is the name of the given metric selector LabelSelector selector is the string-encoded form of a standard kubernetes label selector for the given metric When set, it is passed as an additional parameter to the metrics server for more specific metrics scoping. When unset, just the metricName will be used to gather metrics. 4.1.40. .status.currentMetrics[].pods Description PodsMetricStatus indicates the current value of a metric describing each pod in the current scale target (for example, transactions-processed-per-second). Type object Required metric current Property Type Description current object MetricValueStatus holds the current value for a metric metric object MetricIdentifier defines the name and optionally selector for a metric 4.1.41. .status.currentMetrics[].pods.current Description MetricValueStatus holds the current value for a metric Type object Property Type Description averageUtilization integer currentAverageUtilization is the current value of the average of the resource metric across all relevant pods, represented as a percentage of the requested value of the resource for the pods. averageValue Quantity averageValue is the current value of the average of the metric across all relevant pods (as a quantity) value Quantity value is the current value of the metric (as a quantity). 4.1.42. .status.currentMetrics[].pods.metric Description MetricIdentifier defines the name and optionally selector for a metric Type object Required name Property Type Description name string name is the name of the given metric selector LabelSelector selector is the string-encoded form of a standard kubernetes label selector for the given metric When set, it is passed as an additional parameter to the metrics server for more specific metrics scoping. When unset, just the metricName will be used to gather metrics. 4.1.43. .status.currentMetrics[].resource Description ResourceMetricStatus indicates the current value of a resource metric known to Kubernetes, as specified in requests and limits, describing each pod in the current scale target (e.g. CPU or memory). 
Such metrics are built in to Kubernetes, and have special scaling options on top of those available to normal per-pod metrics using the "pods" source. Type object Required name current Property Type Description current object MetricValueStatus holds the current value for a metric name string Name is the name of the resource in question. 4.1.44. .status.currentMetrics[].resource.current Description MetricValueStatus holds the current value for a metric Type object Property Type Description averageUtilization integer currentAverageUtilization is the current value of the average of the resource metric across all relevant pods, represented as a percentage of the requested value of the resource for the pods. averageValue Quantity averageValue is the current value of the average of the metric across all relevant pods (as a quantity) value Quantity value is the current value of the metric (as a quantity). 4.2. API endpoints The following API endpoints are available: /apis/autoscaling/v2/horizontalpodautoscalers GET : list or watch objects of kind HorizontalPodAutoscaler /apis/autoscaling/v2/watch/horizontalpodautoscalers GET : watch individual changes to a list of HorizontalPodAutoscaler. deprecated: use the 'watch' parameter with a list operation instead. /apis/autoscaling/v2/namespaces/{namespace}/horizontalpodautoscalers DELETE : delete collection of HorizontalPodAutoscaler GET : list or watch objects of kind HorizontalPodAutoscaler POST : create a HorizontalPodAutoscaler /apis/autoscaling/v2/watch/namespaces/{namespace}/horizontalpodautoscalers GET : watch individual changes to a list of HorizontalPodAutoscaler. deprecated: use the 'watch' parameter with a list operation instead. /apis/autoscaling/v2/namespaces/{namespace}/horizontalpodautoscalers/{name} DELETE : delete a HorizontalPodAutoscaler GET : read the specified HorizontalPodAutoscaler PATCH : partially update the specified HorizontalPodAutoscaler PUT : replace the specified HorizontalPodAutoscaler /apis/autoscaling/v2/watch/namespaces/{namespace}/horizontalpodautoscalers/{name} GET : watch changes to an object of kind HorizontalPodAutoscaler. deprecated: use the 'watch' parameter with a list operation instead, filtered to a single item with the 'fieldSelector' parameter. /apis/autoscaling/v2/namespaces/{namespace}/horizontalpodautoscalers/{name}/status GET : read status of the specified HorizontalPodAutoscaler PATCH : partially update status of the specified HorizontalPodAutoscaler PUT : replace status of the specified HorizontalPodAutoscaler 4.2.1. /apis/autoscaling/v2/horizontalpodautoscalers Table 4.1. Global query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. 
If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. pretty string If 'true', then the output is pretty printed. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. HTTP method GET Description list or watch objects of kind HorizontalPodAutoscaler Table 4.2. 
HTTP responses HTTP code Reponse body 200 - OK HorizontalPodAutoscalerList schema 401 - Unauthorized Empty 4.2.2. /apis/autoscaling/v2/watch/horizontalpodautoscalers Table 4.3. Global query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. pretty string If 'true', then the output is pretty printed. 
resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. HTTP method GET Description watch individual changes to a list of HorizontalPodAutoscaler. deprecated: use the 'watch' parameter with a list operation instead. Table 4.4. HTTP responses HTTP code Reponse body 200 - OK WatchEvent schema 401 - Unauthorized Empty 4.2.3. /apis/autoscaling/v2/namespaces/{namespace}/horizontalpodautoscalers Table 4.5. Global path parameters Parameter Type Description namespace string object name and auth scope, such as for teams and projects Table 4.6. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method DELETE Description delete collection of HorizontalPodAutoscaler Table 4.7. Query parameters Parameter Type Description continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. gracePeriodSeconds integer The duration in seconds before the object should be deleted. Value must be non-negative integer. The value zero indicates delete immediately. If this value is nil, the default grace period for the specified type will be used. Defaults to a per object value if not specified. zero means delete immediately. 
labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. orphanDependents boolean Deprecated: please use the PropagationPolicy, this field will be deprecated in 1.7. Should the dependent objects be orphaned. If true/false, the "orphan" finalizer will be added to/removed from the object's finalizers list. Either this field or PropagationPolicy may be set, but not both. propagationPolicy string Whether and how garbage collection will be performed. Either this field or OrphanDependents may be set, but not both. The default policy is decided by the existing finalizer set in the metadata.finalizers and the resource-specific default policy. Acceptable values are: 'Orphan' - orphan the dependents; 'Background' - allow the garbage collector to delete the dependents in the background; 'Foreground' - a cascading policy that deletes all dependents in the foreground. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. Table 4.8. Body parameters Parameter Type Description body DeleteOptions schema Table 4.9. HTTP responses HTTP code Reponse body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list or watch objects of kind HorizontalPodAutoscaler Table 4.10. Query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. 
Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. 
Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. Table 4.11. HTTP responses HTTP code Reponse body 200 - OK HorizontalPodAutoscalerList schema 401 - Unauthorized Empty HTTP method POST Description create a HorizontalPodAutoscaler Table 4.12. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 4.13. Body parameters Parameter Type Description body HorizontalPodAutoscaler schema Table 4.14. HTTP responses HTTP code Reponse body 200 - OK HorizontalPodAutoscaler schema 201 - Created HorizontalPodAutoscaler schema 202 - Accepted HorizontalPodAutoscaler schema 401 - Unauthorized Empty 4.2.4. /apis/autoscaling/v2/watch/namespaces/{namespace}/horizontalpodautoscalers Table 4.15. Global path parameters Parameter Type Description namespace string object name and auth scope, such as for teams and projects Table 4.16. Global query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. 
If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. pretty string If 'true', then the output is pretty printed. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. HTTP method GET Description watch individual changes to a list of HorizontalPodAutoscaler. deprecated: use the 'watch' parameter with a list operation instead. 
Table 4.17. HTTP responses HTTP code Reponse body 200 - OK WatchEvent schema 401 - Unauthorized Empty 4.2.5. /apis/autoscaling/v2/namespaces/{namespace}/horizontalpodautoscalers/{name} Table 4.18. Global path parameters Parameter Type Description name string name of the HorizontalPodAutoscaler namespace string object name and auth scope, such as for teams and projects Table 4.19. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method DELETE Description delete a HorizontalPodAutoscaler Table 4.20. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed gracePeriodSeconds integer The duration in seconds before the object should be deleted. Value must be non-negative integer. The value zero indicates delete immediately. If this value is nil, the default grace period for the specified type will be used. Defaults to a per object value if not specified. zero means delete immediately. orphanDependents boolean Deprecated: please use the PropagationPolicy, this field will be deprecated in 1.7. Should the dependent objects be orphaned. If true/false, the "orphan" finalizer will be added to/removed from the object's finalizers list. Either this field or PropagationPolicy may be set, but not both. propagationPolicy string Whether and how garbage collection will be performed. Either this field or OrphanDependents may be set, but not both. The default policy is decided by the existing finalizer set in the metadata.finalizers and the resource-specific default policy. Acceptable values are: 'Orphan' - orphan the dependents; 'Background' - allow the garbage collector to delete the dependents in the background; 'Foreground' - a cascading policy that deletes all dependents in the foreground. Table 4.21. Body parameters Parameter Type Description body DeleteOptions schema Table 4.22. HTTP responses HTTP code Reponse body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified HorizontalPodAutoscaler Table 4.23. HTTP responses HTTP code Reponse body 200 - OK HorizontalPodAutoscaler schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified HorizontalPodAutoscaler Table 4.24. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . This field is required for apply requests (application/apply-patch) but optional for non-apply patch types (JsonPatch, MergePatch, StrategicMergePatch). fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. 
Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. force boolean Force is going to "force" Apply requests. It means user will re-acquire conflicting fields owned by other people. Force flag must be unset for non-apply patch requests. Table 4.25. Body parameters Parameter Type Description body Patch schema Table 4.26. HTTP responses HTTP code Reponse body 200 - OK HorizontalPodAutoscaler schema 201 - Created HorizontalPodAutoscaler schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified HorizontalPodAutoscaler Table 4.27. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 4.28. Body parameters Parameter Type Description body HorizontalPodAutoscaler schema Table 4.29. HTTP responses HTTP code Reponse body 200 - OK HorizontalPodAutoscaler schema 201 - Created HorizontalPodAutoscaler schema 401 - Unauthorized Empty 4.2.6. /apis/autoscaling/v2/watch/namespaces/{namespace}/horizontalpodautoscalers/{name} Table 4.30. 
Global path parameters Parameter Type Description name string name of the HorizontalPodAutoscaler namespace string object name and auth scope, such as for teams and projects Table 4.31. Global query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. pretty string If 'true', then the output is pretty printed. 
resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. HTTP method GET Description watch changes to an object of kind HorizontalPodAutoscaler. deprecated: use the 'watch' parameter with a list operation instead, filtered to a single item with the 'fieldSelector' parameter. Table 4.32. HTTP responses HTTP code Reponse body 200 - OK WatchEvent schema 401 - Unauthorized Empty 4.2.7. /apis/autoscaling/v2/namespaces/{namespace}/horizontalpodautoscalers/{name}/status Table 4.33. Global path parameters Parameter Type Description name string name of the HorizontalPodAutoscaler namespace string object name and auth scope, such as for teams and projects Table 4.34. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method GET Description read status of the specified HorizontalPodAutoscaler Table 4.35. HTTP responses HTTP code Reponse body 200 - OK HorizontalPodAutoscaler schema 401 - Unauthorized Empty HTTP method PATCH Description partially update status of the specified HorizontalPodAutoscaler Table 4.36. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . This field is required for apply requests (application/apply-patch) but optional for non-apply patch types (JsonPatch, MergePatch, StrategicMergePatch). fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. 
- Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. force boolean Force is going to "force" Apply requests. It means user will re-acquire conflicting fields owned by other people. Force flag must be unset for non-apply patch requests. Table 4.37. Body parameters Parameter Type Description body Patch schema Table 4.38. HTTP responses HTTP code Reponse body 200 - OK HorizontalPodAutoscaler schema 201 - Created HorizontalPodAutoscaler schema 401 - Unauthorized Empty HTTP method PUT Description replace status of the specified HorizontalPodAutoscaler Table 4.39. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 4.40. Body parameters Parameter Type Description body HorizontalPodAutoscaler schema Table 4.41. HTTP responses HTTP code Reponse body 200 - OK HorizontalPodAutoscaler schema 201 - Created HorizontalPodAutoscaler schema 401 - Unauthorized Empty | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.12/html/autoscale_apis/horizontalpodautoscaler-autoscaling-v2 |
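As a quick, hedged illustration of the endpoints listed above, the following commands read HorizontalPodAutoscaler data directly through the Kubernetes API by using the oc client's raw mode. The namespace demo and the autoscaler name web are placeholder values for this sketch, not objects assumed to exist in your cluster, and you must be logged in with sufficient permissions to read the resource.

oc get --raw /apis/autoscaling/v2/namespaces/demo/horizontalpodautoscalers/web/status

oc get --raw '/apis/autoscaling/v2/horizontalpodautoscalers?limit=10'

The first command reads the status subresource of a single autoscaler, which returns the currentMetrics , currentReplicas , and conditions fields described in this chapter. The second command lists autoscalers across all namespaces and uses the limit query parameter from Table 4.1 to request at most 10 items.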
probe::netfilter.arp.in | probe::netfilter.arp.in Name probe::netfilter.arp.in - - Called for each incoming ARP packet Synopsis netfilter.arp.in Values ar_hln Length of hardware address nf_stop Constant used to signify a 'stop' verdict nf_accept Constant used to signify an 'accept' verdict ar_tha Ethernet+IP only (ar_pro==0x800): target hardware (MAC) address ar_data Address of ARP packet data region (after the header) outdev_name Name of network device packet will be routed to (if known) outdev Address of net_device representing output device, 0 if unknown nf_repeat Constant used to signify a 'repeat' verdict arphdr Address of ARP header indev_name Name of network device packet was received on (if known) nf_stolen Constant used to signify a 'stolen' verdict length The length of the packet buffer contents, in bytes ar_pln Length of protocol address ar_sha Ethernet+IP only (ar_pro==0x800): source hardware (MAC) address pf Protocol family -- always " arp " nf_drop Constant used to signify a 'drop' verdict ar_pro Format of protocol address ar_sip Ethernet+IP only (ar_pro==0x800): source IP address indev Address of net_device representing input device, 0 if unknown ar_tip Ethernet+IP only (ar_pro==0x800): target IP address ar_hrd Format of hardware address ar_op ARP opcode (command) nf_queue Constant used to signify a 'queue' verdict | null | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/systemtap_tapset_reference/api-netfilter-arp-in |
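For illustration only, the following SystemTap one-liner traces this probe point and prints a few of the variables described above for each incoming ARP packet. It is a minimal sketch that assumes SystemTap is installed and able to build and load kernel modules for the running kernel; the output format is purely an example.

stap -e 'probe netfilter.arp.in { printf("%s packet, %d bytes, received on %s\n", pf, length, indev_name) }'

The script only observes traffic and does not attempt to change the netfilter verdict.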
Chapter 3. Context Functions | Chapter 3. Context Functions The context functions provide additional information about where an event occurred. These functions can provide information such as a backtrace to where the event occurred and the current register values for the processor. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/systemtap_tapset_reference/context_stp |
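As a brief, hedged sketch of how context functions are typically combined, the following one-liner prints the name and PID of the first process that performs a VFS read, together with a kernel backtrace, and then exits. It assumes SystemTap and the kernel debuginfo packages that match the running kernel are installed; the probe point vfs.read is used here only as a convenient trigger.

stap -e 'probe vfs.read { printf("read by %s (pid %d)\n", execname(), pid()); print_backtrace(); exit() }'

Here execname() , pid() , and print_backtrace() are context functions: they report, respectively, the name of the current process, its process ID, and a backtrace from the point where the probe fired.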
Chapter 5. Topics Messages in Kafka are always sent to or received from a topic. This chapter describes how to configure and manage Kafka topics. 5.1. Partitions and replicas Messages in Kafka are always sent to or received from a topic. A topic is always split into one or more partitions. Partitions act as shards. That means that every message sent by a producer is always written only into a single partition. Thanks to the sharding of messages into different partitions, topics are easy to scale horizontally. Each partition can have one or more replicas, which will be stored on different brokers in the cluster. When creating a topic you can configure the number of replicas using the replication factor . The replication factor defines the number of copies which will be held within the cluster. One of the replicas for a given partition will be elected as a leader. The leader replica will be used by the producers to send new messages and by the consumers to consume messages. The other replicas will be follower replicas. The followers replicate the leader. If the leader fails, one of the followers will automatically become the new leader. Each server acts as a leader for some of its partitions and a follower for others so the load is well balanced within the cluster. Note The replication factor determines the number of replicas including the leader and the followers. For example, if you set the replication factor to 3 , then there will be one leader and two follower replicas. 5.2. Message retention The message retention policy defines how long the messages will be stored on the Kafka brokers. It can be defined based on time, partition size, or both. For example, you can define that the messages should be kept: For 7 days Until the partition has 1GB of messages. Once the limit is reached, the oldest messages will be removed. For 7 days or until the 1GB limit has been reached. Whichever limit is reached first will be used. Warning Kafka brokers store messages in log segments. Messages which are past their retention policy will be deleted only when a new log segment is created. New log segments are created when the current log segment exceeds the configured log segment size. Additionally, users can request new segments to be created periodically. Additionally, Kafka brokers support a compacting policy. For a topic with the compact policy, the broker will always keep only the last message for each key. Older messages with the same key will be removed from the partition. Because compacting is a periodically executed action, it does not happen immediately when a new message with the same key is sent to the partition. Instead, it might take some time until the older messages are removed. For more information about the message retention configuration options, see Section 5.5, "Topic configuration" . 5.3. Topic auto-creation When a producer or consumer tries to send messages to or receive messages from a topic that does not exist, Kafka will, by default, automatically create that topic. This behavior is controlled by the auto.create.topics.enable configuration property, which is set to true by default. To disable it, set auto.create.topics.enable to false in the Kafka broker configuration file: auto.create.topics.enable=false 5.4. Topic deletion Kafka offers the possibility to disable deletion of topics. This is configured through the delete.topic.enable property, which is set to true by default (that is, deleting topics is possible). 
When this property is set to false , it is not possible to delete topics: all attempts to delete a topic will appear to succeed, but the topic will not be deleted. 5.5. Topic configuration Auto-created topics will use the default topic configuration, which can be specified in the broker properties file. However, when creating topics manually, their configuration can be specified at creation time. It is also possible to change a topic's configuration after it has been created. The main topic configuration options for manually created topics are: cleanup.policy Configures the retention policy to delete or compact . The delete policy will delete old records. The compact policy will enable log compaction. The default value is delete . For more information about log compaction, see the Kafka website . compression.type Specifies the compression which is used for stored messages. Valid values are gzip , snappy , lz4 , uncompressed (no compression) and producer (retain the compression codec used by the producer). The default value is producer . max.message.bytes The maximum size of a batch of messages allowed by the Kafka broker, in bytes. The default value is 1000012 . min.insync.replicas The minimum number of replicas which must be in sync for a write to be considered successful. The default value is 1 . retention.ms Maximum number of milliseconds for which log segments will be retained. Log segments older than this value will be deleted. The default value is 604800000 (7 days). retention.bytes The maximum number of bytes a partition will retain. Once the partition size grows over this limit, the oldest log segments will be deleted. A value of -1 indicates no limit. The default value is -1 . segment.bytes The maximum file size of a single commit log segment file in bytes. When a segment reaches this size, a new segment will be started. The default value is 1073741824 bytes (1 gibibyte). For a list of all supported topic configuration options, see Appendix B, Topic configuration parameters . The defaults for auto-created topics can be specified in the Kafka broker configuration using similar options: log.cleanup.policy See cleanup.policy above. compression.type See compression.type above. message.max.bytes See max.message.bytes above. min.insync.replicas See min.insync.replicas above. log.retention.ms See retention.ms above. log.retention.bytes See retention.bytes above. log.segment.bytes See segment.bytes above. default.replication.factor Default replication factor for automatically created topics. Default value is 1 . num.partitions Default number of partitions for automatically created topics. Default value is 1 . For a list of all supported Kafka broker configuration options, see Appendix A, Broker configuration parameters . 5.6. Internal topics Internal topics are created and used internally by the Kafka brokers and clients. Kafka has several internal topics. These are used to store consumer offsets ( __consumer_offsets ) or transaction state ( __transaction_state ). These topics can be configured using dedicated Kafka broker configuration options starting with the prefixes offsets.topic. and transaction.state.log. . The most important configuration options are: offsets.topic.replication.factor Number of replicas for __consumer_offsets topic. The default value is 3 . offsets.topic.num.partitions Number of partitions for __consumer_offsets topic. The default value is 50 . transaction.state.log.replication.factor Number of replicas for __transaction_state topic. The default value is 3 . 
transaction.state.log.num.partitions Number of partitions for __transaction_state topic. The default value is 50 . transaction.state.log.min.isr The minimum number of replicas that must acknowledge a write to the __transaction_state topic for the write to be considered successful. If this minimum cannot be met, then the producer will fail with an exception. The default value is 2 . 5.7. Creating a topic The kafka-topics.sh tool can be used to manage topics. kafka-topics.sh is part of the AMQ Streams distribution and can be found in the bin directory. Prerequisites AMQ Streams cluster is installed and running Creating a topic Create a topic using the kafka-topics.sh utility and specify the following: Host and port of the Kafka broker in the --bootstrap-server option. Use the --create option to specify that a new topic should be created. The topic name in the --topic option. The number of partitions in the --partitions option. The topic replication factor in the --replication-factor option. You can also override some of the default topic configuration options using the --config option. This option can be used multiple times to override different options. bin/kafka-topics.sh --bootstrap-server <BrokerAddress> --create --topic <TopicName> --partitions <NumberOfPartitions> --replication-factor <ReplicationFactor> --config <Option1> = <Value1> --config <Option2> = <Value2> Example of the command to create a topic named mytopic bin/kafka-topics.sh --bootstrap-server localhost:9092 --create --topic mytopic --partitions 50 --replication-factor 3 --config cleanup.policy=compact --config min.insync.replicas=2 Verify that the topic exists using kafka-topics.sh . bin/kafka-topics.sh --bootstrap-server <BrokerAddress> --describe --topic <TopicName> Example of the command to describe a topic named mytopic bin/kafka-topics.sh --bootstrap-server localhost:9092 --describe --topic mytopic Additional resources For more information about topic configuration, see Section 5.5, "Topic configuration" . For a list of all supported topic configuration options, see Appendix B, Topic configuration parameters . 5.8. Listing and describing topics The kafka-topics.sh tool can be used to list and describe topics. kafka-topics.sh is part of the AMQ Streams distribution and can be found in the bin directory. Prerequisites AMQ Streams cluster is installed and running Topic mytopic exists Describing a topic Describe a topic using the kafka-topics.sh utility and specify the following: Host and port of the Kafka broker in the --bootstrap-server option. Use the --describe option to specify that you want to describe a topic. The topic name must be specified in the --topic option. When the --topic option is omitted, all available topics are described. bin/kafka-topics.sh --bootstrap-server <BrokerAddress> --describe --topic <TopicName> Example of the command to describe a topic named mytopic bin/kafka-topics.sh --bootstrap-server localhost:9092 --describe --topic mytopic The describe command will list all partitions and replicas which belong to this topic. It will also list all topic configuration options. Additional resources For more information about topic configuration, see Section 5.5, "Topic configuration" . For more information about creating topics, see Section 5.7, "Creating a topic" . 5.9. Modifying a topic configuration The kafka-configs.sh tool can be used to modify topic configurations. kafka-configs.sh is part of the AMQ Streams distribution and can be found in the bin directory.
Prerequisites AMQ Streams cluster is installed and running Topic mytopic exists Modify topic configuration Use the kafka-configs.sh tool to get the current configuration. Specify the host and port of the Kafka broker in the --bootstrap-server option. Set --entity-type to topics and --entity-name to the name of your topic. Use the --describe option to get the current configuration. bin/kafka-configs.sh --bootstrap-server <BrokerAddress> --entity-type topics --entity-name <TopicName> --describe Example of the command to get configuration of a topic named mytopic bin/kafka-configs.sh --bootstrap-server localhost:9092 --entity-type topics --entity-name mytopic --describe Use the kafka-configs.sh tool to change the configuration. Specify the host and port of the Kafka broker in the --bootstrap-server option. Set --entity-type to topics and --entity-name to the name of your topic. Use the --alter option to modify the current configuration. Specify the options you want to add or change in the --add-config option. bin/kafka-configs.sh --bootstrap-server <BrokerAddress> --entity-type topics --entity-name <TopicName> --alter --add-config <Option> = <Value> Example of the command to change configuration of a topic named mytopic bin/kafka-configs.sh --bootstrap-server localhost:9092 --entity-type topics --entity-name mytopic --alter --add-config min.insync.replicas=1 Use the kafka-configs.sh tool to delete an existing configuration option. Specify the host and port of the Kafka broker in the --bootstrap-server option. Set --entity-type to topics and --entity-name to the name of your topic. Use the --alter option together with --delete-config to remove an existing configuration option. Specify the options you want to remove in the --delete-config option. bin/kafka-configs.sh --bootstrap-server <BrokerAddress> --entity-type topics --entity-name <TopicName> --alter --delete-config <Option> Example of the command to delete a configuration option from a topic named mytopic bin/kafka-configs.sh --bootstrap-server localhost:9092 --entity-type topics --entity-name mytopic --alter --delete-config min.insync.replicas Additional resources For more information about topic configuration, see Section 5.5, "Topic configuration" . For more information about creating topics, see Section 5.7, "Creating a topic" . For a list of all supported topic configuration options, see Appendix B, Topic configuration parameters . 5.10. Deleting a topic The kafka-topics.sh tool can be used to manage topics. kafka-topics.sh is part of the AMQ Streams distribution and can be found in the bin directory. Prerequisites AMQ Streams cluster is installed and running Topic mytopic exists Deleting a topic Delete a topic using the kafka-topics.sh utility and specify the following: Host and port of the Kafka broker in the --bootstrap-server option. Use the --delete option to specify that an existing topic should be deleted. The topic name must be specified in the --topic option. bin/kafka-topics.sh --bootstrap-server <BrokerAddress> --delete --topic <TopicName> Example of the command to delete a topic named mytopic bin/kafka-topics.sh --bootstrap-server localhost:9092 --delete --topic mytopic Verify that the topic was deleted using kafka-topics.sh . bin/kafka-topics.sh --bootstrap-server <BrokerAddress> --list Example of the command to list all topics bin/kafka-topics.sh --bootstrap-server localhost:9092 --list Additional resources For more information about creating topics, see Section 5.7, "Creating a topic" . | [
"auto.create.topics.enable=false",
"delete.topic.enable=false",
"bin/kafka-topics.sh --bootstrap-server <BrokerAddress> --create --topic <TopicName> --partitions <NumberOfPartitions> --replication-factor <ReplicationFactor> --config <Option1> = <Value1> --config <Option2> = <Value2>",
"bin/kafka-topics.sh --bootstrap-server localhost:9092 --create --topic mytopic --partitions 50 --replication-factor 3 --config cleanup.policy=compact --config min.insync.replicas=2",
"bin/kafka-topics.sh --bootstrap-server <BrokerAddress> --describe --topic <TopicName>",
"bin/kafka-topics.sh --bootstrap-server localhost:9092 --describe --topic mytopic",
"bin/kafka-topics.sh --bootstrap-server <BrokerAddress> --describe --topic <TopicName>",
"bin/kafka-topics.sh --bootstrap-server localhost:9092 --describe --topic mytopic",
"bin/kafka-configs.sh --bootstrap-server <BrokerAddress> --entity-type topics --entity-name <TopicName> --describe",
"bin/kafka-configs.sh --bootstrap-server localhost:9092 --entity-type topics --entity-name mytopic --describe",
"bin/kafka-configs.sh --bootstrap-server <BrokerAddress> --entity-type topics --entity-name <TopicName> --alter --add-config <Option> = <Value>",
"bin/kafka-configs.sh --bootstrap-server localhost:9092 --entity-type topics --entity-name mytopic --alter --add-config min.insync.replicas=1",
"bin/kafka-configs.sh --bootstrap-server <BrokerAddress> --entity-type topics --entity-name <TopicName> --alter --delete-config <Option>",
"bin/kafka-configs.sh --bootstrap-server localhost:9092 --entity-type topics --entity-name mytopic --alter --delete-config min.insync.replicas",
"bin/kafka-topics.sh --bootstrap-server <BrokerAddress> --delete --topic <TopicName>",
"bin/kafka-topics.sh --bootstrap-server localhost:9092 --delete --topic mytopic",
"bin/kafka-topics.sh --bootstrap-server <BrokerAddress> --list",
"bin/kafka-topics.sh --bootstrap-server localhost:9092 --list"
] | https://docs.redhat.com/en/documentation/red_hat_amq/2021.q2/html/using_amq_streams_on_rhel/topics-str |
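The retention limits described in Section 5.2 can be combined on a single topic. The following commands are a sketch, not part of the original chapter: they assume a broker listening on localhost:9092 and an existing topic named mytopic , and they keep messages for 7 days or until a partition reaches roughly 1 GiB, whichever limit is reached first.

# Apply a combined time (7 days) and size (1 GiB) retention limit to mytopic.
bin/kafka-configs.sh --bootstrap-server localhost:9092 --entity-type topics --entity-name mytopic --alter --add-config retention.ms=604800000,retention.bytes=1073741824

# Confirm the per-topic overrides that are now in effect.
bin/kafka-configs.sh --bootstrap-server localhost:9092 --entity-type topics --entity-name mytopic --describe

Because deletion only happens when log segments roll (see the Warning in Section 5.2), the oldest messages may remain on disk slightly past these limits.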
Chapter 36. Scanning iSCSI Targets with Multiple LUNs or Portals | Chapter 36. Scanning iSCSI Targets with Multiple LUNs or Portals With some device models (for example, from EMC and NetApp), a single target may have multiple logical units or portals. In this case, issue a sendtargets command to the host first to find new portals on the target. Then, rescan the existing sessions using: You can also rescan a specific session by specifying the session's SID value, as in: If your device supports multiple targets, you will need to issue a sendtargets command to the hosts to find new portals for each target. Then, rescan existing sessions to discover new logical units on existing sessions (that is, using the --rescan option). Important The sendtargets command used to retrieve --targetname and --portal values overwrites the contents of the /var/lib/iscsi/nodes database. This database will then be repopulated using the settings in /etc/iscsi/iscsid.conf . However, this will not occur if a session is currently logged in and in use. To safely add new targets/portals or delete old ones, use the -o new or -o delete options, respectively. For example, to add new targets/portals without overwriting /var/lib/iscsi/nodes , use the following command: To delete /var/lib/iscsi/nodes entries that the target did not display during discovery, use: You can also perform both tasks simultaneously, as in: The sendtargets command will yield the following output: Example 36.1. Output of the sendtargets command For example, given a device with a single target, logical unit, and portal, with equallogic-iscsi1 as your target_name , the output should appear similar to the following: Note that proper_target_name and ip:port,target_portal_group_tag are identical to the values of the same name in Section 27.2, "iSCSI Initiator Creation" . At this point, you have the proper --targetname and --portal values needed to manually scan for iSCSI devices. To do so, run the following command: Example 36.2. Full iscsiadm command Using our example (where proper_target_name is equallogic-iscsi1 ), the full command would be: [8] For information on how to retrieve a session's SID value, refer to Section 27.2, "iSCSI Initiator Creation" . [9] This is a single command split into multiple lines, to accommodate printed and PDF versions of this document. All concatenated lines - preceded by the backslash (\) - should be treated as one command, sans backslashes. | [
"iscsiadm -m session --rescan",
"iscsiadm -m session -r SID --rescan [8]",
"iscsiadm -m discovery -t st -p target_IP -o new",
"iscsiadm -m discovery -t st -p target_IP -o delete",
"iscsiadm -m discovery -t st -p target_IP -o delete -o new",
"ip:port,target_portal_group_tag proper_target_name",
"10.16.41.155:3260,0 iqn.2001-05.com.equallogic:6-8a0900-ac3fe0101-63aff113e344a4a2-dl585-03-1",
"iscsiadm --mode node --targetname proper_target_name --portal ip:port,target_portal_group_tag \\ --login [9]",
"iscsiadm --mode node --targetname \\ iqn.2001-05.com.equallogic:6-8a0900-ac3fe0101-63aff113e344a4a2-dl585-03-1 \\ --portal 10.16.41.155:3260,0 --login [9]"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/storage_administration_guide/iscsi-scanning-interconnects |
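Putting the pieces of this chapter together, a typical discover-login-rescan sequence looks like the following sketch. It reuses the example portal and target name from Example 36.1; substitute your own values, and note that -o new is used so that existing /var/lib/iscsi/nodes entries are not overwritten.

# Discover new portals on the target without overwriting the node database.
iscsiadm -m discovery -t st -p 10.16.41.155:3260 -o new

# Log in to the newly discovered target.
iscsiadm --mode node --targetname iqn.2001-05.com.equallogic:6-8a0900-ac3fe0101-63aff113e344a4a2-dl585-03-1 --portal 10.16.41.155:3260,0 --login

# Rescan the existing sessions to pick up any new logical units.
iscsiadm -m session --rescan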
15.5. Methods | 15.5. Methods 15.5.1. Creating a Virtual Machine Creating a new virtual machine requires the name , template , and cluster elements. Identify the template and cluster elements with the id attribute or name element. Identify the CPU profile ID with the cpuprofiles attribute. Example 15.4. Creating a virtual machine with 512 MB that boots from CD-ROM Example 15.5. Creating a virtual machine with 512 MB that boots from a virtual hard disk Note Memory in the example is converted to bytes using the following formula: 512MB * 1024 2 = 536870912 bytes 15.5.2. Updating a Virtual Machine The name , description , cluster , type , memory , cpu , os , high_availability , display , timezone , domain , stateless , placement_policy , memory_policy , usb , payloads , origin and custom_properties elements are updatable post-creation. Example 15.6. Updating a virtual machine to contain 1 GB of memory Note Memory in the example is converted to bytes using the following formula: 1024MB * 1024 2 = 1073741824 bytes Note Memory hot plug is supported in Red Hat Virtualization. If the virtual machine's operating system supports memory hot plug, you can use the example above to increase memory while the virtual machine is running. Example 15.7. Hot plugging vCPUs Add virtual CPUs to a running virtual machine without having to reboot it. In this example, the number of sockets is changed to 2. Note CPU hot unplug is currently not supported in Red Hat Virtualization. Example 15.8. Pinning a virtual machine to multiple hosts A virtual machine that is pinned to multiple hosts cannot be live migrated, but in the event of a host failure, any virtual machine configured to be highly available is automatically restarted on one of the other hosts to which the virtual machine is pinned. Multi-host pinning can be used to restrict a virtual machine to hosts with, for example, the same hardware configuration. 15.5.3. Removing a Virtual Machine Removal of a virtual machine requires a DELETE request. Example 15.9. Removing a virtual machine 15.5.4. Removing a Virtual Machine but not the Virtual Disk Detach the virtual disk prior to removing the virtual machine. This preserves the virtual disk. Removal of a virtual machine requires a DELETE request. Example 15.10. Removing a virtual machine | [
"POST /ovirt-engine/api/vms HTTP/1.1 Accept: application/xml Content-type: application/xml <vm> <name>vm2</name> <description>Virtual Machine 2</description> <type>desktop</type> <memory>536870912</memory> <cluster> <name>default</name> </cluster> <template> <name>Blank</name> </template> <os> <boot dev=\"cdrom\"/> </os> <cdroms> <cdrom> <file id=\"example_windows_7_x64_dvd_u_677543.iso\"/> </cdrom> </cdroms> <cpu_profile id=\"0000001a-001a-001a-001a-00000000035e\"/> </vm>",
"POST /ovirt-engine/api/vms HTTP/1.1 Accept: application/xml Content-type: application/xml <vm> <name>vm2</name> <description>Virtual Machine 2</description> <type>desktop</type> <memory>536870912</memory> <cluster> <name>default</name> </cluster> <template> <name>Blank</name> </template> <os> <boot dev=\"hd\"/> </os> <cpu_profile id=\"0000001a-001a-001a-001a-00000000035e\"/> </vm>",
"PUT /ovirt-engine/api/vms/082c794b-771f-452f-83c9-b2b5a19c0399 HTTP/1.1 Accept: application/xml Content-type: application/xml <vm> <memory>1073741824</memory> </vm>",
"PUT /ovirt-engine/api/vms/082c794b-771f-452f-83c9-b2b5a19c0399 HTTP/1.1 Accept: application/xml Content-type: application/xml <vm> <cpu> <topology sockets=\"2\" cores=\"1\"/> </cpu> </vm>",
"PUT /ovirt-engine/api/vms/082c794b-771f-452f-83c9-b2b5a19c0399 HTTP/1.1 Accept: application/xml Content-type: application/xml <vm> <high_availability> <enabled>true</enabled> <priority>1</priority> </high_availability> <placement_policy> <hosts> <host><name>Host1</name></host> <host><name>Host2</name></host> </hosts> <affinity>pinned</affinity> </placement_policy> </vm>",
"DELETE /ovirt-engine/api/vms/082c794b-771f-452f-83c9-b2b5a19c0399 HTTP/1.1 HTTP/1.1 204 No Content",
"DELETE /ovirt-engine/api/vms/082c794b-771f-452f-83c9-b2b5a19c0399 HTTP/1.1 Accept: application/xml Content-type: application/xml <action> <vm> <disks> <detach_only>true</detach_only> </disks> </vm> </action>"
] | https://docs.redhat.com/en/documentation/red_hat_virtualization/4.3/html/version_3_rest_api_guide/sect-methods7 |
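The XML bodies shown above are submitted as ordinary HTTP requests. The following curl sketch is not part of the original section: the engine host name, credentials, and CA certificate path are placeholders, and the request body is assumed to be saved locally in vm.xml (one of the <vm> documents from Example 15.4 or 15.5) or vm-update.xml (an update body such as Example 15.6).

# Create a virtual machine by POSTing the XML body to the vms collection.
curl -X POST --cacert /path/to/ca.pem -u admin@internal:password -H "Content-Type: application/xml" -H "Accept: application/xml" -d @vm.xml https://rhvm.example.com/ovirt-engine/api/vms

# Updates follow the same pattern with PUT against the virtual machine's ID.
curl -X PUT --cacert /path/to/ca.pem -u admin@internal:password -H "Content-Type: application/xml" -H "Accept: application/xml" -d @vm-update.xml https://rhvm.example.com/ovirt-engine/api/vms/082c794b-771f-452f-83c9-b2b5a19c0399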
4.59. fence-virt | 4.59. fence-virt 4.59.1. RHBA-2011:1566 - fence-virt bug fix and enhancement update Updated fence-virt packages that fix two bugs and add one enhancement are now available for Red Hat Enterprise Linux 6. The fence-virt packages provide a fencing agent for virtual machines as well as a host agent which processes fencing requests. Bug Fixes BZ# 719645 Prior to this update, the domain parameter was missing from the metadata. As a consequence, existing configurations utilizing the domain parameter did not function correctly when fencing. This update adds the domain parameter for compatibility. Now, existing configurations work as expected. BZ# 720767 Prior to this update, hash mismatches falsely returned successes for fencing. As a consequence, data corruption could occur in live-hang scenarios. This update corrects the hash handling of mismatches. Now, no more false successes are returned and the data integrity is preserved. Enhancement BZ# 691200 With this update, the libvirt-qpid plugin now operates using QMF version 2. All users of fence-virt are advised to upgrade to these updated packages, which fix these bugs and add this enhancement. 4.59.2. RHBA-2012:0485 - fence-virt bug fix update Updated fence-virt packages that fix one bug are now available for Red Hat Enterprise Linux 6. The fence-virt packages provide a fencing agent for virtual machines as well as a host agent which processes fencing requests. Bug Fix BZ# 807270 Previously, the libvirt-qpid plug-in was linked directly against Qpid libraries instead of being linked only against QMFv2 libraries. As a consequence, newer versions of Qpid libraries could not be used with the libvirt-qpid plug-in. This update modifies the appropriate makefile so that libvirt-qpid is no longer linked directly against the Qpid libraries. The libvirt-qpid plug-in does not have to be re-linked to work with the newer Qpid libraries. All users of fence-virt are advised to upgrade to these updated packages, which fix this bug. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.2_technical_notes/fence-virt |
Chapter 62. Salesforce Delete Sink | Chapter 62. Salesforce Delete Sink Removes an object from Salesforce. The body received must be a JSON containing two keys: sObjectId and sObjectName. Example body: { "sObjectId": "XXXXX0", "sObjectName": "Contact" } 62.1. Configuration Options The following table summarizes the configuration options available for the salesforce-delete-sink Kamelet: Property Name Description Type Default Example clientId * Consumer Key The Salesforce application consumer key string clientSecret * Consumer Secret The Salesforce application consumer secret string password * Password The Salesforce user password string userName * Username The Salesforce username string loginUrl Login URL The Salesforce instance login URL string "https://login.salesforce.com" Note Fields marked with an asterisk (*) are mandatory. 62.2. Dependencies At runtime, the salesforce-delete-sink Kamelet relies upon the presence of the following dependencies: camel:salesforce camel:kamelet camel:core camel:jsonpath 62.3. Usage This section describes how you can use the salesforce-delete-sink . 62.3.1. Knative Sink You can use the salesforce-delete-sink Kamelet as a Knative sink by binding it to a Knative object. salesforce-delete-sink-binding.yaml apiVersion: camel.apache.org/v1alpha1 kind: KameletBinding metadata: name: salesforce-delete-sink-binding spec: source: ref: kind: Channel apiVersion: messaging.knative.dev/v1 name: mychannel sink: ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: salesforce-delete-sink properties: clientId: "The Consumer Key" clientSecret: "The Consumer Secret" password: "The Password" userName: "The Username" 62.3.1.1. Prerequisite Make sure you have "Red Hat Integration - Camel K" installed into the OpenShift cluster you're connected to. 62.3.1.2. Procedure for using the cluster CLI Save the salesforce-delete-sink-binding.yaml file to your local drive, and then edit it as needed for your configuration. Run the sink by using the following command: oc apply -f salesforce-delete-sink-binding.yaml 62.3.1.3. Procedure for using the Kamel CLI Configure and run the sink by using the following command: kamel bind channel:mychannel salesforce-delete-sink -p "sink.clientId=The Consumer Key" -p "sink.clientSecret=The Consumer Secret" -p "sink.password=The Password" -p "sink.userName=The Username" This command creates the KameletBinding in the current namespace on the cluster. 62.3.2. Kafka Sink You can use the salesforce-delete-sink Kamelet as a Kafka sink by binding it to a Kafka topic. salesforce-delete-sink-binding.yaml apiVersion: camel.apache.org/v1alpha1 kind: KameletBinding metadata: name: salesforce-delete-sink-binding spec: source: ref: kind: KafkaTopic apiVersion: kafka.strimzi.io/v1beta1 name: my-topic sink: ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: salesforce-delete-sink properties: clientId: "The Consumer Key" clientSecret: "The Consumer Secret" password: "The Password" userName: "The Username" 62.3.2.1. Prerequisites Ensure that you've installed the AMQ Streams operator in your OpenShift cluster and created a topic named my-topic in the current namespace. Make also sure you have "Red Hat Integration - Camel K" installed into the OpenShift cluster you're connected to. 62.3.2.2. Procedure for using the cluster CLI Save the salesforce-delete-sink-binding.yaml file to your local drive, and then edit it as needed for your configuration. Run the sink by using the following command: oc apply -f salesforce-delete-sink-binding.yaml 62.3.2.3. 
Procedure for using the Kamel CLI Configure and run the sink by using the following command: kamel bind kafka.strimzi.io/v1beta1:KafkaTopic:my-topic salesforce-delete-sink -p "sink.clientId=The Consumer Key" -p "sink.clientSecret=The Consumer Secret" -p "sink.password=The Password" -p "sink.userName=The Username" This command creates the KameletBinding in the current namespace on the cluster. 62.4. Kamelet source file https://github.com/openshift-integration/kamelet-catalog/salesforce-delete-sink.kamelet.yaml | [
"apiVersion: camel.apache.org/v1alpha1 kind: KameletBinding metadata: name: salesforce-delete-sink-binding spec: source: ref: kind: Channel apiVersion: messaging.knative.dev/v1 name: mychannel sink: ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: salesforce-delete-sink properties: clientId: \"The Consumer Key\" clientSecret: \"The Consumer Secret\" password: \"The Password\" userName: \"The Username\"",
"apply -f salesforce-delete-sink-binding.yaml",
"kamel bind channel:mychannel salesforce-delete-sink -p \"sink.clientId=The Consumer Key\" -p \"sink.clientSecret=The Consumer Secret\" -p \"sink.password=The Password\" -p \"sink.userName=The Username\"",
"apiVersion: camel.apache.org/v1alpha1 kind: KameletBinding metadata: name: salesforce-delete-sink-binding spec: source: ref: kind: KafkaTopic apiVersion: kafka.strimzi.io/v1beta1 name: my-topic sink: ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: salesforce-delete-sink properties: clientId: \"The Consumer Key\" clientSecret: \"The Consumer Secret\" password: \"The Password\" userName: \"The Username\"",
"apply -f salesforce-delete-sink-binding.yaml",
"kamel bind kafka.strimzi.io/v1beta1:KafkaTopic:my-topic salesforce-delete-sink -p \"sink.clientId=The Consumer Key\" -p \"sink.clientSecret=The Consumer Secret\" -p \"sink.password=The Password\" -p \"sink.userName=The Username\""
] | https://docs.redhat.com/en/documentation/red_hat_build_of_apache_camel_k/1.10.7/html/kamelets_reference/salesforce-sink-delete |
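After applying either binding, it can be useful to confirm that the underlying Camel K integration is running and to watch records being deleted. The following sketch is an assumption-laden convenience, not part of the original chapter: it assumes the binding name used above, that the oc and kamel command-line tools are logged in to the same namespace, and that the integration created for the binding carries the same name; adjust the name if your cluster generates a different one.

# Check that the KameletBinding has been reconciled.
oc get kameletbinding salesforce-delete-sink-binding

# Tail the logs of the integration created for the binding.
kamel logs salesforce-delete-sink-binding

Each event routed to the sink must carry a JSON body of the form { "sObjectId": "<record ID>", "sObjectName": "Contact" }, as described at the top of this chapter.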
Chapter 17. Network [config.openshift.io/v1] | Chapter 17. Network [config.openshift.io/v1] Description Network holds cluster-wide information about Network. The canonical name is cluster . It is used to configure the desired network configuration, such as: IP address pools for services/pod IPs, network plugin, etc. Please view network.spec for an explanation on what applies when configuring this resource. Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object Required spec 17.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec object spec holds user settable values for configuration. As a general rule, this SHOULD NOT be read directly. Instead, you should consume the NetworkStatus, as it indicates the currently deployed configuration. Currently, most spec fields are immutable after installation. Please view the individual ones for further details on each. status object status holds observed values from the cluster. They may not be overridden. 17.1.1. .spec Description spec holds user settable values for configuration. As a general rule, this SHOULD NOT be read directly. Instead, you should consume the NetworkStatus, as it indicates the currently deployed configuration. Currently, most spec fields are immutable after installation. Please view the individual ones for further details on each. Type object Property Type Description clusterNetwork array IP address pool to use for pod IPs. This field is immutable after installation. clusterNetwork[] object ClusterNetworkEntry is a contiguous block of IP addresses from which pod IPs are allocated. externalIP object externalIP defines configuration for controllers that affect Service.ExternalIP. If nil, then ExternalIP is not allowed to be set. networkType string NetworkType is the plugin that is to be deployed (e.g. OpenShiftSDN). This should match a value that the cluster-network-operator understands, or else no networking will be installed. Currently supported values are: - OpenShiftSDN This field is immutable after installation. serviceNetwork array (string) IP address pool for services. Currently, we only support a single entry here. This field is immutable after installation. serviceNodePortRange string The port range allowed for Services of type NodePort. If not specified, the default of 30000-32767 will be used. Such Services without a NodePort specified will have one automatically allocated from this range. This parameter can be updated after the cluster is installed. 17.1.2. .spec.clusterNetwork Description IP address pool to use for pod IPs. This field is immutable after installation. Type array 17.1.3. 
.spec.clusterNetwork[] Description ClusterNetworkEntry is a contiguous block of IP addresses from which pod IPs are allocated. Type object Property Type Description cidr string The complete block for pod IPs. hostPrefix integer The size (prefix) of block to allocate to each node. If this field is not used by the plugin, it can be left unset. 17.1.4. .spec.externalIP Description externalIP defines configuration for controllers that affect Service.ExternalIP. If nil, then ExternalIP is not allowed to be set. Type object Property Type Description autoAssignCIDRs array (string) autoAssignCIDRs is a list of CIDRs from which to automatically assign Service.ExternalIP. These are assigned when the service is of type LoadBalancer. In general, this is only useful for bare-metal clusters. In Openshift 3.x, this was misleadingly called "IngressIPs". Automatically assigned External IPs are not affected by any ExternalIPPolicy rules. Currently, only one entry may be provided. policy object policy is a set of restrictions applied to the ExternalIP field. If nil or empty, then ExternalIP is not allowed to be set. 17.1.5. .spec.externalIP.policy Description policy is a set of restrictions applied to the ExternalIP field. If nil or empty, then ExternalIP is not allowed to be set. Type object Property Type Description allowedCIDRs array (string) allowedCIDRs is the list of allowed CIDRs. rejectedCIDRs array (string) rejectedCIDRs is the list of disallowed CIDRs. These take precedence over allowedCIDRs. 17.1.6. .status Description status holds observed values from the cluster. They may not be overridden. Type object Property Type Description clusterNetwork array IP address pool to use for pod IPs. clusterNetwork[] object ClusterNetworkEntry is a contiguous block of IP addresses from which pod IPs are allocated. clusterNetworkMTU integer ClusterNetworkMTU is the MTU for inter-pod networking. conditions array conditions represents the observations of a network.config current state. Known .status.conditions.type are: "NetworkTypeMigrationInProgress", "NetworkTypeMigrationMTUReady", "NetworkTypeMigrationTargetCNIAvailable", "NetworkTypeMigrationTargetCNIInUse" and "NetworkTypeMigrationOriginalCNIPurged" conditions[] object Condition contains details for one aspect of the current state of this API Resource. --- This struct is intended for direct use as an array at the field path .status.conditions. For example, type FooStatus struct{ // Represents the observations of a foo's current state. // Known .status.conditions.type are: "Available", "Progressing", and "Degraded" // +patchMergeKey=type // +patchStrategy=merge // +listType=map // +listMapKey=type Conditions []metav1.Condition json:"conditions,omitempty" patchStrategy:"merge" patchMergeKey:"type" protobuf:"bytes,1,rep,name=conditions" // other fields } migration object Migration contains the cluster network migration configuration. networkType string NetworkType is the plugin that is deployed (e.g. OpenShiftSDN). serviceNetwork array (string) IP address pool for services. Currently, we only support a single entry here. 17.1.7. .status.clusterNetwork Description IP address pool to use for pod IPs. Type array 17.1.8. .status.clusterNetwork[] Description ClusterNetworkEntry is a contiguous block of IP addresses from which pod IPs are allocated. Type object Property Type Description cidr string The complete block for pod IPs. hostPrefix integer The size (prefix) of block to allocate to each node. If this field is not used by the plugin, it can be left unset. 
17.1.9. .status.conditions Description conditions represents the observations of a network.config current state. Known .status.conditions.type are: "NetworkTypeMigrationInProgress", "NetworkTypeMigrationMTUReady", "NetworkTypeMigrationTargetCNIAvailable", "NetworkTypeMigrationTargetCNIInUse" and "NetworkTypeMigrationOriginalCNIPurged" Type array 17.1.10. .status.conditions[] Description Condition contains details for one aspect of the current state of this API Resource. --- This struct is intended for direct use as an array at the field path .status.conditions. For example, type FooStatus struct{ // Represents the observations of a foo's current state. // Known .status.conditions.type are: "Available", "Progressing", and "Degraded" // +patchMergeKey=type // +patchStrategy=merge // +listType=map // +listMapKey=type Conditions []metav1.Condition json:"conditions,omitempty" patchStrategy:"merge" patchMergeKey:"type" protobuf:"bytes,1,rep,name=conditions" // other fields } Type object Required lastTransitionTime message reason status type Property Type Description lastTransitionTime string lastTransitionTime is the last time the condition transitioned from one status to another. This should be when the underlying condition changed. If that is not known, then using the time when the API field changed is acceptable. message string message is a human readable message indicating details about the transition. This may be an empty string. observedGeneration integer observedGeneration represents the .metadata.generation that the condition was set based upon. For instance, if .metadata.generation is currently 12, but the .status.conditions[x].observedGeneration is 9, the condition is out of date with respect to the current state of the instance. reason string reason contains a programmatic identifier indicating the reason for the condition's last transition. Producers of specific condition types may define expected values and meanings for this field, and whether the values are considered a guaranteed API. The value should be a CamelCase string. This field may not be empty. status string status of the condition, one of True, False, Unknown. type string type of condition in CamelCase or in foo.example.com/CamelCase. --- Many .condition.type values are consistent across resources like Available, but because arbitrary conditions can be useful (see .node.status.conditions), the ability to deconflict is important. The regex it matches is (dns1123SubdomainFmt/)?(qualifiedNameFmt) 17.1.11. .status.migration Description Migration contains the cluster network migration configuration. Type object Property Type Description mtu object MTU contains the MTU migration configuration. networkType string NetworkType is the target plugin that is to be deployed. Currently supported values are: OpenShiftSDN, OVNKubernetes 17.1.12. .status.migration.mtu Description MTU contains the MTU migration configuration. Type object Property Type Description machine object Machine contains MTU migration configuration for the machine's uplink. network object Network contains MTU migration configuration for the default network. 17.1.13. .status.migration.mtu.machine Description Machine contains MTU migration configuration for the machine's uplink. Type object Property Type Description from integer From is the MTU to migrate from. to integer To is the MTU to migrate to. 17.1.14. .status.migration.mtu.network Description Network contains MTU migration configuration for the default network. 
Type object Property Type Description from integer From is the MTU to migrate from. to integer To is the MTU to migrate to. 17.2. API endpoints The following API endpoints are available: /apis/config.openshift.io/v1/networks DELETE : delete collection of Network GET : list objects of kind Network POST : create a Network /apis/config.openshift.io/v1/networks/{name} DELETE : delete a Network GET : read the specified Network PATCH : partially update the specified Network PUT : replace the specified Network 17.2.1. /apis/config.openshift.io/v1/networks HTTP method DELETE Description delete collection of Network Table 17.1. HTTP responses HTTP code Reponse body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list objects of kind Network Table 17.2. HTTP responses HTTP code Reponse body 200 - OK NetworkList schema 401 - Unauthorized Empty HTTP method POST Description create a Network Table 17.3. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 17.4. Body parameters Parameter Type Description body Network schema Table 17.5. HTTP responses HTTP code Reponse body 200 - OK Network schema 201 - Created Network schema 202 - Accepted Network schema 401 - Unauthorized Empty 17.2.2. /apis/config.openshift.io/v1/networks/{name} Table 17.6. Global path parameters Parameter Type Description name string name of the Network HTTP method DELETE Description delete a Network Table 17.7. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed Table 17.8. HTTP responses HTTP code Reponse body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified Network Table 17.9. HTTP responses HTTP code Reponse body 200 - OK Network schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified Network Table 17.10. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. 
An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 17.11. HTTP responses HTTP code Reponse body 200 - OK Network schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified Network Table 17.12. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 17.13. Body parameters Parameter Type Description body Network schema Table 17.14. HTTP responses HTTP code Reponse body 200 - OK Network schema 201 - Created Network schema 401 - Unauthorized Empty | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.15/html/config_apis/network-config-openshift-io-v1 |
Chapter 5. Fixed issues | Chapter 5. Fixed issues For a complete list of issues that have been fixed in the release, see AMQ Broker 7.12.0 Fixed Issues and see AMQ Broker - 7.12.x Resolved Issues for a list of issues that have been fixed in patch releases. | null | https://docs.redhat.com/en/documentation/red_hat_amq_broker/7.12/html/release_notes_for_red_hat_amq_broker_7.12/resolved |
Chapter 14. Using Red Hat build of OptaPlanner in an IDE: an employee rostering example | Chapter 14. Using Red Hat build of OptaPlanner in an IDE: an employee rostering example As a business rules developer, you can use an IDE to build, run, and modify the optaweb-employee-rostering starter application that uses the Red Hat build of OptaPlanner functionality. Prerequisites You use an integrated development environment, such as Red Hat CodeReady Studio or IntelliJ IDEA. You have an understanding of the Java language. You have an understanding of React and TypeScript. This requirement is necessary to develop the OptaWeb UI. 14.1. Overview of the employee rostering starter application The employee rostering starter application assigns employees to shifts on various positions in an organization. For example, you can use the application to distribute shifts in a hospital between nurses, guard duty shifts across a number of locations, or shifts on an assembly line between workers. Optimal employee rostering must take a number of variables into account. For example, different skills can be required for shifts in different positions. Also, some employees might be unavailable for some time slots or might prefer a particular time slot. Moreover, an employee can have a contract that limits the number of hours that the employee can work in a single time period. The Red Hat build of OptaPlanner rules for this starter application use both hard and soft constraints. During an optimization, the planning engine may not violate hard constraints, for example, if an employee is unavailable (out sick), or that an employee cannot work two spots in a single shift. The planning engine tries to adhere to soft constraints, such as an employee's preference to not work a specific shift, but can violate them if the optimal solution requires it. 14.2. Building and running the employee rostering starter application You can build the employee rostering starter application from the source code and run it as a JAR file. Alternatively, you can use your IDE, for example, Eclipse (including Red Hat CodeReady Studio), to build and run the application. 14.2.1. Preparing deployment files You must download and prepare the deployment files before building and deploying the application. Procedure Navigate to the Software Downloads page in the Red Hat Customer Portal (login required), and select the product and version from the drop-down options: Product: Process Automation Manager Version: 7.13.5 Download Red Hat Process Automation Manager 7.13.5 Kogito and OptaPlanner 8 Decision Services Quickstarts ( rhpam-7.13.5-kogito-and-optaplanner-quickstarts.zip ). Extract the rhpam-7.13.5-kogito-and-optaplanner-quickstarts.zip file. Download Red Hat Decision Manager 7.13 Maven Repository Kogito and OptaPlanner 8 Maven Repository ( rhpam-7.13.5-kogito-maven-repository.zip ). Extract the rhpam-7.13.5-kogito-maven-repository.zip file. Copy the contents of the rhpam-7.13.5-kogito-maven-repository/maven-repository subdirectory into the ~/.m2/repository directory. Navigate to the optaweb-8.13.0.Final-redhat-00013/optaweb-employee-rostering directory. This folder is the base folder in subsequent parts of this document. Note File and folder names might have higher version numbers than specifically noted in this document. 14.2.2. 
Running the Employee Rostering starter application JAR file You can run the Employee Rostering starter application from a JAR file included in the Red Hat Process Automation Manager 7.13.5 Kogito and OptaPlanner 8 Decision Services Quickstarts download. Prerequisites You have downloaded and extracted the rhpam-7.13.5-kogito-and-optaplanner-quickstarts.zip file as described in Section 14.2.1, "Preparing deployment files" . A Java Development Kit is installed. Maven is installed. The host has access to the Internet. The build process uses the Internet for downloading Maven packages from external repositories. Procedure In a command terminal, change to the rhpam-7.13.5-kogito-and-optaplanner-quickstarts/optaweb-8.13.0.Final-redhat-00013/optaweb-employee-rostering directory. Enter the following command: mvn clean install -DskipTests Wait for the build process to complete. Navigate to the rhpam-7.13.5-kogito-and-optaplanner-quickstarts/optaweb-8.13.0.Final-redhat-00013/optaweb-employee-rostering/optaweb-employee-rostering-standalone/target directory. Enter the following command to run the Employee Rostering JAR file: java -jar quarkus-app/quarkus-run.jar Note The value of the quarkus.datasource.db-kind parameter is set to H2 by default at build time. To use a different database, you must rebuild the standalone module and specify the database type on the command line. For example, to use a PostgreSQL database, enter the following command: mvn clean install -DskipTests -Dquarkus.profile=postgres To access the application, enter http://localhost:8080/ in a web browser. 14.2.3. Building and running the Employee Rostering starter application using Maven You can use the command line to build and run the employee rostering starter application. If you use this procedure, the data is stored in memory and is lost when the server is stopped. To build and run the application with a database server for persistent storage, see Section 14.2.4, "Building and running the employee rostering starter application with persistent data storage from the command line" . Prerequisites You have prepared the deployment files as described in Section 14.2.1, "Preparing deployment files" . A Java Development Kit is installed. Maven is installed. The host has access to the Internet. The build process uses the Internet for downloading Maven packages from external repositories. Procedure Navigate to the optaweb-employee-rostering-backend directory. Enter the following command: mvn quarkus:dev Navigate to the optaweb-employee-rostering-frontend directory. Enter the following command: npm start Note If you use npm to start the server, npm monitors code changes. To access the application, enter http://localhost:3000/ in a web browser. 14.2.4. Building and running the employee rostering starter application with persistent data storage from the command line If you use the command line to build the employee rostering starter application and run it, you can provide a database server for persistent data storage. Prerequisites You have prepared the deployment files as described in Section 14.2.1, "Preparing deployment files" . A Java Development Kit is installed. Maven is installed. The host has access to the Internet. The build process uses the Internet for downloading Maven packages from external repositories. You have a deployed MySQL or PostrgeSQL database server. Procedure In a command terminal, navigate to the optaweb-employee-rostering-standalone/target directory. 
Enter the following command to run the Employee Rostering JAR file: java \ -Dquarkus.datasource.username=<DATABASE_USER> \ -Dquarkus.datasource.password=<DATABASE_PASSWORD> \ -Dquarkus.datasource.jdbc.url=<DATABASE_URL> \ -jar quarkus-app/quarkus-run.jar In this example, replace the following placeholders: <DATABASE_URL> : URL to connect to the database <DATABASE_USER> : The user to connect to the database <DATABASE_PASSWORD> : The password for <DATABASE_USER> Note The value of the quarkus.datasource.db-kind parameter is set to H2 by default at build time. To use a different database, you must rebuild the standalone module and specify the database type on the command line. For example, to use a PostgreSQL database, enter the following command: mvn clean install -DskipTests -Dquarkus.profile=postgres 14.2.5. Building and running the employee rostering starter application using IntelliJ IDEA You can use IntelliJ IDEA to build and run the employee rostering starter application. Prerequisites You have downloaded the Employee Rostering source code, available from the Employee Rostering GitHub page. IntelliJ IDEA, Maven, and Node.js are installed. The host has access to the Internet. The build process uses the Internet for downloading Maven packages from external repositories. Procedure Start IntelliJ IDEA. From the IntelliJ IDEA main menu, select File Open . Select the root directory of the application source and click OK . From the main menu, select Run Edit Configurations . In the window that appears, expand Templates and select Maven . The Maven sidebar appears. In the Maven sidebar, select optaweb-employee-rostering-backend from the Working Directory menu. In Command Line , enter mvn quarkus:dev . To start the back end, click OK . In a command terminal, navigate to the optaweb-employee-rostering-frontend directory. Enter the following command to start the front end: To access the application, enter http://localhost:3000/ in a web browser. 14.3. Overview of the source code of the employee rostering starter application The employee rostering starter application consists of the following principal components: A backend that implements the rostering logic using Red Hat build of OptaPlanner and provides a REST API A frontend module that implements a user interface using React and interacts with the backend module through the REST API You can build and use these components independently. In particular, you can implement a different user interface and use the REST API to call the server. In addition to the two main components, the employee rostering template contains a generator of random source data (useful for demonstration and testing purposes) and a benchmarking application. Modules and key classes The Java source code of the employee rostering template contains several Maven modules. Each of these modules includes a separate Maven project file ( pom.xml ), but they are intended for building in a common project. The modules contain a number of files, including Java classes. This document lists all the modules, as well as the classes and other files that contain the key information for the employee rostering calculations. optaweb-employee-rostering-benchmark module: Contains an additional application that generates random data and benchmarks the solution. optaweb-employee-rostering-distribution module: Contains README files. optaweb-employee-rostering-docs module: Contains documentation files. 
optaweb-employee-rostering-frontend module: Contains the client application with the user interface, developed in React. optaweb-employee-rostering-backend module: Contains the server application that uses OptaPlanner to perform the rostering calculation. src/main/java/org.optaweb.employeerostering.service.roster/rosterGenerator.java : Generates random input data for demonstration and testing purposes. If you change the required input data, change the generator accordingly. src/main/java/org.optaweb.employeerostering.domain.employee/EmployeeAvailability.java : Defines availability information for an employee. For every time slot, an employee can be unavailable, available, or the time slot can be designated a preferred time slot for the employee. src/main/java/org.optaweb.employeerostering.domain.employee/Employee.java : Defines an employee. An employee has a name, a list of skills, and works under a contract. Skills are represented by skill objects. src/main/java/org.optaweb.employeerostering.domain.roster/Roster.java : Defines the calculated rostering information. src/main/java/org.optaweb.employeerostering.domain.shift/Shift.java : Defines a shift to which an employee can be assigned. A shift is defined by a time slot and a spot. For example, in a diner there could be a shift in the Kitchen spot for the February 20 8AM-4PM time slot. Multiple shifts can be defined for a specific spot and time slot. In this case, multiple employees are required for this spot and time slot. src/main/java/org.optaweb.employeerostering.domain.skill/Skill.java : Defines a skill that an employee can have. src/main/java/org.optaweb.employeerostering.domain.spot/Spot.java : Defines a spot where employees can be placed. For example, a Kitchen can be a spot. src/main/java/org.optaweb.employeerostering.domain.contract/Contract.java : Defines a contract that sets limits on work time for an employee in various time periods. src/main/java/org.optaweb.employeerostering.domain.tenant/Tenant.java : Defines a tenant. Each tenant represents an independent set of data. Changes in the data for one tenant do not affect any other tenants. *View.java : Classes related to domain objects that define value sets that are calculated from other information; the client application can read these values through the REST API, but not write them. *Service.java : Interfaces located in the service package that define the REST API. Both the server and the client application separately define implementations of these interfaces. optaweb-employee-rostering-standalone module: Contains the assembly configurations for the standalone application. 14.4. Modifying the employee rostering starter application To modify the employee rostering starter application to suit your needs, you must change the rules that govern the optimization process. You must also ensure that the data structures include the required data and provide the required calculations for the rules. If the required data is not present in the user interface, you must also modify the user interface. The following procedure outlines the general approach to modifying the employee rostering starter application. Prerequisites You have a build environment that successfully builds the application. You can read and modify Java code. Procedure Plan the required changes. Answer the following questions: What are the additional scenarios that must be avoided? These scenarios are hard constraints . What are the additional scenarios that the optimizer must try to avoid when possible? 
These scenarios are soft constraints . What data is required to calculate if each scenario is happening in a potential solution? Which of the data can be derived from the information that the user enters in the existing version? Which of the data can be hardcoded? Which of the data must be entered by the user and is not entered in the current version? If any required data can be calculated from the current data or can be hardcoded, add the calculations or hardcoding to existing view or utility classes. If the data must be calculated on the server side, add REST API endpoints to read it. If any required data must be entered by the user, add the data to the classes representing the data entities (for example, the Employee class), add REST API endpoints to read and write the data, and modify the user interface to enter the data. When all of the data is available, modify the rules. For most modifications, you must add a new rule. The rules are located in the src/main/java/org/optaweb/employeerostering/service/solver/EmployeeRosteringConstraintProvider.java file of the optaweb-employee-rostering-backend module. After modifying the application, build and run it. | [
"mvn clean install -DskipTests",
"java -jar quarkus-app/quarkus-run.jar",
"mvn quarkus:dev",
"npm start",
"java -Dquarkus.datasource.username=<DATABASE_USER> -Dquarkus.datasource.password=<DATABASE_PASSWORD> -Dquarkus.datasource.jdbc.url=<DATABASE_URL> -jar quarkus-app/quarkus-run.jar",
"npm start"
] | https://docs.redhat.com/en/documentation/red_hat_decision_manager/7.13/html/developing_solvers_with_red_hat_build_of_optaplanner_in_red_hat_decision_manager/assembly-optimizer-modifying-ER-template-IDE |
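As a concrete instance of the persistent-storage run described in Section 14.2.4, the following sketch fills in the placeholders for a PostgreSQL server. The host name, database name, and credentials are illustrative values, not defaults shipped with the application.

# Rebuild the standalone module for PostgreSQL (the db-kind is fixed at build time).
mvn clean install -DskipTests -Dquarkus.profile=postgres

# Run the standalone JAR against the external database
# (from the optaweb-employee-rostering-standalone/target directory, as in Section 14.2.4).
java -Dquarkus.datasource.username=rostering -Dquarkus.datasource.password=rostering-secret -Dquarkus.datasource.jdbc.url=jdbc:postgresql://db.example.com:5432/optaweb -jar quarkus-app/quarkus-run.jar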
Chapter 8. Ansible Automation Platform documentation | Chapter 8. Ansible Automation Platform documentation Red Hat Ansible Automation Platform 2.5 documentation includes significant feature updates as well as documentation enhancements and offers an improved user experience. The following are documentation enhancements in Ansible Automation Platform 2.5: The Setting up an automation controller token chapter that previously existed has been deprecated and replaced with the Setting up a Red Hat Ansible Automation Platform credential topic. As the Event-Driven Ansible controller is now integrated with centralized authentication and the Platform UI, this method simplifies the authentication process required for rulebook activations moving forward. Documentation changes for 2.5 reflect terminology and product changes. Additionally, we have consolidated content into fewer documents. The following table summarizes title changes for the 2.5 release. Version 2.4 document title Version 2.5 document title Red Hat Ansible Automation Platform release notes Release notes NA New: Using automation analytics Red Hat Ansible Automation Platform planning guide Planning your installation Containerized Ansible Automation Platform installation guide (Technology Preview release) Containerized installation (First Generally Available release) Deploying the Ansible Automation Platform operator on OpenShift Container Platform Installing on OpenShift Container Platform Getting started with automation controller Getting started with automation hub Getting started with Event-Driven Ansible New: Getting started with Ansible Automation Platform Installing and configuring central authentication for the Ansible Automation Platform Access management and authentication Getting started with Ansible playbooks Getting started with Ansible playbooks Ansible Automation Platform operations guide Operating Ansible Automation Platform Ansible Automation Platform automation mesh for operator-based installation Automation mesh for managed cloud or operator environments Ansible Automation Platform automation mesh for VM-based installation Automation mesh for VM environments Performance considerations for operator-based installation Performance considerations for operator environments Ansible Automation Platform operator backup and recovery guide Backup and recovery for operator environments Troubleshooting Ansible Automation Platform Troubleshooting Ansible Automation Platform Ansible Automation Platform hardening guide Not available for 2.5 release; to be published at a later date automation controller user guide Using automation execution automation controller administration guide Configuring automation execution automation controller API overview Automation execution API overview automation controller API reference Automation execution API reference automation controller CLI reference Automation execution CLI reference Event-Driven Ansible user guide Using automation decisions Managing content in automation hub - Managing automation content - Automation content API reference Ansible security automation guide Ansible security automation guide Using the automation calculator Viewing reports about your Ansible automation environment Evaluating your automation controller job runs using the job explorer Planning your automation jobs using the automation savings planner Using automation analytics Ansible Automation Platform creator guide Developing automation content Automation content navigator creator guide Using content navigator 
Creating and consuming execution environments Creating and using execution environments Installing Ansible plug-ins for Red Hat Developer Hub Installing Ansible plug-ins for Red Hat Developer Hub Using Ansible plug-ins for Red Hat Developer Hub Using Ansible plug-ins for Red Hat Developer Hub | null | https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.5/html/release_notes/docs-2.5-intro |
Chapter 1. Release notes | Chapter 1. Release notes Note For additional information about the OpenShift Serverless life cycle and supported platforms, refer to the OpenShift Operator Life Cycles . Release notes contain information about new and deprecated features, breaking changes, and known issues. The following release notes apply for the most recent OpenShift Serverless releases on OpenShift Container Platform. For an overview of OpenShift Serverless functionality, see About OpenShift Serverless . Note OpenShift Serverless is based on the open source Knative project. For details about the latest Knative component releases, see the Knative blog . 1.1. About API versions API versions are an important measure of the development status of certain features and custom resources in OpenShift Serverless. Creating resources on your cluster that do not use the correct API version can cause issues in your deployment. The OpenShift Serverless Operator automatically upgrades older resources that use deprecated versions of APIs to use the latest version. For example, if you have created resources on your cluster that use older versions of the ApiServerSource API, such as v1beta1 , the OpenShift Serverless Operator automatically updates these resources to use the v1 version of the API when this is available and the v1beta1 version is deprecated. After they have been deprecated, older versions of APIs might be removed in any upcoming release. Using deprecated versions of APIs does not cause resources to fail. However, if you try to use a version of an API that has been removed, it will cause resources to fail. Ensure that your manifests are updated to use the latest version to avoid issues. 1.2. Generally Available and Technology Preview features Features that are Generally Available (GA) are fully supported and are suitable for production use. Technology Preview (TP) features are experimental features and are not intended for production use. See the Technology Preview scope of support on the Red Hat Customer Portal for more information about TP features. The following table provides information about which OpenShift Serverless features are GA and which are TP: Table 1.1. Generally Available and Technology Preview features tracker Feature 1.33 1.34 1.35 Eventing Transport encryption - TP TP Serving Transport encryption - TP TP OpenShift Serverless Logic GA GA GA ARM64 support TP TP TP Custom Metrics Autoscaler Operator (KEDA) TP TP TP kn event plugin TP TP TP Pipelines-as-code TP TP TP new-trigger-filters TP TP TP Go function using S2I builder TP TP GA Installing and using Serverless on single-node OpenShift GA GA GA Using Service Mesh to isolate network traffic with Serverless TP TP TP Overriding liveness and readiness in functions GA GA GA kn func GA GA GA Quarkus functions GA GA GA Node.js functions GA GA GA TypeScript functions GA GA GA Python functions TP TP TP Service Mesh mTLS GA GA GA emptyDir volumes GA GA GA HTTPS redirection GA GA GA Kafka broker GA GA GA Kafka sink GA GA GA Init containers support for Knative services GA GA GA PVC support for Knative services GA GA GA Namespace-scoped brokers TP TP TP multi-container support GA GA GA 1.3. Deprecated and removed features Some features that were Generally Available (GA) or a Technology Preview (TP) in releases have been deprecated or removed. 
Deprecated functionality is still included in OpenShift Serverless and continues to be supported; however, it will be removed in a future release of this product and is not recommended for new deployments. For the most recent list of major functionality deprecated and removed within OpenShift Serverless, refer to the following table: Table 1.2. Deprecated and removed features tracker Feature 1.30 1.31 1.32 1.33 1.34 1.35 EventTypes v1beta1 API - - Deprecated Deprecated Deprecated Deprecated domain-mapping and domain-mapping-webhook deployments - - Removed Removed Removed Removed Red Hat OpenShift Service Mesh with Serverless when Kourier is enabled - - Deprecated Deprecated Deprecated Deprecated NamespacedKafka annotation Deprecated Deprecated Deprecated Deprecated Deprecated Deprecated enable-secret-informer-filtering annotation Deprecated Deprecated Deprecated Deprecated Deprecated Deprecated Serving and Eventing v1alpha1 API Removed Removed Removed Removed Removed Removed kn func emit ( kn func invoke in 1.21+) Removed Removed Removed Removed Removed Removed KafkaBinding API Removed Removed Removed Removed Removed Removed 1.4. Red Hat OpenShift Serverless 1.35 OpenShift Serverless 1.35 is now available. New features, updates, and known issues that pertain to OpenShift Serverless on OpenShift Container Platform are included in the following notes: 1.4.1. New features OpenShift Serverless now uses Knative Serving 1.15. OpenShift Serverless now uses Knative Eventing 1.15. OpenShift Serverless now uses Kourier 1.15. OpenShift Serverless now uses Knative ( kn ) CLI 1.15. OpenShift Serverless now uses Knative for Apache Kafka 1.15. The kn func CLI plugin now uses func 1.16. Go functions using S2I builder are now available as a Generally Available (GA) feature for Linux and Mac developers, allowing them to implement and build Go functions on these platforms. It is now possible to automatically discover and register EventTypes based on the structure of incoming events, simplifying the overall configuration and management of EventTypes . Knative Event catalog is now available in OpenShift Developer Console (ODC). You can explore the catalog to discover different event types, along with their descriptions and associated metadata, making it easier to understand the system capabilities and functionalities. Knative Eventing now supports long-running background jobs. This feature separates resource-intensive or time-consuming tasks from the primary event processing flow, boosting application responsiveness and scalability. Autoscaling for Knative Kafka subscriptions is now enhanced with Kubernetes Event-Driven Autoscaling (KEDA) as a Technology Preview (TP) feature. Autoscaling with CMA/KEDA optimizes resource allocation for Kafka triggers and KafkaSource objects, boosting performance in event-driven workloads by enabling dynamic scaling of Kafka consumer resources. OpenShift Serverless Logic now integrates with Prometheus and Grafana to provide enhanced monitoring support. OpenShift Serverless Logic workflows deployed using the dev or preview profile are now automatically configured to generate monitoring metrics for Prometheus. The Jobs Service supporting service can now be scaled to zero by configuring the spec.services.jobService.podTemplate.replicas field in the SonataFlowPlatform custom resource (CR). OpenShift Serverless Logic workflows deployed with the preview and gitops profiles are now automatically configured to send grouped events to the Data Index, optimizing event traffic. 
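A quick illustration of scaling the Jobs Service to zero, as noted in the 1.35 features above: the replicas value is the documented spec.services.jobService.podTemplate.replicas field, while the API version and metadata names below are assumptions for this sketch rather than values taken from these notes.

apiVersion: sonataflow.org/v1alpha08        # assumed SonataFlow API version
kind: SonataFlowPlatform
metadata:
  name: sonataflow-platform                 # hypothetical platform name
  namespace: example-workflows              # hypothetical namespace
spec:
  services:
    jobService:
      podTemplate:
        replicas: 0                         # scale the Jobs Service supporting service down to zero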
A more comprehensive list of errors in the workflow definition is now provided, rather than only displaying the first detected error. OpenShift Serverless Logic is now certified for use with PostgreSQL version 15.9 . Event performance between OpenShift Serverless Logic workflows and the Data Index is improved through event batching. Set kogito.events.grouping=true to group events. For further optimization, enable kogito.events.grouping.binary=true to reduce the size of grouped events with an alternate serialization algorithm. To compress these events, set kogito.events.grouping.compress=true , which lowers event size at the cost of additional CPU usage. Compensation states are now invoked when a workflow is aborted. OpenShift Serverless Logic now supports configuring the Knative Eventing system to produce and consume events for workflows and supporting services. The secret configurations for the Broker and KafkaChannel (Apache Kafka) have been unified. 1.4.2. Fixed issues Previously, Horizontal Pod Autoscaler (HPA) scaled down the Activator component prematurely, causing long-running requests against a Knative Service to terminate. This issue is now fixed. The terminationGracePeriodSeconds value is automatically set according to the max-revision-timeout-seconds configuration for Knative revisions. Previously, requests to a Knative Service with a slow back end could time out because the default Red Hat OpenShift Serverless route timeout was too short. You can now configure the route HAProxy timeout by specifying an environment variable in the Operator Subscription object for OpenShift Serverless as follows: apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: # ... spec: channel: stable config: env: - name: ROUTE_HAPROXY_TIMEOUT value: '900' 1.5. Red Hat OpenShift Serverless 1.34.1 OpenShift Serverless 1.34.1 is now available. This release of OpenShift Serverless addresses identified Common Vulnerabilities and Exposures (CVEs). Fixed issues that pertain to OpenShift Serverless on OpenShift Container Platform are included in the following notes: 1.5.1. Fixed issues Previously, requests to Knative Services with slow back-end responses could fail due to the default OpenShift Route timeout being too short. This issue has been addressed by making the haproxy.router.openshift.io/timeout setting configurable for automatically created routes. You can now adjust the timeout by setting the ROUTE_HAPROXY_TIMEOUT environment variable in the OpenShift Serverless Operator Subscription configuration as follows: apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: ... spec: channel: stable config: env: - name: ROUTE_HAPROXY_TIMEOUT value: '900' Previously, long-running requests to Knative Services could be prematurely terminated if the Activator component was scaled down by using the HorizontalPodAutoscaler field. This issue has been resolved. The terminationGracePeriodSeconds field for the Activator is now automatically aligned with the max-revision-timeout-seconds setting configured for Knative revisions. 1.6. Red Hat OpenShift Serverless 1.34 OpenShift Serverless 1.34 is now available. New features, updates, and known issues that pertain to OpenShift Serverless on OpenShift Container Platform are included in the following notes: 1.6.1. New features OpenShift Serverless now uses Knative Serving 1.14. OpenShift Serverless now uses Knative Eventing 1.14. OpenShift Serverless now uses Kourier 1.14. OpenShift Serverless now uses Knative ( kn ) CLI 1.14. 
OpenShift Serverless now uses Knative for Apache Kafka 1.14. The kn func CLI plugin now uses func 1.15. OpenShift Serverless Logic now supports multiple configurations for OpenAPI within the same namespace. The management console for OpenShift Serverless Logic is now available as a Technology Preview (TP) feature for streamlining the development process. OpenShift Serverless Logic 1.34 introduces a new feature that allows workflows to access different OpenShift Container Platform clusters through configuration. This feature enables users to define REST calls within a workflow to seamlessly interact with multiple clusters. In OpenShift Serverless Logic, the Job Service liveness check is now enhanced to limit the time required to retrieve the leader status. A new system property, kogito.jobs-service.management.leader-check.expiration-in-seconds , has been introduced, allowing you to configure the maximum time allowed for the leader status check. Automatic EventType registration, an Eventing feature, is now available as a Technology Preview (TP). It automatically creates EventTypes objects based on processed events on the broker ingress and in-memory channels, improving the experience of consuming and creating EventTypes . Serving transport encryption is now available as a Technology Preview (TP) feature. Startup probes are now supported, helping to reduce cold start times for faster application startup and improved performance. These probes are particularly useful for containers with slow startup processes. The OpenShift Serverless Serving transport encryption feature allows transporting data over secured and encrypted HTTPS connections using TLS. This is now available as a Technology Preview (TP) feature. Go functions using S2I builder are now available as a Technology Preview (TP) feature for Linux and Mac developers, allowing them to implement and build Go functions on these platforms. Multi-container support for Knative Serving allows you to use a single Knative service to deploy a multi-container pod. It also supports readiness and liveness probe values for multiple containers. Autoscaling for Knative Kafka triggers is now enhanced with KEDA (Kubernetes Event-Driven Autoscaling) as a Technology Preview (TP). Autoscaling using CMA/KEDA further enhances performance by optimizing resource allocation for Kafka triggers and KafkaSource objects, ensuring better scalability in event-driven workloads. Knative Eventing now offers support for data in transit encryption (Eventing TLS) as a Technology Preview (TP) feature. You can configure Knative Eventing components to expose HTTPS addresses as well as add user-provided CA trust bundles to clients. 1.6.2. Fixed issues Previously, KafkaSource objects would incorrectly remain in the Ready status even when the KafkaSource.spec.net.tls.key failed to load. This issue has been resolved. An error is now reported when creating a Kafka Broker , KafkaChannel , KafkaSource , or KafkaSink object with unsupported TLS certificates in PKCS #1 (Public-Key Cryptography Standards #1) format, ensuring proper handling and notification of configuration issues. The Eventing controller incorrectly requeued the wrong object type ( Namespace ), causing "resource not found" log errors. This issue is now resolved, and the controller now handles object requeuing, ensuring more accurate logging and resource management.
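The startup probe support noted in the 1.34 features above can be declared directly on a Knative service container. The following is a minimal sketch only; the service name, image, and /healthz endpoint are hypothetical, and the probe values should be tuned to the actual startup time of your container.

apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: slow-starting-app                                   # hypothetical service name
spec:
  template:
    spec:
      containers:
        - image: registry.example.com/slow-starting-app:latest   # hypothetical image
          startupProbe:
            httpGet:
              path: /healthz                                # hypothetical health endpoint
            failureThreshold: 30                            # allow up to 30 checks before the container is considered failed
            periodSeconds: 2                                # probe every 2 seconds during startup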
1.7. Red Hat OpenShift Serverless 1.33.3 OpenShift Serverless 1.33.3 is now available, addressing identified Common Vulnerabilities and Exposures (CVEs) to enhance security and reliability. 1.8. Red Hat OpenShift Serverless 1.33.2 OpenShift Serverless 1.33.2 is now available. Fixed issues that pertain to OpenShift Serverless on OpenShift Container Platform are included in the following notes: 1.8.1. Fixed issues Previously, creating Knative installation resources like KnativeServing or KnativeEventing in a user namespace triggered an infinite reconciliation loop in the OpenShift Serverless Operator. This issue has been resolved by reintroducing an admission webhook that prevents the creation of Knative installation resources in any namespace other than knative-serving or knative-eventing . Previously, post-install batch jobs were removed after a certain period, leaving privileged service accounts unbound. This caused compliance systems to flag the issue. The problem has been resolved by retaining completed jobs, ensuring that service accounts remain bound. 1.9. Red Hat OpenShift Serverless 1.33 OpenShift Serverless 1.33 is now available. New features, updates, and known issues that pertain to OpenShift Serverless on OpenShift Container Platform are included in the following notes: 1.9.1. New features OpenShift Serverless now uses Knative Serving 1.12. OpenShift Serverless now uses Knative Eventing 1.12. OpenShift Serverless now uses Kourier 1.12. OpenShift Serverless now uses Knative ( kn ) CLI 1.12. OpenShift Serverless now uses Knative for Apache Kafka 1.12. The kn func CLI plugin now uses func 1.14. OpenShift Serverless Logic is now generally available (GA). This release includes an overview of OpenShift Serverless Logic; instructions on creating, running, and deploying workflows; and guidelines for the installation and uninstallation of the OpenShift Serverless Logic Operator. Additionally, it includes steps for configuring OpenAPI services and endpoints, and techniques for troubleshooting the services. For more information, see OpenShift Serverless Logic overview . You can also refer to the additional documentation. For more details, see the Serverless Logic documentation . OpenShift Serverless on ARM64 is now available as a Technology Preview. The NamespacedKafka annotation is now deprecated. Use the standard Kafka broker without data plane isolation instead. When automatic EventType creation is enabled, you can now easily discover events available within the cluster. This functionality is available as a Developer Preview. You can now explore the Knative Eventing monitoring dashboards directly within the Observe tab of the developer view in the OpenShift Developer Console. You can now use the Custom Metrics Autoscaler Operator to autoscale Knative Eventing sources for Apache Kafka, defined by a KafkaSource object. This functionality is available as a Technology Preview feature, offering enhanced scalability and efficiency for Kafka-based event sources within Knative Eventing. You can now customize the internal Kafka topic properties when creating a Knative Broker with the Kafka implementation. This improves efficiency and simplifies management. The new trigger filters feature is now available as a Technology Preview. These filters are enabled by default and allow users to specify a set of filter expressions, where each expression evaluates to either true or false for each event.
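As a brief illustration of the new trigger filters described above, a Trigger can carry a filters list that combines several filter dialects. This is a minimal sketch only; the broker name, event type, source prefix, and subscriber service are hypothetical.

apiVersion: eventing.knative.dev/v1
kind: Trigger
metadata:
  name: filtered-trigger                      # hypothetical trigger name
spec:
  broker: default
  filters:                                    # new trigger filters
    - all:                                    # every sub-expression must evaluate to true
        - exact:
            type: com.example.order.created   # hypothetical event type
        - prefix:
            source: /orders/                  # hypothetical source prefix
  subscriber:
    ref:
      apiVersion: serving.knative.dev/v1
      kind: Service
      name: order-processor                   # hypothetical subscriber service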
1.9.2. Known issues Due to different mount point permissions, direct upload on a cluster build does not work on IBM zSystems (s390x) and IBM Power (ppc64le). Building and deploying a function using Podman version 4.6 fails with the invalid pull policy "1" error. To work around this issue, use Podman version 4.5. 1.10. Red Hat OpenShift Serverless 1.32.2 OpenShift Serverless 1.32.2 is now available. Fixed issues that pertain to OpenShift Serverless on OpenShift Container Platform are included in the following notes: 1.10.1. Fixed issues Previously, post-install batch jobs were removed after a certain period, leaving privileged service accounts unbound. This caused compliance systems to flag the issue. The problem has been resolved by retaining completed jobs, ensuring that service accounts remain bound. 1.11. Red Hat OpenShift Serverless 1.32 OpenShift Serverless 1.32 is now available. New features, updates, and known issues that pertain to OpenShift Serverless on OpenShift Container Platform are included in the following notes. 1.11.1. New features OpenShift Serverless now uses Knative Serving 1.11. OpenShift Serverless now uses Knative Eventing 1.11. OpenShift Serverless now uses Kourier 1.11. OpenShift Serverless now uses Knative ( kn ) CLI 1.11. OpenShift Serverless now uses Knative for Apache Kafka 1.11. The kn func CLI plugin now uses func 1.13. Serverless Logic, which is available as a Technology Preview (TP) feature, has been updated. See the Serverless Logic documentation for usage instructions. You can configure the OpenShift Serverless Functions readiness and liveness probe settings for the user container and queue-proxy container. OpenShift Serverless Functions now supports OpenShift Pipelines versions 1.10 through 1.14 (latest). Older versions of OpenShift Pipelines are no longer compatible with OpenShift Serverless Functions. On-cluster function building, including using Pipelines as Code, is now supported on IBM zSystems (s390x) and IBM Power (ppc64le) on OpenShift Data Foundation storage only. You can now subscribe a function to a set of events by using the func subscribe command. This links your function to CloudEvent objects defined by your filters and enables automated responses. The Knative Serving TLS encryption feature for internal traffic is now deprecated. It was a Technology Preview feature. The functionality with the internal-encryption configuration flag is no longer available and it will be replaced by new configuration flags in a future release. Secret filtering is enabled by default on the OpenShift Serverless Operator side. The environment variable ENABLE_SECRET_INFORMER_FILTERING_BY_CERT_UID=true is added by default to the net-istio and net-kourier controller pods. The domain-mapping and domain-mapping-webhook deployments functionality in the knative-serving namespace is now removed. They are now integrated with the Serving Webhook and Serving Controller. If you set the spec.config.domain field in the KnativeServing custom resource (CR), the default external domain no longer auto-populates the config-domain config map in the knative-serving namespace. Now, you must configure the config-domain config map manually to ensure accurate domain settings. You can now use the gRPC health probe for net-kourier deployments. The Kourier Controller now uses a standard Kubernetes gRPC health probe for both readiness and liveness, replacing its use of exec and custom commands.
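Because the config-domain config map is no longer auto-populated when the spec.config.domain field is set (as noted above), the domain configuration in the KnativeServing CR must be complete on its own. The following is a minimal sketch only; apps.example.com is a hypothetical domain, and the empty value marks it as the default external domain.

apiVersion: operator.knative.dev/v1beta1
kind: KnativeServing
metadata:
  name: knative-serving
  namespace: knative-serving
spec:
  config:
    domain:
      apps.example.com: ""          # hypothetical custom domain; empty value applies it to all services by default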
The timeoutSeconds value has been adjusted from 100 milliseconds to 1 second to ensure more reliable probe responses. The new trigger filters feature is now available as a Technology Preview. The new trigger filters are enabled by default and allow users to specify a set of filter expressions, where each expression evaluates to either true or false for each event. Knative Eventing now offers support for data in transit encryption (Eventing TLS) as a developer preview. You can configure Knative Eventing components to expose HTTPS addresses as well as add user-provided CA trust bundles to clients. OpenShift Serverless now supports custom OpenShift CA bundle injection for system components. For more information, see Configuring a custom PKI . You can now use the Custom Metrics Autoscaler Operator to autoscale Knative Eventing sources for Apache Kafka. This functionality is available as a developer preview, offering enhanced scalability and efficiency for Kafka-based event sources within Knative Eventing. You can now explore the Knative Eventing monitoring dashboards directly within the Observe tab of the Developer view in the OpenShift Developer Console. Support for the EventTypes v1beta1 API shipped with Knative is deprecated in OpenShift Serverless 1.32. In OpenShift Serverless 1.32, the Knative CLI uses the EventType v1beta2 API to facilitate the new reference model. The kn CLI shipped with this release is not backward compatible with the EventType v1beta1 API and is limited to the kn eventtypes sub-commands group. Therefore, it is recommended to use a matching kn version for the best user experience. 1.11.2. Fixed issues The default CPU limit for 3scale-kourier-gateways is now increased from 500m to 1 CPU core. When more than 500 Knative Service instances are created, CPU resource exhaustion could lead to readiness and liveness probe failures in the 3scale-kourier-gateways pod. This adjustment aims to reduce such failures and ensure smoother operation even under heavy loads. 1.11.3. Known issues Due to different mount point permissions, direct upload on a cluster build does not work on IBM zSystems (s390x) and IBM Power (ppc64le). Building and deploying a function using Podman version 4.6 fails with the invalid pull policy "1" error. To work around this issue, use Podman version 4.5. 1.12. Red Hat OpenShift Serverless 1.31 OpenShift Serverless 1.31 is now available. New features, updates, and known issues that pertain to OpenShift Serverless on OpenShift Container Platform are included in the following notes. 1.12.1. New features OpenShift Serverless now uses Knative Serving 1.10. OpenShift Serverless now uses Knative Eventing 1.10. OpenShift Serverless now uses Kourier 1.10. OpenShift Serverless now uses Knative ( kn ) CLI 1.10. OpenShift Serverless now uses Knative for Apache Kafka 1.10. The kn func CLI plug-in now uses func 1.11. OpenShift Serverless multi-tenancy with Service Mesh is now available as a Technology Preview (TP) feature. Serverless Logic, which is available as a Technology Preview (TP) feature, has been updated. See the Serverless Logic documentation for usage instructions. OpenShift Serverless can now be installed and used on single-node OpenShift. You can now configure a persistent volume claim (PVC) for an existing PersistentVolume object to use with a Serverless function. When specifying Kourier for Ingress and using DomainMapping , the TLS for the OpenShift Route is set to passthrough, and TLS is handled by the Kourier Gateway.
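For the DomainMapping behavior with Kourier described above, the mapping itself is declared with the v1beta1 API. This is a minimal sketch only; app.example.com and example-service are hypothetical names.

apiVersion: serving.knative.dev/v1beta1
kind: DomainMapping
metadata:
  name: app.example.com              # the custom domain to map (hypothetical)
  namespace: default
spec:
  ref:
    name: example-service            # hypothetical Knative service in the same namespace
    kind: Service
    apiVersion: serving.knative.dev/v1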
Beginning with Serverless 1.31, it is possible to specify the enabled cipher suites on the Kourier Gateway side. Integrating Red Hat OpenShift Service Mesh with Serverless when Kourier is enabled is now deprecated. Use net-istio instead of net-kourier for Service Mesh integration. See the "Integrating Red Hat OpenShift Service Mesh with Serverless" section for details. The PodDisruptionBudget and HorizontalPodAutoscaler objects have been added for the 3scale-kourier-gateway deployment. PodDisruptionBudget is used to define the minimum availability requirements for pods in a deployment. HorizontalPodAutoscaler is used to automatically scale the number of pods in the deployment based on demand or on your custom metrics. You can now change the pattern for Apache Kafka topic names used by Knative brokers and channels for Apache Kafka. The DomainMapping v1alpha1 custom resource definition (CRD) is now deprecated. Use the v1beta1 CRD instead. The NamespacedKafka annotation, which was a Technology Preview (TP) feature, is now deprecated in favor of the standard Kafka broker with no data plane isolation. 1.12.2. Fixed issues Previously, when deploying Knative Eventing with full Red Hat OpenShift Service Mesh integration and with STRICT peer authentication, the PingSource adapter metrics were unavailable. This has been fixed, and the PingSource adapter metrics are now collected using a different job and service label value. The previous value was pingsource-mt-adapter ; the new value is pingsource-mt-adapter-sm-service . 1.13. Red Hat OpenShift Serverless 1.30.2 OpenShift Serverless 1.30.2 is now available. New features, updates, and known issues that pertain to OpenShift Serverless on OpenShift Container Platform are included in the following notes. This release of OpenShift Serverless addresses Common Vulnerabilities and Exposures (CVEs), contains bug fixes, and is supported on OpenShift Container Platform 4.11 and later versions. Notably, this update addresses CVE-2023-44487 - HTTP/2 Rapid Stream Reset by disabling HTTP/2 transport on Serving, Eventing webhooks, and RBAC proxy containers. 1.14. Red Hat OpenShift Serverless 1.30.1 OpenShift Serverless 1.30.1 is now available. New features, updates, and known issues that pertain to OpenShift Serverless on OpenShift Container Platform are included in the following notes. This release of OpenShift Serverless addresses Common Vulnerabilities and Exposures (CVEs), contains bug fixes, and is supported on OpenShift Container Platform 4.11 and later versions. 1.15. Red Hat OpenShift Serverless 1.30 OpenShift Serverless 1.30 is now available. New features, updates, and known issues that relate to OpenShift Serverless on OpenShift Container Platform are included in the following notes. Important OpenShift Container Platform 4.13 is based on Red Hat Enterprise Linux (RHEL) 9.2. RHEL 9.2 has not been submitted for Federal Information Processing Standards (FIPS) validation. Although Red Hat cannot commit to a specific timeframe, we expect to obtain FIPS validation for RHEL 9.0 and RHEL 9.2 modules, and later even minor releases of RHEL 9.x. Information on updates will be available in the Compliance Activities and Government Standards Knowledgebase article . 1.15.1. New features OpenShift Serverless now uses Knative Serving 1.9. OpenShift Serverless now uses Knative Eventing 1.9. OpenShift Serverless now uses Kourier 1.9. OpenShift Serverless now uses Knative ( kn ) CLI 1.9. OpenShift Serverless now uses Knative for Apache Kafka 1.9.
The kn func CLI plug-in now uses func 1.10.1. OpenShift Serverless now runs on HyperShift-hosted clusters. OpenShift Serverless now runs on single-node OpenShift. Developer Experience around OpenShift Serverless is now available through OpenShift Toolkit, an OpenShift IDE extension for Visual Studio Code (VSCode). The extension can be installed from the VSCode Extension Tab and VSCode Marketplace. See the Marketplace page for the Visual Studio Code OpenShift Toolkit extension . OpenShift Serverless Functions now supports Red Hat OpenShift Pipelines versions 1.10 and 1.11. Older versions of Red Hat OpenShift Pipelines are no longer compatible with OpenShift Serverless Functions. Serverless Logic is now available as a Technology Preview (TP) feature. See the Serverless Logic documentation for details. Beginning with OpenShift Serverless 1.30.0, the following runtime environments are supported on IBM zSystems using the s2i builder: NodeJS Python TypeScript Quarkus Eventing integration with Red Hat OpenShift Service Mesh is now available as a Technology Preview (TP) feature. The integration includes the following: PingSource ApiServerSource Knative Source for Apache Kafka Knative Broker for Apache Kafka Knative Sink for Apache Kafka ContainerSource SinkBinding InMemoryChannel KafkaChannel Channel-based Knative Broker Pipelines-as-code for OpenShift Serverless Functions is now available as a Technology Preview (TP). You can now configure the burst and queries per second (QPS) values for net-kourier . OpenShift Serverless Functions users now have the ability to override the readiness and liveness probe values in the func.yaml file for individual Quarkus functions. See "Functions development reference guide" for guidance on Quarkus, TypeScript, and Node.js functions. Beginning with OpenShift Serverless 1.30.0, Kourier controller and gateway manifests have the following limits and requests by default: requests: cpu: 200m memory: 200Mi limits: cpu: 500m memory: 500Mi See the "Overriding Knative Serving system deployment configurations" section of the OpenShift Serverless documentation. The NamespacedKafka annotation, which was a Technology Preview (TP) feature, is now deprecated in favor of the standard Kafka broker with no data plane isolation. 1.15.2. Fixed issues Previously, the 3scale-kourier-gateway pod was sending thousands of net-kourier-controller DNS queries daily. New queries were being sent for each NXDOMAIN reply. This continued until the correct DNS query was produced. The query now has the net-kourier-controller.knative-serving-ingress.svc.<cluster domain>. fully-qualified domain name (FQDN) by default, which solves the problem. 1.15.3. Known issues Building and deploying a function using Podman version 4.6 fails with the invalid pull policy "1" error. To work around this issue, use Podman version 4.5. On-cluster function building, including using Pipelines-as-code, is not supported on IBM zSystems and IBM Power. Buildpack builder is not supported on IBM zSystems and IBM Power. Additional resources Overriding system deployment configurations 1.16. Red Hat OpenShift Serverless 1.29.1 OpenShift Serverless 1.29.1 is now available. New features, updates, and known issues that pertain to OpenShift Serverless on OpenShift Container Platform are included in the following notes. This release of OpenShift Serverless addresses Common Vulnerabilities and Exposures (CVEs), contains bug fixes, and is supported on OpenShift Container Platform 4.10 and later versions. 1.16.1.
Fixed issues Previously, the net-kourier-controller sometimes failed to start due to the liveness probe error. This has been fixed. Additional resources Knowledgebase solution for the net-kourier-controller liveness probe error 1.17. Red Hat OpenShift Serverless 1.29 OpenShift Serverless 1.29 is now available. New features, updates, and known issues that pertain to OpenShift Serverless on OpenShift Container Platform are included in the following notes. Important OpenShift Container Platform 4.13 is based on Red Hat Enterprise Linux (RHEL) 9.2. RHEL 9.2 is yet to be submitted for Federal Information Processing Standards (FIPS) validation. Although Red Hat cannot commit to a specific timeframe, we expect to obtain FIPS validation for RHEL 9.0 and RHEL 9.2 modules, and later even minor releases of RHEL 9.x. Information on updates will be available in the Compliance Activities and Government Standards Knowledgebase article . 1.17.1. New features OpenShift Serverless now uses Knative Serving 1.8. OpenShift Serverless now uses Knative Eventing 1.8. OpenShift Serverless now uses Kourier 1.8. OpenShift Serverless now uses Knative ( kn ) CLI 1.8. OpenShift Serverless now uses Knative for Apache Kafka 1.8. The kn func CLI plug-in now uses func 1.10. Beginning with OpenShift Serverless 1.29, the different product versions are available as follows: The latest release is available through the stable channel. Releases older than the latest are available through the version-based channels. To use these, update the channel parameter in the subscription object YAML file from stable to the corresponding version-based channel, such as stable-1.29 . This change allows you to receive updates not only for the latest release, but also for releases in the Maintenance phase. Additionally, you can lock the version of the Knative ( kn ) CLI. For details, see section "Installing the Knative CLI". You can now create OpenShift Serverless functions through developer console using OpenShift Container Platform Pipelines. Multi-container support for Knative Serving is now generally available (GA). This feature allows you to use a single Knative service to deploy a multi-container pod. OpenShift Serverless functions can now override the readiness and liveness probe values in the func.yaml file for individual Node.js and TypeScript functions. You can now configure your function to re-deploy automatically to the cluster when its source code changes in the GitHub repository. This allows for more seamless CI/CD integration. Eventing integration with Service Mesh is now available as developer preview feature. The integration includes: PingSource , ApiServerSource , Knative Source for Apache Kafka, Knative Broker for Apache Kafka, Knative Sink for Apache Kafka, ContainerSource , and SinkBinding . This release includes the upgraded Developer Preview for OpenShift Serverless Logic. API version v1alpha1 of the Knative Operator Serving and Eventings CRDs has been removed. You need to use the v1beta1 version instead. This does not affect the existing installations, because CRDs are updated automatically when upgrading the Serverless Operator. 1.17.2. Known issues When updating the secret specified in DomainMapping, simply updating the secret does not trigger the reconcile loop. You need to either rename the secret or delete the Knative Ingress resource to trigger the reconcile loop. Webhook Horizontal Pod Autoscaler (HPA) settings are overridden by the OpenShift Serverless Operator. 
As a result, it fails to scale for higher workloads. To work around this issue, manually set the initial replica value that corresponds to your workload. KafkaSource resources created before Red Hat OpenShift Serverless 1.27 get stuck when being deleted. To work around the issue, after starting to delete a KafkaSource , remove the finalizer from the resource. The net-kourier-controller might not be able to start due to the liveness probe error. You can work around the problem using the Knowledgebase solution. Additional resources Knowledgebase solution for the net-kourier-controller liveness probe error Red Hat OpenShift Serverless Logic documentation 1.18. Red Hat OpenShift Serverless 1.28 OpenShift Serverless 1.28 is now available. New features, updates, and known issues that pertain to OpenShift Serverless on OpenShift Container Platform are included in the following notes. Important OpenShift Container Platform 4.13 is based on Red Hat Enterprise Linux (RHEL) 9.2. RHEL 9.2 is yet to be submitted for Federal Information Processing Standards (FIPS) validation. Although Red Hat cannot commit to a specific timeframe, we expect to obtain FIPS validation for RHEL 9.0 and RHEL 9.2 modules, and later even minor releases of RHEL 9.x. Information on updates will be available in the Compliance Activities and Government Standards Knowledgebase article . 1.18.1. New features OpenShift Serverless now uses Knative Serving 1.7. OpenShift Serverless now uses Knative Eventing 1.7. OpenShift Serverless now uses Kourier 1.7. OpenShift Serverless now uses Knative ( kn ) CLI 1.7. OpenShift Serverless now uses Knative broker implementation for Apache Kafka 1.7. The kn func CLI plug-in now uses func 1.9.1 version. Node.js and TypeScript runtimes for OpenShift Serverless Functions are now Generally Available (GA). Python runtime for OpenShift Serverless Functions is now available as a Technology Preview. Multi-container support for Knative Serving is now available as a Technology Preview. This feature allows you to use a single Knative service to deploy a multi-container pod. In OpenShift Serverless 1.29 or later, the following components of Knative Eventing will be scaled down from two pods to one: imc-controller imc-dispatcher mt-broker-controller mt-broker-filter mt-broker-ingress The serverless.openshift.io/enable-secret-informer-filtering annotation for the Serving CR is now deprecated. The annotation is valid only for Istio, and not for Kourier. With OpenShift Serverless 1.28, the OpenShift Serverless Operator allows injecting the environment variable ENABLE_SECRET_INFORMER_FILTERING_BY_CERT_UID for both net-istio and net-kourier . If you enable secret filtering, all of your secrets need to be labeled with networking.internal.knative.dev/certificate-uid: "<id>" . Otherwise, Knative Serving does not detect them, which leads to failures. You must label both new and existing secrets. In one of the following OpenShift Serverless releases, secret filtering will become enabled by default. To prevent failures, label your secrets in advance. 1.18.2. Known issues Currently, runtimes for Python are not supported for OpenShift Serverless Functions on IBM Power, IBM zSystems, and IBM(R) LinuxONE. Node.js, TypeScript, and Quarkus functions are supported on these architectures. On the Windows platform, Python functions cannot be locally built, run, or deployed using the Source-to-Image builder due to the app.sh file permissions. To work around this problem, use the Windows Subsystem for Linux. 
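To prepare for the default secret filtering mentioned in the 1.28 notes above, TLS secrets consumed by Knative Serving need the networking.internal.knative.dev/certificate-uid label. The sketch below is illustrative only; the secret name, namespace, UID value, and certificate data are hypothetical placeholders.

apiVersion: v1
kind: Secret
metadata:
  name: example-tls-secret                                        # hypothetical secret name
  namespace: knative-serving-ingress                              # hypothetical namespace
  labels:
    networking.internal.knative.dev/certificate-uid: "0123-abcd"  # example UID value
type: kubernetes.io/tls
data:
  tls.crt: <base64-encoded certificate>
  tls.key: <base64-encoded key>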
KafkaSource resources created before Red Hat OpenShift Serverless 1.27 get stuck when being deleted. To work around the issue, after starting to delete a KafkaSource , remove the finalizer from the resource. 1.19. Red Hat OpenShift Serverless 1.27 OpenShift Serverless 1.27 is now available. New features, updates, and known issues that pertain to OpenShift Serverless on OpenShift Container Platform are included in the following notes. Important OpenShift Serverless 1.26 is the earliest release that is fully supported on OpenShift Container Platform 4.12. OpenShift Serverless 1.25 and older does not deploy on OpenShift Container Platform 4.12. For this reason, before upgrading OpenShift Container Platform to version 4.12, first upgrade OpenShift Serverless to version 1.26 or 1.27. 1.19.1. New features OpenShift Serverless now uses Knative Serving 1.6. OpenShift Serverless now uses Knative Eventing 1.6. OpenShift Serverless now uses Kourier 1.6. OpenShift Serverless now uses Knative ( kn ) CLI 1.6. OpenShift Serverless now uses Knative Kafka 1.6. The kn func CLI plug-in now uses func 1.8.1. Namespace-scoped brokers are now available as a Technology Preview. Such brokers can be used, for instance, to implement role-based access control (RBAC) policies. KafkaSink now uses the CloudEvent binary content mode by default. The binary content mode is more efficient than the structured mode because it uses headers in its body instead of a CloudEvent . For example, for the HTTP protocol, it uses HTTP headers. You can now use the gRPC framework over the HTTP/2 protocol for external traffic using the OpenShift Route on OpenShift Container Platform 4.10 and later. This improves efficiency and speed of the communications between the client and server. API version v1alpha1 of the Knative Operator Serving and Eventings CRDs is deprecated in 1.27. It will be removed in future versions. Red Hat strongly recommends to use the v1beta1 version instead. This does not affect the existing installations, because CRDs are updated automatically when upgrading the Serverless Operator. The delivery timeout feature is now enabled by default. It allows you to specify the timeout for each sent HTTP request. The feature remains a Technology Preview. 1.19.2. Fixed issues Previously, Knative services sometimes did not get into the Ready state, reporting waiting for the load balancer to be ready. This issue has been fixed. 1.19.3. Known issues Integrating OpenShift Serverless with Red Hat OpenShift Service Mesh causes the net-kourier pod to run out of memory on startup when too many secrets are present on the cluster. Namespace-scoped brokers might leave ClusterRoleBindings in the user namespace even after deletion of namespace-scoped brokers. If this happens, delete the ClusterRoleBinding named rbac-proxy-reviews-prom-rb-knative-kafka-broker-data-plane-{{.Namespace}} in the user namespace. 1.20. Red Hat OpenShift Serverless 1.26 OpenShift Serverless 1.26 is now available. New features, updates, and known issues that pertain to OpenShift Serverless on OpenShift Container Platform are included in the following notes. 1.20.1. New features OpenShift Serverless Functions with Quarkus is now GA. OpenShift Serverless now uses Knative Serving 1.5. OpenShift Serverless now uses Knative Eventing 1.5. OpenShift Serverless now uses Kourier 1.5. OpenShift Serverless now uses Knative ( kn ) CLI 1.5. OpenShift Serverless now uses Knative Kafka 1.5. OpenShift Serverless now uses Knative Operator 1.3. 
The kn func CLI plugin now uses func 1.8.1. Persistent volume claims (PVCs) are now GA. PVCs provide permanent data storage for your Knative services. The new trigger filters feature is now available as a Developer Preview. It allows users to specify a set of filter expressions, where each expression evaluates to either true or false for each event. To enable new trigger filters, add the new-trigger-filters: enabled entry in the section of the KnativeEventing type in the operator config map: apiVersion: operator.knative.dev/v1beta1 kind: KnativeEventing ... ... spec: config: features: new-trigger-filters: enabled ... Knative Operator 1.3 adds the updated v1beta1 version of the API for operator.knative.dev . To update from v1alpha1 to v1beta1 in your KnativeServing and KnativeEventing custom resource config maps, edit the apiVersion key: Example KnativeServing custom resource config map apiVersion: operator.knative.dev/v1beta1 kind: KnativeServing ... Example KnativeEventing custom resource config map apiVersion: operator.knative.dev/v1beta1 kind: KnativeEventing ... 1.20.2. Fixed issues Previously, Federal Information Processing Standards (FIPS) mode was disabled for Kafka broker, Kafka source, and Kafka sink. This has been fixed, and FIPS mode is now available. Additional resources Knative documentation on new trigger filters 1.21. Red Hat OpenShift Serverless 1.25.0 OpenShift Serverless 1.25.0 is now available. New features, updates, and known issues that pertain to OpenShift Serverless on OpenShift Container Platform are included in the following notes. 1.21.1. New features OpenShift Serverless now uses Knative Serving 1.4. OpenShift Serverless now uses Knative Eventing 1.4. OpenShift Serverless now uses Kourier 1.4. OpenShift Serverless now uses Knative ( kn ) CLI 1.4. OpenShift Serverless now uses Knative Kafka 1.4. The kn func CLI plugin now uses func 1.7.0. Integrated development environment (IDE) plugins for creating and deploying functions are now available for Visual Studio Code and IntelliJ . Knative Kafka broker is now GA. Knative Kafka broker is a highly performant implementation of the Knative broker API, directly targeting Apache Kafka. It is recommended to not use the MT-Channel-Broker, but the Knative Kafka broker instead. Knative Kafka sink is now GA. A KafkaSink takes a CloudEvent and sends it to an Apache Kafka topic. Events can be specified in either structured or binary content modes. Enabling TLS for internal traffic is now available as a Technology Preview. 1.21.2. Fixed issues Previously, Knative Serving had an issue where the readiness probe failed if the container was restarted after a liveness probe fail. This issue has been fixed. 1.21.3. Known issues The Federal Information Processing Standards (FIPS) mode is disabled for Kafka broker, Kafka source, and Kafka sink. The SinkBinding object does not support custom revision names for services. The Knative Serving Controller pod adds a new informer to watch secrets in the cluster. The informer includes the secrets in the cache, which increases memory consumption of the controller pod. If the pod runs out of memory, you can work around the issue by increasing the memory limit for the deployment. Additional resources for OpenShift Container Platform Serving transport encryption 1.22. Red Hat OpenShift Serverless 1.24.0 OpenShift Serverless 1.24.0 is now available. New features, updates, and known issues that pertain to OpenShift Serverless on OpenShift Container Platform are included in the following notes. 1.22.1. 
New features OpenShift Serverless now uses Knative Serving 1.3. OpenShift Serverless now uses Knative Eventing 1.3. OpenShift Serverless now uses Kourier 1.3. OpenShift Serverless now uses Knative kn CLI 1.3. OpenShift Serverless now uses Knative Kafka 1.3. The kn func CLI plugin now uses func 0.24. Init containers support for Knative services is now generally available (GA). OpenShift Serverless logic is now available as a Developer Preview. It enables defining declarative workflow models for managing serverless applications. For OpenShift Container Platform, you can now use the cost management service with OpenShift Serverless. 1.22.2. Fixed issues Integrating OpenShift Serverless with Red Hat OpenShift Service Mesh causes the net-istio-controller pod to run out of memory on startup when too many secrets are present on the cluster. It is now possible to enable secret filtering, which causes net-istio-controller to consider only secrets with a networking.internal.knative.dev/certificate-uid label, thus reducing the amount of memory needed. The OpenShift Serverless Functions Technology Preview now uses Cloud Native Buildpacks by default to build container images. 1.22.3. Known issues The Federal Information Processing Standards (FIPS) mode is disabled for Kafka broker, Kafka source, and Kafka sink. In OpenShift Serverless 1.23, support for KafkaBindings and the kafka-binding webhook were removed. However, an existing kafkabindings.webhook.kafka.sources.knative.dev MutatingWebhookConfiguration might remain, pointing to the kafka-source-webhook service, which no longer exists. For certain specifications of KafkaBindings on the cluster, kafkabindings.webhook.kafka.sources.knative.dev MutatingWebhookConfiguration might be configured to pass any create and update events to various resources, such as Deployments, Knative Services, or Jobs, through the webhook, which would then fail. To work around this issue, manually delete kafkabindings.webhook.kafka.sources.knative.dev MutatingWebhookConfiguration from the cluster after upgrading to OpenShift Serverless 1.23: USD oc delete mutatingwebhookconfiguration kafkabindings.webhook.kafka.sources.knative.dev 1.23. Red Hat OpenShift Serverless 1.23.0 OpenShift Serverless 1.23.0 is now available. New features, updates, and known issues that pertain to OpenShift Serverless on OpenShift Container Platform are included in the following notes. 1.23.1. New features OpenShift Serverless now uses Knative Serving 1.2. OpenShift Serverless now uses Knative Eventing 1.2. OpenShift Serverless now uses Kourier 1.2. OpenShift Serverless now uses Knative ( kn ) CLI 1.2. OpenShift Serverless now uses Knative Kafka 1.2. The kn func CLI plugin now uses func 0.24. It is now possible to use the kafka.eventing.knative.dev/external.topic annotation with the Kafka broker. This annotation makes it possible to use an existing externally managed topic instead of the broker creating its own internal topic. The kafka-ch-controller and kafka-webhook Kafka components no longer exist. These components have been replaced by the kafka-webhook-eventing component. The OpenShift Serverless Functions Technology Preview now uses Source-to-Image (S2I) by default to build container images. 1.23.2. Known issues The Federal Information Processing Standards (FIPS) mode is disabled for Kafka broker, Kafka source, and Kafka sink. 
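For the kafka.eventing.knative.dev/external.topic annotation introduced in the 1.23.0 features above, a Kafka broker can point at an existing, externally managed topic. This is a minimal sketch only; the broker name, topic name, and the kafka-broker-config config map reference are assumptions, not values taken from these notes.

apiVersion: eventing.knative.dev/v1
kind: Broker
metadata:
  name: example-kafka-broker                                 # hypothetical broker name
  namespace: default
  annotations:
    eventing.knative.dev/broker.class: Kafka                 # select the Kafka broker implementation
    kafka.eventing.knative.dev/external.topic: orders-topic  # existing, externally managed topic (hypothetical)
spec:
  config:
    apiVersion: v1
    kind: ConfigMap
    name: kafka-broker-config                                # assumed broker configuration config map
    namespace: knative-eventing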
If you delete a namespace that includes a Kafka broker, the namespace finalizer may fail to be removed if the broker's auth.secret.ref.name secret is deleted before the broker. Running OpenShift Serverless with a large number of Knative services can cause Knative activator pods to run close to their default memory limits of 600MB. These pods might be restarted if memory consumption reaches this limit. Requests and limits for the activator deployment can be configured by modifying the KnativeServing custom resource: apiVersion: operator.knative.dev/v1beta1 kind: KnativeServing metadata: name: knative-serving namespace: knative-serving spec: deployments: - name: activator resources: - container: activator requests: cpu: 300m memory: 60Mi limits: cpu: 1000m memory: 1000Mi If you are using Cloud Native Buildpacks as the local build strategy for a function, kn func is unable to automatically start podman or use an SSH tunnel to a remote daemon. The workaround for these issues is to have a Docker or podman daemon already running on the local development computer before deploying a function. On-cluster function builds currently fail for Quarkus and Golang runtimes. They work correctly for Node, Typescript, Python, and Springboot runtimes. Additional resources for OpenShift Container Platform Source-to-Image 1.24. Red Hat OpenShift Serverless 1.22.0 OpenShift Serverless 1.22.0 is now available. New features, updates, and known issues that pertain to OpenShift Serverless on OpenShift Container Platform are included in the following notes. 1.24.1. New features OpenShift Serverless now uses Knative Serving 1.1. OpenShift Serverless now uses Knative Eventing 1.1. OpenShift Serverless now uses Kourier 1.1. OpenShift Serverless now uses Knative ( kn ) CLI 1.1. OpenShift Serverless now uses Knative Kafka 1.1. The kn func CLI plugin now uses func 0.23. Init containers support for Knative services is now available as a Technology Preview. Persistent volume claim (PVC) support for Knative services is now available as a Technology Preview. The knative-serving , knative-serving-ingress , knative-eventing and knative-kafka system namespaces now have the knative.openshift.io/part-of: "openshift-serverless" label by default. The Knative Eventing - Kafka Broker/Trigger dashboard has been added, which allows visualizing Kafka broker and trigger metrics in the web console. The Knative Eventing - KafkaSink dashboard has been added, which allows visualizing KafkaSink metrics in the web console. The Knative Eventing - Broker/Trigger dashboard is now called Knative Eventing - Channel-based Broker/Trigger . The knative.openshift.io/part-of: "openshift-serverless" label has substituted the knative.openshift.io/system-namespace label. Naming style in Knative Serving YAML configuration files changed from camel case ( ExampleName ) to hyphen style ( example-name ). Beginning with this release, use the hyphen style notation when creating or editing Knative Serving YAML configuration files. 1.24.2. Known issues The Federal Information Processing Standards (FIPS) mode is disabled for Kafka broker, Kafka source, and Kafka sink. 1.25. Red Hat OpenShift Serverless 1.21.0 OpenShift Serverless 1.21.0 is now available. New features, updates, and known issues that pertain to OpenShift Serverless on OpenShift Container Platform are included in the following notes. 1.25.1. New features OpenShift Serverless now uses Knative Serving 1.0 OpenShift Serverless now uses Knative Eventing 1.0. OpenShift Serverless now uses Kourier 1.0. 
OpenShift Serverless now uses Knative ( kn ) CLI 1.0. OpenShift Serverless now uses Knative Kafka 1.0. The kn func CLI plugin now uses func 0.21. The Kafka sink is now available as a Technology Preview. The Knative open source project has begun to deprecate camel-cased configuration keys in favor of using kebab-cased keys consistently. As a result, the defaultExternalScheme key, previously mentioned in the OpenShift Serverless 1.18.0 release notes, is now deprecated and replaced by the default-external-scheme key. Usage instructions for the key remain the same. 1.25.2. Fixed issues In OpenShift Serverless 1.20.0, there was an event delivery issue affecting the use of kn event send to send events to a service. This issue is now fixed. In OpenShift Serverless 1.20.0 ( func 0.20), TypeScript functions created with the http template failed to deploy on the cluster. This issue is now fixed. In OpenShift Serverless 1.20.0 ( func 0.20), deploying a function using the gcr.io registry failed with an error. This issue is now fixed. In OpenShift Serverless 1.20.0 ( func 0.20), creating a Springboot function project directory with the kn func create command and then running the kn func build command failed with an error message. This issue is now fixed. In OpenShift Serverless 1.19.0 ( func 0.19), some runtimes were unable to build a function by using podman. This issue is now fixed. 1.25.3. Known issues Currently, the domain mapping controller cannot process the URI of a broker, which contains a path that is currently not supported. This means that, if you want to use a DomainMapping custom resource (CR) to map a custom domain to a broker, you must configure the DomainMapping CR with the broker's ingress service, and append the exact path of the broker to the custom domain: Example DomainMapping CR apiVersion: serving.knative.dev/v1alpha1 kind: DomainMapping metadata: name: <domain-name> namespace: knative-eventing spec: ref: name: broker-ingress kind: Service apiVersion: v1 The URI for the broker is then <domain-name>/<broker-namespace>/<broker-name> . 1.26. Red Hat OpenShift Serverless 1.20.0 OpenShift Serverless 1.20.0 is now available. New features, updates, and known issues that pertain to OpenShift Serverless on OpenShift Container Platform are included in the following notes. 1.26.1. New features OpenShift Serverless now uses Knative Serving 0.26. OpenShift Serverless now uses Knative Eventing 0.26. OpenShift Serverless now uses Kourier 0.26. OpenShift Serverless now uses Knative ( kn ) CLI 0.26. OpenShift Serverless now uses Knative Kafka 0.26. The kn func CLI plugin now uses func 0.20. The Kafka broker is now available as a Technology Preview. Important The Kafka broker, which is currently in Technology Preview, is not supported on FIPS. The kn event plugin is now available as a Technology Preview. The --min-scale and --max-scale flags for the kn service create command have been deprecated. Use the --scale-min and --scale-max flags instead. 1.26.2. Known issues OpenShift Serverless deploys Knative services with a default address that uses HTTPS. When sending an event to a resource inside the cluster, the sender does not have the cluster certificate authority (CA) configured. This causes event delivery to fail, unless the cluster uses globally accepted certificates. 
For example, an event delivery to a publicly accessible address works: USD kn event send --to-url https://ce-api.foo.example.com/ On the other hand, this delivery fails if the service uses a public address with an HTTPS certificate issued by a custom CA: USD kn event send --to Service:serving.knative.dev/v1:event-display Sending an event to other addressable objects, such as brokers or channels, is not affected by this issue and works as expected. The Kafka broker currently does not work on a cluster with Federal Information Processing Standards (FIPS) mode enabled. If you create a Springboot function project directory with the kn func create command, subsequent running of the kn func build command fails with this error message: [analyzer] no stack metadata found at path '' [analyzer] ERROR: failed to : set API for buildpack 'paketo-buildpacks/[email protected]': buildpack API version '0.7' is incompatible with the lifecycle As a workaround, you can change the builder property to gcr.io/paketo-buildpacks/builder:base in the function configuration file func.yaml . Deploying a function using the gcr.io registry fails with this error message: Error: failed to get credentials: failed to verify credentials: status code: 404 As a workaround, use a different registry than gcr.io , such as quay.io or docker.io . TypeScript functions created with the http template fail to deploy on the cluster. As a workaround, in the func.yaml file, replace the following section: buildEnvs: [] with this: buildEnvs: - name: BP_NODE_RUN_SCRIPTS value: build In func version 0.20, some runtimes might be unable to build a function by using podman. You might see an error message similar to the following: ERROR: failed to image: error during connect: Get "http://%2Fvar%2Frun%2Fdocker.sock/v1.40/info": EOF The following workaround exists for this issue: Update the podman service by adding --time=0 to the service ExecStart definition: Example service configuration ExecStart=/usr/bin/podman USDLOGGING system service --time=0 Restart the podman service by running the following commands: USD systemctl --user daemon-reload USD systemctl restart --user podman.socket Alternatively, you can expose the podman API by using TCP: USD podman system service --time=0 tcp:127.0.0.1:5534 & export DOCKER_HOST=tcp://127.0.0.1:5534 1.27. Red Hat OpenShift Serverless 1.19.0 OpenShift Serverless 1.19.0 is now available. New features, updates, and known issues that pertain to OpenShift Serverless on OpenShift Container Platform are included in the following notes. 1.27.1. New features OpenShift Serverless now uses Knative Serving 0.25. OpenShift Serverless now uses Knative Eventing 0.25. OpenShift Serverless now uses Kourier 0.25. OpenShift Serverless now uses Knative ( kn ) CLI 0.25. OpenShift Serverless now uses Knative Kafka 0.25. The kn func CLI plugin now uses func 0.19. The KafkaBinding API is deprecated in OpenShift Serverless 1.19.0 and will be removed in a future release. HTTPS redirection is now supported and can be configured either globally for a cluster or per each Knative service. 1.27.2. Fixed issues In releases, the Kafka channel dispatcher waited only for the local commit to succeed before responding, which might have caused lost events in the case of an Apache Kafka node failure. The Kafka channel dispatcher now waits for all in-sync replicas to commit before responding. 1.27.3. Known issues In func version 0.19, some runtimes might be unable to build a function by using podman. 
You might see an error message similar to the following: ERROR: failed to image: error during connect: Get "http://%2Fvar%2Frun%2Fdocker.sock/v1.40/info": EOF The following workaround exists for this issue: Update the podman service by adding --time=0 to the service ExecStart definition: Example service configuration ExecStart=/usr/bin/podman USDLOGGING system service --time=0 Restart the podman service by running the following commands: USD systemctl --user daemon-reload USD systemctl restart --user podman.socket Alternatively, you can expose the podman API by using TCP: USD podman system service --time=0 tcp:127.0.0.1:5534 & export DOCKER_HOST=tcp://127.0.0.1:5534 1.28. Red Hat OpenShift Serverless 1.18.0 OpenShift Serverless 1.18.0 is now available. New features, updates, and known issues that pertain to OpenShift Serverless on OpenShift Container Platform are included in the following notes. 1.28.1. New features OpenShift Serverless now uses Knative Serving 0.24.0. OpenShift Serverless now uses Knative Eventing 0.24.0. OpenShift Serverless now uses Kourier 0.24.0. OpenShift Serverless now uses Knative ( kn ) CLI 0.24.0. OpenShift Serverless now uses Knative Kafka 0.24.7. The kn func CLI plugin now uses func 0.18.0. In the upcoming OpenShift Serverless 1.19.0 release, the URL scheme of external routes will default to HTTPS for enhanced security. If you do not want this change to apply for your workloads, you can override the default setting before upgrading to 1.19.0, by adding the following YAML to your KnativeServing custom resource (CR): ... spec: config: network: defaultExternalScheme: "http" ... If you want the change to apply in 1.18.0 already, add the following YAML: ... spec: config: network: defaultExternalScheme: "https" ... In the upcoming OpenShift Serverless 1.19.0 release, the default service type by which the Kourier Gateway is exposed will be ClusterIP and not LoadBalancer . If you do not want this change to apply to your workloads, you can override the default setting before upgrading to 1.19.0, by adding the following YAML to your KnativeServing custom resource (CR): ... spec: ingress: kourier: service-type: LoadBalancer ... You can now use emptyDir volumes with OpenShift Serverless. See the OpenShift Serverless documentation about Knative Serving for details. Rust templates are now available when you create a function using kn func . 1.28.2. Fixed issues The prior 1.4 version of Camel-K was not compatible with OpenShift Serverless 1.17.0. The issue in Camel-K has been fixed, and Camel-K version 1.4.1 can be used with OpenShift Serverless 1.17.0. Previously, if you created a new subscription for a Kafka channel, or a new Kafka source, a delay was possible in the Kafka data plane becoming ready to dispatch messages after the newly created subscription or sink reported a ready status. As a result, messages that were sent during the time when the data plane was not reporting a ready status, might not have been delivered to the subscriber or sink. In OpenShift Serverless 1.18.0, the issue is fixed and the initial messages are no longer lost. For more information about the issue, see Knowledgebase Article #6343981 . 1.28.3. Known issues Older versions of the Knative kn CLI might use older versions of the Knative Serving and Knative Eventing APIs. For example, version 0.23.2 of the kn CLI uses the v1alpha1 API version. On the other hand, newer releases of OpenShift Serverless might no longer support older API versions. 
For example, OpenShift Serverless 1.18.0 no longer supports version v1alpha1 of the kafkasources.sources.knative.dev API. Consequently, using an older version of the Knative kn CLI with a newer OpenShift Serverless release might produce an error because kn cannot find the outdated API. For example, version 0.23.2 of the kn CLI does not work with OpenShift Serverless 1.18.0. To avoid issues, use the latest kn CLI version available for your OpenShift Serverless release. For OpenShift Serverless 1.18.0, use Knative kn CLI 0.24.0.
"apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: # spec: channel: stable config: env: - name: ROUTE_HAPROXY_TIMEOUT value: '900'",
"apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: spec: channel: stable config: env: - name: ROUTE_HAPROXY_TIMEOUT value: '900'",
"apiVersion: operator.knative.dev/v1beta1 kind: KnativeEventing spec: config: features: new-trigger-filters: enabled",
"apiVersion: operator.knative.dev/v1beta1 kind: KnativeServing",
"apiVersion: operator.knative.dev/v1beta1 kind: KnativeEventing",
"oc delete mutatingwebhookconfiguration kafkabindings.webhook.kafka.sources.knative.dev",
"apiVersion: operator.knative.dev/v1beta1 kind: KnativeServing metadata: name: knative-serving namespace: knative-serving spec: deployments: - name: activator resources: - container: activator requests: cpu: 300m memory: 60Mi limits: cpu: 1000m memory: 1000Mi",
"apiVersion: serving.knative.dev/v1alpha1 kind: DomainMapping metadata: name: <domain-name> namespace: knative-eventing spec: ref: name: broker-ingress kind: Service apiVersion: v1",
"kn event send --to-url https://ce-api.foo.example.com/",
"kn event send --to Service:serving.knative.dev/v1:event-display",
"[analyzer] no stack metadata found at path '' [analyzer] ERROR: failed to : set API for buildpack 'paketo-buildpacks/[email protected]': buildpack API version '0.7' is incompatible with the lifecycle",
"Error: failed to get credentials: failed to verify credentials: status code: 404",
"buildEnvs: []",
"buildEnvs: - name: BP_NODE_RUN_SCRIPTS value: build",
"ERROR: failed to image: error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.40/info\": EOF",
"ExecStart=/usr/bin/podman USDLOGGING system service --time=0",
"systemctl --user daemon-reload",
"systemctl restart --user podman.socket",
"podman system service --time=0 tcp:127.0.0.1:5534 & export DOCKER_HOST=tcp://127.0.0.1:5534",
"ERROR: failed to image: error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.40/info\": EOF",
"ExecStart=/usr/bin/podman USDLOGGING system service --time=0",
"systemctl --user daemon-reload",
"systemctl restart --user podman.socket",
"podman system service --time=0 tcp:127.0.0.1:5534 & export DOCKER_HOST=tcp://127.0.0.1:5534",
"spec: config: network: defaultExternalScheme: \"http\"",
"spec: config: network: defaultExternalScheme: \"https\"",
"spec: ingress: kourier: service-type: LoadBalancer"
] | https://docs.redhat.com/en/documentation/red_hat_openshift_serverless/1.35/html/about_openshift_serverless/serverless-release-notes |
Chapter 32. File Both producer and consumer are supported The File component provides access to file systems, allowing files to be processed by any other Camel components or messages from other components to be saved to disk. 32.1. Dependencies When using file with Red Hat build of Camel Spring Boot, make sure to use the following Maven dependency to have support for auto configuration: <dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-file-starter</artifactId> </dependency> 32.2. URI format file:directoryName[?options] Where directoryName represents the underlying file directory. Only directories Camel supports only endpoints configured with a starting directory. So the directoryName must be a directory. If you want to consume a single file only, you can use the fileName option, e.g. by setting fileName=thefilename . Also, the starting directory must not contain dynamic expressions with USD{ } placeholders. Again, use the fileName option to specify the dynamic part of the filename. Note Avoid reading files currently being written by another application Beware that the JDK File IO API is a bit limited in detecting whether another application is currently writing or copying a file. The implementation can also differ depending on the OS platform. This could lead to Camel thinking that the file is not locked by another process and starting to consume it. Therefore you have to do your own investigation into what suits your environment. To help with this, Camel provides different readLock options and a doneFileName option that you can use. See also the section Consuming files from folders where others drop files directly . 32.3. Configuring Options Camel components are configured on two levels: Component level Endpoint level
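The URI format and the fileName option described above can be illustrated with a minimal route sketch, configured purely at the endpoint level through URI options. The directory names, the fixed fileName, and the class name are illustrative assumptions, not values defined by this component:

import org.apache.camel.builder.RouteBuilder;

public class FileRouteExample extends RouteBuilder {
    @Override
    public void configure() {
        // consume only orders.csv from the inbox directory, move it to the .done
        // sub-folder after the route completes, and write a copy to the outbox directory
        from("file://inbox?fileName=orders.csv&move=.done")
            .log("Consumed ${header.CamelFileName}")
            .to("file://outbox");
    }
}

Every option in the tables that follow is appended to the endpoint URI in the same way as fileName and move in this sketch.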
By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false boolean lazyStartProducer (producer) Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false boolean autowiredEnabled (advanced) Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true boolean 32.5. Endpoint Options The File endpoint is configured using URI syntax: with the following path and query parameters: 32.5.1. Path Parameters (1 parameters) Name Description Default Type directoryName (common) Required The starting directory. File 32.5.2. Query Parameters (94 parameters) Name Description Default Type charset (common) This option is used to specify the encoding of the file. You can use this on the consumer, to specify the encodings of the files, which allow Camel to know the charset it should load the file content in case the file content is being accessed. Likewise when writing a file, you can use this option to specify which charset to write the file as well. Do mind that when writing the file Camel may have to read the message content into memory to be able to convert the data into the configured charset, so do not use this if you have big messages. String doneFileName (common) Producer: If provided, then Camel will write a 2nd done file when the original file has been written. The done file will be empty. This option configures what file name to use. Either you can specify a fixed name. Or you can use dynamic placeholders. The done file will always be written in the same folder as the original file. Consumer: If provided, Camel will only consume files if a done file exists. This option configures what file name to use. Either you can specify a fixed name. Or you can use dynamic placeholders.The done file is always expected in the same folder as the original file. Only USD\\{file.name} and USD\\{file.name.} is supported as dynamic placeholders. String fileName (common) Use Expression such as File Language to dynamically set the filename. For consumers, it's used as a filename filter. For producers, it's used to evaluate the filename to write. If an expression is set, it take precedence over the CamelFileName header. (Note: The header itself can also be an Expression). The expression options support both String and Expression types. If the expression is a String type, it is always evaluated using the File Language. If the expression is an Expression type, the specified Expression type is used - this allows you, for instance, to use OGNL expressions. For the consumer, you can use it to filter filenames, so you can for instance consume today's file using the File Language syntax: mydata-USD\\{date:now:yyyyMMdd}.txt. 
The producers support the CamelOverruleFileName header which takes precedence over any existing CamelFileName header; the CamelOverruleFileName is a header that is used only once, and makes it easier as this avoids to temporary store CamelFileName and have to restore it afterwards. String bridgeErrorHandler (consumer) Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false boolean delete (consumer) If true, the file will be deleted after it is processed successfully. false boolean moveFailed (consumer) Sets the move failure expression based on Simple language. For example, to move files into a .error subdirectory use: .error. Note: When moving the files to the fail location Camel will handle the error and will not pick up the file again. String noop (consumer) If true, the file is not moved or deleted in any way. This option is good for readonly data, or for ETL type requirements. If noop=true, Camel will set idempotent=true as well, to avoid consuming the same files over and over again. false boolean preMove (consumer) Expression (such as File Language) used to dynamically set the filename when moving it before processing. For example to move in-progress files into the order directory set this value to order. String preSort (consumer) When pre-sort is enabled then the consumer will sort the file and directory names during polling, that was retrieved from the file system. You may want to do this in case you need to operate on the files in a sorted order. The pre-sort is executed before the consumer starts to filter, and accept files to process by Camel. This option is default=false meaning disabled. false boolean recursive (consumer) If a directory, will look for files in all the sub-directories as well. false boolean sendEmptyMessageWhenIdle (consumer) If the polling consumer did not poll any files, you can enable this option to send an empty message (no body) instead. false boolean directoryMustExist (consumer (advanced)) Similar to the startingDirectoryMustExist option but this applies during polling (after starting the consumer). false boolean exceptionHandler (consumer (advanced)) To let the consumer use a custom ExceptionHandler. Notice if the option bridgeErrorHandler is enabled then this option is not in use. By default the consumer will deal with exceptions, that will be logged at WARN or ERROR level and ignored. ExceptionHandler exchangePattern (consumer (advanced)) Sets the exchange pattern when the consumer creates an exchange. Enum values: InOnly InOut InOptionalOut ExchangePattern extendedAttributes (consumer (advanced)) To define which file attributes of interest. Like posix:permissions,posix:owner,basic:lastAccessTime, it supports basic wildcard like posix:, basic:lastAccessTime. String inProgressRepository (consumer (advanced)) A pluggable in-progress repository org.apache.camel.spi.IdempotentRepository. The in-progress repository is used to account the current in progress files being consumed. By default a memory based repository is used. 
IdempotentRepository localWorkDirectory (consumer (advanced)) When consuming, a local work directory can be used to store the remote file content directly in local files, to avoid loading the content into memory. This is beneficial, if you consume a very big remote file and thus can conserve memory. String onCompletionExceptionHandler (consumer (advanced)) To use a custom org.apache.camel.spi.ExceptionHandler to handle any thrown exceptions that happens during the file on completion process where the consumer does either a commit or rollback. The default implementation will log any exception at WARN level and ignore. ExceptionHandler pollStrategy (consumer (advanced)) A pluggable org.apache.camel.PollingConsumerPollingStrategy allowing you to provide your custom implementation to control error handling usually occurred during the poll operation before an Exchange have been created and being routed in Camel. PollingConsumerPollStrategy probeContentType (consumer (advanced)) Whether to enable probing of the content type. If enable then the consumer uses Files#probeContentType(java.nio.file.Path) to determine the content-type of the file, and store that as a header with key Exchange#FILE_CONTENT_TYPE on the Message. false boolean processStrategy (consumer (advanced)) A pluggable org.apache.camel.component.file.GenericFileProcessStrategy allowing you to implement your own readLock option or similar. Can also be used when special conditions must be met before a file can be consumed, such as a special ready file exists. If this option is set then the readLock option does not apply. GenericFileProcessStrategy resumeStrategy (consumer (advanced)) Set a resume strategy for files. This makes it possible to define a strategy for resuming reading files after the last point before stopping the application. See the FileConsumerResumeStrategy for implementation details. FileConsumerResumeStrategy startingDirectoryMustExist (consumer (advanced)) Whether the starting directory must exist. Mind that the autoCreate option is default enabled, which means the starting directory is normally auto created if it doesn't exist. You can disable autoCreate and enable this to ensure the starting directory must exist. Will thrown an exception if the directory doesn't exist. false boolean startingDirectoryMustHaveAccess (consumer (advanced)) Whether the starting directory has access permissions. Mind that the startingDirectoryMustExist parameter must be set to true in order to verify that the directory exists. Will thrown an exception if the directory doesn't have read and write permissions. false boolean appendChars (producer) Used to append characters (text) after writing files. This can for example be used to add new lines or other separators when writing and appending new files or existing files. To specify new-line (slash-n or slash-r) or tab (slash-t) characters then escape with an extra slash, eg slash-slash-n. String fileExist (producer) What to do if a file already exists with the same name. Override, which is the default, replaces the existing file. - Append - adds content to the existing file. - Fail - throws a GenericFileOperationException, indicating that there is already an existing file. - Ignore - silently ignores the problem and does not override the existing file, but assumes everything is okay. - Move - option requires to use the moveExisting option to be configured as well. 
The option eagerDeleteTargetFile can be used to control what to do if an moving the file, and there exists already an existing file, otherwise causing the move operation to fail. The Move option will move any existing files, before writing the target file. - TryRename is only applicable if tempFileName option is in use. This allows to try renaming the file from the temporary name to the actual name, without doing any exists check. This check may be faster on some file systems and especially FTP servers. Enum values: Override Append Fail Ignore Move TryRename Override GenericFileExist flatten (producer) Flatten is used to flatten the file name path to strip any leading paths, so it's just the file name. This allows you to consume recursively into sub-directories, but when you eg write the files to another directory they will be written in a single directory. Setting this to true on the producer enforces that any file name in CamelFileName header will be stripped for any leading paths. false boolean jailStartingDirectory (producer) Used for jailing (restricting) writing files to the starting directory (and sub) only. This is enabled by default to not allow Camel to write files to outside directories (to be more secured out of the box). You can turn this off to allow writing files to directories outside the starting directory, such as parent or root folders. true boolean lazyStartProducer (producer) Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false boolean moveExisting (producer) Expression (such as File Language) used to compute file name to use when fileExist=Move is configured. To move files into a backup subdirectory just enter backup. This option only supports the following File Language tokens: file:name, file:name.ext, file:name.noext, file:onlyname, file:onlyname.noext, file:ext, and file:parent. Notice the file:parent is not supported by the FTP component, as the FTP component can only move any existing files to a relative directory based on current dir as base. String tempFileName (producer) The same as tempPrefix option but offering a more fine grained control on the naming of the temporary filename as it uses the File Language. The location for tempFilename is relative to the final file location in the option 'fileName', not the target directory in the base uri. For example if option fileName includes a directory prefix: dir/finalFilename then tempFileName is relative to that subdirectory dir. String tempPrefix (producer) This option is used to write the file using a temporary name and then, after the write is complete, rename it to the real name. Can be used to identify files being written and also avoid consumers (not using exclusive read locks) reading in progress files. Is often used by FTP when uploading big files. String allowNullBody (producer (advanced)) Used to specify if a null body is allowed during file writing. 
If set to true then an empty file will be created, when set to false, and attempting to send a null body to the file component, a GenericFileWriteException of 'Cannot write null body to file.' will be thrown. If the fileExist option is set to 'Override', then the file will be truncated, and if set to append the file will remain unchanged. false boolean chmod (producer (advanced)) Specify the file permissions which is sent by the producer, the chmod value must be between 000 and 777; If there is a leading digit like in 0755 we will ignore it. String chmodDirectory (producer (advanced)) Specify the directory permissions used when the producer creates missing directories, the chmod value must be between 000 and 777; If there is a leading digit like in 0755 we will ignore it. String eagerDeleteTargetFile (producer (advanced)) Whether or not to eagerly delete any existing target file. This option only applies when you use fileExists=Override and the tempFileName option as well. You can use this to disable (set it to false) deleting the target file before the temp file is written. For example you may write big files and want the target file to exists during the temp file is being written. This ensure the target file is only deleted until the very last moment, just before the temp file is being renamed to the target filename. This option is also used to control whether to delete any existing files when fileExist=Move is enabled, and an existing file exists. If this option copyAndDeleteOnRenameFails false, then an exception will be thrown if an existing file existed, if its true, then the existing file is deleted before the move operation. true boolean forceWrites (producer (advanced)) Whether to force syncing writes to the file system. You can turn this off if you do not want this level of guarantee, for example if writing to logs / audit logs etc; this would yield better performance. true boolean keepLastModified (producer (advanced)) Will keep the last modified timestamp from the source file (if any). Will use the Exchange.FILE_LAST_MODIFIED header to located the timestamp. This header can contain either a java.util.Date or long with the timestamp. If the timestamp exists and the option is enabled it will set this timestamp on the written file. Note: This option only applies to the file producer. You cannot use this option with any of the ftp producers. false boolean moveExistingFileStrategy (producer (advanced)) Strategy (Custom Strategy) used to move file with special naming token to use when fileExist=Move is configured. By default, there is an implementation used if no custom strategy is provided. FileMoveExistingStrategy autoCreate (advanced) Automatically create missing directories in the file's pathname. For the file consumer, that means creating the starting directory. For the file producer, it means the directory the files should be written to. true boolean bufferSize (advanced) Buffer size in bytes used for writing files (or in case of FTP for downloading and uploading files). 131072 int copyAndDeleteOnRenameFail (advanced) Whether to fallback and do a copy and delete file, in case the file could not be renamed directly. This option is not available for the FTP component. true boolean renameUsingCopy (advanced) Perform rename operations using a copy and delete strategy. This is primarily used in environments where the regular rename operation is unreliable (e.g. across different file systems or networks). 
This option takes precedence over the copyAndDeleteOnRenameFail parameter that will automatically fall back to the copy and delete strategy, but only after additional delays. false boolean synchronous (advanced) Sets whether synchronous processing should be strictly used. false boolean antExclude (filter) Ant style filter exclusion. If both antInclude and antExclude are used, antExclude takes precedence over antInclude. Multiple exclusions may be specified in comma-delimited format. String antFilterCaseSensitive (filter) Sets case sensitive flag on ant filter. true boolean antInclude (filter) Ant style filter inclusion. Multiple inclusions may be specified in comma-delimited format. String eagerMaxMessagesPerPoll (filter) Allows for controlling whether the limit from maxMessagesPerPoll is eager or not. If eager then the limit is during the scanning of files. Where as false would scan all files, and then perform sorting. Setting this option to false allows for sorting all files first, and then limit the poll. Mind that this requires a higher memory usage as all file details are in memory to perform the sorting. true boolean exclude (filter) Is used to exclude files, if filename matches the regex pattern (matching is case in-sensitive). Notice if you use symbols such as plus sign and others you would need to configure this using the RAW() syntax if configuring this as an endpoint uri. See more details at configuring endpoint uris. String excludeExt (filter) Is used to exclude files matching file extension name (case insensitive). For example to exclude bak files, then use excludeExt=bak. Multiple extensions can be separated by comma, for example to exclude bak and dat files, use excludeExt=bak,dat. Note that the file extension includes all parts, for example having a file named mydata.tar.gz will have extension as tar.gz. For more flexibility then use the include/exclude options. String filter (filter) Pluggable filter as a org.apache.camel.component.file.GenericFileFilter class. Will skip files if filter returns false in its accept() method. GenericFileFilter filterDirectory (filter) Filters the directory based on Simple language. For example to filter on current date, you can use a simple date pattern such as USD\\{date:now:yyyMMdd}. String filterFile (filter) Filters the file based on Simple language. For example to filter on file size, you can use USD\\{file:size} 5000. String idempotent (filter) Option to use the Idempotent Consumer EIP pattern to let Camel skip already processed files. Will by default use a memory based LRUCache that holds 1000 entries. If noop=true then idempotent will be enabled as well to avoid consuming the same files over and over again. false Boolean idempotentKey (filter) To use a custom idempotent key. By default the absolute path of the file is used. You can use the File Language, for example to use the file name and file size, you can do: idempotentKey=USD\\{file:name}-USD\\{file:size}. String idempotentRepository (filter) A pluggable repository org.apache.camel.spi.IdempotentRepository which by default use MemoryIdempotentRepository if none is specified and idempotent is true. IdempotentRepository include (filter) Is used to include files, if filename matches the regex pattern (matching is case in-sensitive). Notice if you use symbols such as plus sign and others you would need to configure this using the RAW() syntax if configuring this as an endpoint uri. See more details at configuring endpoint uris. 
String includeExt (filter) Is used to include files matching file extension name (case insensitive). For example to include txt files, then use includeExt=txt. Multiple extensions can be separated by comma, for example to include txt and xml files, use includeExt=txt,xml. Note that the file extension includes all parts, for example having a file named mydata.tar.gz will have extension as tar.gz. For more flexibility then use the include/exclude options. String maxDepth (filter) The maximum depth to traverse when recursively processing a directory. 2147483647 int maxMessagesPerPoll (filter) To define a maximum messages to gather per poll. By default no maximum is set. Can be used to set a limit of e.g. 1000 to avoid when starting up the server that there are thousands of files. Set a value of 0 or negative to disabled it. Notice: If this option is in use then the File and FTP components will limit before any sorting. For example if you have 100000 files and use maxMessagesPerPoll=500, then only the first 500 files will be picked up, and then sorted. You can use the eagerMaxMessagesPerPoll option and set this to false to allow to scan all files first and then sort afterwards. int minDepth (filter) The minimum depth to start processing when recursively processing a directory. Using minDepth=1 means the base directory. Using minDepth=2 means the first sub directory. int move (filter) Expression (such as Simple Language) used to dynamically set the filename when moving it after processing. To move files into a .done subdirectory just enter .done. String exclusiveReadLockStrategy (lock) Pluggable read-lock as a org.apache.camel.component.file.GenericFileExclusiveReadLockStrategy implementation. GenericFileExclusiveReadLockStrategy readLock (lock) Used by consumer, to only poll the files if it has exclusive read-lock on the file (i.e. the file is not in-progress or being written). Camel will wait until the file lock is granted. This option provides the build in strategies: - none - No read lock is in use - markerFile - Camel creates a marker file (fileName.camelLock) and then holds a lock on it. This option is not available for the FTP component - changed - Changed is using file length/modification timestamp to detect whether the file is currently being copied or not. Will at least use 1 sec to determine this, so this option cannot consume files as fast as the others, but can be more reliable as the JDK IO API cannot always determine whether a file is currently being used by another process. The option readLockCheckInterval can be used to set the check frequency. - fileLock - is for using java.nio.channels.FileLock. This option is not avail for Windows OS and the FTP component. This approach should be avoided when accessing a remote file system via a mount/share unless that file system supports distributed file locks. - rename - rename is for using a try to rename the file as a test if we can get exclusive read-lock. - idempotent - (only for file component) idempotent is for using a idempotentRepository as the read-lock. This allows to use read locks that supports clustering if the idempotent repository implementation supports that. - idempotent-changed - (only for file component) idempotent-changed is for using a idempotentRepository and changed as the combined read-lock. This allows to use read locks that supports clustering if the idempotent repository implementation supports that. 
- idempotent-rename - (only for file component) idempotent-rename is for using a idempotentRepository and rename as the combined read-lock. This allows to use read locks that supports clustering if the idempotent repository implementation supports that.Notice: The various read locks is not all suited to work in clustered mode, where concurrent consumers on different nodes is competing for the same files on a shared file system. The markerFile using a close to atomic operation to create the empty marker file, but its not guaranteed to work in a cluster. The fileLock may work better but then the file system need to support distributed file locks, and so on. Using the idempotent read lock can support clustering if the idempotent repository supports clustering, such as Hazelcast Component or Infinispan. Enum values: none markerFile fileLock rename changed idempotent idempotent-changed idempotent-rename none String readLockCheckInterval (lock) Interval in millis for the read-lock, if supported by the read lock. This interval is used for sleeping between attempts to acquire the read lock. For example when using the changed read lock, you can set a higher interval period to cater for slow writes. The default of 1 sec. may be too fast if the producer is very slow writing the file. Notice: For FTP the default readLockCheckInterval is 5000. The readLockTimeout value must be higher than readLockCheckInterval, but a rule of thumb is to have a timeout that is at least 2 or more times higher than the readLockCheckInterval. This is needed to ensure that amble time is allowed for the read lock process to try to grab the lock before the timeout was hit. 1000 long readLockDeleteOrphanLockFiles (lock) Whether or not read lock with marker files should upon startup delete any orphan read lock files, which may have been left on the file system, if Camel was not properly shutdown (such as a JVM crash). If turning this option to false then any orphaned lock file will cause Camel to not attempt to pickup that file, this could also be due another node is concurrently reading files from the same shared directory. true boolean readLockIdempotentReleaseAsync (lock) Whether the delayed release task should be synchronous or asynchronous. See more details at the readLockIdempotentReleaseDelay option. false boolean readLockIdempotentReleaseAsyncPoolSize (lock) The number of threads in the scheduled thread pool when using asynchronous release tasks. Using a default of 1 core threads should be sufficient in almost all use-cases, only set this to a higher value if either updating the idempotent repository is slow, or there are a lot of files to process. This option is not in-use if you use a shared thread pool by configuring the readLockIdempotentReleaseExecutorService option. See more details at the readLockIdempotentReleaseDelay option. int readLockIdempotentReleaseDelay (lock) Whether to delay the release task for a period of millis. This can be used to delay the release tasks to expand the window when a file is regarded as read-locked, in an active/active cluster scenario with a shared idempotent repository, to ensure other nodes cannot potentially scan and acquire the same file, due to race-conditions. By expanding the time-window of the release tasks helps prevents these situations. Note delaying is only needed if you have configured readLockRemoveOnCommit to true. int readLockIdempotentReleaseExecutorService (lock) To use a custom and shared thread pool for asynchronous release tasks. 
See more details at the readLockIdempotentReleaseDelay option. ScheduledExecutorService readLockLoggingLevel (lock) Logging level used when a read lock could not be acquired. By default a DEBUG is logged. You can change this level, for example to OFF to not have any logging. This option is only applicable for readLock of types: changed, fileLock, idempotent, idempotent-changed, idempotent-rename, rename. Enum values: TRACE DEBUG INFO WARN ERROR OFF DEBUG LoggingLevel readLockMarkerFile (lock) Whether to use marker file with the changed, rename, or exclusive read lock types. By default a marker file is used as well to guard against other processes picking up the same files. This behavior can be turned off by setting this option to false. For example if you do not want to write marker files to the file systems by the Camel application. true boolean readLockMinAge (lock) This option is applied only for readLock=changed. It allows to specify a minimum age the file must be before attempting to acquire the read lock. For example use readLockMinAge=300s to require the file is at last 5 minutes old. This can speedup the changed read lock as it will only attempt to acquire files which are at least that given age. 0 long readLockMinLength (lock) This option is applied only for readLock=changed. It allows you to configure a minimum file length. By default Camel expects the file to contain data, and thus the default value is 1. You can set this option to zero, to allow consuming zero-length files. 1 long readLockRemoveOnCommit (lock) This option is applied only for readLock=idempotent. It allows to specify whether to remove the file name entry from the idempotent repository when processing the file is succeeded and a commit happens. By default the file is not removed which ensures that any race-condition do not occur so another active node may attempt to grab the file. Instead the idempotent repository may support eviction strategies that you can configure to evict the file name entry after X minutes - this ensures no problems with race conditions. See more details at the readLockIdempotentReleaseDelay option. false boolean readLockRemoveOnRollback (lock) This option is applied only for readLock=idempotent. It allows to specify whether to remove the file name entry from the idempotent repository when processing the file failed and a rollback happens. If this option is false, then the file name entry is confirmed (as if the file did a commit). true boolean readLockTimeout (lock) Optional timeout in millis for the read-lock, if supported by the read-lock. If the read-lock could not be granted and the timeout triggered, then Camel will skip the file. At poll Camel, will try the file again, and this time maybe the read-lock could be granted. Use a value of 0 or lower to indicate forever. Currently fileLock, changed and rename support the timeout. Notice: For FTP the default readLockTimeout value is 20000 instead of 10000. The readLockTimeout value must be higher than readLockCheckInterval, but a rule of thumb is to have a timeout that is at least 2 or more times higher than the readLockCheckInterval. This is needed to ensure that amble time is allowed for the read lock process to try to grab the lock before the timeout was hit. 10000 long backoffErrorThreshold (scheduler) The number of subsequent error polls (failed due some error) that should happen before the backoffMultipler should kick-in. 
int backoffIdleThreshold (scheduler) The number of subsequent idle polls that should happen before the backoffMultipler should kick-in. int backoffMultiplier (scheduler) To let the scheduled polling consumer backoff if there has been a number of subsequent idles/errors in a row. The multiplier is then the number of polls that will be skipped before the actual attempt is happening again. When this option is in use then backoffIdleThreshold and/or backoffErrorThreshold must also be configured. int delay (scheduler) Milliseconds before the poll. 500 long greedy (scheduler) If greedy is enabled, then the ScheduledPollConsumer will run immediately again, if the run polled 1 or more messages. false boolean initialDelay (scheduler) Milliseconds before the first poll starts. 1000 long repeatCount (scheduler) Specifies a maximum limit of number of fires. So if you set it to 1, the scheduler will only fire once. If you set it to 5, it will only fire five times. A value of zero or negative means fire forever. 0 long runLoggingLevel (scheduler) The consumer logs a start/complete log line when it polls. This option allows you to configure the logging level for that. Enum values: TRACE DEBUG INFO WARN ERROR OFF TRACE LoggingLevel scheduledExecutorService (scheduler) Allows for configuring a custom/shared thread pool to use for the consumer. By default each consumer has its own single threaded thread pool. ScheduledExecutorService scheduler (scheduler) To use a cron scheduler from either camel-spring or camel-quartz component. Use value spring or quartz for built in scheduler. none Object schedulerProperties (scheduler) To configure additional properties when using a custom scheduler or any of the Quartz, Spring based scheduler. Map startScheduler (scheduler) Whether the scheduler should be auto started. true boolean timeUnit (scheduler) Time unit for initialDelay and delay options. Enum values: NANOSECONDS MICROSECONDS MILLISECONDS SECONDS MINUTES HOURS DAYS MILLISECONDS TimeUnit useFixedDelay (scheduler) Controls if fixed delay or fixed rate is used. See ScheduledExecutorService in JDK for details. true boolean shuffle (sort) To shuffle the list of files (sort in random order). false boolean sortBy (sort) Built-in sort by using the File Language. Supports nested sorts, so you can have a sort by file name and as a 2nd group sort by modified date. String sorter (sort) Pluggable sorter as a java.util.Comparator class. Comparator Note Default behavior for file producer By default it will override any existing file, if one exist with the same name. 32.6. Move and Delete operations Any move or delete operations is executed after (post command) the routing has completed; so during processing of the Exchange the file is still located in the inbox folder. Lets illustrate this with an example: from("file://inbox?move=.done").to("bean:handleOrder"); When a file is dropped in the inbox folder, the file consumer notices this and creates a new FileExchange that is routed to the handleOrder bean. The bean then processes the File object. At this point in time the file is still located in the inbox folder. After the bean completes, and thus the route is completed, the file consumer will perform the move operation and move the file to the .done sub-folder. The move and the preMove options are considered as a directory name (though if you use an expression such as File Language, or Simple then the result of the expression evaluation is the file name to be used. 
For example, if you set the move option to a File Language expression, then that expression is evaluated to produce the file name to be used), which can be either relative or absolute. If relative, the directory is created as a sub-folder from within the folder where the file was consumed. By default, Camel will move consumed files to the .camel sub-folder relative to the directory where the file was consumed. If you want to delete the file after processing, the route should be: from("file://inbox?delete=true").to("bean:handleOrder"); We have introduced a pre move operation to move files before they are processed. This allows you to mark which files have been scanned as they are moved to this sub folder before being processed. from("file://inbox?preMove=inprogress").to("bean:handleOrder"); You can combine the pre move and the regular move: from("file://inbox?preMove=inprogress&move=.done").to("bean:handleOrder"); So in this situation, the file is in the inprogress folder when being processed and after it's processed, it's moved to the .done folder. 32.7. Fine grained control over Move and PreMove option The move and preMove options are Expression-based, so we have the full power of the File Language to do advanced configuration of the directory and name pattern. Camel will, in fact, internally convert the directory name you enter into a File Language expression. So when we enter move=.done Camel will convert this into: USD{file:parent}/.done/USD{file:onlyname} . This is only done if Camel detects that you have not provided a USD{ } in the option value yourself. So when you enter a USD{ } Camel will not convert it and thus you have the full power. So if we want to move the file into a backup folder with today's date as the pattern, we can do: move=backup/USD{date:now:yyyyMMdd}/USD{file:name}. 32.8. About moveFailed The moveFailed option allows you to move files that could not be processed successfully to another location such as an error folder of your choice. For example, to move the files into an error folder with a timestamp you can use moveFailed=/error/USD{file:name.noext}-USD{date:now:yyyyMMddHHmmssSSS}.USD{file:ext}. See more examples in the File Language documentation. 32.9. Message Headers The following headers are supported by this component: 32.9.1. File producer only Header Description CamelFileName Specifies the name of the file to write (relative to the endpoint directory). This name can be a String ; a String with a File Language or Simple language expression; or an Expression object. If it's null then Camel will auto-generate a filename based on the message unique ID. CamelFileNameProduced The actual absolute filepath (path + name) for the output file that was written. This header is set by Camel and its purpose is providing end-users with the name of the file that was written. CamelOverruleFileName Is used for overruling the CamelFileName header and using the value instead (but only once, as the producer will remove this header after writing the file). The value can only be a String. Notice that if the option fileName has been configured, then this is still being evaluated. 32.9.2. File consumer only Header Description CamelFileName Name of the consumed file as a relative file path with offset from the starting directory configured on the endpoint. CamelFileNameOnly Only the file name (the name with no leading paths). CamelFileAbsolute A boolean option specifying whether the consumed file denotes an absolute path or not. Should normally be false for relative paths. Absolute paths should normally not be used, but support was added to the move option to allow moving files to absolute paths.
But can be used elsewhere as well. CamelFileAbsolutePath The absolute path to the file. For relative files this path holds the relative path instead. CamelFilePath The file path. For relative files this is the starting directory + the relative filename. For absolute files this is the absolute path. CamelFileRelativePath The relative path. CamelFileParent The parent path. CamelFileLength A long value containing the file size. CamelFileLastModified A Long value containing the last modified timestamp of the file. 32.10. Batch Consumer This component implements the Batch Consumer. 32.11. Exchange Properties, file consumer only As the file consumer implements the BatchConsumer it supports batching the files it polls. By batching we mean that Camel will add the following additional properties to the Exchange, so you know the number of files polled, the current index, and whether the batch is already completed. Property Description CamelBatchSize The total number of files that was polled in this batch. CamelBatchIndex The current index of the batch. Starts from 0. CamelBatchComplete A boolean value indicating the last Exchange in the batch. Is only true for the last entry. This allows you for instance to know how many files exist in this batch and for instance let the Aggregator2 aggregate this number of files. 32.12. Using charset The charset option allows for configuring an encoding of the files on both the consumer and producer endpoints. For example if you read utf-8 files, and want to convert the files to iso-8859-1, you can do: from("file:inbox?charset=utf-8") .to("file:outbox?charset=iso-8859-1") You can also use the convertBodyTo in the route. In the example below we have still input files in utf-8 format, but we want to convert the file content to a byte array in iso-8859-1 format. And then let a bean process the data. Before writing the content to the outbox folder using the current charset. from("file:inbox?charset=utf-8") .convertBodyTo(byte[].class, "iso-8859-1") .to("bean:myBean") .to("file:outbox"); If you omit the charset on the consumer endpoint, then Camel does not know the charset of the file, and would by default use "UTF-8". However you can configure a JVM system property to override and use a different default encoding with the key org.apache.camel.default.charset . In the example below this could be a problem if the files is not in UTF-8 encoding, which would be the default encoding for read the files. In this example when writing the files, the content has already been converted to a byte array, and thus would write the content directly as is (without any further encodings). from("file:inbox") .convertBodyTo(byte[].class, "iso-8859-1") .to("bean:myBean") .to("file:outbox"); You can also override and control the encoding dynamic when writing files, by setting a property on the exchange with the key Exchange.CHARSET_NAME . For example in the route below we set the property with a value from a message header. from("file:inbox") .convertBodyTo(byte[].class, "iso-8859-1") .to("bean:myBean") .setProperty(Exchange.CHARSET_NAME, header("someCharsetHeader")) .to("file:outbox"); We suggest to keep things simpler, so if you pickup files with the same encoding, and want to write the files in a specific encoding, then favor to use the charset option on the endpoints. Notice that if you have explicit configured a charset option on the endpoint, then that configuration is used, regardless of the Exchange.CHARSET_NAME property. 
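Stepping away from charset handling for a moment, the consumer headers and the batch properties described in the Message Headers, Batch Consumer, and Exchange Properties sections above can be read directly in a route, for example with the Simple language. The following is a minimal sketch; the directory names and the maxMessagesPerPoll value are illustrative assumptions:

import org.apache.camel.builder.RouteBuilder;

public class BatchAwareFileRoute extends RouteBuilder {
    @Override
    public void configure() {
        from("file://inbox?noop=true&maxMessagesPerPoll=100")
            // CamelFileName and CamelFileLength are set by the file consumer;
            // CamelBatchIndex (starting from 0), CamelBatchSize and CamelBatchComplete
            // are the batch exchange properties described above
            .log("File ${header.CamelFileName} (${header.CamelFileLength} bytes) is "
                + "${exchangeProperty.CamelBatchIndex} of ${exchangeProperty.CamelBatchSize} "
                + "in this poll, last in batch: ${exchangeProperty.CamelBatchComplete}")
            .to("file://outbox");
    }
}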
If you have some issues then you can enable DEBUG logging on org.apache.camel.component.file , and Camel logs when it reads/write a file using a specific charset. For example the route below will log the following: from("file:inbox?charset=utf-8") .to("file:outbox?charset=iso-8859-1") And the logs: 32.13. Common gotchas with folder and filenames When Camel is producing files (writing files) there are a few gotchas affecting how to set a filename of your choice. By default, Camel will use the message ID as the filename, and since the message ID is normally a unique generated ID, you will end up with filenames such as: ID-MACHINENAME-2443-1211718892437-1-0 . If such a filename is not desired, then you must provide a filename in the CamelFileName message header. The constant, Exchange.FILE_NAME , can also be used. The sample code below produces files using the message ID as the filename: from("direct:report").to("file:target/reports"); To use report.txt as the filename you have to do: from("direct:report").setHeader(Exchange.FILE_NAME, constant("report.txt")).to( "file:target/reports"); the same as above, but with CamelFileName : from("direct:report").setHeader("CamelFileName", constant("report.txt")).to( "file:target/reports"); And a syntax where we set the filename on the endpoint with the fileName URI option. from("direct:report").to("file:target/reports/?fileName=report.txt"); 32.14. Filename Expression Filename can be set either using the expression option or as a string-based File language expression in the CamelFileName header. See the File language for syntax and samples. 32.15. Consuming files from folders where others drop files directly Beware if you consume files from a folder where other applications write files to directly. Take a look at the different readLock options to see what suits your use cases. The best approach is however to write to another folder and after the write move the file in the drop folder. However if you write files directly to the drop folder then the option changed could better detect whether a file is currently being written/copied as it uses a file changed algorithm to see whether the file size / modification changes over a period of time. The other readLock options rely on Java File API that sadly is not always very good at detecting this. You may also want to look at the doneFileName option, which uses a marker file (done file) to signal when a file is done and ready to be consumed. 32.16. Using done files See also section writing done files below. If you want only to consume files when a done file exists, then you can use the doneFileName option on the endpoint. from("file:bar?doneFileName=done"); Will only consume files from the bar folder, if a done file exists in the same directory as the target files. Camel will automatically delete the done file when it's done consuming the files. Camel does not delete automatically the done file if noop=true is configured. However it is more common to have one done file per target file. This means there is a 1:1 correlation. To do this you must use dynamic placeholders in the doneFileName option. Currently Camel supports the following two dynamic tokens: file:name and file:name.noext which must be enclosed in USD\{ }. The consumer only supports the static part of the done file name as either prefix or suffix (not both). from("file:bar?doneFileName=USD{file:name}.done"); In this example only files will be polled if there exists a done file with the name file name .done. 
For example hello.txt - is the file to be consumed hello.txt.done - is the associated done file You can also use a prefix for the done file, such as: from("file:bar?doneFileName=ready-USD{file:name}"); hello.txt - is the file to be consumed ready-hello.txt - is the associated done file 32.17. Writing done files After you have written a file you may want to write an additional done file as a kind of marker, to indicate to others that the file is finished and has been written. To do that you can use the doneFileName option on the file producer endpoint. .to("file:bar?doneFileName=done"); Will simply create a file named done in the same directory as the target file. However it is more common to have one done file per target file. This means there is a 1:1 correlation. To do this you must use dynamic placeholders in the doneFileName option. Currently Camel supports the following two dynamic tokens: file:name and file:name.noext which must be enclosed in USD\{ }. .to("file:bar?doneFileName=done-USD{file:name}"); Will for example create a file named done-foo.txt if the target file was foo.txt in the same directory as the target file. .to("file:bar?doneFileName=USD{file:name}.done"); Will for example create a file named foo.txt.done if the target file was foo.txt in the same directory as the target file. .to("file:bar?doneFileName=USD{file:name.noext}.done"); Will for example create a file named foo.done if the target file was foo.txt in the same directory as the target file. 32.18. Samples 32.18.1. Read from a directory and write to another directory from("file://inputdir/?delete=true").to("file://outputdir") 32.18.2. Read from a directory and write to another directory using a overrule dynamic name from("file://inputdir/?delete=true").to("file://outputdir?overruleFile=copy-of-USD{file:name}") Listen on a directory and create a message for each file dropped there. Copy the contents to the outputdir and delete the file in the inputdir . 32.18.3. Reading recursively from a directory and writing to another from("file://inputdir/?recursive=true&delete=true").to("file://outputdir") Listen on a directory and create a message for each file dropped there. Copy the contents to the outputdir and delete the file in the inputdir . Will scan recursively into sub-directories. Will lay out the files in the same directory structure in the outputdir as the inputdir , including any sub-directories. inputdir/foo.txt inputdir/sub/bar.txt Will result in the following output layout: 32.19. Using flatten If you want to store the files in the outputdir directory in the same directory, disregarding the source directory layout (e.g. to flatten out the path), you just add the flatten=true option on the file producer side: from("file://inputdir/?recursive=true&delete=true").to("file://outputdir?flatten=true") Will result in the following output layout: 32.20. Reading from a directory and the default move operation Camel will by default move any processed file into a .camel subdirectory in the directory the file was consumed from. from("file://inputdir/?recursive=true&delete=true").to("file://outputdir") Affects the layout as follows: before after 32.21. Read from a directory and process the message in java from("file://inputdir/").process(new Processor() { public void process(Exchange exchange) throws Exception { Object body = exchange.getIn().getBody(); // do some business logic with the input body } }); The body will be a File object that points to the file that was just dropped into the inputdir directory. 32.22. 
Writing to files Camel is of course also able to write files, that is, produce files. In the sample below we receive some reports on the SEDA queue that we process before they are written to a directory. 32.22.1. Write to subdirectory using Exchange.FILE_NAME Using a single route, it is possible to write a file to any number of subdirectories. If you have a route set up as follows: <route> <from uri="bean:myBean"/> <to uri="file:/rootDirectory"/> </route> You can have myBean set the header Exchange.FILE_NAME to values such as: This allows you to have a single route to write files to multiple destinations. 32.22.2. Writing file through the temporary directory relative to the final destination Sometimes you need to temporarily write the files to some directory relative to the destination directory. Such a situation usually happens when some external process with limited filtering capabilities is reading from the directory you are writing to. In the example below files will be written to the /var/myapp/filesInProgress directory and after the data transfer is done, they will be atomically moved to the /var/myapp/finalDirectory directory. from("direct:start"). to("file:///var/myapp/finalDirectory?tempPrefix=/../filesInProgress/"); 32.23. Using expression for filenames In this sample we want to move consumed files to a backup folder using today's date as a sub-folder name: from("file://inbox?move=backup/USD{date:now:yyyyMMdd}/USD{file:name}").to("..."); See File language for more samples. 32.24. Avoiding reading the same file more than once (idempotent consumer) Camel supports Idempotent Consumer directly within the component, so it will skip already processed files. This feature can be enabled by setting the idempotent=true option. from("file://inbox?idempotent=true").to("..."); Camel uses the absolute file name as the idempotent key to detect duplicate files. You can customize this key by using an expression in the idempotentKey option. For example, to use both the name and the file size as the key: <route> <from uri="file://inbox?idempotent=true&idempotentKey=USD{file:name}-USD{file:size}"/> <to uri="bean:processInbox"/> </route> By default Camel uses an in-memory based store for keeping track of consumed files; it uses a least recently used cache holding up to 1000 entries. You can plug in your own implementation of this store by using the idempotentRepository option, using the # sign in the value to indicate that it refers to a bean in the Registry with the specified id . <!-- define our store as a plain spring bean --> <bean id="myStore" class="com.mycompany.MyIdempotentStore"/> <route> <from uri="file://inbox?idempotent=true&idempotentRepository=#myStore"/> <to uri="bean:processInbox"/> </route> Camel will log at DEBUG level if it skips a file because it has been consumed before: DEBUG FileConsumer is idempotent and the file has been consumed before. Will skip this file: target\idempotent\report.txt 32.25. Using a file based idempotent repository In this section we will use the file based idempotent repository org.apache.camel.processor.idempotent.FileIdempotentRepository instead of the in-memory based one that is used by default. This repository uses a 1st level cache to avoid reading the file repository. It will only use the file repository to store the content of the 1st level cache. Thereby the repository can survive server restarts. It will load the content of the file into the 1st level cache upon startup. The file structure is very simple as it stores the key in separate lines in the file.
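As a minimal sketch of the Spring XML configuration this section refers to, such a repository can be defined as a plain bean and referenced from the consumer. The store path and cache size below are illustrative values, the bean id fileStore is an assumption, and depending on the Camel version the class may instead live in the org.apache.camel.support.processor.idempotent package; verify the property names against the release you use:
<!-- sketch: a file-based idempotent repository persisted to disk so it survives restarts -->
<bean id="fileStore" class="org.apache.camel.processor.idempotent.FileIdempotentRepository">
    <property name="fileStore" value="target/fileidempotent/.filestore.dat"/>
    <property name="cacheSize" value="250"/>
</bean>
<route>
    <from uri="file://inbox?idempotent=true&idempotentRepository=#fileStore"/>
    <to uri="bean:processInbox"/>
</route>
The route wiring is the same as for any custom repository: the #fileStore reference points at the bean id defined above.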
By default, the file store has a size limit of 1 MB. When the file grows larger, Camel will truncate the file store, rebuilding the content by flushing the 1st level cache into a fresh empty file. We configure our repository using Spring XML as sketched above, creating our file idempotent repository and defining our file consumer to use it with the idempotentRepository option, using the # sign to indicate a Registry lookup. 32.26. Using a JPA based idempotent repository In this section we will use the JPA based idempotent repository instead of the in-memory based one that is used by default. First we need a persistence-unit in META-INF/persistence.xml where we need to use the class org.apache.camel.processor.idempotent.jpa.MessageProcessed as the model. <persistence-unit name="idempotentDb" transaction-type="RESOURCE_LOCAL"> <class>org.apache.camel.processor.idempotent.jpa.MessageProcessed</class> <properties> <property name="openjpa.ConnectionURL" value="jdbc:derby:target/idempotentTest;create=true"/> <property name="openjpa.ConnectionDriverName" value="org.apache.derby.jdbc.EmbeddedDriver"/> <property name="openjpa.jdbc.SynchronizeMappings" value="buildSchema"/> <property name="openjpa.Log" value="DefaultLevel=WARN, Tool=INFO"/> <property name="openjpa.Multithreaded" value="true"/> </properties> </persistence-unit> Next, we can create our JPA idempotent repository in the Spring XML file as well: <!-- we define our jpa based idempotent repository we want to use in the file consumer --> <bean id="jpaStore" class="org.apache.camel.processor.idempotent.jpa.JpaMessageIdRepository"> <!-- Here we refer to the entityManagerFactory --> <constructor-arg index="0" ref="entityManagerFactory"/> <!-- This 2nd parameter is the name (= a category name). You can have different repositories with different names --> <constructor-arg index="1" value="FileConsumer"/> </bean> Then we just need to refer to the jpaStore bean in the file consumer endpoint using the idempotentRepository option with the # syntax: <route> <from uri="file://inbox?idempotent=true&idempotentRepository=#jpaStore"/> <to uri="bean:processInbox"/> </route> 32.27. Filter using org.apache.camel.component.file.GenericFileFilter Camel supports pluggable filtering strategies. You can configure the endpoint with such a filter to skip certain files from being processed. In the sample we have built our own filter that skips files starting with skip in the filename. We can then configure our route using the filter attribute to reference our filter (using # notation) that we have defined in the Spring XML file: <!-- define our filter as a plain spring bean --> <bean id="myFilter" class="com.mycompany.MyFileFilter"/> <route> <from uri="file://inbox?filter=#myFilter"/> <to uri="bean:processInbox"/> </route> 32.28. Filtering using ANT path matcher The ANT path matcher is based on AntPathMatcher . The file paths are matched with the following rules: ? matches one character * matches zero or more characters ** matches zero or more directories in a path The antInclude and antExclude options make it easy to specify ANT style include/exclude without having to define the filter. See the URI options above for more information. The sample below demonstrates how to use it: 32.28.1. Sorting using Comparator Camel supports pluggable sorting strategies. This strategy is to use the built-in java.util.Comparator in Java. You can then configure the endpoint with such a comparator and have Camel sort the files before they are processed.
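Such a comparator is just a regular Java class. The following is a minimal sketch that sorts the polled files by file name; the package and class name com.mycompany.MyFileSorter are assumptions chosen to match the bean definition shown below:
package com.mycompany;

import java.util.Comparator;
import org.apache.camel.component.file.GenericFile;

// sketch: order polled files by their file name
public class MyFileSorter implements Comparator<GenericFile<?>> {

    @Override
    public int compare(GenericFile<?> o1, GenericFile<?> o2) {
        return o1.getFileName().compareTo(o2.getFileName());
    }
}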
In the sample above we have built our own comparator that just sorts by file name. We can then configure our route using the sorter option to reference our sorter ( mySorter ) that we have defined in the Spring XML file: <!-- define our sorter as a plain spring bean --> <bean id="mySorter" class="com.mycompany.MyFileSorter"/> <route> <from uri="file://inbox?sorter=#mySorter"/> <to uri="bean:processInbox"/> </route> Note URI options can reference beans using the # syntax. In the Spring DSL route above, notice that we can refer to beans in the Registry by prefixing the id with #. So writing sorter=#mySorter will instruct Camel to look in the Registry for a bean with the ID mySorter . 32.28.2. Sorting using sortBy Camel supports pluggable sorting strategies. This strategy is to use the File language to configure the sorting. The sortBy option is configured as follows: sortBy=group 1;group 2;group 3;... where each group is separated with a semicolon. In simple situations you just use one group, so a simple example could be: This will sort by file name; you can reverse the order by prefixing reverse: to the group, so the sorting is now Z..A: As we have the full power of the File language we can use some of the other parameters, so if we want to sort by file size we do: You can configure it to ignore the case, using ignoreCase: for string comparison, so if you want to use file name sorting but ignore the case, then we do: You can combine ignore case and reverse, however reverse must be specified first: In the sample below we want to sort by last modified file, so we do: And then we want to group by name as a 2nd option so files with the same modification are sorted by name: Now there is an issue here, can you spot it? Well, the modified timestamp of the file is too fine-grained as it will be in milliseconds, but what if we want to sort by date only and then subgroup by name? Well, as we have the true power of the File language we can use its date command that supports patterns. So this can be solved as: Yeah, that is pretty powerful, and by the way you can also use reverse per group, so we could reverse the file names: 32.29. Using GenericFileProcessStrategy The option processStrategy can be used to plug in a custom GenericFileProcessStrategy that allows you to implement your own begin , commit and rollback logic. For instance, let's assume a system writes a file in a folder you should consume. But you should not start consuming the file before another ready file has been written as well. So by implementing our own GenericFileProcessStrategy we can implement this as follows: In the begin() method we can test whether the special ready file exists. The begin method returns a boolean to indicate if we can consume the file or not. In the abort() method special logic can be executed in case the begin operation returned false , for example to clean up resources etc. In the commit() method we can move the actual file and also delete the ready file. 32.30. Using filter The filter option allows you to implement a custom filter in Java code by implementing the org.apache.camel.component.file.GenericFileFilter interface. This interface has an accept method that returns a boolean. Return true to include the file, and false to skip the file. There is an isDirectory method on GenericFile to test whether the file is a directory. This allows you to filter unwanted directories, to avoid traversing down into them. For example, skipping any directory whose name starts with "skip" can be implemented as follows: 32.31.
Using bridgeErrorHandler If you want to use the Camel Error Handler to deal with any exception occurring in the file consumer, then you can enable the bridgeErrorHandler option as shown below: // to handle any IOException being thrown onException(IOException.class) .handled(true) .log("IOException occurred due: USD{exception.message}") .transform().simple("Error USD{exception.message}") .to("mock:error"); // this is the file route that pickup files, notice how we bridge the consumer to use the Camel routing error handler // the exclusiveReadLockStrategy is only configured because this is from an unit test, so we use that to simulate exceptions from("file:target/nospace?bridgeErrorHandler=true") .convertBodyTo(String.class) .to("mock:result"); So all you have to do is to enable this option, and the error handler in the route will take it from there. Important When using bridgeErrorHandler When using bridgeErrorHandler, then interceptors, OnCompletions does not apply. The Exchange is processed directly by the Camel Error Handler, and does not allow prior actions such as interceptors, onCompletion to take action. 32.32. Debug logging This component has log level TRACE that can be helpful if you have problems. 32.33. Spring Boot Auto-Configuration The component supports 11 options, which are listed below. Name Description Default Type camel.cluster.file.acquire-lock-delay The time to wait before starting to try to acquire lock. String camel.cluster.file.acquire-lock-interval The time to wait between attempts to try to acquire lock. String camel.cluster.file.attributes Custom service attributes. Map camel.cluster.file.enabled Sets if the file cluster service should be enabled or not, default is false. false Boolean camel.cluster.file.id Cluster Service ID. String camel.cluster.file.order Service lookup order/priority. Integer camel.cluster.file.root The root path. String camel.component.file.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.file.bridge-error-handler Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false Boolean camel.component.file.enabled Whether to enable auto configuration of the file component. This is enabled by default. Boolean camel.component.file.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean | [
"<dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-file-starter</artifactId> </dependency>",
"file:directoryName[?options]",
"file:directoryName",
"from(\"file://inbox?move=.done\").to(\"bean:handleOrder\");",
"move=../backup/copy-of-USD{file:name}",
"from(\"file://inbox?delete=true\").to(\"bean:handleOrder\");",
"from(\"file://inbox?preMove=inprogress\").to(\"bean:handleOrder\");",
"from(\"file://inbox?preMove=inprogress&move=.done\").to(\"bean:handleOrder\");",
"move=backup/USD{date:now:yyyyMMdd}/USD{file:name}",
"from(\"file:inbox?charset=utf-8\") .to(\"file:outbox?charset=iso-8859-1\")",
"from(\"file:inbox?charset=utf-8\") .convertBodyTo(byte[].class, \"iso-8859-1\") .to(\"bean:myBean\") .to(\"file:outbox\");",
"from(\"file:inbox\") .convertBodyTo(byte[].class, \"iso-8859-1\") .to(\"bean:myBean\") .to(\"file:outbox\");",
"from(\"file:inbox\") .convertBodyTo(byte[].class, \"iso-8859-1\") .to(\"bean:myBean\") .setProperty(Exchange.CHARSET_NAME, header(\"someCharsetHeader\")) .to(\"file:outbox\");",
"from(\"file:inbox?charset=utf-8\") .to(\"file:outbox?charset=iso-8859-1\")",
"DEBUG GenericFileConverter - Read file /Users/davsclaus/workspace/camel/camel-core/target/charset/input/input.txt with charset utf-8 DEBUG FileOperations - Using Reader to write file: target/charset/output.txt with charset: iso-8859-1",
"from(\"direct:report\").to(\"file:target/reports\");",
"from(\"direct:report\").setHeader(Exchange.FILE_NAME, constant(\"report.txt\")).to( \"file:target/reports\");",
"from(\"direct:report\").setHeader(\"CamelFileName\", constant(\"report.txt\")).to( \"file:target/reports\");",
"from(\"direct:report\").to(\"file:target/reports/?fileName=report.txt\");",
"from(\"file:bar?doneFileName=done\");",
"from(\"file:bar?doneFileName=USD{file:name}.done\");",
"from(\"file:bar?doneFileName=ready-USD{file:name}\");",
".to(\"file:bar?doneFileName=done\");",
".to(\"file:bar?doneFileName=done-USD{file:name}\");",
".to(\"file:bar?doneFileName=USD{file:name}.done\");",
".to(\"file:bar?doneFileName=USD{file:name.noext}.done\");",
"from(\"file://inputdir/?delete=true\").to(\"file://outputdir\")",
"from(\"file://inputdir/?delete=true\").to(\"file://outputdir?overruleFile=copy-of-USD{file:name}\")",
"from(\"file://inputdir/?recursive=true&delete=true\").to(\"file://outputdir\")",
"inputdir/foo.txt inputdir/sub/bar.txt",
"outputdir/foo.txt outputdir/sub/bar.txt",
"from(\"file://inputdir/?recursive=true&delete=true\").to(\"file://outputdir?flatten=true\")",
"outputdir/foo.txt outputdir/bar.txt",
"from(\"file://inputdir/?recursive=true&delete=true\").to(\"file://outputdir\")",
"inputdir/foo.txt inputdir/sub/bar.txt",
"inputdir/.camel/foo.txt inputdir/sub/.camel/bar.txt outputdir/foo.txt outputdir/sub/bar.txt",
"from(\"file://inputdir/\").process(new Processor() { public void process(Exchange exchange) throws Exception { Object body = exchange.getIn().getBody(); // do some business logic with the input body } });",
"<route> <from uri=\"bean:myBean\"/> <to uri=\"file:/rootDirectory\"/> </route>",
"Exchange.FILE_NAME = hello.txt => /rootDirectory/hello.txt Exchange.FILE_NAME = foo/bye.txt => /rootDirectory/foo/bye.txt",
"from(\"direct:start\"). to(\"file:///var/myapp/finalDirectory?tempPrefix=/../filesInProgress/\");",
"from(\"file://inbox?move=backup/USD{date:now:yyyyMMdd}/USD{file:name}\").to(\"...\");",
"from(\"file://inbox?idempotent=true\").to(\"...\");",
"<route> <from uri=\"file://inbox?idempotent=true&idempotentKey=USD{file:name}-USD{file:size}\"/> <to uri=\"bean:processInbox\"/> </route>",
"<!-- define our store as a plain spring bean --> <bean id=\"myStore\" class=\"com.mycompany.MyIdempotentStore\"/> <route> <from uri=\"file://inbox?idempotent=true&idempotentRepository=#myStore\"/> <to uri=\"bean:processInbox\"/> </route>",
"DEBUG FileConsumer is idempotent and the file has been consumed before. Will skip this file: target\\idempotent\\report.txt",
"<persistence-unit name=\"idempotentDb\" transaction-type=\"RESOURCE_LOCAL\"> <class>org.apache.camel.processor.idempotent.jpa.MessageProcessed</class> <properties> <property name=\"openjpa.ConnectionURL\" value=\"jdbc:derby:target/idempotentTest;create=true\"/> <property name=\"openjpa.ConnectionDriverName\" value=\"org.apache.derby.jdbc.EmbeddedDriver\"/> <property name=\"openjpa.jdbc.SynchronizeMappings\" value=\"buildSchema\"/> <property name=\"openjpa.Log\" value=\"DefaultLevel=WARN, Tool=INFO\"/> <property name=\"openjpa.Multithreaded\" value=\"true\"/> </properties> </persistence-unit>",
"<!-- we define our jpa based idempotent repository we want to use in the file consumer --> <bean id=\"jpaStore\" class=\"org.apache.camel.processor.idempotent.jpa.JpaMessageIdRepository\"> <!-- Here we refer to the entityManagerFactory --> <constructor-arg index=\"0\" ref=\"entityManagerFactory\"/> <!-- This 2nd parameter is the name (= a category name). You can have different repositories with different names --> <constructor-arg index=\"1\" value=\"FileConsumer\"/> </bean>",
"<route> <from uri=\"file://inbox?idempotent=true&idempotentRepository=#jpaStore\"/> <to uri=\"bean:processInbox\"/> </route>",
"<!-- define our filter as a plain spring bean --> <bean id=\"myFilter\" class=\"com.mycompany.MyFileFilter\"/> <route> <from uri=\"file://inbox?filter=#myFilter\"/> <to uri=\"bean:processInbox\"/> </route>",
"<!-- define our sorter as a plain spring bean --> <bean id=\"mySorter\" class=\"com.mycompany.MyFileSorter\"/> <route> <from uri=\"file://inbox?sorter=#mySorter\"/> <to uri=\"bean:processInbox\"/> </route>",
"sortBy=group 1;group 2;group 3;",
"sortBy=file:name",
"sortBy=reverse:file:name",
"sortBy=file:length",
"sortBy=ignoreCase:file:name",
"sortBy=reverse:ignoreCase:file:name",
"sortBy=file:modified",
"sortBy=file:modified;file:name",
"sortBy=date:file:yyyyMMdd;file:name",
"sortBy=date:file:yyyyMMdd;reverse:file:name",
"// to handle any IOException being thrown onException(IOException.class) .handled(true) .log(\"IOException occurred due: USD{exception.message}\") .transform().simple(\"Error USD{exception.message}\") .to(\"mock:error\"); // this is the file route that pickup files, notice how we bridge the consumer to use the Camel routing error handler // the exclusiveReadLockStrategy is only configured because this is from an unit test, so we use that to simulate exceptions from(\"file:target/nospace?bridgeErrorHandler=true\") .convertBodyTo(String.class) .to(\"mock:result\");"
] | https://docs.redhat.com/en/documentation/red_hat_build_of_apache_camel/4.0/html/red_hat_build_of_apache_camel_for_spring_boot_reference/csb-camel-file-component-starter |
Administration Guide | Administration Guide Red Hat Trusted Artifact Signer 1 General administration for the Trusted Artifact Signer service Red Hat Trusted Documentation Team | null | https://docs.redhat.com/en/documentation/red_hat_trusted_artifact_signer/1/html/administration_guide/index |
8.209. subscription-manager | 8.209. subscription-manager 8.209.1. RHBA-2013:1659 - subscription-manager and python-rhsm bug fix and enhancement update Updated subscription-manager and python-rhsm packages that fix several bugs and add various enhancements are now available for Red Hat Enterprise Linux 6. The python-rhsm packages provide a library for communicating with the representational state transfer (REST) interface of Red Hat's subscription and content service. The Subscription Management tools use this interface to manage system entitlements, certificates, and content access. The subscription-manager packages provide programs and libraries to allow users to manage subscriptions and yum repositories from the Red Hat Entitlement platform. Note The python-rhsm packages have been upgraded to upstream version 1.9.6, which provides a number of bug fixes and enhancements over the version. (BZ# 922837 ) The subscription-manager packages have been upgraded to upstream version 1.9.11, which provides a number of bug fixes and enhancements over the version. (BZ# 950118 ) The subscription-manager-migration-data packages have been upgraded to upstream version 2.0.5, which provides a number of bug fixes and enhancements over the version. (BZ# 950116 ) Bug Fixes BZ# 1000145 Previously, the python-rhsm utility used a deprecated API. Consequently, a deprecation warning message was displayed to the user. With this update, the deprecation warning message is no longer displayed. BZ# 914113 Prior to this update, the rhsmd daemon called the deprecated "hasNow()" function. As a consequence, the "DeprecationWarning: Call to deprecated function: hasNow" warning was displayed to the user. With this update, the "hasNow()" function has been removed and the deprecation warning message is no longer displayed. BZ# 1012566 Prior to this update, the script for the /etc/cron.daily/rhsmd cron job had incorrect permissions. Consequently, even non-root users had execute permissions. This update changes the permissions to the correct "0700" value and only the root user now has execute permissions. BZ# 872697 Previously, the Japanese translation of the "Configure Pro_xy" message contained an excessive underscore character. Consequently, an incorrect text was displayed to the users of ja_JP locale. This update adds the correct message. BZ# 985090 Prior to this update, automatic completion of the "rhsmcertd" command by pressing the "TAB" key twice did not work properly. Consequently, incorrect options were displayed. The tab completion script has been fixed to display correct options. Note that the bash-completion auxiliary package is required for the auto-completion functionality. BZ# 988085 Previously, after running the "subscription-manager config --remove <server.hostname>" command, the "hostname =" line was completely removed from the "rhsm.conf" configuration file. Consequently, the default value of "subscription.rhn.redhat.com" became inaccessible from the command-line interface (CLI). With this update, the "hostname =" line reverts to the expected default value in the described scenario. BZ# 996993 , BZ# 1008557 This update adds two new fields to the output of the "subscription-manager list --available" command. The "Provides" field shows the names of the products that the system is eligible for. The "Suggested" field has been added to facilitate compliance and provide parity with the graphical user interface (GUI). 
BZ# 869046 Previously, the subscription-manager utility contained only general error messages when a connection to a proxy failed. As a consequence, users received an uninformative error message when they tried to access an incorrect proxy server, tried to connect via an incorrect proxy port, or failed to enter the correct password. This update adds more informative error messages for the described cases. BZ# 1001820 Prior to this update, automatic completion of the "subscription-manager attach" subcommand by pressing the "TAB" key twice did not work properly. As a consequence, incorrect options were displayed. The tab completion script has been fixed to display correct options. Note that the bash-completion auxiliary package is required for the auto-completion functionality. BZ# 1004385 Previously, automatic completion of the "rhsm-icon" command by pressing the "TAB" key twice did not work properly. Consequently, options were displayed with a comma at the end. The tab completion script has been fixed to display correct options. Note that the bash-completion auxiliary package is required for the auto-completion functionality. BZ# 1004893 Under certain circumstances, the "subscription-manager list --installed" command returned an incorrect status. Consequently, when a new product certificate contained a new product, the displayed status of the newly available product was "Not Subscribed". This bug has been fixed and the displayed status for the newly available product is now "Subscribed" in the described scenario. BZ# 1011234 Under certain circumstances, the "subscription-manager list --available" command returned an incorrect value. Consequently, for subscription pools whose Service Level had not been set, misleading "None" was displayed. This bug has been fixed and an empty string is now displayed in this scenario. BZ# 1006985 Prior to this update, the subscription-manager-migration script did not work properly when migrating different product certificates with the same product ID. As a consequence, the certificates were installed under the same name and were unusable. This bug has been fixed and the migration is aborted when different product certificates with the same ID are detected. BZ# 1008603 Previously, the subscription-manager utility required connectivity to the "subscription.rhn.stage.redhat.com" site in order to list products. Consequently, the product list was not displayed when the connection failed. This bug has been fixed and users are now able to list products from the local cache. Enhancements BZ# 909778 This update adds the "--proxy" option to the "subscription-manager repos --list" subcommand. The the user is now able set the proxy when connecting to the candlepin server. BZ# 983670 The description displayed when using the "--help" option with the "subscription-manager auto-attach" subcommand has been improved to be more precise. BZ# 986971 The "Available Subscriptions" header in the Subscriptions table has been simplified to just "Available", which saves space and is clearer to the user. BZ# 1011961 With this update, the displayed quantity in the Entitlement Certificate has been changed from the confusing "-1" to the correct "Unlimited". BZ# 994620 This update provides a more precise tooltip messaging for the rhsm-icon utility. Now, when a partial subscription exists on a fully compliant machine, the message says "Partially entitled subscriptions" instead of the "Partially entitled products". 
BZ# 1004341 This update adds support for automatic completion of the "subscription-manager-gui" command options by pressing the "TAB" key twice. Note that the bash-completion auxiliary package is required for the auto-completion functionality. BZ# 1008016 With this update, the subscription-manager utility generates the /etc/yum.repos.d/redhat.repo repository immediately after a successful subscription, no more steps are necessary. BZ# 1009600 When the "subscription-manager list --consumed" command is run, the output now displays "System Type: Physical/Virtual". This allows the user to determine whether the granted entitlement was virtual. Users of subscription-manager and python-rhsm are advised to upgrade to these updated packages, which fix these bugs and add these enhancements. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.5_technical_notes/subscription-manager-and-python-rhsm |
Chapter 4. Deploying Red Hat Quay on infrastructure nodes | Chapter 4. Deploying Red Hat Quay on infrastructure nodes By default, Quay-related pods are placed on arbitrary worker nodes when using the Red Hat Quay Operator to deploy the registry. For more information about how to use machine sets to configure nodes to only host infrastructure components, see Creating infrastructure machine sets . If you are not using OpenShift Container Platform machine set resources to deploy infra nodes, the section in this document shows you how to manually label and taint nodes for infrastructure purposes. After you have configured your infrastructure nodes either manually or by using machine sets, you can control the placement of Quay pods on these nodes using node selectors and tolerations. 4.1. Labeling and tainting nodes for infrastructure use Use the following procedure to label and taint nodes for infrastructure use. Enter the following command to reveal the master and worker nodes. In this example, there are three master nodes and six worker nodes. USD oc get nodes Example output NAME STATUS ROLES AGE VERSION user1-jcnp6-master-0.c.quay-devel.internal Ready master 3h30m v1.20.0+ba45583 user1-jcnp6-master-1.c.quay-devel.internal Ready master 3h30m v1.20.0+ba45583 user1-jcnp6-master-2.c.quay-devel.internal Ready master 3h30m v1.20.0+ba45583 user1-jcnp6-worker-b-65plj.c.quay-devel.internal Ready worker 3h21m v1.20.0+ba45583 user1-jcnp6-worker-b-jr7hc.c.quay-devel.internal Ready worker 3h21m v1.20.0+ba45583 user1-jcnp6-worker-c-jrq4v.c.quay-devel.internal Ready worker 3h21m v1.20.0+ba45583 user1-jcnp6-worker-c-pwxfp.c.quay-devel.internal Ready worker 3h21m v1.20.0+ba45583 user1-jcnp6-worker-d-h5tv2.c.quay-devel.internal Ready worker 3h22m v1.20.0+ba45583 user1-jcnp6-worker-d-m9gg4.c.quay-devel.internal Ready worker 3h21m v1.20.0+ba45583 Enter the following commands to label the three worker nodes for infrastructure use: USD oc label node --overwrite user1-jcnp6-worker-c-pwxfp.c.quay-devel.internal node-role.kubernetes.io/infra= USD oc label node --overwrite user1-jcnp6-worker-d-h5tv2.c.quay-devel.internal node-role.kubernetes.io/infra= USD oc label node --overwrite user1-jcnp6-worker-d-m9gg4.c.quay-devel.internal node-role.kubernetes.io/infra= Now, when listing the nodes in the cluster, the last three worker nodes have the infra role. For example: USD oc get nodes Example NAME STATUS ROLES AGE VERSION user1-jcnp6-master-0.c.quay-devel.internal Ready master 4h14m v1.20.0+ba45583 user1-jcnp6-master-1.c.quay-devel.internal Ready master 4h15m v1.20.0+ba45583 user1-jcnp6-master-2.c.quay-devel.internal Ready master 4h14m v1.20.0+ba45583 user1-jcnp6-worker-b-65plj.c.quay-devel.internal Ready worker 4h6m v1.20.0+ba45583 user1-jcnp6-worker-b-jr7hc.c.quay-devel.internal Ready worker 4h5m v1.20.0+ba45583 user1-jcnp6-worker-c-jrq4v.c.quay-devel.internal Ready worker 4h5m v1.20.0+ba45583 user1-jcnp6-worker-c-pwxfp.c.quay-devel.internal Ready infra,worker 4h6m v1.20.0+ba45583 user1-jcnp6-worker-d-h5tv2.c.quay-devel.internal Ready infra,worker 4h6m v1.20.0+ba45583 user1-jcnp6-worker-d-m9gg4.c.quay-devel.internal Ready infra,worker 4h6m v1.20.0+ba4558 When a worker node is assigned the infra role, there is a chance that user workloads could get inadvertently assigned to an infra node. To avoid this, you can apply a taint to the infra node, and then add tolerations for the pods that you want to control.
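The toleration side is declared on the workload itself. The following is a rough sketch of a toleration matching the taint applied below; where exactly it goes depends on how the pods are created (for a workload you manage directly, such as a Deployment, it belongs under spec.template.spec):
# sketch: allows a pod to be scheduled onto nodes carrying the infra taint
tolerations:
- key: node-role.kubernetes.io/infra
  operator: Exists
  effect: NoSchedule
The node-side taint itself is applied with the oc adm taint command.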
For example: USD oc adm taint nodes user1-jcnp6-worker-c-pwxfp.c.quay-devel.internal node-role.kubernetes.io/infra:NoSchedule USD oc adm taint nodes user1-jcnp6-worker-d-h5tv2.c.quay-devel.internal node-role.kubernetes.io/infra:NoSchedule USD oc adm taint nodes user1-jcnp6-worker-d-m9gg4.c.quay-devel.internal node-role.kubernetes.io/infra:NoSchedule 4.2. Creating a project with node selector and tolerations Use the following procedure to create a project with node selector and tolerations. Note The following procedure can also be completed by removing the installed Red Hat Quay Operator and the namespace, or namespaces, used when creating the deployment. Users can then create a new resource with the following annotation. Procedure Enter the following command to edit the namespace where Red Hat Quay is deployed, and the following annotation: USD oc annotate namespace <namespace> openshift.io/node-selector='node-role.kubernetes.io/infra=' Example output namespace/<namespace> annotated Obtain a list of available pods by entering the following command: USD oc get pods -o wide Example output NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES example-registry-clair-app-5744dd64c9-9d5jt 1/1 Running 0 173m 10.130.4.13 stevsmit-quay-ocp-tes-5gwws-worker-c-6xkn7 <none> <none> example-registry-clair-app-5744dd64c9-fg86n 1/1 Running 6 (3h21m ago) 3h24m 10.131.0.91 stevsmit-quay-ocp-tes-5gwws-worker-c-dnhdp <none> <none> example-registry-clair-postgres-845b47cd88-vdchz 1/1 Running 0 3h21m 10.130.4.10 stevsmit-quay-ocp-tes-5gwws-worker-c-6xkn7 <none> <none> example-registry-quay-app-64cbc5bcf-8zvgc 1/1 Running 1 (3h24m ago) 3h24m 10.130.2.12 stevsmit-quay-ocp-tes-5gwws-worker-a-tk8dx <none> <none> example-registry-quay-app-64cbc5bcf-pvlz6 1/1 Running 0 3h24m 10.129.4.10 stevsmit-quay-ocp-tes-5gwws-worker-b-fjhz4 <none> <none> example-registry-quay-app-upgrade-8gspn 0/1 Completed 0 3h24m 10.130.2.10 stevsmit-quay-ocp-tes-5gwws-worker-a-tk8dx <none> <none> example-registry-quay-database-784d78b6f8-2vkml 1/1 Running 0 3h24m 10.131.4.10 stevsmit-quay-ocp-tes-5gwws-worker-c-2frtg <none> <none> example-registry-quay-mirror-d5874d8dc-fmknp 1/1 Running 0 3h24m 10.129.4.9 stevsmit-quay-ocp-tes-5gwws-worker-b-fjhz4 <none> <none> example-registry-quay-mirror-d5874d8dc-t4mff 1/1 Running 0 3h24m 10.129.2.19 stevsmit-quay-ocp-tes-5gwws-worker-a-k7w86 <none> <none> example-registry-quay-redis-79848898cb-6qf5x 1/1 Running 0 3h24m 10.130.2.11 stevsmit-quay-ocp-tes-5gwws-worker-a-tk8dx <none> <none> Enter the following command to delete the available pods: USD oc delete pods --selector quay-operator/quayregistry=example-registry -n quay-enterprise Example output pod "example-registry-clair-app-5744dd64c9-9d5jt" deleted pod "example-registry-clair-app-5744dd64c9-fg86n" deleted pod "example-registry-clair-postgres-845b47cd88-vdchz" deleted pod "example-registry-quay-app-64cbc5bcf-8zvgc" deleted pod "example-registry-quay-app-64cbc5bcf-pvlz6" deleted pod "example-registry-quay-app-upgrade-8gspn" deleted pod "example-registry-quay-database-784d78b6f8-2vkml" deleted pod "example-registry-quay-mirror-d5874d8dc-fmknp" deleted pod "example-registry-quay-mirror-d5874d8dc-t4mff" deleted pod "example-registry-quay-redis-79848898cb-6qf5x" deleted After the pods have been deleted, they automatically cycle back up and should be scheduled on the dedicated infrastructure nodes. 4.3. 
Installing Red Hat Quay on OpenShift Container Platform on a specific namespace Use the following procedure to install Red Hat Quay on OpenShift Container Platform in a specific namespace. To install the Red Hat Quay Operator in a specific namespace, you must explicitly specify the appropriate project namespace. In the following example, the quay-registry namespace is used. This results in the quay-operator pod landing on one of the three infrastructure nodes. For example: USD oc get pods -n quay-registry -o wide Example output NAME READY STATUS RESTARTS AGE IP NODE quay-operator.v3.4.1-6f6597d8d8-bd4dp 1/1 Running 0 30s 10.131.0.16 user1-jcnp6-worker-d-h5tv2.c.quay-devel.internal 4.4. Creating the Red Hat Quay registry Use the following procedure to create the Red Hat Quay registry. Enter the following command to create the Red Hat Quay registry. Then, wait for the deployment to be marked as ready . In the following example, you should see that the Quay pods have only been scheduled on the three nodes that you have labeled for infrastructure purposes. USD oc get pods -n quay-registry -o wide Example output NAME READY STATUS RESTARTS AGE IP NODE example-registry-clair-app-789d6d984d-gpbwd 1/1 Running 1 5m57s 10.130.2.80 user1-jcnp6-worker-d-m9gg4.c.quay-devel.internal example-registry-clair-postgres-7c8697f5-zkzht 1/1 Running 0 4m53s 10.129.2.19 user1-jcnp6-worker-c-pwxfp.c.quay-devel.internal example-registry-quay-app-56dd755b6d-glbf7 1/1 Running 1 5m57s 10.129.2.17 user1-jcnp6-worker-c-pwxfp.c.quay-devel.internal example-registry-quay-database-8dc7cfd69-dr2cc 1/1 Running 0 5m43s 10.129.2.18 user1-jcnp6-worker-c-pwxfp.c.quay-devel.internal example-registry-quay-mirror-78df886bcc-v75p9 1/1 Running 0 5m16s 10.131.0.24 user1-jcnp6-worker-d-h5tv2.c.quay-devel.internal example-registry-quay-postgres-init-8s8g9 0/1 Completed 0 5m54s 10.130.2.79 user1-jcnp6-worker-d-m9gg4.c.quay-devel.internal example-registry-quay-redis-5688ddcdb6-ndp4t 1/1 Running 0 5m56s 10.130.2.78 user1-jcnp6-worker-d-m9gg4.c.quay-devel.internal quay-operator.v3.4.1-6f6597d8d8-bd4dp 1/1 Running 0 22m 10.131.0.16 user1-jcnp6-worker-d-h5tv2.c.quay-devel.internal 4.5. Resizing Managed Storage When deploying Red Hat Quay on OpenShift Container Platform, three distinct persistent volume claims (PVCs) are deployed: One for the PostgreSQL 13 registry. One for the Clair PostgreSQL 13 registry. One that uses NooBaa as backend storage. Note The connection between Red Hat Quay and NooBaa is done through the S3 API and ObjectBucketClaim API in OpenShift Container Platform. Red Hat Quay leverages that API group to create a bucket in NooBaa, obtain access keys, and automatically set everything up. On the backend, or NooBaa, side, that bucket is created inside the backing store. As a result, NooBaa PVCs are not mounted or connected to Red Hat Quay pods. The default size for the PostgreSQL 13 and Clair PostgreSQL 13 PVCs is set to 50 GiB. You can expand storage for these PVCs on the OpenShift Container Platform console by using the following procedure. Note The following procedure shares commonality with Expanding Persistent Volume Claims on Red Hat OpenShift Data Foundation. 4.5.1. Resizing PostgreSQL 13 PVCs on Red Hat Quay Use the following procedure to resize the PostgreSQL 13 and Clair PostgreSQL 13 PVCs. Prerequisites You have cluster admin privileges on OpenShift Container Platform. Procedure Log into the OpenShift Container Platform console and select Storage → Persistent Volume Claims .
Select the desired PersistentVolumeClaim for either PostgreSQL 13 or Clair PostgreSQL 13, for example, example-registry-quay-postgres-13 . From the Action menu, select Expand PVC . Enter the new size of the Persistent Volume Claim and select Expand . After a few minutes, the expanded size should reflect in the PVC's Capacity field. 4.6. Customizing Default Operator Images Note Currently, customizing default Operator images is not supported on IBM Power and IBM Z. In certain circumstances, it might be useful to override the default images used by the Red Hat Quay Operator. This can be done by setting one or more environment variables in the Red Hat Quay Operator ClusterServiceVersion . Important Using this mechanism is not supported for production Red Hat Quay environments and is strongly encouraged only for development or testing purposes. There is no guarantee your deployment will work correctly when using non-default images with the Red Hat Quay Operator. 4.6.1. Environment Variables The following environment variables are used in the Red Hat Quay Operator to override component images: Environment Variable Component RELATED_IMAGE_COMPONENT_QUAY base RELATED_IMAGE_COMPONENT_CLAIR clair RELATED_IMAGE_COMPONENT_POSTGRES postgres and clair databases RELATED_IMAGE_COMPONENT_REDIS redis Note Overridden images must be referenced by manifest (@sha256:) and not by tag (:latest). 4.6.2. Applying overrides to a running Operator When the Red Hat Quay Operator is installed in a cluster through the Operator Lifecycle Manager (OLM) , the managed component container images can be easily overridden by modifying the ClusterServiceVersion object. Use the following procedure to apply overrides to a running Red Hat Quay Operator. Procedure The ClusterServiceVersion object is Operator Lifecycle Manager's representation of a running Operator in the cluster. Find the Red Hat Quay Operator's ClusterServiceVersion by using a Kubernetes UI or the kubectl / oc CLI tool. For example: USD oc get clusterserviceversions -n <your-namespace> Using the UI, oc edit , or another method, modify the Red Hat Quay ClusterServiceVersion to include the environment variables outlined above to point to the override images: JSONPath : spec.install.spec.deployments[0].spec.template.spec.containers[0].env - name: RELATED_IMAGE_COMPONENT_QUAY value: quay.io/projectquay/quay@sha256:c35f5af964431673f4ff5c9e90bdf45f19e38b8742b5903d41c10cc7f6339a6d - name: RELATED_IMAGE_COMPONENT_CLAIR value: quay.io/projectquay/clair@sha256:70c99feceb4c0973540d22e740659cd8d616775d3ad1c1698ddf71d0221f3ce6 - name: RELATED_IMAGE_COMPONENT_POSTGRES value: centos/postgresql-10-centos7@sha256:de1560cb35e5ec643e7b3a772ebaac8e3a7a2a8e8271d9e91ff023539b4dfb33 - name: RELATED_IMAGE_COMPONENT_REDIS value: centos/redis-32-centos7@sha256:06dbb609484330ec6be6090109f1fa16e936afcf975d1cbc5fff3e6c7cae7542 Note This is done at the Operator level, so every QuayRegistry will be deployed using these same overrides. 4.7. AWS S3 CloudFront Note Currently, using AWS S3 CloudFront is not supported on IBM Power and IBM Z. Use the following procedure if you are using AWS S3 Cloudfront for your backend registry storage. Procedure Enter the following command to specify the registry key: USD oc create secret generic --from-file config.yaml=./config_awss3cloudfront.yaml --from-file default-cloudfront-signing-key.pem=./default-cloudfront-signing-key.pem test-config-bundle | [
"oc get nodes",
"NAME STATUS ROLES AGE VERSION user1-jcnp6-master-0.c.quay-devel.internal Ready master 3h30m v1.20.0+ba45583 user1-jcnp6-master-1.c.quay-devel.internal Ready master 3h30m v1.20.0+ba45583 user1-jcnp6-master-2.c.quay-devel.internal Ready master 3h30m v1.20.0+ba45583 user1-jcnp6-worker-b-65plj.c.quay-devel.internal Ready worker 3h21m v1.20.0+ba45583 user1-jcnp6-worker-b-jr7hc.c.quay-devel.internal Ready worker 3h21m v1.20.0+ba45583 user1-jcnp6-worker-c-jrq4v.c.quay-devel.internal Ready worker 3h21m v1.20.0+ba45583 user1-jcnp6-worker-c-pwxfp.c.quay-devel.internal Ready worker 3h21m v1.20.0+ba45583 user1-jcnp6-worker-d-h5tv2.c.quay-devel.internal Ready worker 3h22m v1.20.0+ba45583 user1-jcnp6-worker-d-m9gg4.c.quay-devel.internal Ready worker 3h21m v1.20.0+ba45583",
"oc label node --overwrite user1-jcnp6-worker-c-pwxfp.c.quay-devel.internal node-role.kubernetes.io/infra=",
"oc label node --overwrite user1-jcnp6-worker-d-h5tv2.c.quay-devel.internal node-role.kubernetes.io/infra=",
"oc label node --overwrite user1-jcnp6-worker-d-m9gg4.c.quay-devel.internal node-role.kubernetes.io/infra=",
"oc get nodes",
"NAME STATUS ROLES AGE VERSION user1-jcnp6-master-0.c.quay-devel.internal Ready master 4h14m v1.20.0+ba45583 user1-jcnp6-master-1.c.quay-devel.internal Ready master 4h15m v1.20.0+ba45583 user1-jcnp6-master-2.c.quay-devel.internal Ready master 4h14m v1.20.0+ba45583 user1-jcnp6-worker-b-65plj.c.quay-devel.internal Ready worker 4h6m v1.20.0+ba45583 user1-jcnp6-worker-b-jr7hc.c.quay-devel.internal Ready worker 4h5m v1.20.0+ba45583 user1-jcnp6-worker-c-jrq4v.c.quay-devel.internal Ready worker 4h5m v1.20.0+ba45583 user1-jcnp6-worker-c-pwxfp.c.quay-devel.internal Ready infra,worker 4h6m v1.20.0+ba45583 user1-jcnp6-worker-d-h5tv2.c.quay-devel.internal Ready infra,worker 4h6m v1.20.0+ba45583 user1-jcnp6-worker-d-m9gg4.c.quay-devel.internal Ready infra,worker 4h6m v1.20.0+ba4558",
"oc adm taint nodes user1-jcnp6-worker-c-pwxfp.c.quay-devel.internal node-role.kubernetes.io/infra:NoSchedule",
"oc adm taint nodes user1-jcnp6-worker-d-h5tv2.c.quay-devel.internal node-role.kubernetes.io/infra:NoSchedule",
"oc adm taint nodes user1-jcnp6-worker-d-m9gg4.c.quay-devel.internal node-role.kubernetes.io/infra:NoSchedule",
"oc annotate namespace <namespace> openshift.io/node-selector='node-role.kubernetes.io/infra='",
"namespace/<namespace> annotated",
"oc get pods -o wide",
"NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES example-registry-clair-app-5744dd64c9-9d5jt 1/1 Running 0 173m 10.130.4.13 stevsmit-quay-ocp-tes-5gwws-worker-c-6xkn7 <none> <none> example-registry-clair-app-5744dd64c9-fg86n 1/1 Running 6 (3h21m ago) 3h24m 10.131.0.91 stevsmit-quay-ocp-tes-5gwws-worker-c-dnhdp <none> <none> example-registry-clair-postgres-845b47cd88-vdchz 1/1 Running 0 3h21m 10.130.4.10 stevsmit-quay-ocp-tes-5gwws-worker-c-6xkn7 <none> <none> example-registry-quay-app-64cbc5bcf-8zvgc 1/1 Running 1 (3h24m ago) 3h24m 10.130.2.12 stevsmit-quay-ocp-tes-5gwws-worker-a-tk8dx <none> <none> example-registry-quay-app-64cbc5bcf-pvlz6 1/1 Running 0 3h24m 10.129.4.10 stevsmit-quay-ocp-tes-5gwws-worker-b-fjhz4 <none> <none> example-registry-quay-app-upgrade-8gspn 0/1 Completed 0 3h24m 10.130.2.10 stevsmit-quay-ocp-tes-5gwws-worker-a-tk8dx <none> <none> example-registry-quay-database-784d78b6f8-2vkml 1/1 Running 0 3h24m 10.131.4.10 stevsmit-quay-ocp-tes-5gwws-worker-c-2frtg <none> <none> example-registry-quay-mirror-d5874d8dc-fmknp 1/1 Running 0 3h24m 10.129.4.9 stevsmit-quay-ocp-tes-5gwws-worker-b-fjhz4 <none> <none> example-registry-quay-mirror-d5874d8dc-t4mff 1/1 Running 0 3h24m 10.129.2.19 stevsmit-quay-ocp-tes-5gwws-worker-a-k7w86 <none> <none> example-registry-quay-redis-79848898cb-6qf5x 1/1 Running 0 3h24m 10.130.2.11 stevsmit-quay-ocp-tes-5gwws-worker-a-tk8dx <none> <none>",
"oc delete pods --selector quay-operator/quayregistry=example-registry -n quay-enterprise",
"pod \"example-registry-clair-app-5744dd64c9-9d5jt\" deleted pod \"example-registry-clair-app-5744dd64c9-fg86n\" deleted pod \"example-registry-clair-postgres-845b47cd88-vdchz\" deleted pod \"example-registry-quay-app-64cbc5bcf-8zvgc\" deleted pod \"example-registry-quay-app-64cbc5bcf-pvlz6\" deleted pod \"example-registry-quay-app-upgrade-8gspn\" deleted pod \"example-registry-quay-database-784d78b6f8-2vkml\" deleted pod \"example-registry-quay-mirror-d5874d8dc-fmknp\" deleted pod \"example-registry-quay-mirror-d5874d8dc-t4mff\" deleted pod \"example-registry-quay-redis-79848898cb-6qf5x\" deleted",
"oc get pods -n quay-registry -o wide",
"NAME READY STATUS RESTARTS AGE IP NODE quay-operator.v3.4.1-6f6597d8d8-bd4dp 1/1 Running 0 30s 10.131.0.16 user1-jcnp6-worker-d-h5tv2.c.quay-devel.internal",
"oc get pods -n quay-registry -o wide",
"NAME READY STATUS RESTARTS AGE IP NODE example-registry-clair-app-789d6d984d-gpbwd 1/1 Running 1 5m57s 10.130.2.80 user1-jcnp6-worker-d-m9gg4.c.quay-devel.internal example-registry-clair-postgres-7c8697f5-zkzht 1/1 Running 0 4m53s 10.129.2.19 user1-jcnp6-worker-c-pwxfp.c.quay-devel.internal example-registry-quay-app-56dd755b6d-glbf7 1/1 Running 1 5m57s 10.129.2.17 user1-jcnp6-worker-c-pwxfp.c.quay-devel.internal example-registry-quay-database-8dc7cfd69-dr2cc 1/1 Running 0 5m43s 10.129.2.18 user1-jcnp6-worker-c-pwxfp.c.quay-devel.internal example-registry-quay-mirror-78df886bcc-v75p9 1/1 Running 0 5m16s 10.131.0.24 user1-jcnp6-worker-d-h5tv2.c.quay-devel.internal example-registry-quay-postgres-init-8s8g9 0/1 Completed 0 5m54s 10.130.2.79 user1-jcnp6-worker-d-m9gg4.c.quay-devel.internal example-registry-quay-redis-5688ddcdb6-ndp4t 1/1 Running 0 5m56s 10.130.2.78 user1-jcnp6-worker-d-m9gg4.c.quay-devel.internal quay-operator.v3.4.1-6f6597d8d8-bd4dp 1/1 Running 0 22m 10.131.0.16 user1-jcnp6-worker-d-h5tv2.c.quay-devel.internal",
"oc get clusterserviceversions -n <your-namespace>",
"- name: RELATED_IMAGE_COMPONENT_QUAY value: quay.io/projectquay/quay@sha256:c35f5af964431673f4ff5c9e90bdf45f19e38b8742b5903d41c10cc7f6339a6d - name: RELATED_IMAGE_COMPONENT_CLAIR value: quay.io/projectquay/clair@sha256:70c99feceb4c0973540d22e740659cd8d616775d3ad1c1698ddf71d0221f3ce6 - name: RELATED_IMAGE_COMPONENT_POSTGRES value: centos/postgresql-10-centos7@sha256:de1560cb35e5ec643e7b3a772ebaac8e3a7a2a8e8271d9e91ff023539b4dfb33 - name: RELATED_IMAGE_COMPONENT_REDIS value: centos/redis-32-centos7@sha256:06dbb609484330ec6be6090109f1fa16e936afcf975d1cbc5fff3e6c7cae7542",
"oc create secret generic --from-file config.yaml=./config_awss3cloudfront.yaml --from-file default-cloudfront-signing-key.pem=./default-cloudfront-signing-key.pem test-config-bundle"
] | https://docs.redhat.com/en/documentation/red_hat_quay/3/html/red_hat_quay_operator_features/operator-deploy-infrastructure |
Chapter 1. Overview | Chapter 1. Overview AMQ Broker is a high-performance messaging implementation based on ActiveMQ Artemis. It has fast, journal-based message persistence and supports multiple languages, protocols, and platforms. AMQ Broker provides multiple interfaces for managing and interacting with your broker instances, such as a management console, management APIs, and a command-line interface. In addition, you can monitor broker performance by collecting runtime metrics, configure brokers to proactively monitor for problems such as deadlock conditions, and interactively check the health of brokers and queues. This guide provides detailed information about typical broker management tasks such as: Upgrading your broker instances Using the command-line interface and management API Checking the health of brokers and queues Collecting broker runtime metrics Proactively monitoring critical broker operations 1.1. Supported configurations Refer to the article " Red Hat AMQ 7 Supported Configurations " on the Red Hat Customer Portal for current information regarding AMQ Broker supported configurations. 1.2. Document conventions This document uses the following conventions for the sudo command, file paths, and replaceable values. The sudo command In this document, sudo is used for any command that requires root privileges. You should always exercise caution when using sudo , as any changes can affect the entire system. For more information about using sudo , see Managing sudo access . About the use of file paths in this document In this document, all file paths are valid for Linux, UNIX, and similar operating systems (for example, /home/... ). If you are using Microsoft Windows, you should use the equivalent Microsoft Windows paths (for example, C:\Users\... ). Replaceable values This document sometimes uses replaceable values that you must replace with values specific to your environment. Replaceable values are lowercase, enclosed by angle brackets ( < > ), and are styled using italics and monospace font. Multiple words are separated by underscores ( _ ) . For example, in the following command, replace <install_dir> with your own directory name. USD <install_dir> /bin/artemis create mybroker | [
"<install_dir> /bin/artemis create mybroker"
] | https://docs.redhat.com/en/documentation/red_hat_amq_broker/7.11/html/managing_amq_broker/assembly-br-managing-overview-managing |
Chapter 2. Upgrading the Red Hat Quay Operator Overview | Chapter 2. Upgrading the Red Hat Quay Operator Overview Note Currently, upgrading the Red Hat Quay Operator is not supported on IBM Power and IBM Z. The Red Hat Quay Operator follows a synchronized versioning scheme, which means that each version of the Operator is tied to the version of Red Hat Quay and the components that it manages. There is no field on the QuayRegistry custom resource which sets the version of Red Hat Quay to deploy; the Operator can only deploy a single version of all components. This scheme was chosen to ensure that all components work well together and to reduce the complexity of the Operator needing to know how to manage the lifecycles of many different versions of Red Hat Quay on Kubernetes. 2.1. Operator Lifecycle Manager The Red Hat Quay Operator should be installed and upgraded using the Operator Lifecycle Manager (OLM) . When creating a Subscription with the default approvalStrategy: Automatic , OLM will automatically upgrade the Red Hat Quay Operator whenever a new version becomes available. Warning When the Red Hat Quay Operator is installed by Operator Lifecycle Manager, it might be configured to support automatic or manual upgrades. This option is shown on the OperatorHub page for the Red Hat Quay Operator during installation. It can also be found in the Red Hat Quay Operator Subscription object by the approvalStrategy field. Choosing Automatic means that your Red Hat Quay Operator will automatically be upgraded whenever a new Operator version is released. If this is not desirable, then the Manual approval strategy should be selected. 2.2. Upgrading the Red Hat Quay Operator The standard approach for upgrading installed Operators on OpenShift Container Platform is documented at Upgrading installed Operators . In general, Red Hat Quay supports upgrades from a prior (N-1) minor version only. For example, upgrading directly from Red Hat Quay 3.0.5 to the latest version of 3.5 is not supported. Instead, users would have to upgrade as follows: 3.0.5 → 3.1.3 3.1.3 → 3.2.2 3.2.2 → 3.3.4 3.3.4 → 3.4.z 3.4.z → 3.5.z This is required to ensure that any necessary database migrations are done correctly and in the right order during the upgrade. In some cases, Red Hat Quay supports direct, single-step upgrades from prior (N-2, N-3) minor versions. This simplifies the upgrade procedure for customers on older releases. The following upgrade paths are supported for Red Hat Quay 3.10: 3.7.z → 3.10.z 3.8.z → 3.10.z 3.9.z → 3.10.z For users on standalone deployments of Red Hat Quay wanting to upgrade to 3.9, see the Standalone upgrade guide. 2.2.1. Upgrading Red Hat Quay To update Red Hat Quay from one minor version to the next, for example, 3.9 → 3.10, you must change the update channel for the Red Hat Quay Operator. For z stream upgrades, for example, 3.9.1 → 3.9.2, updates are released in the major-minor channel that the user initially selected during install. The procedure to perform a z stream upgrade depends on the approvalStrategy as outlined above. If the approval strategy is set to Automatic , the Red Hat Quay Operator upgrades automatically to the newest z stream. This results in automatic, rolling Red Hat Quay updates to newer z streams with little to no downtime. Otherwise, the update must be manually approved before installation can begin. 2.3. Removing config editor objects on Red Hat Quay Operator The config editor has been removed from the Red Hat Quay Operator on OpenShift Container Platform deployments.
As a result, the quay-config-editor pod no longer deploys, and users cannot check the status of the config editor route. Additionally, the Config Editor Endpoint no longer generates on the Red Hat Quay Operator Details page. Users with existing Red Hat Quay Operators who are upgrading from 3.7, 3.8, or 3.9 to 3.10 must manually remove the Red Hat Quay config editor by removing the pod , deployment , route, service , and secret objects. To remove the pod , deployment , route, service , and secret objects, use the following procedure. Prerequisites You have deployed Red Hat Quay version 3.7, 3.8, or 3.9. You have a valid QuayRegistry object. Procedure Obtain the quayregistry-quay-config-editor route object by entering the following command: USD oc get route Example output --- quayregistry-quay-config-editor-c866f64c4-68gtb 1/1 Running 0 49m --- Remove the quayregistry-quay-config-editor route object by entering the following command: USD oc delete route quayregistry-quay-config-editor Obtain the quayregistry-quay-config-editor deployment object by entering the following command: USD oc get deployment Example output --- quayregistry-quay-config-editor --- Remove the quayregistry-quay-config-editor deployment object by entering the following command: USD oc delete deployment quayregistry-quay-config-editor Obtain the quayregistry-quay-config-editor service object by entering the following command: USD oc get svc | grep config-editor Example output quayregistry-quay-config-editor ClusterIP 172.30.219.194 <none> 80/TCP 6h15m Remove the quayregistry-quay-config-editor service object by entering the following command: USD oc delete service quayregistry-quay-config-editor Obtain the quayregistry-quay-config-editor-credentials secret by entering the following command: USD oc get secret | grep config-editor Example output quayregistry-quay-config-editor-credentials-mb8kchfg92 Opaque 2 52m Delete the quayregistry-quay-config-editor-credentials secret by entering the following command: USD oc delete secret quayregistry-quay-config-editor-credentials-mb8kchfg92 Obtain the quayregistry-quay-config-editor pod by entering the following command: USD oc get pod Example output --- quayregistry-quay-config-editor-c866f64c4-68gtb 1/1 Running 0 49m --- Delete the quayregistry-quay-config-editor pod by entering the following command: USD oc delete pod quayregistry-quay-config-editor-c866f64c4-68gtb 2.3.1. Upgrading with custom SSL/TLS certificate/key pairs without Subject Alternative Names There is an issue for customers using their own SSL/TLS certificate/key pairs without Subject Alternative Names (SANs) when upgrading from Red Hat Quay 3.3.4 to Red Hat Quay 3.6 directly. During the upgrade to Red Hat Quay 3.6, the deployment is blocked, with the error message from the Red Hat Quay Operator pod logs indicating that the Red Hat Quay SSL/TLS certificate must have SANs. If possible, you should regenerate your SSL/TLS certificates with the correct hostname in the SANs. A possible workaround involves defining an environment variable in the quay-app , quay-upgrade and quay-config-editor pods after upgrade to enable CommonName matching: The GODEBUG=x509ignoreCN=0 flag enables the legacy behavior of treating the CommonName field on X.509 certificates as a hostname when no SANs are present. However, this workaround is not recommended, as it will not persist across a redeployment. 2.3.2. 
Changing the update channel for the Red Hat Quay Operator The subscription of an installed Operator specifies an update channel, which is used to track and receive updates for the Operator. To upgrade the Red Hat Quay Operator to start tracking and receiving updates from a newer channel, change the update channel in the Subscription tab for the installed Red Hat Quay Operator. For subscriptions with an Automatic approval strategy, the upgrade begins automatically and can be monitored on the page that lists the Installed Operators. 2.3.3. Manually approving a pending Operator upgrade If an installed Operator has the approval strategy in its subscription set to Manual , when new updates are released in its current update channel, the update must be manually approved before installation can begin. If the Red Hat Quay Operator has a pending upgrade, this status will be displayed in the list of Installed Operators. In the Subscription tab for the Red Hat Quay Operator, you can preview the install plan and review the resources that are listed as available for upgrade. If satisfied, click Approve and return to the page that lists Installed Operators to monitor the progress of the upgrade. The following image shows the Subscription tab in the UI, including the update Channel , the Approval strategy, the Upgrade status and the InstallPlan : The list of Installed Operators provides a high-level summary of the current Quay installation: 2.4. Upgrading a QuayRegistry resource When the Red Hat Quay Operator starts, it immediately looks for any QuayRegistries it can find in the namespace(s) it is configured to watch. When it finds one, the following logic is used: If status.currentVersion is unset, reconcile as normal. If status.currentVersion equals the Operator version, reconcile as normal. If status.currentVersion does not equal the Operator version, check if it can be upgraded. If it can, perform upgrade tasks and set the status.currentVersion to the Operator's version once complete. If it cannot be upgraded, return an error and leave the QuayRegistry and its deployed Kubernetes objects alone. 2.5. Upgrading a QuayEcosystem Upgrades are supported from versions of the Operator which used the QuayEcosystem API for a limited set of configurations. To ensure that migrations do not happen unexpectedly, a special label needs to be applied to the QuayEcosystem for it to be migrated. A new QuayRegistry will be created for the Operator to manage, but the old QuayEcosystem will remain until manually deleted to ensure that you can roll back and still access Quay in case anything goes wrong. To migrate an existing QuayEcosystem to a new QuayRegistry , use the following procedure. Procedure Add "quay-operator/migrate": "true" to the metadata.labels of the QuayEcosystem . USD oc edit quayecosystem <quayecosystemname> metadata: labels: quay-operator/migrate: "true" Wait for a QuayRegistry to be created with the same metadata.name as your QuayEcosystem . The QuayEcosystem will be marked with the label "quay-operator/migration-complete": "true" . After the status.registryEndpoint of the new QuayRegistry is set, access Red Hat Quay and confirm that all data and settings were migrated successfully. If everything works correctly, you can delete the QuayEcosystem and Kubernetes garbage collection will clean up all old resources. 2.5.1. 
Reverting QuayEcosystem Upgrade If something goes wrong during the automatic upgrade from QuayEcosystem to QuayRegistry , follow these steps to revert back to using the QuayEcosystem : Procedure Delete the QuayRegistry using either the UI or kubectl : USD kubectl delete -n <namespace> quayregistry <quayecosystem-name> If external access was provided using a Route , change the Route to point back to the original Service using the UI or kubectl . Note If your QuayEcosystem was managing the PostgreSQL database, the upgrade process will migrate your data to a new PostgreSQL database managed by the upgraded Operator. Your old database will not be changed or removed but Red Hat Quay will no longer use it once the migration is complete. If there are issues during the data migration, the upgrade process will exit and it is recommended that you continue with your database as an unmanaged component. 2.5.2. Supported QuayEcosystem Configurations for Upgrades The Red Hat Quay Operator reports errors in its logs and in status.conditions if migrating a QuayEcosystem component fails or is unsupported. All unmanaged components should migrate successfully because no Kubernetes resources need to be adopted and all the necessary values are already provided in Red Hat Quay's config.yaml file. Database Ephemeral database not supported ( volumeSize field must be set). Redis Nothing special needed. External Access Only passthrough Route access is supported for automatic migration. Manual migration required for other methods. LoadBalancer without custom hostname: After the QuayEcosystem is marked with label "quay-operator/migration-complete": "true" , delete the metadata.ownerReferences field from existing Service before deleting the QuayEcosystem to prevent Kubernetes from garbage collecting the Service and removing the load balancer. A new Service will be created with metadata.name format <QuayEcosystem-name>-quay-app . Edit the spec.selector of the existing Service to match the spec.selector of the new Service so traffic to the old load balancer endpoint will now be directed to the new pods. You are now responsible for the old Service ; the Quay Operator will not manage it. LoadBalancer / NodePort / Ingress with custom hostname: A new Service of type LoadBalancer will be created with metadata.name format <QuayEcosystem-name>-quay-app . Change your DNS settings to point to the status.loadBalancer endpoint provided by the new Service . Clair Nothing special needed. Object Storage QuayEcosystem did not have a managed object storage component, so object storage will always be marked as unmanaged. Local storage is not supported. Repository Mirroring Nothing special needed. | [
"oc get route",
"--- quayregistry-quay-config-editor-c866f64c4-68gtb 1/1 Running 0 49m ---",
"oc delete route quayregistry-quay-config-editor",
"oc get deployment",
"--- quayregistry-quay-config-editor ---",
"oc delete deployment quayregistry-quay-config-editor",
"oc get svc | grep config-editor",
"quayregistry-quay-config-editor ClusterIP 172.30.219.194 <none> 80/TCP 6h15m",
"oc delete service quayregistry-quay-config-editor",
"oc get secret | grep config-editor",
"quayregistry-quay-config-editor-credentials-mb8kchfg92 Opaque 2 52m",
"oc delete secret quayregistry-quay-config-editor-credentials-mb8kchfg92",
"USD oc get pod",
"--- quayregistry-quay-config-editor-c866f64c4-68gtb 1/1 Running 0 49m ---",
"oc delete pod quayregistry-quay-app-6bc4fbd456-8bc9c",
"GODEBUG=x509ignoreCN=0",
"oc edit quayecosystem <quayecosystemname>",
"metadata: labels: quay-operator/migrate: \"true\"",
"kubectl delete -n <namespace> quayregistry <quayecosystem-name>"
] | https://docs.redhat.com/en/documentation/red_hat_quay/3.10/html/upgrade_red_hat_quay/operator-upgrade |
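The channel change described in section 2.3.2 and the version check described in section 2.4 can also be performed from the CLI. The following sketch is illustrative only: the subscription name, namespace, registry name, and channel name are assumptions that you should verify against your own cluster before running anything.

# Find the Quay Operator subscription; names and namespaces vary by install.
oc get subscription -A | grep quay

# Switch the update channel (the CLI equivalent of editing the channel on
# the Subscription tab). "stable-3.10" is an example channel name.
oc patch subscription quay-operator -n openshift-operators \
  --type=merge -p '{"spec": {"channel": "stable-3.10"}}'

# After the Operator upgrades, confirm the version it reconciled into the
# QuayRegistry, as described in "Upgrading a QuayRegistry resource".
oc get quayregistry <registry_name> -n <namespace> \
  -o jsonpath='{.status.currentVersion}'

If the subscription uses the Manual approval strategy, the channel change only creates a pending install plan; you still approve it as described in "Manually approving a pending Operator upgrade".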
Appendix A. Using your subscription | Appendix A. Using your subscription Streams for Apache Kafka is provided through a software subscription. To manage your subscriptions, access your account at the Red Hat Customer Portal. Accessing Your Account Go to access.redhat.com . If you do not already have an account, create one. Log in to your account. Activating a Subscription Go to access.redhat.com . Navigate to My Subscriptions . Navigate to Activate a subscription and enter your 16-digit activation number. Downloading Zip and Tar Files To access zip or tar files, use the customer portal to find the relevant files for download. If you are using RPM packages, this step is not required. Open a browser and log in to the Red Hat Customer Portal Product Downloads page at access.redhat.com/downloads . Locate the Streams for Apache Kafka for Apache Kafka entries in the INTEGRATION AND AUTOMATION category. Select the desired Streams for Apache Kafka product. The Software Downloads page opens. Click the Download link for your component. Installing packages with DNF To install a package and all the package dependencies, use: dnf install <package_name> To install a previously-downloaded package from a local directory, use: dnf install <path_to_download_package> Revised on 2024-05-30 17:23:09 UTC | [
"dnf install <package_name>",
"dnf install <path_to_download_package>"
] | https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.7/html/using_3scale_api_management_with_the_streams_for_apache_kafka_bridge/using_your_subscription |
Chapter 9. Namespace auto-pruning architecture | Chapter 9. Namespace auto-pruning architecture For the namespace auto-pruning feature, two distinct database tables within a database schema were created: one for namespaceautoprunepolicy and another for autoprunetaskstatus . An auto-prune worker carries out the configured policies. Namespace auto prune policy database table The namespaceautoprunepolicy database table holds the policy configuration for a single namespace. There is only one entry per namespace, but there is support for multiple rows per namespace_id . The policy field holds the policy details, such as {method: "creation_date", olderThan: "2w"} or {method: "number_of_tags", numTags: 100} . Table 9.1. namespaceautoprunepolicy database table Field Type Attributes Description uuid character varying (225) Unique, indexed Unique identifier for this policy namespace_id Integer Foreign Key Namespace that the policy falls under policy text JSON Policy configuration Auto-prune task status database table The autoprunetaskstatus table registers tasks to be executed by the auto-prune worker. Tasks are executed within the context of a single namespace. Only one task per namespace exists. Table 9.2. autoprunetaskstatus database table Field Type Attributes Description namespace_id Integer Foreign Key Namespace that this task belongs to last_ran_ms Big Integer (bigint) Nullable, indexed Last time that the worker executed the policies for this namespace status text Nullable Details from the last execution task 9.1. Auto-prune worker The following sections detail information about the auto-prune worker. 9.1.1. Auto-prune-task-creation When a new policy is created in the namespaceautoprunepolicy database table, a row is also created in the autoprunetask table. This is done in the same transaction. The auto-prune worker uses the entry in the autoprunetask table to identify which namespace it should execute policies for. 9.1.2. Auto-prune worker execution The auto-pruning worker is an asynchronous job that executes configured policies. Its workflow is based on values in the autoprunetask table. When a task begins, the following occurs: The auto-prune worker starts on a set interval, which defaults to 30 seconds. The auto-prune worker selects a row from autoprunetask with the least, or null, last_ran_ms and FOR UPDATE SKIP LOCKED . A null last_ran_ms indicates that the task was never run. A task that has not been run in the longest amount of time, or has never been run at all, is prioritized. The auto-prune worker obtains the policy configuration from the namespaceautoprunepolicy table. If no policy configuration exists, the entry from autoprunetask is deleted for this namespace and the procedure stops immediately. The auto-prune worker begins a paginated loop of all repositories under the organization. The auto-prune worker determines which pruning method to use based on policy.method . The auto-prune worker executes the pruning method with the policy configuration retrieved earlier. For pruning by the number of tags: the auto-prune worker gets the currently active tags sorted by creation date, and deletes the oldest tags until only the configured number remains. For pruning by date: the auto-prune worker gets the active tags older than the specified time span, and any tags returned are deleted. The auto-prune worker adds audit logs of the tags deleted. The last_ran_ms gets updated after a row from autoprunetask is selected. The auto-prune worker ends. 
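The task-selection step in section 9.1.2 maps onto a single locking query. The PostgreSQL-style sketch below is illustrative only — the worker is implemented in Quay's application code and the exact SQL it issues is an assumption — but it shows the intent of picking the row with the least, or null, last_ran_ms while skipping rows another worker already holds:

-- Illustrative sketch: claim the least-recently-run auto-prune task.
-- NULLS FIRST prioritizes tasks that have never run at all.
SELECT namespace_id, last_ran_ms
FROM autoprunetaskstatus
ORDER BY last_ran_ms ASC NULLS FIRST
LIMIT 1
FOR UPDATE SKIP LOCKED;

Because the selected row stays locked for the duration of the worker's transaction, concurrent workers naturally spread across different namespaces instead of pruning the same one twice.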
| null | https://docs.redhat.com/en/documentation/red_hat_quay/3/html/red_hat_quay_architecture/namespace-auto-pruning-arch |
Appendix A. Technical users provided and required by Satellite | Appendix A. Technical users provided and required by Satellite During the installation of Satellite, system accounts are created. They are used to manage files and process ownership of the components integrated into Satellite. Some of these accounts have fixed UIDs and GIDs, while others take the available UID and GID on the system instead. To control the UIDs and GIDs assigned to accounts, you can define accounts before installing Satellite. Because some of the accounts have hard-coded UIDs and GIDs, it is not possible to do this with all accounts created during Satellite installation. The following table lists all the accounts created by Satellite during installation. You can predefine accounts that have Yes in the Flexible UID and GID column with custom UID and GID before installing Satellite. Do not change the home and shell directories of system accounts because they are requirements for Satellite to work correctly. Because of potential conflicts with local users that Satellite creates, you cannot use external identity providers for the system users of the Satellite base operating system. Table A.1. Technical users provided and required by Satellite User name UID Group name GID Flexible UID and GID Home Shell foreman N/A foreman N/A yes /usr/share/foreman /sbin/nologin foreman-proxy N/A foreman-proxy N/A yes /usr/share/foreman-proxy /sbin/nologin apache 48 apache 48 no /usr/share/httpd /sbin/nologin postgres 26 postgres 26 no /var/lib/pgsql /bin/bash pulp N/A pulp N/A no N/A /sbin/nologin puppet 52 puppet 52 no /opt/puppetlabs/server/data/puppetserver /sbin/nologin saslauth N/A saslauth 76 no /run/saslauthd /sbin/nologin tomcat 53 tomcat 53 no /usr/share/tomcat /bin/nologin unbound N/A unbound N/A yes /etc/unbound /sbin/nologin | null | https://docs.redhat.com/en/documentation/red_hat_satellite/6.16/html/overview_concepts_and_deployment_considerations/chap-documentation-architecture_guide-required_technical_users |
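If you want to control the UIDs and GIDs of the flexible accounts (foreman, foreman-proxy, and unbound), one option is to create them before running the Satellite installer. The following shell sketch is an example only: the numeric IDs are placeholders to replace with values from your own ID scheme, while the home directories and shells come from the table above.

# Example only: pre-create the flexible system accounts with chosen IDs.
groupadd --gid 5001 foreman
useradd --system --uid 5001 --gid foreman --home-dir /usr/share/foreman \
  --shell /sbin/nologin --no-create-home foreman
groupadd --gid 5002 foreman-proxy
useradd --system --uid 5002 --gid foreman-proxy --home-dir /usr/share/foreman-proxy \
  --shell /sbin/nologin --no-create-home foreman-proxy
# The unbound account can be prepared the same way, with /etc/unbound as its home directory.

Because the accounts already exist, the installer reuses them rather than assigning whatever UID and GID happen to be free at install time.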
Chapter 4. Debugging Serverless applications | Chapter 4. Debugging Serverless applications You can use a variety of methods to troubleshoot a Serverless application. 4.1. Checking terminal output You can check your deploy command output to see whether deployment succeeded or not. If your deployment process was terminated, you should see an error message in the output that describes the reason why the deployment failed. This kind of failure is most likely due to either a misconfigured manifest or an invalid command. Procedure Open the command output on the client where you deploy and manage your application. The following example is an error that you might see after a failed oc apply command: Error from server (InternalError): error when applying patch: {"metadata":{"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"serving.knative.dev/v1\",\"kind\":\"Route\",\"metadata\":{\"annotations\":{},\"name\":\"route-example\",\"namespace\":\"default\"},\"spec\":{\"traffic\":[{\"configurationName\":\"configuration-example\",\"percent\":50}]}}\n"}},"spec":{"traffic":[{"configurationName":"configuration-example","percent":50}]}} to: &{0xc421d98240 0xc421e77490 default route-example STDIN 0xc421db0488 264682 false} for: "STDIN": Internal error occurred: admission webhook "webhook.knative.dev" denied the request: mutation failed: The route must have traffic percent sum equal to 100. ERROR: Non-zero return code '1' from command: Process exited with status 1 This output indicates that you must configure the route traffic percent to be equal to 100. 4.2. Checking pod status You might need to check the status of your Pod object to identify the issue with your Serverless application. Procedure List all pods for your deployment by running the following command: USD oc get pods Example output NAME READY STATUS RESTARTS AGE configuration-example-00001-deployment-659747ff99-9bvr4 2/2 Running 0 3h configuration-example-00002-deployment-5f475b7849-gxcht 1/2 CrashLoopBackOff 2 36s In the output, you can see all pods with selected data about their status. View the detailed information on the status of a pod by running the following command: Example output USD oc get pod <pod_name> --output yaml In the output, the conditions and containerStatuses fields might be particularly useful for debugging. 4.3. Checking revision status You might need to check the status of your revision to identify the issue with your Serverless application. Procedure If you configure your route with a Configuration object, get the name of the Revision object created for your deployment by running the following command: USD oc get configuration <configuration_name> --output jsonpath="{.status.latestCreatedRevisionName}" You can find the configuration name in the Route.yaml file, which specifies routing settings by defining an OpenShift Route resource. If you configure your route with revision directly, look up the revision name in the Route.yaml file. Query for the status of the revision by running the following command: USD oc get revision <revision-name> --output yaml A ready revision should have the reason: ServiceReady , status: "True" , and type: Ready conditions in its status. If these conditions are present, you might want to check pod status or Istio routing. Otherwise, the resource status contains the error message. 4.3.1. Additional resources Route configuration 4.4. Checking Ingress status You might need to check the status of your Ingress to identify the issue with your Serverless application. 
Procedure Check the IP address of your Ingress by running the following command: USD oc get svc -n istio-system istio-ingressgateway The istio-ingressgateway service is the LoadBalancer service used by Knative. If there is no external IP address, run the following command: USD oc describe svc istio-ingressgateway -n istio-system This command prints the reason why IP addresses were not provisioned. Most likely, it is due to a quota issue. 4.5. Checking route status In some cases, the Route object has issues. You can check its status by using the OpenShift CLI ( oc ). Procedure View the status of the Route object with which you deployed your application by running the following command: USD oc get route <route_name> --output yaml Substitute <route_name> with the name of your Route object. The conditions object in the status object states the reason in case of a failure. 4.6. Checking Ingress and Istio routing Sometimes, when Istio is used as an Ingress layer, the Ingress and Istio routing have issues. You can see the details on them by using the OpenShift CLI ( oc ). Procedure List all Ingress resources and their corresponding labels by running the following command: USD oc get ingresses.networking.internal.knative.dev -o=custom-columns='NAME:.metadata.name,LABELS:.metadata.labels' Example output NAME LABELS helloworld-go map[serving.knative.dev/route:helloworld-go serving.knative.dev/routeNamespace:default serving.knative.dev/service:helloworld-go] In this output, labels serving.knative.dev/route and serving.knative.dev/routeNamespace indicate the Route where the Ingress resource resides. Your Route and Ingress should be listed. If your Ingress does not exist, the route controller assumes that the Revision objects targeted by your Route or Service object are not ready. Proceed with other debugging procedures to diagnose Revision readiness status. If your Ingress is listed, examine the ClusterIngress object created for your route by running the following command: USD oc get ingresses.networking.internal.knative.dev <ingress_name> --output yaml In the status section of the output, if the condition with type=Ready has the status of True , then Ingress is working correctly. Otherwise, the output contains error messages. If Ingress has the status of Ready , then there is a corresponding VirtualService object. Verify the configuration of the VirtualService object by running the following command: USD oc get virtualservice -l networking.internal.knative.dev/ingress=<ingress_name> -n <ingress_namespace> --output yaml The network configuration in the VirtualService object must match that of the Ingress and Route objects. Because the VirtualService object does not expose a Status field, you might need to wait for its settings to propagate. 4.6.1. Additional resources Maistra Service Mesh documentation | [
"Error from server (InternalError): error when applying patch: {\"metadata\":{\"annotations\":{\"kubectl.kubernetes.io/last-applied-configuration\":\"{\\\"apiVersion\\\":\\\"serving.knative.dev/v1\\\",\\\"kind\\\":\\\"Route\\\",\\\"metadata\\\":{\\\"annotations\\\":{},\\\"name\\\":\\\"route-example\\\",\\\"namespace\\\":\\\"default\\\"},\\\"spec\\\":{\\\"traffic\\\":[{\\\"configurationName\\\":\\\"configuration-example\\\",\\\"percent\\\":50}]}}\\n\"}},\"spec\":{\"traffic\":[{\"configurationName\":\"configuration-example\",\"percent\":50}]}} to: &{0xc421d98240 0xc421e77490 default route-example STDIN 0xc421db0488 264682 false} for: \"STDIN\": Internal error occurred: admission webhook \"webhook.knative.dev\" denied the request: mutation failed: The route must have traffic percent sum equal to 100. ERROR: Non-zero return code '1' from command: Process exited with status 1",
"oc get pods",
"NAME READY STATUS RESTARTS AGE configuration-example-00001-deployment-659747ff99-9bvr4 2/2 Running 0 3h configuration-example-00002-deployment-5f475b7849-gxcht 1/2 CrashLoopBackOff 2 36s",
"oc get pod <pod_name> --output yaml",
"oc get configuration <configuration_name> --output jsonpath=\"{.status.latestCreatedRevisionName}\"",
"oc get revision <revision-name> --output yaml",
"oc get svc -n istio-system istio-ingressgateway",
"oc describe svc istio-ingressgateway -n istio-system",
"oc get route <route_name> --output yaml",
"oc get ingresses.networking.internal.knative.dev -o=custom-columns='NAME:.metadata.name,LABELS:.metadata.labels'",
"NAME LABELS helloworld-go map[serving.knative.dev/route:helloworld-go serving.knative.dev/routeNamespace:default serving.knative.dev/service:helloworld-go]",
"oc get ingresses.networking.internal.knative.dev <ingress_name> --output yaml",
"oc get virtualservice -l networking.internal.knative.dev/ingress=<ingress_name> -n <ingress_namespace> --output yaml"
] | https://docs.redhat.com/en/documentation/red_hat_openshift_serverless/1.33/html/serving/debugging-serverless-applications |
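As a shortcut on top of the oc get revision <revision_name> --output yaml step in "Checking revision status", you can extract only the Ready condition, or block until it is reported. This is a sketch; <revision_name> is a placeholder and the timeout value is arbitrary.

# Print only the Ready condition's status and reason for a revision.
oc get revision <revision_name> \
  -o jsonpath='{.status.conditions[?(@.type=="Ready")].status} {.status.conditions[?(@.type=="Ready")].reason}{"\n"}'

# Or wait until the revision reports Ready, failing after the timeout.
oc wait revision/<revision_name> --for=condition=Ready --timeout=120s

The same pattern works for the Route and Ingress checks above, since those resources also publish a Ready condition in status.conditions.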
Chapter 8. Working with clusters | Chapter 8. Working with clusters 8.1. Viewing system event information in an Red Hat OpenShift Service on AWS cluster Events in Red Hat OpenShift Service on AWS are modeled based on events that happen to API objects in an Red Hat OpenShift Service on AWS cluster. 8.1.1. Understanding events Events allow Red Hat OpenShift Service on AWS to record information about real-world events in a resource-agnostic manner. They also allow developers and administrators to consume information about system components in a unified way. 8.1.2. Viewing events using the CLI You can get a list of events in a given project using the CLI. Procedure To view events in a project use the following command: USD oc get events [-n <project>] 1 1 The name of the project. For example: USD oc get events -n openshift-config Example output LAST SEEN TYPE REASON OBJECT MESSAGE 97m Normal Scheduled pod/dapi-env-test-pod Successfully assigned openshift-config/dapi-env-test-pod to ip-10-0-171-202.ec2.internal 97m Normal Pulling pod/dapi-env-test-pod pulling image "gcr.io/google_containers/busybox" 97m Normal Pulled pod/dapi-env-test-pod Successfully pulled image "gcr.io/google_containers/busybox" 97m Normal Created pod/dapi-env-test-pod Created container 9m5s Warning FailedCreatePodSandBox pod/dapi-volume-test-pod Failed create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dapi-volume-test-pod_openshift-config_6bc60c1f-452e-11e9-9140-0eec59c23068_0(748c7a40db3d08c07fb4f9eba774bd5effe5f0d5090a242432a73eee66ba9e22): Multus: Err adding pod to network "ovn-kubernetes": cannot set "ovn-kubernetes" ifname to "eth0": no netns: failed to Statfs "/proc/33366/ns/net": no such file or directory 8m31s Normal Scheduled pod/dapi-volume-test-pod Successfully assigned openshift-config/dapi-volume-test-pod to ip-10-0-171-202.ec2.internal #... To view events in your project from the Red Hat OpenShift Service on AWS console. Launch the Red Hat OpenShift Service on AWS console. Click Home Events and select your project. Move to resource that you want to see events. For example: Home Projects <project-name> <resource-name>. Many objects, such as pods and deployments, have their own Events tab as well, which shows events related to that object. 8.1.3. List of events This section describes the events of Red Hat OpenShift Service on AWS. Table 8.1. Configuration events Name Description FailedValidation Failed pod configuration validation. Table 8.2. Container events Name Description BackOff Back-off restarting failed the container. Created Container created. Failed Pull/Create/Start failed. Killing Killing the container. Started Container started. Preempting Preempting other pods. ExceededGracePeriod Container runtime did not stop the pod within specified grace period. Table 8.3. Health events Name Description Unhealthy Container is unhealthy. Table 8.4. Image events Name Description BackOff Back off Ctr Start, image pull. ErrImageNeverPull The image's NeverPull Policy is violated. Failed Failed to pull the image. InspectFailed Failed to inspect the image. Pulled Successfully pulled the image or the container image is already present on the machine. Pulling Pulling the image. Table 8.5. Image Manager events Name Description FreeDiskSpaceFailed Free disk space failed. InvalidDiskCapacity Invalid disk capacity. Table 8.6. Node events Name Description FailedMount Volume mount failed. HostNetworkNotSupported Host network not supported. HostPortConflict Host/port conflict. 
KubeletSetupFailed Kubelet setup failed. NilShaper Undefined shaper. NodeNotReady Node is not ready. NodeNotSchedulable Node is not schedulable. NodeReady Node is ready. NodeSchedulable Node is schedulable. NodeSelectorMismatching Node selector mismatch. OutOfDisk Out of disk. Rebooted Node rebooted. Starting Starting kubelet. FailedAttachVolume Failed to attach volume. FailedDetachVolume Failed to detach volume. VolumeResizeFailed Failed to expand/reduce volume. VolumeResizeSuccessful Successfully expanded/reduced volume. FileSystemResizeFailed Failed to expand/reduce file system. FileSystemResizeSuccessful Successfully expanded/reduced file system. FailedUnMount Failed to unmount volume. FailedMapVolume Failed to map a volume. FailedUnmapDevice Failed unmaped device. AlreadyMountedVolume Volume is already mounted. SuccessfulDetachVolume Volume is successfully detached. SuccessfulMountVolume Volume is successfully mounted. SuccessfulUnMountVolume Volume is successfully unmounted. ContainerGCFailed Container garbage collection failed. ImageGCFailed Image garbage collection failed. FailedNodeAllocatableEnforcement Failed to enforce System Reserved Cgroup limit. NodeAllocatableEnforced Enforced System Reserved Cgroup limit. UnsupportedMountOption Unsupported mount option. SandboxChanged Pod sandbox changed. FailedCreatePodSandBox Failed to create pod sandbox. FailedPodSandBoxStatus Failed pod sandbox status. Table 8.7. Pod worker events Name Description FailedSync Pod sync failed. Table 8.8. System Events Name Description SystemOOM There is an OOM (out of memory) situation on the cluster. Table 8.9. Pod events Name Description FailedKillPod Failed to stop a pod. FailedCreatePodContainer Failed to create a pod container. Failed Failed to make pod data directories. NetworkNotReady Network is not ready. FailedCreate Error creating: <error-msg> . SuccessfulCreate Created pod: <pod-name> . FailedDelete Error deleting: <error-msg> . SuccessfulDelete Deleted pod: <pod-id> . Table 8.10. Horizontal Pod AutoScaler events Name Description SelectorRequired Selector is required. InvalidSelector Could not convert selector into a corresponding internal selector object. FailedGetObjectMetric HPA was unable to compute the replica count. InvalidMetricSourceType Unknown metric source type. ValidMetricFound HPA was able to successfully calculate a replica count. FailedConvertHPA Failed to convert the given HPA. FailedGetScale HPA controller was unable to get the target's current scale. SucceededGetScale HPA controller was able to get the target's current scale. FailedComputeMetricsReplicas Failed to compute desired number of replicas based on listed metrics. FailedRescale New size: <size> ; reason: <msg> ; error: <error-msg> . SuccessfulRescale New size: <size> ; reason: <msg> . FailedUpdateStatus Failed to update status. Table 8.11. Volume events Name Description FailedBinding There are no persistent volumes available and no storage class is set. VolumeMismatch Volume size or class is different from what is requested in claim. VolumeFailedRecycle Error creating recycler pod. VolumeRecycled Occurs when volume is recycled. RecyclerPod Occurs when pod is recycled. VolumeDelete Occurs when volume is deleted. VolumeFailedDelete Error when deleting the volume. ExternalProvisioning Occurs when volume for the claim is provisioned either manually or via external software. ProvisioningFailed Failed to provision volume. ProvisioningCleanupFailed Error cleaning provisioned volume. 
ProvisioningSucceeded Occurs when the volume is provisioned successfully. WaitForFirstConsumer Delay binding until pod scheduling. Table 8.12. Lifecycle hooks Name Description FailedPostStartHook Handler failed for pod start. FailedPreStopHook Handler failed for pre-stop. UnfinishedPreStopHook Pre-stop hook unfinished. Table 8.13. Deployments Name Description DeploymentCancellationFailed Failed to cancel deployment. DeploymentCancelled Canceled deployment. DeploymentCreated Created new replication controller. IngressIPRangeFull No available Ingress IP to allocate to service. Table 8.14. Scheduler events Name Description FailedScheduling Failed to schedule pod: <pod-namespace>/<pod-name> . This event is raised for multiple reasons, for example: AssumePodVolumes failed, Binding rejected etc. Preempted By <preemptor-namespace>/<preemptor-name> on node <node-name> . Scheduled Successfully assigned <pod-name> to <node-name> . Table 8.15. Daemon set events Name Description SelectingAll This daemon set is selecting all pods. A non-empty selector is required. FailedPlacement Failed to place pod on <node-name> . FailedDaemonPod Found failed daemon pod <pod-name> on node <node-name> , will try to kill it. Table 8.16. LoadBalancer service events Name Description CreatingLoadBalancerFailed Error creating load balancer. DeletingLoadBalancer Deleting load balancer. EnsuringLoadBalancer Ensuring load balancer. EnsuredLoadBalancer Ensured load balancer. UnAvailableLoadBalancer There are no available nodes for LoadBalancer service. LoadBalancerSourceRanges Lists the new LoadBalancerSourceRanges . For example, <old-source-range> <new-source-range> . LoadbalancerIP Lists the new IP address. For example, <old-ip> <new-ip> . ExternalIP Lists external IP address. For example, Added: <external-ip> . UID Lists the new UID. For example, <old-service-uid> <new-service-uid> . ExternalTrafficPolicy Lists the new ExternalTrafficPolicy . For example, <old-policy> <new-policy> . HealthCheckNodePort Lists the new HealthCheckNodePort . For example, <old-node-port> new-node-port> . UpdatedLoadBalancer Updated load balancer with new hosts. LoadBalancerUpdateFailed Error updating load balancer with new hosts. DeletingLoadBalancer Deleting load balancer. DeletingLoadBalancerFailed Error deleting load balancer. DeletedLoadBalancer Deleted load balancer. 8.2. Estimating the number of pods your Red Hat OpenShift Service on AWS nodes can hold As a cluster administrator, you can use the OpenShift Cluster Capacity Tool to view the number of pods that can be scheduled to increase the current resources before they become exhausted, and to ensure any future pods can be scheduled. This capacity comes from an individual node host in a cluster, and includes CPU, memory, disk space, and others. 8.2.1. Understanding the OpenShift Cluster Capacity Tool The OpenShift Cluster Capacity Tool simulates a sequence of scheduling decisions to determine how many instances of an input pod can be scheduled on the cluster before it is exhausted of resources to provide a more accurate estimation. Note The remaining allocatable capacity is a rough estimation, because it does not count all of the resources being distributed among nodes. It analyzes only the remaining resources and estimates the available capacity that is still consumable in terms of a number of instances of a pod with given requirements that can be scheduled in a cluster. 
Also, pods might only have scheduling support on particular sets of nodes based on its selection and affinity criteria. As a result, the estimation of which remaining pods a cluster can schedule can be difficult. You can run the OpenShift Cluster Capacity Tool as a stand-alone utility from the command line, or as a job in a pod inside an Red Hat OpenShift Service on AWS cluster. Running the tool as job inside of a pod enables you to run it multiple times without intervention. 8.2.2. Running the OpenShift Cluster Capacity Tool on the command line You can run the OpenShift Cluster Capacity Tool from the command line to estimate the number of pods that can be scheduled onto your cluster. You create a sample pod spec file, which the tool uses for estimating resource usage. The pod spec specifies its resource requirements as limits or requests . The cluster capacity tool takes the pod's resource requirements into account for its estimation analysis. Prerequisites Run the OpenShift Cluster Capacity Tool , which is available as a container image from the Red Hat Ecosystem Catalog. Create a sample pod spec file: Create a YAML file similar to the following: apiVersion: v1 kind: Pod metadata: name: small-pod labels: app: guestbook tier: frontend spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault containers: - name: php-redis image: gcr.io/google-samples/gb-frontend:v4 imagePullPolicy: Always resources: limits: cpu: 150m memory: 100Mi requests: cpu: 150m memory: 100Mi securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL] Create the cluster role: USD oc create -f <file_name>.yaml For example: USD oc create -f pod-spec.yaml Procedure To use the cluster capacity tool on the command line: From the terminal, log in to the Red Hat Registry: USD podman login registry.redhat.io Pull the cluster capacity tool image: USD podman pull registry.redhat.io/openshift4/ose-cluster-capacity Run the cluster capacity tool: USD podman run -v USDHOME/.kube:/kube:Z -v USD(pwd):/cc:Z ose-cluster-capacity \ /bin/cluster-capacity --kubeconfig /kube/config --<pod_spec>.yaml /cc/<pod_spec>.yaml \ --verbose where: <pod_spec>.yaml Specifies the pod spec to use. verbose Outputs a detailed description of how many pods can be scheduled on each node in the cluster. Example output small-pod pod requirements: - CPU: 150m - Memory: 100Mi The cluster can schedule 88 instance(s) of the pod small-pod. Termination reason: Unschedulable: 0/5 nodes are available: 2 Insufficient cpu, 3 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate. Pod distribution among nodes: small-pod - 192.168.124.214: 45 instance(s) - 192.168.124.120: 43 instance(s) In the above example, the number of estimated pods that can be scheduled onto the cluster is 88. 8.2.3. Running the OpenShift Cluster Capacity Tool as a job inside a pod Running the OpenShift Cluster Capacity Tool as a job inside of a pod allows you to run the tool multiple times without needing user intervention. You run the OpenShift Cluster Capacity Tool as a job by using a ConfigMap object. Prerequisites Download and install OpenShift Cluster Capacity Tool . 
Procedure To run the cluster capacity tool: Create the cluster role: Create a YAML file similar to the following: kind: ClusterRole apiVersion: rbac.authorization.k8s.io/v1 metadata: name: cluster-capacity-role rules: - apiGroups: [""] resources: ["pods", "nodes", "persistentvolumeclaims", "persistentvolumes", "services", "replicationcontrollers"] verbs: ["get", "watch", "list"] - apiGroups: ["apps"] resources: ["replicasets", "statefulsets"] verbs: ["get", "watch", "list"] - apiGroups: ["policy"] resources: ["poddisruptionbudgets"] verbs: ["get", "watch", "list"] - apiGroups: ["storage.k8s.io"] resources: ["storageclasses"] verbs: ["get", "watch", "list"] Create the cluster role by running the following command: USD oc create -f <file_name>.yaml For example: USD oc create sa cluster-capacity-sa Create the service account: USD oc create sa cluster-capacity-sa -n default Add the role to the service account: USD oc adm policy add-cluster-role-to-user cluster-capacity-role \ system:serviceaccount:<namespace>:cluster-capacity-sa where: <namespace> Specifies the namespace where the pod is located. Define and create the pod spec: Create a YAML file similar to the following: apiVersion: v1 kind: Pod metadata: name: small-pod labels: app: guestbook tier: frontend spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault containers: - name: php-redis image: gcr.io/google-samples/gb-frontend:v4 imagePullPolicy: Always resources: limits: cpu: 150m memory: 100Mi requests: cpu: 150m memory: 100Mi securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL] Create the pod by running the following command: USD oc create -f <file_name>.yaml For example: USD oc create -f pod.yaml Created a config map object by running the following command: USD oc create configmap cluster-capacity-configmap \ --from-file=pod.yaml=pod.yaml The cluster capacity analysis is mounted in a volume using a config map object named cluster-capacity-configmap to mount the input pod spec file pod.yaml into a volume test-volume at the path /test-pod . Create the job using the below example of a job specification file: Create a YAML file similar to the following: apiVersion: batch/v1 kind: Job metadata: name: cluster-capacity-job spec: parallelism: 1 completions: 1 template: metadata: name: cluster-capacity-pod spec: containers: - name: cluster-capacity image: openshift/origin-cluster-capacity imagePullPolicy: "Always" volumeMounts: - mountPath: /test-pod name: test-volume env: - name: CC_INCLUSTER 1 value: "true" command: - "/bin/sh" - "-ec" - | /bin/cluster-capacity --podspec=/test-pod/pod.yaml --verbose restartPolicy: "Never" serviceAccountName: cluster-capacity-sa volumes: - name: test-volume configMap: name: cluster-capacity-configmap 1 A required environment variable letting the cluster capacity tool know that it is running inside a cluster as a pod. The pod.yaml key of the ConfigMap object is the same as the Pod spec file name, though it is not required. By doing this, the input pod spec file can be accessed inside the pod as /test-pod/pod.yaml . Run the cluster capacity image as a job in a pod by running the following command: USD oc create -f cluster-capacity-job.yaml Verification Check the job logs to find the number of pods that can be scheduled in the cluster: USD oc logs jobs/cluster-capacity-job Example output small-pod pod requirements: - CPU: 150m - Memory: 100Mi The cluster can schedule 52 instance(s) of the pod small-pod. 
Termination reason: Unschedulable: No nodes are available that match all of the following predicates:: Insufficient cpu (2). Pod distribution among nodes: small-pod - 192.168.124.214: 26 instance(s) - 192.168.124.120: 26 instance(s) 8.3. Restrict resource consumption with limit ranges By default, containers run with unbounded compute resources on an Red Hat OpenShift Service on AWS cluster. With limit ranges, you can restrict resource consumption for specific objects in a project: pods and containers: You can set minimum and maximum requirements for CPU and memory for pods and their containers. Image streams: You can set limits on the number of images and tags in an ImageStream object. Images: You can limit the size of images that can be pushed to an internal registry. Persistent volume claims (PVC): You can restrict the size of the PVCs that can be requested. If a pod does not meet the constraints imposed by the limit range, the pod cannot be created in the namespace. 8.3.1. About limit ranges A limit range, defined by a LimitRange object, restricts resource consumption in a project. In the project you can set specific resource limits for a pod, container, image, image stream, or persistent volume claim (PVC). All requests to create and modify resources are evaluated against each LimitRange object in the project. If the resource violates any of the enumerated constraints, the resource is rejected. The following shows a limit range object for all components: pod, container, image, image stream, or PVC. You can configure limits for any or all of these components in the same object. You create a different limit range object for each project where you want to control resources. Sample limit range object for a container apiVersion: "v1" kind: "LimitRange" metadata: name: "resource-limits" spec: limits: - type: "Container" max: cpu: "2" memory: "1Gi" min: cpu: "100m" memory: "4Mi" default: cpu: "300m" memory: "200Mi" defaultRequest: cpu: "200m" memory: "100Mi" maxLimitRequestRatio: cpu: "10" 8.3.1.1. About component limits The following examples show limit range parameters for each component. The examples are broken out for clarity. You can create a single LimitRange object for any or all components as necessary. 8.3.1.1.1. Container limits A limit range allows you to specify the minimum and maximum CPU and memory that each container in a pod can request for a specific project. If a container is created in the project, the container CPU and memory requests in the Pod spec must comply with the values set in the LimitRange object. If not, the pod does not get created. The container CPU or memory request and limit must be greater than or equal to the min resource constraint for containers that are specified in the LimitRange object. The container CPU or memory request and limit must be less than or equal to the max resource constraint for containers that are specified in the LimitRange object. If the LimitRange object defines a max CPU, you do not need to define a CPU request value in the Pod spec. But you must specify a CPU limit value that satisfies the maximum CPU constraint specified in the limit range. The ratio of the container limits to requests must be less than or equal to the maxLimitRequestRatio value for containers that is specified in the LimitRange object. If the LimitRange object defines a maxLimitRequestRatio constraint, any new containers must have both a request and a limit value. 
Red Hat OpenShift Service on AWS calculates the limit-to-request ratio by dividing the limit by the request . This value should be a non-negative integer greater than 1. For example, if a container has cpu: 500 in the limit value, and cpu: 100 in the request value, the limit-to-request ratio for cpu is 5 . This ratio must be less than or equal to the maxLimitRequestRatio . If the Pod spec does not specify a container resource memory or limit, the default or defaultRequest CPU and memory values for containers specified in the limit range object are assigned to the container. Container LimitRange object definition apiVersion: "v1" kind: "LimitRange" metadata: name: "resource-limits" 1 spec: limits: - type: "Container" max: cpu: "2" 2 memory: "1Gi" 3 min: cpu: "100m" 4 memory: "4Mi" 5 default: cpu: "300m" 6 memory: "200Mi" 7 defaultRequest: cpu: "200m" 8 memory: "100Mi" 9 maxLimitRequestRatio: cpu: "10" 10 1 The name of the LimitRange object. 2 The maximum amount of CPU that a single container in a pod can request. 3 The maximum amount of memory that a single container in a pod can request. 4 The minimum amount of CPU that a single container in a pod can request. 5 The minimum amount of memory that a single container in a pod can request. 6 The default amount of CPU that a container can use if not specified in the Pod spec. 7 The default amount of memory that a container can use if not specified in the Pod spec. 8 The default amount of CPU that a container can request if not specified in the Pod spec. 9 The default amount of memory that a container can request if not specified in the Pod spec. 10 The maximum limit-to-request ratio for a container. 8.3.1.1.2. Pod limits A limit range allows you to specify the minimum and maximum CPU and memory limits for all containers across a pod in a given project. To create a container in the project, the container CPU and memory requests in the Pod spec must comply with the values set in the LimitRange object. If not, the pod does not get created. If the Pod spec does not specify a container resource memory or limit, the default or defaultRequest CPU and memory values for containers specified in the limit range object are assigned to the container. Across all containers in a pod, the following must hold true: The container CPU or memory request and limit must be greater than or equal to the min resource constraints for pods that are specified in the LimitRange object. The container CPU or memory request and limit must be less than or equal to the max resource constraints for pods that are specified in the LimitRange object. The ratio of the container limits to requests must be less than or equal to the maxLimitRequestRatio constraint specified in the LimitRange object. Pod LimitRange object definition apiVersion: "v1" kind: "LimitRange" metadata: name: "resource-limits" 1 spec: limits: - type: "Pod" max: cpu: "2" 2 memory: "1Gi" 3 min: cpu: "200m" 4 memory: "6Mi" 5 maxLimitRequestRatio: cpu: "10" 6 1 The name of the limit range object. 2 The maximum amount of CPU that a pod can request across all containers. 3 The maximum amount of memory that a pod can request across all containers. 4 The minimum amount of CPU that a pod can request across all containers. 5 The minimum amount of memory that a pod can request across all containers. 6 The maximum limit-to-request ratio for a container. 8.3.1.1.3. Image limits A LimitRange object allows you to specify the maximum size of an image that can be pushed to an OpenShift image registry. 
When pushing images to an OpenShift image registry, the following must hold true: The size of the image must be less than or equal to the max size for images that is specified in the LimitRange object. Image LimitRange object definition apiVersion: "v1" kind: "LimitRange" metadata: name: "resource-limits" 1 spec: limits: - type: openshift.io/Image max: storage: 1Gi 2 1 The name of the LimitRange object. 2 The maximum size of an image that can be pushed to an OpenShift image registry. Warning The image size is not always available in the manifest of an uploaded image. This is especially the case for images built with Docker 1.10 or higher and pushed to a v2 registry. If such an image is pulled with an older Docker daemon, the image manifest is converted by the registry to schema v1 lacking all the size information. No storage limit set on images prevent it from being uploaded. The issue is being addressed. 8.3.1.1.4. Image stream limits A LimitRange object allows you to specify limits for image streams. For each image stream, the following must hold true: The number of image tags in an ImageStream specification must be less than or equal to the openshift.io/image-tags constraint in the LimitRange object. The number of unique references to images in an ImageStream specification must be less than or equal to the openshift.io/images constraint in the limit range object. Imagestream LimitRange object definition apiVersion: "v1" kind: "LimitRange" metadata: name: "resource-limits" 1 spec: limits: - type: openshift.io/ImageStream max: openshift.io/image-tags: 20 2 openshift.io/images: 30 3 1 The name of the LimitRange object. 2 The maximum number of unique image tags in the imagestream.spec.tags parameter in imagestream spec. 3 The maximum number of unique image references in the imagestream.status.tags parameter in the imagestream spec. The openshift.io/image-tags resource represents unique image references. Possible references are an ImageStreamTag , an ImageStreamImage and a DockerImage . Tags can be created using the oc tag and oc import-image commands. No distinction is made between internal and external references. However, each unique reference tagged in an ImageStream specification is counted just once. It does not restrict pushes to an internal container image registry in any way, but is useful for tag restriction. The openshift.io/images resource represents unique image names recorded in image stream status. It allows for restriction of a number of images that can be pushed to the OpenShift image registry. Internal and external references are not distinguished. 8.3.1.1.5. Persistent volume claim limits A LimitRange object allows you to restrict the storage requested in a persistent volume claim (PVC). Across all persistent volume claims in a project, the following must hold true: The resource request in a persistent volume claim (PVC) must be greater than or equal the min constraint for PVCs that is specified in the LimitRange object. The resource request in a persistent volume claim (PVC) must be less than or equal the max constraint for PVCs that is specified in the LimitRange object. PVC LimitRange object definition apiVersion: "v1" kind: "LimitRange" metadata: name: "resource-limits" 1 spec: limits: - type: "PersistentVolumeClaim" min: storage: "2Gi" 2 max: storage: "50Gi" 3 1 The name of the LimitRange object. 2 The minimum amount of storage that can be requested in a persistent volume claim. 3 The maximum amount of storage that can be requested in a persistent volume claim. 8.3.2. 
Creating a Limit Range To apply a limit range to a project: Create a LimitRange object with your required specifications: apiVersion: "v1" kind: "LimitRange" metadata: name: "resource-limits" 1 spec: limits: - type: "Pod" 2 max: cpu: "2" memory: "1Gi" min: cpu: "200m" memory: "6Mi" - type: "Container" 3 max: cpu: "2" memory: "1Gi" min: cpu: "100m" memory: "4Mi" default: 4 cpu: "300m" memory: "200Mi" defaultRequest: 5 cpu: "200m" memory: "100Mi" maxLimitRequestRatio: 6 cpu: "10" - type: openshift.io/Image 7 max: storage: 1Gi - type: openshift.io/ImageStream 8 max: openshift.io/image-tags: 20 openshift.io/images: 30 - type: "PersistentVolumeClaim" 9 min: storage: "2Gi" max: storage: "50Gi" 1 Specify a name for the LimitRange object. 2 To set limits for a pod, specify the minimum and maximum CPU and memory requests as needed. 3 To set limits for a container, specify the minimum and maximum CPU and memory requests as needed. 4 Optional. For a container, specify the default amount of CPU or memory that a container can use, if not specified in the Pod spec. 5 Optional. For a container, specify the default amount of CPU or memory that a container can request, if not specified in the Pod spec. 6 Optional. For a container, specify the maximum limit-to-request ratio that can be specified in the Pod spec. 7 To set limits for an Image object, set the maximum size of an image that can be pushed to an OpenShift image registry. 8 To set limits for an image stream, set the maximum number of image tags and references that can be in the ImageStream object file, as needed. 9 To set limits for a persistent volume claim, set the minimum and maximum amount of storage that can be requested. Create the object: USD oc create -f <limit_range_file> -n <project> 1 1 Specify the name of the YAML file you created and the project where you want the limits to apply. 8.3.3. Viewing a limit You can view any limits defined in a project by navigating in the web console to the project's Quota page. You can also use the CLI to view limit range details: Get the list of LimitRange object defined in the project. For example, for a project called demoproject : USD oc get limits -n demoproject NAME CREATED AT resource-limits 2020-07-15T17:14:23Z Describe the LimitRange object you are interested in, for example the resource-limits limit range: USD oc describe limits resource-limits -n demoproject Name: resource-limits Namespace: demoproject Type Resource Min Max Default Request Default Limit Max Limit/Request Ratio ---- -------- --- --- --------------- ------------- ----------------------- Pod cpu 200m 2 - - - Pod memory 6Mi 1Gi - - - Container cpu 100m 2 200m 300m 10 Container memory 4Mi 1Gi 100Mi 200Mi - openshift.io/Image storage - 1Gi - - - openshift.io/ImageStream openshift.io/image - 12 - - - openshift.io/ImageStream openshift.io/image-tags - 10 - - - PersistentVolumeClaim storage - 50Gi - - - 8.3.4. Deleting a Limit Range To remove any active LimitRange object to no longer enforce the limits in a project: Run the following command: USD oc delete limits <limit_name> 8.4. Configuring cluster memory to meet container memory and risk requirements As a cluster administrator, you can help your clusters operate efficiently through managing application memory by: Determining the memory and risk requirements of a containerized application component and configuring the container memory parameters to suit those requirements. 
Configuring containerized application runtimes (for example, OpenJDK) to adhere optimally to the configured container memory parameters. Diagnosing and resolving memory-related error conditions associated with running in a container. 8.4.1. Understanding managing application memory It is recommended to fully read the overview of how Red Hat OpenShift Service on AWS manages Compute Resources before proceeding. For each kind of resource (memory, CPU, storage), Red Hat OpenShift Service on AWS allows optional request and limit values to be placed on each container in a pod. Note the following about memory requests and memory limits: Memory request The memory request value, if specified, influences the Red Hat OpenShift Service on AWS scheduler. The scheduler considers the memory request when scheduling a container to a node, then fences off the requested memory on the chosen node for the use of the container. If a node's memory is exhausted, Red Hat OpenShift Service on AWS prioritizes evicting its containers whose memory usage most exceeds their memory request. In serious cases of memory exhaustion, the node OOM killer may select and kill a process in a container based on a similar metric. The cluster administrator can assign quota or assign default values for the memory request value. The cluster administrator can override the memory request values that a developer specifies, to manage cluster overcommit. Memory limit The memory limit value, if specified, provides a hard limit on the memory that can be allocated across all the processes in a container. If the memory allocated by all of the processes in a container exceeds the memory limit, the node Out of Memory (OOM) killer will immediately select and kill a process in the container. If both memory request and limit are specified, the memory limit value must be greater than or equal to the memory request. The cluster administrator can assign quota or assign default values for the memory limit value. The minimum memory limit is 12 MB. If a container fails to start due to a Cannot allocate memory pod event, the memory limit is too low. Either increase or remove the memory limit. Removing the limit allows pods to consume unbounded node resources. 8.4.1.1. Managing application memory strategy The steps for sizing application memory on Red Hat OpenShift Service on AWS are as follows: Determine expected container memory usage Determine expected mean and peak container memory usage, empirically if necessary (for example, by separate load testing). Remember to consider all the processes that may potentially run in parallel in the container: for example, does the main application spawn any ancillary scripts? Determine risk appetite Determine risk appetite for eviction. If the risk appetite is low, the container should request memory according to the expected peak usage plus a percentage safety margin. If the risk appetite is higher, it may be more appropriate to request memory according to the expected mean usage. Set container memory request Set container memory request based on the above. The more accurately the request represents the application memory usage, the better. If the request is too high, cluster and quota usage will be inefficient. If the request is too low, the chances of application eviction increase. Set container memory limit, if required Set container memory limit, if required. 
Setting a limit has the effect of immediately killing a container process if the combined memory usage of all processes in the container exceeds the limit, and is therefore a mixed blessing. On the one hand, it may make unanticipated excess memory usage obvious early ("fail fast"); on the other hand it also terminates processes abruptly. Note that some Red Hat OpenShift Service on AWS clusters may require a limit value to be set; some may override the request based on the limit; and some application images rely on a limit value being set as this is easier to detect than a request value. If the memory limit is set, it should not be set to less than the expected peak container memory usage plus a percentage safety margin. Ensure application is tuned Ensure application is tuned with respect to configured request and limit values, if appropriate. This step is particularly relevant to applications which pool memory, such as the JVM. The rest of this page discusses this. 8.4.2. Understanding OpenJDK settings for Red Hat OpenShift Service on AWS The default OpenJDK settings do not work well with containerized environments. As a result, some additional Java memory settings must always be provided whenever running the OpenJDK in a container. The JVM memory layout is complex, version dependent, and describing it in detail is beyond the scope of this documentation. However, as a starting point for running OpenJDK in a container, at least the following three memory-related tasks are key: Overriding the JVM maximum heap size. Encouraging the JVM to release unused memory to the operating system, if appropriate. Ensuring all JVM processes within a container are appropriately configured. Optimally tuning JVM workloads for running in a container is beyond the scope of this documentation, and may involve setting multiple additional JVM options. 8.4.2.1. Understanding how to override the JVM maximum heap size For many Java workloads, the JVM heap is the largest single consumer of memory. Currently, the OpenJDK defaults to allowing up to 1/4 (1/ -XX:MaxRAMFraction ) of the compute node's memory to be used for the heap, regardless of whether the OpenJDK is running in a container or not. It is therefore essential to override this behavior, especially if a container memory limit is also set. There are at least two ways the above can be achieved: If the container memory limit is set and the experimental options are supported by the JVM, set -XX:+UnlockExperimentalVMOptions -XX:+UseCGroupMemoryLimitForHeap . Note The UseCGroupMemoryLimitForHeap option has been removed in JDK 11. Use -XX:+UseContainerSupport instead. This sets -XX:MaxRAM to the container memory limit, and the maximum heap size ( -XX:MaxHeapSize / -Xmx ) to 1/ -XX:MaxRAMFraction (1/4 by default). Directly override one of -XX:MaxRAM , -XX:MaxHeapSize or -Xmx . This option involves hard-coding a value, but has the advantage of allowing a safety margin to be calculated. 8.4.2.2. Understanding how to encourage the JVM to release unused memory to the operating system By default, the OpenJDK does not aggressively return unused memory to the operating system. This may be appropriate for many containerized Java workloads, but notable exceptions include workloads where additional active processes co-exist with a JVM within a container, whether those additional processes are native, additional JVMs, or a combination of the two. 
Java-based agents can use the following JVM arguments to encourage the JVM to release unused memory to the operating system: -XX:+UseParallelGC -XX:MinHeapFreeRatio=5 -XX:MaxHeapFreeRatio=10 -XX:GCTimeRatio=4 -XX:AdaptiveSizePolicyWeight=90. These arguments are intended to return heap memory to the operating system whenever allocated memory exceeds 110% of in-use memory ( -XX:MaxHeapFreeRatio ), spending up to 20% of CPU time in the garbage collector ( -XX:GCTimeRatio ). At no time will the application heap allocation be less than the initial heap allocation (overridden by -XX:InitialHeapSize / -Xms ). Detailed additional information is available at Tuning Java's footprint in OpenShift (Part 1) , Tuning Java's footprint in OpenShift (Part 2) , and OpenJDK and Containers . 8.4.2.3. Understanding how to ensure all JVM processes within a container are appropriately configured In the case that multiple JVMs run in the same container, it is essential to ensure that they are all configured appropriately. For many workloads it will be necessary to grant each JVM a percentage memory budget, leaving a perhaps substantial additional safety margin. Many Java tools use different environment variables ( JAVA_OPTS , GRADLE_OPTS , and so on) to configure their JVMs and it can be challenging to ensure that the right settings are being passed to the right JVM. The JAVA_TOOL_OPTIONS environment variable is always respected by the OpenJDK, and values specified in JAVA_TOOL_OPTIONS will be overridden by other options specified on the JVM command line. To ensure that these options are used by default for all JVM workloads run in the Java-based agent image, the Red Hat OpenShift Service on AWS Jenkins Maven agent image sets: JAVA_TOOL_OPTIONS="-XX:+UnlockExperimentalVMOptions -XX:+UseCGroupMemoryLimitForHeap -Dsun.zip.disableMemoryMapping=true" Note The UseCGroupMemoryLimitForHeap option has been removed in JDK 11. Use -XX:+UseContainerSupport instead. This does not guarantee that additional options are not required, but is intended to be a helpful starting point. 8.4.3. Finding the memory request and limit from within a pod An application wishing to dynamically discover its memory request and limit from within a pod should use the Downward API. Procedure Configure the pod to add the MEMORY_REQUEST and MEMORY_LIMIT stanzas: Create a YAML file similar to the following: apiVersion: v1 kind: Pod metadata: name: test spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault containers: - name: test image: fedora:latest command: - sleep - "3600" env: - name: MEMORY_REQUEST 1 valueFrom: resourceFieldRef: containerName: test resource: requests.memory - name: MEMORY_LIMIT 2 valueFrom: resourceFieldRef: containerName: test resource: limits.memory resources: requests: memory: 384Mi limits: memory: 512Mi securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL] 1 Add this stanza to discover the application memory request value. 2 Add this stanza to discover the application memory limit value. Create the pod by running the following command: USD oc create -f <file_name>.yaml Verification Access the pod using a remote shell: USD oc rsh test Check that the requested values were applied: USD env | grep MEMORY | sort Example output MEMORY_LIMIT=536870912 MEMORY_REQUEST=402653184 Note The memory limit value can also be read from inside the container by the /sys/fs/cgroup/memory/memory.limit_in_bytes file.
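Tying the JVM guidance above to the container limits shown in this pod, a workload can pass container-aware heap settings through the JAVA_TOOL_OPTIONS variable. This is a sketch of the newer approach referenced in the notes above: the flags are standard OpenJDK options, but the 75% value is an assumption to be tuned per workload, and the snippet is illustrative rather than taken from this procedure:

env:
- name: JAVA_TOOL_OPTIONS
  value: "-XX:+UseContainerSupport -XX:MaxRAMPercentage=75.0"

With the 512Mi container memory limit shown in the pod above, this would cap the maximum heap at roughly 384Mi, leaving headroom for metaspace, thread stacks, and other native allocations.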
8.4.4. Understanding OOM kill policy Red Hat OpenShift Service on AWS can kill a process in a container if the total memory usage of all the processes in the container exceeds the memory limit, or in serious cases of node memory exhaustion. When a process is Out of Memory (OOM) killed, this might result in the container exiting immediately. If the container PID 1 process receives the SIGKILL , the container will exit immediately. Otherwise, the container behavior is dependent on the behavior of the other processes. For example, a container process that exits with code 137 indicates that it received a SIGKILL signal. If the container does not exit immediately, an OOM kill is detectable as follows: Access the pod using a remote shell: USD oc rsh test Run the following command to see the current OOM kill count in /sys/fs/cgroup/memory/memory.oom_control : USD grep '^oom_kill ' /sys/fs/cgroup/memory/memory.oom_control Example output oom_kill 0 Run the following command to provoke an OOM kill: USD sed -e '' </dev/zero Example output Killed Run the following command to view the exit status of the sed command: USD echo USD? Example output 137 The 137 code indicates that the container process received a SIGKILL signal. Run the following command to see that the OOM kill counter in /sys/fs/cgroup/memory/memory.oom_control incremented: USD grep '^oom_kill ' /sys/fs/cgroup/memory/memory.oom_control Example output oom_kill 1 If one or more processes in a pod are OOM killed, when the pod subsequently exits, whether immediately or not, it will have phase Failed and reason OOMKilled . An OOM-killed pod might be restarted depending on the value of restartPolicy . If not restarted, controllers such as the replication controller will notice the pod's failed status and create a new pod to replace the old one. Use the following command to get the pod status: USD oc get pod test Example output NAME READY STATUS RESTARTS AGE test 0/1 OOMKilled 0 1m If the pod has not restarted, run the following command to view the pod: USD oc get pod test -o yaml Example output ... status: containerStatuses: - name: test ready: false restartCount: 0 state: terminated: exitCode: 137 reason: OOMKilled phase: Failed If restarted, run the following command to view the pod: USD oc get pod test -o yaml Example output ... status: containerStatuses: - name: test ready: true restartCount: 1 lastState: terminated: exitCode: 137 reason: OOMKilled state: running: phase: Running 8.4.5. Understanding pod eviction Red Hat OpenShift Service on AWS may evict a pod from its node when the node's memory is exhausted. Depending on the extent of memory exhaustion, the eviction may or may not be graceful. Graceful eviction implies the main process (PID 1) of each container receiving a SIGTERM signal, then some time later a SIGKILL signal if the process has not exited already. Non-graceful eviction implies the main process of each container immediately receiving a SIGKILL signal. An evicted pod has phase Failed and reason Evicted . It will not be restarted, regardless of the value of restartPolicy . However, controllers such as the replication controller will notice the pod's failed status and create a new pod to replace the old one. USD oc get pod test Example output NAME READY STATUS RESTARTS AGE test 0/1 Evicted 0 1m USD oc get pod test -o yaml Example output ... status: message: 'Pod The node was low on resource: [MemoryPressure].' phase: Failed reason: Evicted
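As a convenience that is not part of the original steps, OOM-killed and evicted pods can usually be listed together by filtering on the Failed phase; the project name here is a placeholder:

USD oc get pods -n <project> --field-selector=status.phase=Failed

The STATUS column then distinguishes OOMKilled from Evicted, matching the reason fields shown in the output above.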
8.5. Configuring your cluster to place pods on overcommitted nodes In an overcommitted state, the sum of the container compute resource requests and limits exceeds the resources available on the system. For example, you might want to use overcommitment in development environments where a trade-off of guaranteed performance for capacity is acceptable. Containers can specify compute resource requests and limits. Requests are used for scheduling your container and provide a minimum service guarantee. Limits constrain the amount of compute resource that can be consumed on your node. The scheduler attempts to optimize the compute resource use across all nodes in your cluster. It places pods onto specific nodes, taking the pods' compute resource requests and nodes' available capacity into consideration. Red Hat OpenShift Service on AWS administrators can manage container density on nodes by configuring pod placement behavior and per-project resource limits that overcommit cannot exceed. Alternatively, administrators can disable project-level resource overcommitment on customer-created namespaces that are not managed by Red Hat. For more information about container resource management, see Additional resources. 8.5.1. Project-level limits In Red Hat OpenShift Service on AWS, overcommitment of project-level resources is enabled by default. If required by your use case, you can disable overcommitment on projects that are not managed by Red Hat. For the list of projects that are managed by Red Hat and cannot be modified, see "Red Hat Managed resources" in Support . 8.5.1.1. Disabling overcommitment for a project If required by your use case, you can disable overcommitment on any project that is not managed by Red Hat. For a list of projects that cannot be modified, see "Red Hat Managed resources" in Support . Prerequisites You are logged in to the cluster using an account with cluster administrator or cluster editor permissions. Procedure Edit the namespace object file: If you are using the web console: Click Administration → Namespaces and click the namespace for the project. In the Annotations section, click the Edit button. Click Add more and enter a new annotation that uses a Key of quota.openshift.io/cluster-resource-override-enabled and a Value of false . Click Save . If you are using the ROSA CLI ( rosa ): Edit the namespace: USD rosa edit namespace/<project_name> Add the following annotation: apiVersion: v1 kind: Namespace metadata: annotations: quota.openshift.io/cluster-resource-override-enabled: "false" 1 # ... 1 Setting this annotation to false disables overcommit for this namespace. 8.5.2. Additional resources Restrict resource consumption with limit ranges Red Hat Managed resources | [
"oc get events [-n <project>] 1",
"oc get events -n openshift-config",
"LAST SEEN TYPE REASON OBJECT MESSAGE 97m Normal Scheduled pod/dapi-env-test-pod Successfully assigned openshift-config/dapi-env-test-pod to ip-10-0-171-202.ec2.internal 97m Normal Pulling pod/dapi-env-test-pod pulling image \"gcr.io/google_containers/busybox\" 97m Normal Pulled pod/dapi-env-test-pod Successfully pulled image \"gcr.io/google_containers/busybox\" 97m Normal Created pod/dapi-env-test-pod Created container 9m5s Warning FailedCreatePodSandBox pod/dapi-volume-test-pod Failed create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dapi-volume-test-pod_openshift-config_6bc60c1f-452e-11e9-9140-0eec59c23068_0(748c7a40db3d08c07fb4f9eba774bd5effe5f0d5090a242432a73eee66ba9e22): Multus: Err adding pod to network \"ovn-kubernetes\": cannot set \"ovn-kubernetes\" ifname to \"eth0\": no netns: failed to Statfs \"/proc/33366/ns/net\": no such file or directory 8m31s Normal Scheduled pod/dapi-volume-test-pod Successfully assigned openshift-config/dapi-volume-test-pod to ip-10-0-171-202.ec2.internal #",
"apiVersion: v1 kind: Pod metadata: name: small-pod labels: app: guestbook tier: frontend spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault containers: - name: php-redis image: gcr.io/google-samples/gb-frontend:v4 imagePullPolicy: Always resources: limits: cpu: 150m memory: 100Mi requests: cpu: 150m memory: 100Mi securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL]",
"oc create -f <file_name>.yaml",
"oc create -f pod-spec.yaml",
"podman login registry.redhat.io",
"podman pull registry.redhat.io/openshift4/ose-cluster-capacity",
"podman run -v USDHOME/.kube:/kube:Z -v USD(pwd):/cc:Z ose-cluster-capacity /bin/cluster-capacity --kubeconfig /kube/config --<pod_spec>.yaml /cc/<pod_spec>.yaml --verbose",
"small-pod pod requirements: - CPU: 150m - Memory: 100Mi The cluster can schedule 88 instance(s) of the pod small-pod. Termination reason: Unschedulable: 0/5 nodes are available: 2 Insufficient cpu, 3 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate. Pod distribution among nodes: small-pod - 192.168.124.214: 45 instance(s) - 192.168.124.120: 43 instance(s)",
"kind: ClusterRole apiVersion: rbac.authorization.k8s.io/v1 metadata: name: cluster-capacity-role rules: - apiGroups: [\"\"] resources: [\"pods\", \"nodes\", \"persistentvolumeclaims\", \"persistentvolumes\", \"services\", \"replicationcontrollers\"] verbs: [\"get\", \"watch\", \"list\"] - apiGroups: [\"apps\"] resources: [\"replicasets\", \"statefulsets\"] verbs: [\"get\", \"watch\", \"list\"] - apiGroups: [\"policy\"] resources: [\"poddisruptionbudgets\"] verbs: [\"get\", \"watch\", \"list\"] - apiGroups: [\"storage.k8s.io\"] resources: [\"storageclasses\"] verbs: [\"get\", \"watch\", \"list\"]",
"oc create -f <file_name>.yaml",
"oc create sa cluster-capacity-sa",
"oc create sa cluster-capacity-sa -n default",
"oc adm policy add-cluster-role-to-user cluster-capacity-role system:serviceaccount:<namespace>:cluster-capacity-sa",
"apiVersion: v1 kind: Pod metadata: name: small-pod labels: app: guestbook tier: frontend spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault containers: - name: php-redis image: gcr.io/google-samples/gb-frontend:v4 imagePullPolicy: Always resources: limits: cpu: 150m memory: 100Mi requests: cpu: 150m memory: 100Mi securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL]",
"oc create -f <file_name>.yaml",
"oc create -f pod.yaml",
"oc create configmap cluster-capacity-configmap --from-file=pod.yaml=pod.yaml",
"apiVersion: batch/v1 kind: Job metadata: name: cluster-capacity-job spec: parallelism: 1 completions: 1 template: metadata: name: cluster-capacity-pod spec: containers: - name: cluster-capacity image: openshift/origin-cluster-capacity imagePullPolicy: \"Always\" volumeMounts: - mountPath: /test-pod name: test-volume env: - name: CC_INCLUSTER 1 value: \"true\" command: - \"/bin/sh\" - \"-ec\" - | /bin/cluster-capacity --podspec=/test-pod/pod.yaml --verbose restartPolicy: \"Never\" serviceAccountName: cluster-capacity-sa volumes: - name: test-volume configMap: name: cluster-capacity-configmap",
"oc create -f cluster-capacity-job.yaml",
"oc logs jobs/cluster-capacity-job",
"small-pod pod requirements: - CPU: 150m - Memory: 100Mi The cluster can schedule 52 instance(s) of the pod small-pod. Termination reason: Unschedulable: No nodes are available that match all of the following predicates:: Insufficient cpu (2). Pod distribution among nodes: small-pod - 192.168.124.214: 26 instance(s) - 192.168.124.120: 26 instance(s)",
"apiVersion: \"v1\" kind: \"LimitRange\" metadata: name: \"resource-limits\" spec: limits: - type: \"Container\" max: cpu: \"2\" memory: \"1Gi\" min: cpu: \"100m\" memory: \"4Mi\" default: cpu: \"300m\" memory: \"200Mi\" defaultRequest: cpu: \"200m\" memory: \"100Mi\" maxLimitRequestRatio: cpu: \"10\"",
"apiVersion: \"v1\" kind: \"LimitRange\" metadata: name: \"resource-limits\" 1 spec: limits: - type: \"Container\" max: cpu: \"2\" 2 memory: \"1Gi\" 3 min: cpu: \"100m\" 4 memory: \"4Mi\" 5 default: cpu: \"300m\" 6 memory: \"200Mi\" 7 defaultRequest: cpu: \"200m\" 8 memory: \"100Mi\" 9 maxLimitRequestRatio: cpu: \"10\" 10",
"apiVersion: \"v1\" kind: \"LimitRange\" metadata: name: \"resource-limits\" 1 spec: limits: - type: \"Pod\" max: cpu: \"2\" 2 memory: \"1Gi\" 3 min: cpu: \"200m\" 4 memory: \"6Mi\" 5 maxLimitRequestRatio: cpu: \"10\" 6",
"apiVersion: \"v1\" kind: \"LimitRange\" metadata: name: \"resource-limits\" 1 spec: limits: - type: openshift.io/Image max: storage: 1Gi 2",
"apiVersion: \"v1\" kind: \"LimitRange\" metadata: name: \"resource-limits\" 1 spec: limits: - type: openshift.io/ImageStream max: openshift.io/image-tags: 20 2 openshift.io/images: 30 3",
"apiVersion: \"v1\" kind: \"LimitRange\" metadata: name: \"resource-limits\" 1 spec: limits: - type: \"PersistentVolumeClaim\" min: storage: \"2Gi\" 2 max: storage: \"50Gi\" 3",
"apiVersion: \"v1\" kind: \"LimitRange\" metadata: name: \"resource-limits\" 1 spec: limits: - type: \"Pod\" 2 max: cpu: \"2\" memory: \"1Gi\" min: cpu: \"200m\" memory: \"6Mi\" - type: \"Container\" 3 max: cpu: \"2\" memory: \"1Gi\" min: cpu: \"100m\" memory: \"4Mi\" default: 4 cpu: \"300m\" memory: \"200Mi\" defaultRequest: 5 cpu: \"200m\" memory: \"100Mi\" maxLimitRequestRatio: 6 cpu: \"10\" - type: openshift.io/Image 7 max: storage: 1Gi - type: openshift.io/ImageStream 8 max: openshift.io/image-tags: 20 openshift.io/images: 30 - type: \"PersistentVolumeClaim\" 9 min: storage: \"2Gi\" max: storage: \"50Gi\"",
"oc create -f <limit_range_file> -n <project> 1",
"oc get limits -n demoproject",
"NAME CREATED AT resource-limits 2020-07-15T17:14:23Z",
"oc describe limits resource-limits -n demoproject",
"Name: resource-limits Namespace: demoproject Type Resource Min Max Default Request Default Limit Max Limit/Request Ratio ---- -------- --- --- --------------- ------------- ----------------------- Pod cpu 200m 2 - - - Pod memory 6Mi 1Gi - - - Container cpu 100m 2 200m 300m 10 Container memory 4Mi 1Gi 100Mi 200Mi - openshift.io/Image storage - 1Gi - - - openshift.io/ImageStream openshift.io/image - 12 - - - openshift.io/ImageStream openshift.io/image-tags - 10 - - - PersistentVolumeClaim storage - 50Gi - - -",
"oc delete limits <limit_name>",
"-XX:+UseParallelGC -XX:MinHeapFreeRatio=5 -XX:MaxHeapFreeRatio=10 -XX:GCTimeRatio=4 -XX:AdaptiveSizePolicyWeight=90.",
"JAVA_TOOL_OPTIONS=\"-XX:+UnlockExperimentalVMOptions -XX:+UseCGroupMemoryLimitForHeap -Dsun.zip.disableMemoryMapping=true\"",
"apiVersion: v1 kind: Pod metadata: name: test spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault containers: - name: test image: fedora:latest command: - sleep - \"3600\" env: - name: MEMORY_REQUEST 1 valueFrom: resourceFieldRef: containerName: test resource: requests.memory - name: MEMORY_LIMIT 2 valueFrom: resourceFieldRef: containerName: test resource: limits.memory resources: requests: memory: 384Mi limits: memory: 512Mi securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL]",
"oc create -f <file_name>.yaml",
"oc rsh test",
"env | grep MEMORY | sort",
"MEMORY_LIMIT=536870912 MEMORY_REQUEST=402653184",
"oc rsh test",
"grep '^oom_kill ' /sys/fs/cgroup/memory/memory.oom_control",
"oom_kill 0",
"sed -e '' </dev/zero",
"Killed",
"echo USD?",
"137",
"grep '^oom_kill ' /sys/fs/cgroup/memory/memory.oom_control",
"oom_kill 1",
"oc get pod test",
"NAME READY STATUS RESTARTS AGE test 0/1 OOMKilled 0 1m",
"oc get pod test -o yaml",
"status: containerStatuses: - name: test ready: false restartCount: 0 state: terminated: exitCode: 137 reason: OOMKilled phase: Failed",
"oc get pod test -o yaml",
"status: containerStatuses: - name: test ready: true restartCount: 1 lastState: terminated: exitCode: 137 reason: OOMKilled state: running: phase: Running",
"oc get pod test",
"NAME READY STATUS RESTARTS AGE test 0/1 Evicted 0 1m",
"oc get pod test -o yaml",
"status: message: 'Pod The node was low on resource: [MemoryPressure].' phase: Failed reason: Evicted",
"rosa edit namespace/<project_name>",
"apiVersion: v1 kind: Namespace metadata: annotations: quota.openshift.io/cluster-resource-override-enabled: \"false\" <.>"
] | https://docs.redhat.com/en/documentation/red_hat_openshift_service_on_aws/4/html/nodes/working-with-clusters |
Making open source more inclusive | Making open source more inclusive Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright's message . | null | https://docs.redhat.com/en/documentation/red_hat_update_infrastructure/4/html/installing_red_hat_update_infrastructure/making-open-source-more-inclusive |