title (string, lengths 4–168) | content (string, lengths 7–1.74M) | commands (sequence, lengths 1–5.62k) | url (string, lengths 79–342) |
---|---|---|---|
Chapter 1. Prerequisites | Chapter 1. Prerequisites Red Hat Enterprise Linux (RHEL) 9 To obtain the latest version of Red Hat Enterprise Linux (RHEL) 9, see Download Red Hat Enterprise Linux. For installation instructions, see the Product Documentation for Red Hat Enterprise Linux 9. An active subscription to Red Hat Two or more virtual CPUs 4 GB or more of RAM Approximately 30 GB of disk space on your test system, which can be broken down as follows: Approximately 10 GB of disk space for the Red Hat Enterprise Linux (RHEL) operating system. Approximately 10 GB of disk space for Docker storage for running three containers. Approximately 10 GB of disk space for Red Hat Quay local storage. Note Ceph or other local storage might require more memory. More information on sizing can be found at Quay 3.x Sizing Guidelines. The following architectures are supported for Red Hat Quay: amd64/x86_64 s390x ppc64le 1.1. Installing Podman This document uses Podman for creating and deploying containers. For more information on Podman and related technologies, see Building, running, and managing Linux containers on Red Hat Enterprise Linux 9. Important If you do not have Podman installed on your system, the use of equivalent Docker commands might be possible; however, this is not recommended. Docker has not been tested with Red Hat Quay 3.12 and will be deprecated in a future release. Podman is recommended for highly available, production-quality deployments of Red Hat Quay 3.12. Use the following procedure to install Podman. Procedure Enter the following command to install Podman: $ sudo yum install -y podman Alternatively, you can install the container-tools module, which pulls in the full set of container software packages: $ sudo yum module install -y container-tools A brief verification sketch follows this entry. | [
"sudo yum install -y podman",
"sudo yum module install -y container-tools"
] | https://docs.redhat.com/en/documentation/red_hat_quay/3.12/html/proof_of_concept_-_deploying_red_hat_quay/poc-prerequisites |
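A minimal verification sketch, assuming Podman is already on the PATH and the test system can reach registry.access.redhat.com; the ubi9 image is used only as an illustrative test image and is not required by Red Hat Quay itself:
$ podman --version
$ podman info --format '{{.Store.GraphRoot}}'
$ df -h /
$ podman run --rm registry.access.redhat.com/ubi9/ubi echo "Podman is working"
The first two commands confirm the Podman version and the storage location that will hold container images; the df output can be checked against the 30 GB disk-space guidance above.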
Chapter 3. The Security Audit Logger | Chapter 3. The Security Audit Logger Red Hat JBoss Data Grid includes a logger that audits security decisions for the cache, recording whether a cache or cache manager operation was allowed or denied. The default audit logger is org.infinispan.security.impl.DefaultAuditLogger. This logger writes audit entries through the available logging framework (for example, JBoss Logging) at the TRACE level under the AUDIT category. To send the AUDIT category to a log file, a JMS queue, or a database, use the appropriate log appender. 3.1. Configure the Security Audit Logger (Library Mode) Use the following to declaratively configure the audit logger in Red Hat JBoss Data Grid: Use the following to programmatically configure the audit logger in JBoss Data Grid: | [
"<infinispan> <global-security> <authorization audit-logger = \"org.infinispan.security.impl.DefaultAuditLogger\"> </authorization> </global-security> </infinispan>",
"GlobalConfigurationBuilder global = new GlobalConfigurationBuilder(); global.security() .authorization() .auditLogger(new DefaultAuditLogger());"
] | https://docs.redhat.com/en/documentation/red_hat_data_grid/6.6/html/security_guide/chap-the_security_audit_logger |
7.2. RHBA-2013:1607 - new packages: gcc-libraries | 7.2. RHBA-2013:1607 - new packages: gcc-libraries New gcc-libraries packages are now available for Red Hat Enterprise Linux 6. The gcc-libraries packages contain various GCC runtime libraries, such as libatomic and libitm. In Red Hat Enterprise Linux 5.9, the libitm library shipped in a separate libitm package; that package is now deprecated and replaced by the gcc-libraries packages. This enhancement update adds the gcc-libraries packages to Red Hat Enterprise Linux 6. (BZ#906241) All users who require these libraries are advised to install the new packages; a sample install command follows this entry. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.5_technical_notes/rhba-2013-1607 |
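A minimal install sketch, assuming a Red Hat Enterprise Linux 6 system registered to the appropriate repositories; the package name is taken from the erratum above:
$ sudo yum install -y gcc-libraries
$ rpm -q gcc-libraries
The rpm query simply confirms that the new packages are present after installation.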
Support | Support OpenShift Dedicated 4 OpenShift Dedicated Support. Red Hat OpenShift Documentation Team | [
"oc api-resources -o name | grep config.openshift.io",
"oc explain <resource_name>.config.openshift.io",
"oc get <resource_name>.config -o yaml",
"oc edit <resource_name>.config -o yaml",
"oc get clusterversion -o jsonpath='{.items[].spec.clusterID}{\"\\n\"}'",
"curl -G -k -H \"Authorization: Bearer USD(oc whoami -t)\" https://USD(oc get route prometheus-k8s-federate -n openshift-monitoring -o jsonpath=\"{.spec.host}\")/federate --data-urlencode 'match[]={__name__=~\"cluster:usage:.*\"}' --data-urlencode 'match[]={__name__=\"count:up0\"}' --data-urlencode 'match[]={__name__=\"count:up1\"}' --data-urlencode 'match[]={__name__=\"cluster_version\"}' --data-urlencode 'match[]={__name__=\"cluster_version_available_updates\"}' --data-urlencode 'match[]={__name__=\"cluster_version_capability\"}' --data-urlencode 'match[]={__name__=\"cluster_operator_up\"}' --data-urlencode 'match[]={__name__=\"cluster_operator_conditions\"}' --data-urlencode 'match[]={__name__=\"cluster_version_payload\"}' --data-urlencode 'match[]={__name__=\"cluster_installer\"}' --data-urlencode 'match[]={__name__=\"cluster_infrastructure_provider\"}' --data-urlencode 'match[]={__name__=\"cluster_feature_set\"}' --data-urlencode 'match[]={__name__=\"instance:etcd_object_counts:sum\"}' --data-urlencode 'match[]={__name__=\"ALERTS\",alertstate=\"firing\"}' --data-urlencode 'match[]={__name__=\"code:apiserver_request_total:rate:sum\"}' --data-urlencode 'match[]={__name__=\"cluster:capacity_cpu_cores:sum\"}' --data-urlencode 'match[]={__name__=\"cluster:capacity_memory_bytes:sum\"}' --data-urlencode 'match[]={__name__=\"cluster:cpu_usage_cores:sum\"}' --data-urlencode 'match[]={__name__=\"cluster:memory_usage_bytes:sum\"}' --data-urlencode 'match[]={__name__=\"openshift:cpu_usage_cores:sum\"}' --data-urlencode 'match[]={__name__=\"openshift:memory_usage_bytes:sum\"}' --data-urlencode 'match[]={__name__=\"workload:cpu_usage_cores:sum\"}' --data-urlencode 'match[]={__name__=\"workload:memory_usage_bytes:sum\"}' --data-urlencode 'match[]={__name__=\"cluster:virt_platform_nodes:sum\"}' --data-urlencode 'match[]={__name__=\"cluster:node_instance_type_count:sum\"}' --data-urlencode 'match[]={__name__=\"cnv:vmi_status_running:count\"}' --data-urlencode 'match[]={__name__=\"cluster:vmi_request_cpu_cores:sum\"}' --data-urlencode 'match[]={__name__=\"node_role_os_version_machine:cpu_capacity_cores:sum\"}' --data-urlencode 'match[]={__name__=\"node_role_os_version_machine:cpu_capacity_sockets:sum\"}' --data-urlencode 'match[]={__name__=\"subscription_sync_total\"}' --data-urlencode 'match[]={__name__=\"olm_resolution_duration_seconds\"}' --data-urlencode 'match[]={__name__=\"csv_succeeded\"}' --data-urlencode 'match[]={__name__=\"csv_abnormal\"}' --data-urlencode 'match[]={__name__=\"cluster:kube_persistentvolumeclaim_resource_requests_storage_bytes:provisioner:sum\"}' --data-urlencode 'match[]={__name__=\"cluster:kubelet_volume_stats_used_bytes:provisioner:sum\"}' --data-urlencode 'match[]={__name__=\"ceph_cluster_total_bytes\"}' --data-urlencode 'match[]={__name__=\"ceph_cluster_total_used_raw_bytes\"}' --data-urlencode 'match[]={__name__=\"ceph_health_status\"}' --data-urlencode 'match[]={__name__=\"odf_system_raw_capacity_total_bytes\"}' --data-urlencode 'match[]={__name__=\"odf_system_raw_capacity_used_bytes\"}' --data-urlencode 'match[]={__name__=\"odf_system_health_status\"}' --data-urlencode 'match[]={__name__=\"job:ceph_osd_metadata:count\"}' --data-urlencode 'match[]={__name__=\"job:kube_pv:count\"}' --data-urlencode 'match[]={__name__=\"job:odf_system_pvs:count\"}' --data-urlencode 'match[]={__name__=\"job:ceph_pools_iops:total\"}' --data-urlencode 'match[]={__name__=\"job:ceph_pools_iops_bytes:total\"}' --data-urlencode 'match[]={__name__=\"job:ceph_versions_running:count\"}' 
--data-urlencode 'match[]={__name__=\"job:noobaa_total_unhealthy_buckets:sum\"}' --data-urlencode 'match[]={__name__=\"job:noobaa_bucket_count:sum\"}' --data-urlencode 'match[]={__name__=\"job:noobaa_total_object_count:sum\"}' --data-urlencode 'match[]={__name__=\"odf_system_bucket_count\", system_type=\"OCS\", system_vendor=\"Red Hat\"}' --data-urlencode 'match[]={__name__=\"odf_system_objects_total\", system_type=\"OCS\", system_vendor=\"Red Hat\"}' --data-urlencode 'match[]={__name__=\"noobaa_accounts_num\"}' --data-urlencode 'match[]={__name__=\"noobaa_total_usage\"}' --data-urlencode 'match[]={__name__=\"console_url\"}' --data-urlencode 'match[]={__name__=\"cluster:ovnkube_master_egress_routing_via_host:max\"}' --data-urlencode 'match[]={__name__=\"cluster:network_attachment_definition_instances:max\"}' --data-urlencode 'match[]={__name__=\"cluster:network_attachment_definition_enabled_instance_up:max\"}' --data-urlencode 'match[]={__name__=\"cluster:ingress_controller_aws_nlb_active:sum\"}' --data-urlencode 'match[]={__name__=\"cluster:route_metrics_controller_routes_per_shard:min\"}' --data-urlencode 'match[]={__name__=\"cluster:route_metrics_controller_routes_per_shard:max\"}' --data-urlencode 'match[]={__name__=\"cluster:route_metrics_controller_routes_per_shard:avg\"}' --data-urlencode 'match[]={__name__=\"cluster:route_metrics_controller_routes_per_shard:median\"}' --data-urlencode 'match[]={__name__=\"cluster:openshift_route_info:tls_termination:sum\"}' --data-urlencode 'match[]={__name__=\"insightsclient_request_send_total\"}' --data-urlencode 'match[]={__name__=\"cam_app_workload_migrations\"}' --data-urlencode 'match[]={__name__=\"cluster:apiserver_current_inflight_requests:sum:max_over_time:2m\"}' --data-urlencode 'match[]={__name__=\"cluster:alertmanager_integrations:max\"}' --data-urlencode 'match[]={__name__=\"cluster:telemetry_selected_series:count\"}' --data-urlencode 'match[]={__name__=\"openshift:prometheus_tsdb_head_series:sum\"}' --data-urlencode 'match[]={__name__=\"openshift:prometheus_tsdb_head_samples_appended_total:sum\"}' --data-urlencode 'match[]={__name__=\"monitoring:container_memory_working_set_bytes:sum\"}' --data-urlencode 'match[]={__name__=\"namespace_job:scrape_series_added:topk3_sum1h\"}' --data-urlencode 'match[]={__name__=\"namespace_job:scrape_samples_post_metric_relabeling:topk3\"}' --data-urlencode 'match[]={__name__=\"monitoring:haproxy_server_http_responses_total:sum\"}' --data-urlencode 'match[]={__name__=\"rhmi_status\"}' --data-urlencode 'match[]={__name__=\"status:upgrading:version:rhoam_state:max\"}' --data-urlencode 'match[]={__name__=\"state:rhoam_critical_alerts:max\"}' --data-urlencode 'match[]={__name__=\"state:rhoam_warning_alerts:max\"}' --data-urlencode 'match[]={__name__=\"rhoam_7d_slo_percentile:max\"}' --data-urlencode 'match[]={__name__=\"rhoam_7d_slo_remaining_error_budget:max\"}' --data-urlencode 'match[]={__name__=\"cluster_legacy_scheduler_policy\"}' --data-urlencode 'match[]={__name__=\"cluster_master_schedulable\"}' --data-urlencode 'match[]={__name__=\"che_workspace_status\"}' --data-urlencode 'match[]={__name__=\"che_workspace_started_total\"}' --data-urlencode 'match[]={__name__=\"che_workspace_failure_total\"}' --data-urlencode 'match[]={__name__=\"che_workspace_start_time_seconds_sum\"}' --data-urlencode 'match[]={__name__=\"che_workspace_start_time_seconds_count\"}' --data-urlencode 'match[]={__name__=\"cco_credentials_mode\"}' --data-urlencode 
'match[]={__name__=\"cluster:kube_persistentvolume_plugin_type_counts:sum\"}' --data-urlencode 'match[]={__name__=\"visual_web_terminal_sessions_total\"}' --data-urlencode 'match[]={__name__=\"acm_managed_cluster_info\"}' --data-urlencode 'match[]={__name__=\"cluster:vsphere_vcenter_info:sum\"}' --data-urlencode 'match[]={__name__=\"cluster:vsphere_esxi_version_total:sum\"}' --data-urlencode 'match[]={__name__=\"cluster:vsphere_node_hw_version_total:sum\"}' --data-urlencode 'match[]={__name__=\"openshift:build_by_strategy:sum\"}' --data-urlencode 'match[]={__name__=\"rhods_aggregate_availability\"}' --data-urlencode 'match[]={__name__=\"rhods_total_users\"}' --data-urlencode 'match[]={__name__=\"instance:etcd_disk_wal_fsync_duration_seconds:histogram_quantile\",quantile=\"0.99\"}' --data-urlencode 'match[]={__name__=\"instance:etcd_mvcc_db_total_size_in_bytes:sum\"}' --data-urlencode 'match[]={__name__=\"instance:etcd_network_peer_round_trip_time_seconds:histogram_quantile\",quantile=\"0.99\"}' --data-urlencode 'match[]={__name__=\"instance:etcd_mvcc_db_total_size_in_use_in_bytes:sum\"}' --data-urlencode 'match[]={__name__=\"instance:etcd_disk_backend_commit_duration_seconds:histogram_quantile\",quantile=\"0.99\"}' --data-urlencode 'match[]={__name__=\"jaeger_operator_instances_storage_types\"}' --data-urlencode 'match[]={__name__=\"jaeger_operator_instances_strategies\"}' --data-urlencode 'match[]={__name__=\"jaeger_operator_instances_agent_strategies\"}' --data-urlencode 'match[]={__name__=\"appsvcs:cores_by_product:sum\"}' --data-urlencode 'match[]={__name__=\"nto_custom_profiles:count\"}' --data-urlencode 'match[]={__name__=\"openshift_csi_share_configmap\"}' --data-urlencode 'match[]={__name__=\"openshift_csi_share_secret\"}' --data-urlencode 'match[]={__name__=\"openshift_csi_share_mount_failures_total\"}' --data-urlencode 'match[]={__name__=\"openshift_csi_share_mount_requests_total\"}' --data-urlencode 'match[]={__name__=\"cluster:velero_backup_total:max\"}' --data-urlencode 'match[]={__name__=\"cluster:velero_restore_total:max\"}' --data-urlencode 'match[]={__name__=\"eo_es_storage_info\"}' --data-urlencode 'match[]={__name__=\"eo_es_redundancy_policy_info\"}' --data-urlencode 'match[]={__name__=\"eo_es_defined_delete_namespaces_total\"}' --data-urlencode 'match[]={__name__=\"eo_es_misconfigured_memory_resources_info\"}' --data-urlencode 'match[]={__name__=\"cluster:eo_es_data_nodes_total:max\"}' --data-urlencode 'match[]={__name__=\"cluster:eo_es_documents_created_total:sum\"}' --data-urlencode 'match[]={__name__=\"cluster:eo_es_documents_deleted_total:sum\"}' --data-urlencode 'match[]={__name__=\"pod:eo_es_shards_total:max\"}' --data-urlencode 'match[]={__name__=\"eo_es_cluster_management_state_info\"}' --data-urlencode 'match[]={__name__=\"imageregistry:imagestreamtags_count:sum\"}' --data-urlencode 'match[]={__name__=\"imageregistry:operations_count:sum\"}' --data-urlencode 'match[]={__name__=\"log_logging_info\"}' --data-urlencode 'match[]={__name__=\"log_collector_error_count_total\"}' --data-urlencode 'match[]={__name__=\"log_forwarder_pipeline_info\"}' --data-urlencode 'match[]={__name__=\"log_forwarder_input_info\"}' --data-urlencode 'match[]={__name__=\"log_forwarder_output_info\"}' --data-urlencode 'match[]={__name__=\"cluster:log_collected_bytes_total:sum\"}' --data-urlencode 'match[]={__name__=\"cluster:log_logged_bytes_total:sum\"}' --data-urlencode 'match[]={__name__=\"cluster:kata_monitor_running_shim_count:sum\"}' --data-urlencode 
'match[]={__name__=\"platform:hypershift_hostedclusters:max\"}' --data-urlencode 'match[]={__name__=\"platform:hypershift_nodepools:max\"}' --data-urlencode 'match[]={__name__=\"namespace:noobaa_unhealthy_bucket_claims:max\"}' --data-urlencode 'match[]={__name__=\"namespace:noobaa_buckets_claims:max\"}' --data-urlencode 'match[]={__name__=\"namespace:noobaa_unhealthy_namespace_resources:max\"}' --data-urlencode 'match[]={__name__=\"namespace:noobaa_namespace_resources:max\"}' --data-urlencode 'match[]={__name__=\"namespace:noobaa_unhealthy_namespace_buckets:max\"}' --data-urlencode 'match[]={__name__=\"namespace:noobaa_namespace_buckets:max\"}' --data-urlencode 'match[]={__name__=\"namespace:noobaa_accounts:max\"}' --data-urlencode 'match[]={__name__=\"namespace:noobaa_usage:max\"}' --data-urlencode 'match[]={__name__=\"namespace:noobaa_system_health_status:max\"}' --data-urlencode 'match[]={__name__=\"ocs_advanced_feature_usage\"}' --data-urlencode 'match[]={__name__=\"os_image_url_override:sum\"}' --data-urlencode 'match[]={__name__=\"openshift:openshift_network_operator_ipsec_state:info\"}'",
"apiVersion: v1 kind: ConfigMap data: config.yaml: | dataReporting: obfuscation: - workload_names",
"oc adm must-gather --image=registry.redhat.io/container-native-virtualization/cnv-must-gather-rhel9:v4.18.0",
"oc adm must-gather -- /usr/bin/gather_audit_logs",
"NAMESPACE NAME READY STATUS RESTARTS AGE openshift-must-gather-5drcj must-gather-bklx4 2/2 Running 0 72s openshift-must-gather-5drcj must-gather-s8sdh 2/2 Running 0 72s",
"oc adm must-gather --run-namespace <namespace> --image=registry.redhat.io/container-native-virtualization/cnv-must-gather-rhel9:v4.18.0",
"oc adm must-gather",
"tar cvaf must-gather.tar.gz must-gather.local.5421342344627712289/ 1",
"oc adm must-gather --image-stream=openshift/must-gather \\ 1 --image=registry.redhat.io/container-native-virtualization/cnv-must-gather-rhel9:v4.18.0 2",
"oc adm must-gather --image=USD(oc -n openshift-logging get deployment.apps/cluster-logging-operator -o jsonpath='{.spec.template.spec.containers[?(@.name == \"cluster-logging-operator\")].image}')",
"βββ cluster-logging β βββ clo β β βββ cluster-logging-operator-74dd5994f-6ttgt β β βββ clusterlogforwarder_cr β β βββ cr β β βββ csv β β βββ deployment β β βββ logforwarding_cr β βββ collector β β βββ fluentd-2tr64 β βββ curator β β βββ curator-1596028500-zkz4s β βββ eo β β βββ csv β β βββ deployment β β βββ elasticsearch-operator-7dc7d97b9d-jb4r4 β βββ es β β βββ cluster-elasticsearch β β β βββ aliases β β β βββ health β β β βββ indices β β β βββ latest_documents.json β β β βββ nodes β β β βββ nodes_stats.json β β β βββ thread_pool β β βββ cr β β βββ elasticsearch-cdm-lp8l38m0-1-794d6dd989-4jxms β β βββ logs β β βββ elasticsearch-cdm-lp8l38m0-1-794d6dd989-4jxms β βββ install β β βββ co_logs β β βββ install_plan β β βββ olmo_logs β β βββ subscription β βββ kibana β βββ cr β βββ kibana-9d69668d4-2rkvz βββ cluster-scoped-resources β βββ core β βββ nodes β β βββ ip-10-0-146-180.eu-west-1.compute.internal.yaml β βββ persistentvolumes β βββ pvc-0a8d65d9-54aa-4c44-9ecc-33d9381e41c1.yaml βββ event-filter.html βββ gather-debug.log βββ namespaces βββ openshift-logging β βββ apps β β βββ daemonsets.yaml β β βββ deployments.yaml β β βββ replicasets.yaml β β βββ statefulsets.yaml β βββ batch β β βββ cronjobs.yaml β β βββ jobs.yaml β βββ core β β βββ configmaps.yaml β β βββ endpoints.yaml β β βββ events β β β βββ curator-1596021300-wn2ks.162634ebf0055a94.yaml β β β βββ curator.162638330681bee2.yaml β β β βββ elasticsearch-delete-app-1596020400-gm6nl.1626341a296c16a1.yaml β β β βββ elasticsearch-delete-audit-1596020400-9l9n4.1626341a2af81bbd.yaml β β β βββ elasticsearch-delete-infra-1596020400-v98tk.1626341a2d821069.yaml β β β βββ elasticsearch-rollover-app-1596020400-cc5vc.1626341a3019b238.yaml β β β βββ elasticsearch-rollover-audit-1596020400-s8d5s.1626341a31f7b315.yaml β β β βββ elasticsearch-rollover-infra-1596020400-7mgv8.1626341a35ea59ed.yaml β β βββ events.yaml β β βββ persistentvolumeclaims.yaml β β βββ pods.yaml β β βββ replicationcontrollers.yaml β β βββ secrets.yaml β β βββ services.yaml β βββ openshift-logging.yaml β βββ pods β β βββ cluster-logging-operator-74dd5994f-6ttgt β β β βββ cluster-logging-operator β β β β βββ cluster-logging-operator β β β β βββ logs β β β β βββ current.log β β β β βββ previous.insecure.log β β β β βββ previous.log β β β βββ cluster-logging-operator-74dd5994f-6ttgt.yaml β β βββ cluster-logging-operator-registry-6df49d7d4-mxxff β β β βββ cluster-logging-operator-registry β β β β βββ cluster-logging-operator-registry β β β β βββ logs β β β β βββ current.log β β β β βββ previous.insecure.log β β β β βββ previous.log β β β βββ cluster-logging-operator-registry-6df49d7d4-mxxff.yaml β β β βββ mutate-csv-and-generate-sqlite-db β β β βββ mutate-csv-and-generate-sqlite-db β β β βββ logs β β β βββ current.log β β β βββ previous.insecure.log β β β βββ previous.log β β βββ curator-1596028500-zkz4s β β βββ elasticsearch-cdm-lp8l38m0-1-794d6dd989-4jxms β β βββ elasticsearch-delete-app-1596030300-bpgcx β β β βββ elasticsearch-delete-app-1596030300-bpgcx.yaml β β β βββ indexmanagement β β β βββ indexmanagement β β β βββ logs β β β βββ current.log β β β βββ previous.insecure.log β β β βββ previous.log β β βββ fluentd-2tr64 β β β βββ fluentd β β β β βββ fluentd β β β β βββ logs β β β β βββ current.log β β β β βββ previous.insecure.log β β β β βββ previous.log β β β βββ fluentd-2tr64.yaml β β β βββ fluentd-init β β β βββ fluentd-init β β β βββ logs β β β βββ current.log β β β βββ previous.insecure.log β β β βββ previous.log β β βββ kibana-9d69668d4-2rkvz β β β βββ kibana β β β β 
βββ kibana β β β β βββ logs β β β β βββ current.log β β β β βββ previous.insecure.log β β β β βββ previous.log β β β βββ kibana-9d69668d4-2rkvz.yaml β β β βββ kibana-proxy β β β βββ kibana-proxy β β β βββ logs β β β βββ current.log β β β βββ previous.insecure.log β β β βββ previous.log β βββ route.openshift.io β βββ routes.yaml βββ openshift-operators-redhat βββ",
"oc adm must-gather --image-stream=openshift/must-gather \\ 1 --image=quay.io/kubevirt/must-gather 2",
"tar cvaf must-gather.tar.gz must-gather.local.5421342344627712289/ 1",
"oc adm must-gather -- gather_network_logs",
"tar cvaf must-gather.tar.gz must-gather.local.472290403699006248 1",
"Disk usage exceeds the volume percentage of 30% for mounted directory. Exiting",
"oc adm must-gather --volume-percentage <storage_percentage>",
"oc get clusterversion -o jsonpath='{.items[].spec.clusterID}{\"\\n\"}'",
"oc adm node-logs --role=master -u kubelet 1",
"oc adm node-logs --role=master --path=openshift-apiserver",
"oc adm node-logs --role=master --path=openshift-apiserver/audit.log",
"oc adm must-gather --dest-dir /tmp/captures \\// <.> --source-dir '/tmp/tcpdump/' \\// <.> --image registry.redhat.io/openshift4/network-tools-rhel8:latest \\// <.> --node-selector 'node-role.kubernetes.io/worker' \\// <.> --host-network=true \\// <.> --timeout 30s \\// <.> -- tcpdump -i any \\// <.> -w /tmp/tcpdump/%Y-%m-%dT%H:%M:%S.pcap -W 1 -G 300",
"tmp/captures βββ event-filter.html βββ ip-10-0-192-217-ec2-internal 1 β βββ registry-redhat-io-openshift4-network-tools-rhel8-sha256-bca β βββ 2022-01-13T19:31:31.pcap βββ ip-10-0-201-178-ec2-internal 2 β βββ registry-redhat-io-openshift4-network-tools-rhel8-sha256-bca β βββ 2022-01-13T19:31:30.pcap βββ ip- βββ timestamp",
"oc get nodes",
"oc debug node/my-cluster-node",
"chroot /host",
"ip ad",
"toolbox",
"tcpdump -nn -s 0 -i ens5 -w /host/var/tmp/my-cluster-node_USD(date +%d_%m_%Y-%H_%M_%S-%Z).pcap 1",
"chroot /host crictl ps",
"chroot /host crictl inspect --output yaml a7fe32346b120 | grep 'pid' | awk '{print USD2}'",
"nsenter -n -t 49628 -- tcpdump -nn -i ens5 -w /host/var/tmp/my-cluster-node-my-container_USD(date +%d_%m_%Y-%H_%M_%S-%Z).pcap 1",
"oc debug node/my-cluster-node -- bash -c 'cat /host/var/tmp/my-tcpdump-capture-file.pcap' > /tmp/my-tcpdump-capture-file.pcap 1",
"oc debug node/my-cluster-node -- bash -c 'cat /host/var/tmp/my-diagnostic-data.tar.gz' > /var/tmp/my-diagnostic-data.tar.gz 1",
"chroot /host",
"toolbox",
"dnf install -y <package_name>",
"chroot /host",
"REGISTRY=quay.io 1 IMAGE=fedora/fedora:latest 2 TOOLBOX_NAME=toolbox-fedora-latest 3",
"toolbox",
"oc get clusterversion",
"NAME VERSION AVAILABLE PROGRESSING SINCE STATUS version 4.13.8 True False 8h Cluster version is 4.13.8",
"oc describe clusterversion",
"Name: version Namespace: Labels: <none> Annotations: <none> API Version: config.openshift.io/v1 Kind: ClusterVersion Image: quay.io/openshift-release-dev/ocp-release@sha256:a956488d295fe5a59c8663a4d9992b9b5d0950f510a7387dbbfb8d20fc5970ce URL: https://access.redhat.com/errata/RHSA-2023:4456 Version: 4.13.8 History: Completion Time: 2023-08-17T13:20:21Z Image: quay.io/openshift-release-dev/ocp-release@sha256:a956488d295fe5a59c8663a4d9992b9b5d0950f510a7387dbbfb8d20fc5970ce Started Time: 2023-08-17T12:59:45Z State: Completed Verified: false Version: 4.13.8",
"oc get nodes",
"oc adm top nodes",
"oc adm top node my-node",
"oc get subs -n <operator_namespace>",
"oc describe sub <subscription_name> -n <operator_namespace>",
"Name: cluster-logging Namespace: openshift-logging Labels: operators.coreos.com/cluster-logging.openshift-logging= Annotations: <none> API Version: operators.coreos.com/v1alpha1 Kind: Subscription Conditions: Last Transition Time: 2019-07-29T13:42:57Z Message: all available catalogsources are healthy Reason: AllCatalogSourcesHealthy Status: False Type: CatalogSourcesUnhealthy",
"oc get catalogsources -n openshift-marketplace",
"NAME DISPLAY TYPE PUBLISHER AGE certified-operators Certified Operators grpc Red Hat 55m community-operators Community Operators grpc Red Hat 55m example-catalog Example Catalog grpc Example Org 2m25s redhat-marketplace Red Hat Marketplace grpc Red Hat 55m redhat-operators Red Hat Operators grpc Red Hat 55m",
"oc describe catalogsource example-catalog -n openshift-marketplace",
"Name: example-catalog Namespace: openshift-marketplace Labels: <none> Annotations: operatorframework.io/managed-by: marketplace-operator target.workload.openshift.io/management: {\"effect\": \"PreferredDuringScheduling\"} API Version: operators.coreos.com/v1alpha1 Kind: CatalogSource Status: Connection State: Address: example-catalog.openshift-marketplace.svc:50051 Last Connect: 2021-09-09T17:07:35Z Last Observed State: TRANSIENT_FAILURE Registry Service: Created At: 2021-09-09T17:05:45Z Port: 50051 Protocol: grpc Service Name: example-catalog Service Namespace: openshift-marketplace",
"oc get pods -n openshift-marketplace",
"NAME READY STATUS RESTARTS AGE certified-operators-cv9nn 1/1 Running 0 36m community-operators-6v8lp 1/1 Running 0 36m marketplace-operator-86bfc75f9b-jkgbc 1/1 Running 0 42m example-catalog-bwt8z 0/1 ImagePullBackOff 0 3m55s redhat-marketplace-57p8c 1/1 Running 0 36m redhat-operators-smxx8 1/1 Running 0 36m",
"oc describe pod example-catalog-bwt8z -n openshift-marketplace",
"Name: example-catalog-bwt8z Namespace: openshift-marketplace Priority: 0 Node: ci-ln-jyryyg2-f76d1-ggdbq-worker-b-vsxjd/10.0.128.2 Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal Scheduled 48s default-scheduler Successfully assigned openshift-marketplace/example-catalog-bwt8z to ci-ln-jyryyf2-f76d1-fgdbq-worker-b-vsxjd Normal AddedInterface 47s multus Add eth0 [10.131.0.40/23] from openshift-sdn Normal BackOff 20s (x2 over 46s) kubelet Back-off pulling image \"quay.io/example-org/example-catalog:v1\" Warning Failed 20s (x2 over 46s) kubelet Error: ImagePullBackOff Normal Pulling 8s (x3 over 47s) kubelet Pulling image \"quay.io/example-org/example-catalog:v1\" Warning Failed 8s (x3 over 47s) kubelet Failed to pull image \"quay.io/example-org/example-catalog:v1\": rpc error: code = Unknown desc = reading manifest v1 in quay.io/example-org/example-catalog: unauthorized: access to the requested resource is not authorized Warning Failed 8s (x3 over 47s) kubelet Error: ErrImagePull",
"oc get clusteroperators",
"oc get pod -n <operator_namespace>",
"oc describe pod <operator_pod_name> -n <operator_namespace>",
"oc get pods -n <operator_namespace>",
"oc logs pod/<pod_name> -n <operator_namespace>",
"oc logs pod/<operator_pod_name> -c <container_name> -n <operator_namespace>",
"ssh core@<master-node>.<cluster_name>.<base_domain> sudo crictl pods",
"ssh core@<master-node>.<cluster_name>.<base_domain> sudo crictl inspectp <operator_pod_id>",
"ssh core@<master-node>.<cluster_name>.<base_domain> sudo crictl ps --pod=<operator_pod_id>",
"ssh core@<master-node>.<cluster_name>.<base_domain> sudo crictl inspect <container_id>",
"ssh core@<master-node>.<cluster_name>.<base_domain> sudo crictl logs -f <container_id>",
"oc project <project_name>",
"oc get pods",
"oc status",
"skopeo inspect docker://<image_reference>",
"oc edit deployment/my-deployment",
"oc get pods -w",
"oc get events",
"oc logs <pod_name>",
"oc logs <pod_name> -c <container_name>",
"oc exec <pod_name> -- ls -alh /var/log",
"total 124K drwxr-xr-x. 1 root root 33 Aug 11 11:23 . drwxr-xr-x. 1 root root 28 Sep 6 2022 .. -rw-rw----. 1 root utmp 0 Jul 10 10:31 btmp -rw-r--r--. 1 root root 33K Jul 17 10:07 dnf.librepo.log -rw-r--r--. 1 root root 69K Jul 17 10:07 dnf.log -rw-r--r--. 1 root root 8.8K Jul 17 10:07 dnf.rpm.log -rw-r--r--. 1 root root 480 Jul 17 10:07 hawkey.log -rw-rw-r--. 1 root utmp 0 Jul 10 10:31 lastlog drwx------. 2 root root 23 Aug 11 11:14 openshift-apiserver drwx------. 2 root root 6 Jul 10 10:31 private drwxr-xr-x. 1 root root 22 Mar 9 08:05 rhsm -rw-rw-r--. 1 root utmp 0 Jul 10 10:31 wtmp",
"oc exec <pod_name> cat /var/log/<path_to_log>",
"2023-07-10T10:29:38+0000 INFO --- logging initialized --- 2023-07-10T10:29:38+0000 DDEBUG timer: config: 13 ms 2023-07-10T10:29:38+0000 DEBUG Loaded plugins: builddep, changelog, config-manager, copr, debug, debuginfo-install, download, generate_completion_cache, groups-manager, needs-restarting, playground, product-id, repoclosure, repodiff, repograph, repomanage, reposync, subscription-manager, uploadprofile 2023-07-10T10:29:38+0000 INFO Updating Subscription Management repositories. 2023-07-10T10:29:38+0000 INFO Unable to read consumer identity 2023-07-10T10:29:38+0000 INFO Subscription Manager is operating in container mode. 2023-07-10T10:29:38+0000 INFO",
"oc exec <pod_name> -c <container_name> ls /var/log",
"oc exec <pod_name> -c <container_name> cat /var/log/<path_to_log>",
"oc project <namespace>",
"oc rsh <pod_name> 1",
"oc rsh -c <container_name> pod/<pod_name>",
"oc port-forward <pod_name> <host_port>:<pod_port> 1",
"oc get deployment -n <project_name>",
"oc debug deployment/my-deployment --as-root -n <project_name>",
"oc get deploymentconfigs -n <project_name>",
"oc debug deploymentconfig/my-deployment-configuration --as-root -n <project_name>",
"oc cp <local_path> <pod_name>:/<path> -c <container_name> 1",
"oc cp <pod_name>:/<path> -c <container_name> <local_path> 1",
"oc get pods -w 1",
"oc logs -f pod/<application_name>-<build_number>-build",
"oc logs -f pod/<application_name>-<build_number>-deploy",
"oc logs -f pod/<application_name>-<build_number>-<random_string>",
"oc describe pod/my-app-1-akdlg",
"oc logs -f pod/my-app-1-akdlg",
"oc exec my-app-1-akdlg -- cat /var/log/my-application.log",
"oc debug dc/my-deployment-configuration --as-root -- cat /var/log/my-application.log",
"oc exec -it my-app-1-akdlg /bin/bash",
"Unable to attach or mount volumes: unmounted volumes=[sso-mysql-pvol], unattached volumes=[sso-mysql-pvol default-token-x4rzc]: timed out waiting for the condition Multi-Attach error for volume \"pvc-8837384d-69d7-40b2-b2e6-5df86943eef9\" Volume is already used by pod(s) sso-mysql-1-ns6b4",
"oc delete pod <old_pod> --force=true --grace-period=0",
"oc -n ns1 get service prometheus-example-app -o yaml",
"labels: app: prometheus-example-app",
"oc -n ns1 get servicemonitor prometheus-example-monitor -o yaml",
"apiVersion: v1 kind: ServiceMonitor metadata: name: prometheus-example-monitor namespace: ns1 spec: endpoints: - interval: 30s port: web scheme: http selector: matchLabels: app: prometheus-example-app",
"oc -n openshift-user-workload-monitoring get pods",
"NAME READY STATUS RESTARTS AGE prometheus-operator-776fcbbd56-2nbfm 2/2 Running 0 132m prometheus-user-workload-0 5/5 Running 1 132m prometheus-user-workload-1 5/5 Running 1 132m thanos-ruler-user-workload-0 3/3 Running 0 132m thanos-ruler-user-workload-1 3/3 Running 0 132m",
"oc -n openshift-user-workload-monitoring logs prometheus-operator-776fcbbd56-2nbfm -c prometheus-operator",
"level=warn ts=2020-08-10T11:48:20.906739623Z caller=operator.go:1829 component=prometheusoperator msg=\"skipping servicemonitor\" error=\"it accesses file system via bearer token file which Prometheus specification prohibits\" servicemonitor=eagle/eagle namespace=openshift-user-workload-monitoring prometheus=user-workload",
"oc -n openshift-user-workload-monitoring edit configmap user-workload-monitoring-config",
"apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | prometheusOperator: logLevel: debug",
"oc -n openshift-user-workload-monitoring get deploy prometheus-operator -o yaml | grep \"log-level\"",
"- --log-level=debug",
"oc -n openshift-user-workload-monitoring get pods",
"topk(10, max by(namespace, job) (topk by(namespace, job) (1, scrape_samples_post_metric_relabeling)))",
"topk(10, sum by(namespace, job) (sum_over_time(scrape_series_added[1h])))",
"HOST=USD(oc -n openshift-monitoring get route prometheus-k8s -ojsonpath={.status.ingress[].host})",
"TOKEN=USD(oc whoami -t)",
"curl -H \"Authorization: Bearer USDTOKEN\" -k \"https://USDHOST/api/v1/status/tsdb\"",
"\"status\": \"success\",\"data\":{\"headStats\":{\"numSeries\":507473, \"numLabelPairs\":19832,\"chunkCount\":946298,\"minTime\":1712253600010, \"maxTime\":1712257935346},\"seriesCountByMetricName\": [{\"name\":\"etcd_request_duration_seconds_bucket\",\"value\":51840}, {\"name\":\"apiserver_request_sli_duration_seconds_bucket\",\"value\":47718},",
"oc <command> --loglevel <log_level>",
"oc whoami -t",
"sha256~RCV3Qcn7H-OEfqCGVI0CvnZ6",
"Resources: ConfigMap: - namespace: openshift-config name: rosa-brand-logo - namespace: openshift-console name: custom-logo - namespace: openshift-deployment-validation-operator name: deployment-validation-operator-config - namespace: openshift-file-integrity name: fr-aide-conf - namespace: openshift-managed-upgrade-operator name: managed-upgrade-operator-config - namespace: openshift-monitoring name: cluster-monitoring-config - namespace: openshift-monitoring name: managed-namespaces - namespace: openshift-monitoring name: ocp-namespaces - namespace: openshift-monitoring name: osd-rebalance-infra-nodes - namespace: openshift-monitoring name: sre-dns-latency-exporter-code - namespace: openshift-monitoring name: sre-dns-latency-exporter-trusted-ca-bundle - namespace: openshift-monitoring name: sre-ebs-iops-reporter-code - namespace: openshift-monitoring name: sre-ebs-iops-reporter-trusted-ca-bundle - namespace: openshift-monitoring name: sre-stuck-ebs-vols-code - namespace: openshift-monitoring name: sre-stuck-ebs-vols-trusted-ca-bundle - namespace: openshift-security name: osd-audit-policy - namespace: openshift-validation-webhook name: webhook-cert - namespace: openshift name: motd Endpoints: - namespace: openshift-deployment-validation-operator name: deployment-validation-operator-metrics - namespace: openshift-monitoring name: sre-dns-latency-exporter - namespace: openshift-monitoring name: sre-ebs-iops-reporter - namespace: openshift-monitoring name: sre-stuck-ebs-vols - namespace: openshift-scanning name: loggerservice - namespace: openshift-security name: audit-exporter - namespace: openshift-validation-webhook name: validation-webhook Namespace: - name: dedicated-admin - name: openshift-addon-operator - name: openshift-aqua - name: openshift-aws-vpce-operator - name: openshift-backplane - name: openshift-backplane-cee - name: openshift-backplane-csa - name: openshift-backplane-cse - name: openshift-backplane-csm - name: openshift-backplane-managed-scripts - name: openshift-backplane-mobb - name: openshift-backplane-srep - name: openshift-backplane-tam - name: openshift-cloud-ingress-operator - name: openshift-codeready-workspaces - name: openshift-compliance - name: openshift-compliance-monkey - name: openshift-container-security - name: openshift-custom-domains-operator - name: openshift-customer-monitoring - name: openshift-deployment-validation-operator - name: openshift-managed-node-metadata-operator - name: openshift-file-integrity - name: openshift-logging - name: openshift-managed-upgrade-operator - name: openshift-must-gather-operator - name: openshift-observability-operator - name: openshift-ocm-agent-operator - name: openshift-operators-redhat - name: openshift-osd-metrics - name: openshift-rbac-permissions - name: openshift-route-monitor-operator - name: openshift-scanning - name: openshift-security - name: openshift-splunk-forwarder-operator - name: openshift-sre-pruning - name: openshift-suricata - name: openshift-validation-webhook - name: openshift-velero - name: openshift-monitoring - name: openshift - name: openshift-cluster-version - name: keycloak - name: goalert - name: configure-goalert-operator ReplicationController: - namespace: openshift-monitoring name: sre-ebs-iops-reporter-1 - namespace: openshift-monitoring name: sre-stuck-ebs-vols-1 Secret: - namespace: openshift-authentication name: v4-0-config-user-idp-0-file-data - namespace: openshift-authentication name: v4-0-config-user-template-error - namespace: openshift-authentication name: 
v4-0-config-user-template-login - namespace: openshift-authentication name: v4-0-config-user-template-provider-selection - namespace: openshift-config name: htpasswd-secret - namespace: openshift-config name: osd-oauth-templates-errors - namespace: openshift-config name: osd-oauth-templates-login - namespace: openshift-config name: osd-oauth-templates-providers - namespace: openshift-config name: rosa-oauth-templates-errors - namespace: openshift-config name: rosa-oauth-templates-login - namespace: openshift-config name: rosa-oauth-templates-providers - namespace: openshift-config name: support - namespace: openshift-config name: tony-devlab-primary-cert-bundle-secret - namespace: openshift-ingress name: tony-devlab-primary-cert-bundle-secret - namespace: openshift-kube-apiserver name: user-serving-cert-000 - namespace: openshift-kube-apiserver name: user-serving-cert-001 - namespace: openshift-monitoring name: dms-secret - namespace: openshift-monitoring name: observatorium-credentials - namespace: openshift-monitoring name: pd-secret - namespace: openshift-scanning name: clam-secrets - namespace: openshift-scanning name: logger-secrets - namespace: openshift-security name: splunk-auth ServiceAccount: - namespace: openshift-backplane-managed-scripts name: osd-backplane - namespace: openshift-backplane-srep name: 6804d07fb268b8285b023bcf65392f0e - namespace: openshift-backplane-srep name: osd-delete-ownerrefs-serviceaccounts - namespace: openshift-backplane name: osd-delete-backplane-serviceaccounts - namespace: openshift-cloud-ingress-operator name: cloud-ingress-operator - namespace: openshift-custom-domains-operator name: custom-domains-operator - namespace: openshift-managed-upgrade-operator name: managed-upgrade-operator - namespace: openshift-machine-api name: osd-disable-cpms - namespace: openshift-marketplace name: osd-patch-subscription-source - namespace: openshift-monitoring name: configure-alertmanager-operator - namespace: openshift-monitoring name: osd-cluster-ready - namespace: openshift-monitoring name: osd-rebalance-infra-nodes - namespace: openshift-monitoring name: sre-dns-latency-exporter - namespace: openshift-monitoring name: sre-ebs-iops-reporter - namespace: openshift-monitoring name: sre-stuck-ebs-vols - namespace: openshift-network-diagnostics name: sre-pod-network-connectivity-check-pruner - namespace: openshift-ocm-agent-operator name: ocm-agent-operator - namespace: openshift-rbac-permissions name: rbac-permissions-operator - namespace: openshift-splunk-forwarder-operator name: splunk-forwarder-operator - namespace: openshift-sre-pruning name: bz1980755 - namespace: openshift-scanning name: logger-sa - namespace: openshift-scanning name: scanner-sa - namespace: openshift-sre-pruning name: sre-pruner-sa - namespace: openshift-suricata name: suricata-sa - namespace: openshift-validation-webhook name: validation-webhook - namespace: openshift-velero name: managed-velero-operator - namespace: openshift-velero name: velero - namespace: openshift-backplane-srep name: UNIQUE_BACKPLANE_SERVICEACCOUNT_ID Service: - namespace: openshift-deployment-validation-operator name: deployment-validation-operator-metrics - namespace: openshift-monitoring name: sre-dns-latency-exporter - namespace: openshift-monitoring name: sre-ebs-iops-reporter - namespace: openshift-monitoring name: sre-stuck-ebs-vols - namespace: openshift-scanning name: loggerservice - namespace: openshift-security name: audit-exporter - namespace: openshift-validation-webhook name: validation-webhook 
AddonOperator: - name: addon-operator ValidatingWebhookConfiguration: - name: sre-hiveownership-validation - name: sre-namespace-validation - name: sre-pod-validation - name: sre-prometheusrule-validation - name: sre-regular-user-validation - name: sre-scc-validation - name: sre-techpreviewnoupgrade-validation DaemonSet: - namespace: openshift-monitoring name: sre-dns-latency-exporter - namespace: openshift-scanning name: logger - namespace: openshift-scanning name: scanner - namespace: openshift-security name: audit-exporter - namespace: openshift-suricata name: suricata - namespace: openshift-validation-webhook name: validation-webhook DeploymentConfig: - namespace: openshift-monitoring name: sre-ebs-iops-reporter - namespace: openshift-monitoring name: sre-stuck-ebs-vols ClusterRoleBinding: - name: aqua-scanner-binding - name: backplane-cluster-admin - name: backplane-impersonate-cluster-admin - name: bz1980755 - name: configure-alertmanager-operator-prom - name: dedicated-admins-cluster - name: dedicated-admins-registry-cas-cluster - name: logger-clusterrolebinding - name: openshift-backplane-managed-scripts-reader - name: osd-cluster-admin - name: osd-cluster-ready - name: osd-delete-backplane-script-resources - name: osd-delete-ownerrefs-serviceaccounts - name: osd-patch-subscription-source - name: osd-rebalance-infra-nodes - name: pcap-dedicated-admins - name: splunk-forwarder-operator - name: splunk-forwarder-operator-clusterrolebinding - name: sre-pod-network-connectivity-check-pruner - name: sre-pruner-buildsdeploys-pruning - name: velero - name: webhook-validation ClusterRole: - name: backplane-cee-readers-cluster - name: backplane-impersonate-cluster-admin - name: backplane-readers-cluster - name: backplane-srep-admins-cluster - name: backplane-srep-admins-project - name: bz1980755 - name: dedicated-admins-aggregate-cluster - name: dedicated-admins-aggregate-project - name: dedicated-admins-cluster - name: dedicated-admins-manage-operators - name: dedicated-admins-project - name: dedicated-admins-registry-cas-cluster - name: dedicated-readers - name: image-scanner - name: logger-clusterrole - name: openshift-backplane-managed-scripts-reader - name: openshift-splunk-forwarder-operator - name: osd-cluster-ready - name: osd-custom-domains-dedicated-admin-cluster - name: osd-delete-backplane-script-resources - name: osd-delete-backplane-serviceaccounts - name: osd-delete-ownerrefs-serviceaccounts - name: osd-get-namespace - name: osd-netnamespaces-dedicated-admin-cluster - name: osd-patch-subscription-source - name: osd-readers-aggregate - name: osd-rebalance-infra-nodes - name: osd-rebalance-infra-nodes-openshift-pod-rebalance - name: pcap-dedicated-admins - name: splunk-forwarder-operator - name: sre-allow-read-machine-info - name: sre-pruner-buildsdeploys-cr - name: webhook-validation-cr RoleBinding: - namespace: kube-system name: cloud-ingress-operator-cluster-config-v1-reader - namespace: kube-system name: managed-velero-operator-cluster-config-v1-reader - namespace: openshift-aqua name: dedicated-admins-openshift-aqua - namespace: openshift-backplane-managed-scripts name: backplane-cee-mustgather - namespace: openshift-backplane-managed-scripts name: backplane-srep-mustgather - namespace: openshift-backplane-managed-scripts name: osd-delete-backplane-script-resources - namespace: openshift-cloud-ingress-operator name: osd-rebalance-infra-nodes-openshift-pod-rebalance - namespace: openshift-codeready-workspaces name: dedicated-admins-openshift-codeready-workspaces - namespace: 
openshift-config name: dedicated-admins-project-request - namespace: openshift-config name: dedicated-admins-registry-cas-project - namespace: openshift-config name: muo-pullsecret-reader - namespace: openshift-config name: oao-openshiftconfig-reader - namespace: openshift-config name: osd-cluster-ready - namespace: openshift-custom-domains-operator name: osd-rebalance-infra-nodes-openshift-pod-rebalance - namespace: openshift-customer-monitoring name: dedicated-admins-openshift-customer-monitoring - namespace: openshift-customer-monitoring name: prometheus-k8s-openshift-customer-monitoring - namespace: openshift-dns name: dedicated-admins-openshift-dns - namespace: openshift-dns name: osd-rebalance-infra-nodes-openshift-dns - namespace: openshift-image-registry name: osd-rebalance-infra-nodes-openshift-pod-rebalance - namespace: openshift-ingress-operator name: cloud-ingress-operator - namespace: openshift-ingress name: cloud-ingress-operator - namespace: openshift-kube-apiserver name: cloud-ingress-operator - namespace: openshift-machine-api name: cloud-ingress-operator - namespace: openshift-logging name: admin-dedicated-admins - namespace: openshift-logging name: admin-system:serviceaccounts:dedicated-admin - namespace: openshift-logging name: openshift-logging-dedicated-admins - namespace: openshift-logging name: openshift-logging:serviceaccounts:dedicated-admin - namespace: openshift-machine-api name: osd-cluster-ready - namespace: openshift-machine-api name: sre-ebs-iops-reporter-read-machine-info - namespace: openshift-machine-api name: sre-stuck-ebs-vols-read-machine-info - namespace: openshift-managed-node-metadata-operator name: osd-rebalance-infra-nodes-openshift-pod-rebalance - namespace: openshift-machine-api name: osd-disable-cpms - namespace: openshift-marketplace name: dedicated-admins-openshift-marketplace - namespace: openshift-monitoring name: backplane-cee - namespace: openshift-monitoring name: muo-monitoring-reader - namespace: openshift-monitoring name: oao-monitoring-manager - namespace: openshift-monitoring name: osd-cluster-ready - namespace: openshift-monitoring name: osd-rebalance-infra-nodes-openshift-monitoring - namespace: openshift-monitoring name: osd-rebalance-infra-nodes-openshift-pod-rebalance - namespace: openshift-monitoring name: sre-dns-latency-exporter - namespace: openshift-monitoring name: sre-ebs-iops-reporter - namespace: openshift-monitoring name: sre-stuck-ebs-vols - namespace: openshift-must-gather-operator name: backplane-cee-mustgather - namespace: openshift-must-gather-operator name: backplane-srep-mustgather - namespace: openshift-must-gather-operator name: osd-rebalance-infra-nodes-openshift-pod-rebalance - namespace: openshift-network-diagnostics name: sre-pod-network-connectivity-check-pruner - namespace: openshift-network-operator name: osd-rebalance-infra-nodes-openshift-pod-rebalance - namespace: openshift-ocm-agent-operator name: osd-rebalance-infra-nodes-openshift-pod-rebalance - namespace: openshift-operators-redhat name: admin-dedicated-admins - namespace: openshift-operators-redhat name: admin-system:serviceaccounts:dedicated-admin - namespace: openshift-operators-redhat name: openshift-operators-redhat-dedicated-admins - namespace: openshift-operators-redhat name: openshift-operators-redhat:serviceaccounts:dedicated-admin - namespace: openshift-operators name: dedicated-admins-openshift-operators - namespace: openshift-osd-metrics name: osd-rebalance-infra-nodes-openshift-pod-rebalance - namespace: openshift-osd-metrics name: 
prometheus-k8s - namespace: openshift-rbac-permissions name: osd-rebalance-infra-nodes-openshift-pod-rebalance - namespace: openshift-rbac-permissions name: prometheus-k8s - namespace: openshift-route-monitor-operator name: osd-rebalance-infra-nodes-openshift-pod-rebalance - namespace: openshift-scanning name: scanner-rolebinding - namespace: openshift-security name: osd-rebalance-infra-nodes-openshift-security - namespace: openshift-security name: prometheus-k8s - namespace: openshift-splunk-forwarder-operator name: osd-rebalance-infra-nodes-openshift-pod-rebalance - namespace: openshift-suricata name: suricata-rolebinding - namespace: openshift-user-workload-monitoring name: dedicated-admins-uwm-config-create - namespace: openshift-user-workload-monitoring name: dedicated-admins-uwm-config-edit - namespace: openshift-user-workload-monitoring name: dedicated-admins-uwm-managed-am-secret - namespace: openshift-user-workload-monitoring name: osd-rebalance-infra-nodes-openshift-user-workload-monitoring - namespace: openshift-velero name: osd-rebalance-infra-nodes-openshift-pod-rebalance - namespace: openshift-velero name: prometheus-k8s Role: - namespace: kube-system name: cluster-config-v1-reader - namespace: kube-system name: cluster-config-v1-reader-cio - namespace: openshift-aqua name: dedicated-admins-openshift-aqua - namespace: openshift-backplane-managed-scripts name: backplane-cee-pcap-collector - namespace: openshift-backplane-managed-scripts name: backplane-srep-pcap-collector - namespace: openshift-backplane-managed-scripts name: osd-delete-backplane-script-resources - namespace: openshift-codeready-workspaces name: dedicated-admins-openshift-codeready-workspaces - namespace: openshift-config name: dedicated-admins-project-request - namespace: openshift-config name: dedicated-admins-registry-cas-project - namespace: openshift-config name: muo-pullsecret-reader - namespace: openshift-config name: oao-openshiftconfig-reader - namespace: openshift-config name: osd-cluster-ready - namespace: openshift-customer-monitoring name: dedicated-admins-openshift-customer-monitoring - namespace: openshift-customer-monitoring name: prometheus-k8s-openshift-customer-monitoring - namespace: openshift-dns name: dedicated-admins-openshift-dns - namespace: openshift-dns name: osd-rebalance-infra-nodes-openshift-dns - namespace: openshift-ingress-operator name: cloud-ingress-operator - namespace: openshift-ingress name: cloud-ingress-operator - namespace: openshift-kube-apiserver name: cloud-ingress-operator - namespace: openshift-machine-api name: cloud-ingress-operator - namespace: openshift-logging name: dedicated-admins-openshift-logging - namespace: openshift-machine-api name: osd-cluster-ready - namespace: openshift-machine-api name: osd-disable-cpms - namespace: openshift-marketplace name: dedicated-admins-openshift-marketplace - namespace: openshift-monitoring name: backplane-cee - namespace: openshift-monitoring name: muo-monitoring-reader - namespace: openshift-monitoring name: oao-monitoring-manager - namespace: openshift-monitoring name: osd-cluster-ready - namespace: openshift-monitoring name: osd-rebalance-infra-nodes-openshift-monitoring - namespace: openshift-must-gather-operator name: backplane-cee-mustgather - namespace: openshift-must-gather-operator name: backplane-srep-mustgather - namespace: openshift-network-diagnostics name: sre-pod-network-connectivity-check-pruner - namespace: openshift-operators name: dedicated-admins-openshift-operators - namespace: openshift-osd-metrics 
name: prometheus-k8s - namespace: openshift-rbac-permissions name: prometheus-k8s - namespace: openshift-scanning name: scanner-role - namespace: openshift-security name: osd-rebalance-infra-nodes-openshift-security - namespace: openshift-security name: prometheus-k8s - namespace: openshift-suricata name: suricata-role - namespace: openshift-user-workload-monitoring name: dedicated-admins-user-workload-monitoring-create-cm - namespace: openshift-user-workload-monitoring name: dedicated-admins-user-workload-monitoring-manage-am-secret - namespace: openshift-user-workload-monitoring name: osd-rebalance-infra-nodes-openshift-user-workload-monitoring - namespace: openshift-velero name: prometheus-k8s CronJob: - namespace: openshift-backplane-managed-scripts name: osd-delete-backplane-script-resources - namespace: openshift-backplane-srep name: osd-delete-ownerrefs-serviceaccounts - namespace: openshift-backplane name: osd-delete-backplane-serviceaccounts - namespace: openshift-machine-api name: osd-disable-cpms - namespace: openshift-marketplace name: osd-patch-subscription-source - namespace: openshift-monitoring name: osd-rebalance-infra-nodes - namespace: openshift-network-diagnostics name: sre-pod-network-connectivity-check-pruner - namespace: openshift-sre-pruning name: builds-pruner - namespace: openshift-sre-pruning name: bz1980755 - namespace: openshift-sre-pruning name: deployments-pruner Job: - namespace: openshift-monitoring name: osd-cluster-ready CredentialsRequest: - namespace: openshift-cloud-ingress-operator name: cloud-ingress-operator-credentials-aws - namespace: openshift-cloud-ingress-operator name: cloud-ingress-operator-credentials-gcp - namespace: openshift-monitoring name: sre-ebs-iops-reporter-aws-credentials - namespace: openshift-monitoring name: sre-stuck-ebs-vols-aws-credentials - namespace: openshift-velero name: managed-velero-operator-iam-credentials-aws - namespace: openshift-velero name: managed-velero-operator-iam-credentials-gcp APIScheme: - namespace: openshift-cloud-ingress-operator name: rh-api PublishingStrategy: - namespace: openshift-cloud-ingress-operator name: publishingstrategy ScanSettingBinding: - namespace: openshift-compliance name: fedramp-high-ocp - namespace: openshift-compliance name: fedramp-high-rhcos ScanSetting: - namespace: openshift-compliance name: osd TailoredProfile: - namespace: openshift-compliance name: rhcos4-high-rosa OAuth: - name: cluster EndpointSlice: - namespace: openshift-deployment-validation-operator name: deployment-validation-operator-metrics-rhtwg - namespace: openshift-monitoring name: sre-dns-latency-exporter-4cw9r - namespace: openshift-monitoring name: sre-ebs-iops-reporter-6tx5g - namespace: openshift-monitoring name: sre-stuck-ebs-vols-gmdhs - namespace: openshift-scanning name: loggerservice-zprbq - namespace: openshift-security name: audit-exporter-nqfdk - namespace: openshift-validation-webhook name: validation-webhook-97b8t FileIntegrity: - namespace: openshift-file-integrity name: osd-fileintegrity MachineHealthCheck: - namespace: openshift-machine-api name: srep-infra-healthcheck - namespace: openshift-machine-api name: srep-metal-worker-healthcheck - namespace: openshift-machine-api name: srep-worker-healthcheck MachineSet: - namespace: openshift-machine-api name: sbasabat-mc-qhqkn-infra-us-east-1a - namespace: openshift-machine-api name: sbasabat-mc-qhqkn-worker-us-east-1a ContainerRuntimeConfig: - name: custom-crio KubeletConfig: - name: custom-kubelet MachineConfig: - name: 00-master-chrony - name: 
00-worker-chrony SubjectPermission: - namespace: openshift-rbac-permissions name: backplane-cee - namespace: openshift-rbac-permissions name: backplane-csa - namespace: openshift-rbac-permissions name: backplane-cse - namespace: openshift-rbac-permissions name: backplane-csm - namespace: openshift-rbac-permissions name: backplane-mobb - namespace: openshift-rbac-permissions name: backplane-srep - namespace: openshift-rbac-permissions name: backplane-tam - namespace: openshift-rbac-permissions name: dedicated-admin-serviceaccounts - namespace: openshift-rbac-permissions name: dedicated-admin-serviceaccounts-core-ns - namespace: openshift-rbac-permissions name: dedicated-admins - namespace: openshift-rbac-permissions name: dedicated-admins-alert-routing-edit - namespace: openshift-rbac-permissions name: dedicated-admins-core-ns - namespace: openshift-rbac-permissions name: dedicated-admins-customer-monitoring - namespace: openshift-rbac-permissions name: osd-delete-backplane-serviceaccounts VeleroInstall: - namespace: openshift-velero name: cluster PrometheusRule: - namespace: openshift-monitoring name: rhmi-sre-cluster-admins - namespace: openshift-monitoring name: rhoam-sre-cluster-admins - namespace: openshift-monitoring name: sre-alertmanager-silences-active - namespace: openshift-monitoring name: sre-alerts-stuck-builds - namespace: openshift-monitoring name: sre-alerts-stuck-volumes - namespace: openshift-monitoring name: sre-cloud-ingress-operator-offline-alerts - namespace: openshift-monitoring name: sre-avo-pendingacceptance - namespace: openshift-monitoring name: sre-configure-alertmanager-operator-offline-alerts - namespace: openshift-monitoring name: sre-control-plane-resizing-alerts - namespace: openshift-monitoring name: sre-dns-alerts - namespace: openshift-monitoring name: sre-ebs-iops-burstbalance - namespace: openshift-monitoring name: sre-elasticsearch-jobs - namespace: openshift-monitoring name: sre-elasticsearch-managed-notification-alerts - namespace: openshift-monitoring name: sre-excessive-memory - namespace: openshift-monitoring name: sre-fr-alerts-low-disk-space - namespace: openshift-monitoring name: sre-haproxy-reload-fail - namespace: openshift-monitoring name: sre-internal-slo-recording-rules - namespace: openshift-monitoring name: sre-kubequotaexceeded - namespace: openshift-monitoring name: sre-leader-election-master-status-alerts - namespace: openshift-monitoring name: sre-managed-kube-apiserver-missing-on-node - namespace: openshift-monitoring name: sre-managed-kube-controller-manager-missing-on-node - namespace: openshift-monitoring name: sre-managed-kube-scheduler-missing-on-node - namespace: openshift-monitoring name: sre-managed-node-metadata-operator-alerts - namespace: openshift-monitoring name: sre-managed-notification-alerts - namespace: openshift-monitoring name: sre-managed-upgrade-operator-alerts - namespace: openshift-monitoring name: sre-managed-velero-operator-alerts - namespace: openshift-monitoring name: sre-node-unschedulable - namespace: openshift-monitoring name: sre-oauth-server - namespace: openshift-monitoring name: sre-pending-csr-alert - namespace: openshift-monitoring name: sre-proxy-managed-notification-alerts - namespace: openshift-monitoring name: sre-pruning - namespace: openshift-monitoring name: sre-pv - namespace: openshift-monitoring name: sre-router-health - namespace: openshift-monitoring name: sre-runaway-sdn-preventing-container-creation - namespace: openshift-monitoring name: sre-slo-recording-rules - namespace: 
openshift-monitoring name: sre-telemeter-client - namespace: openshift-monitoring name: sre-telemetry-managed-labels-recording-rules - namespace: openshift-monitoring name: sre-upgrade-send-managed-notification-alerts - namespace: openshift-monitoring name: sre-uptime-sla ServiceMonitor: - namespace: openshift-monitoring name: sre-dns-latency-exporter - namespace: openshift-monitoring name: sre-ebs-iops-reporter - namespace: openshift-monitoring name: sre-stuck-ebs-vols ClusterUrlMonitor: - namespace: openshift-route-monitor-operator name: api RouteMonitor: - namespace: openshift-route-monitor-operator name: console NetworkPolicy: - namespace: openshift-deployment-validation-operator name: allow-from-openshift-insights - namespace: openshift-deployment-validation-operator name: allow-from-openshift-olm ManagedNotification: - namespace: openshift-ocm-agent-operator name: sre-elasticsearch-managed-notifications - namespace: openshift-ocm-agent-operator name: sre-managed-notifications - namespace: openshift-ocm-agent-operator name: sre-proxy-managed-notifications - namespace: openshift-ocm-agent-operator name: sre-upgrade-managed-notifications OcmAgent: - namespace: openshift-ocm-agent-operator name: ocmagent - namespace: openshift-security name: audit-exporter Console: - name: cluster CatalogSource: - namespace: openshift-addon-operator name: addon-operator-catalog - namespace: openshift-cloud-ingress-operator name: cloud-ingress-operator-registry - namespace: openshift-compliance name: compliance-operator-registry - namespace: openshift-container-security name: container-security-operator-registry - namespace: openshift-custom-domains-operator name: custom-domains-operator-registry - namespace: openshift-deployment-validation-operator name: deployment-validation-operator-catalog - namespace: openshift-managed-node-metadata-operator name: managed-node-metadata-operator-registry - namespace: openshift-file-integrity name: file-integrity-operator-registry - namespace: openshift-managed-upgrade-operator name: managed-upgrade-operator-catalog - namespace: openshift-monitoring name: configure-alertmanager-operator-registry - namespace: openshift-must-gather-operator name: must-gather-operator-registry - namespace: openshift-observability-operator name: observability-operator-catalog - namespace: openshift-ocm-agent-operator name: ocm-agent-operator-registry - namespace: openshift-osd-metrics name: osd-metrics-exporter-registry - namespace: openshift-rbac-permissions name: rbac-permissions-operator-registry - namespace: openshift-route-monitor-operator name: route-monitor-operator-registry - namespace: openshift-splunk-forwarder-operator name: splunk-forwarder-operator-catalog - namespace: openshift-velero name: managed-velero-operator-registry OperatorGroup: - namespace: openshift-addon-operator name: addon-operator-og - namespace: openshift-aqua name: openshift-aqua - namespace: openshift-cloud-ingress-operator name: cloud-ingress-operator - namespace: openshift-codeready-workspaces name: openshift-codeready-workspaces - namespace: openshift-compliance name: compliance-operator - namespace: openshift-container-security name: container-security-operator - namespace: openshift-custom-domains-operator name: custom-domains-operator - namespace: openshift-customer-monitoring name: openshift-customer-monitoring - namespace: openshift-deployment-validation-operator name: deployment-validation-operator-og - namespace: openshift-managed-node-metadata-operator name: managed-node-metadata-operator - 
namespace: openshift-file-integrity name: file-integrity-operator - namespace: openshift-logging name: openshift-logging - namespace: openshift-managed-upgrade-operator name: managed-upgrade-operator-og - namespace: openshift-must-gather-operator name: must-gather-operator - namespace: openshift-observability-operator name: observability-operator-og - namespace: openshift-ocm-agent-operator name: ocm-agent-operator-og - namespace: openshift-osd-metrics name: osd-metrics-exporter - namespace: openshift-rbac-permissions name: rbac-permissions-operator - namespace: openshift-route-monitor-operator name: route-monitor-operator - namespace: openshift-splunk-forwarder-operator name: splunk-forwarder-operator-og - namespace: openshift-velero name: managed-velero-operator Subscription: - namespace: openshift-addon-operator name: addon-operator - namespace: openshift-cloud-ingress-operator name: cloud-ingress-operator - namespace: openshift-compliance name: compliance-operator-sub - namespace: openshift-container-security name: container-security-operator-sub - namespace: openshift-custom-domains-operator name: custom-domains-operator - namespace: openshift-deployment-validation-operator name: deployment-validation-operator - namespace: openshift-managed-node-metadata-operator name: managed-node-metadata-operator - namespace: openshift-file-integrity name: file-integrity-operator-sub - namespace: openshift-managed-upgrade-operator name: managed-upgrade-operator - namespace: openshift-monitoring name: configure-alertmanager-operator - namespace: openshift-must-gather-operator name: must-gather-operator - namespace: openshift-observability-operator name: observability-operator - namespace: openshift-ocm-agent-operator name: ocm-agent-operator - namespace: openshift-osd-metrics name: osd-metrics-exporter - namespace: openshift-rbac-permissions name: rbac-permissions-operator - namespace: openshift-route-monitor-operator name: route-monitor-operator - namespace: openshift-splunk-forwarder-operator name: openshift-splunk-forwarder-operator - namespace: openshift-velero name: managed-velero-operator PackageManifest: - namespace: openshift-splunk-forwarder-operator name: splunk-forwarder-operator - namespace: openshift-addon-operator name: addon-operator - namespace: openshift-rbac-permissions name: rbac-permissions-operator - namespace: openshift-cloud-ingress-operator name: cloud-ingress-operator - namespace: openshift-managed-node-metadata-operator name: managed-node-metadata-operator - namespace: openshift-velero name: managed-velero-operator - namespace: openshift-deployment-validation-operator name: managed-upgrade-operator - namespace: openshift-managed-upgrade-operator name: managed-upgrade-operator - namespace: openshift-container-security name: container-security-operator - namespace: openshift-route-monitor-operator name: route-monitor-operator - namespace: openshift-file-integrity name: file-integrity-operator - namespace: openshift-custom-domains-operator name: managed-node-metadata-operator - namespace: openshift-route-monitor-operator name: custom-domains-operator - namespace: openshift-managed-upgrade-operator name: managed-upgrade-operator - namespace: openshift-ocm-agent-operator name: ocm-agent-operator - namespace: openshift-observability-operator name: observability-operator - namespace: openshift-monitoring name: configure-alertmanager-operator - namespace: openshift-must-gather-operator name: deployment-validation-operator - namespace: openshift-osd-metrics name: 
osd-metrics-exporter - namespace: openshift-compliance name: compliance-operator - namespace: openshift-rbac-permissions name: rbac-permissions-operator Status: - {} Project: - name: dedicated-admin - name: openshift-addon-operator - name: openshift-aqua - name: openshift-backplane - name: openshift-backplane-cee - name: openshift-backplane-csa - name: openshift-backplane-cse - name: openshift-backplane-csm - name: openshift-backplane-managed-scripts - name: openshift-backplane-mobb - name: openshift-backplane-srep - name: openshift-backplane-tam - name: openshift-cloud-ingress-operator - name: openshift-codeready-workspaces - name: openshift-compliance - name: openshift-container-security - name: openshift-custom-domains-operator - name: openshift-customer-monitoring - name: openshift-deployment-validation-operator - name: openshift-managed-node-metadata-operator - name: openshift-file-integrity - name: openshift-logging - name: openshift-managed-upgrade-operator - name: openshift-must-gather-operator - name: openshift-observability-operator - name: openshift-ocm-agent-operator - name: openshift-operators-redhat - name: openshift-osd-metrics - name: openshift-rbac-permissions - name: openshift-route-monitor-operator - name: openshift-scanning - name: openshift-security - name: openshift-splunk-forwarder-operator - name: openshift-sre-pruning - name: openshift-suricata - name: openshift-validation-webhook - name: openshift-velero ClusterResourceQuota: - name: loadbalancer-quota - name: persistent-volume-quota SecurityContextConstraints: - name: osd-scanning-scc - name: osd-suricata-scc - name: pcap-dedicated-admins - name: splunkforwarder SplunkForwarder: - namespace: openshift-security name: splunkforwarder Group: - name: cluster-admins - name: dedicated-admins User: - name: backplane-cluster-admin Backup: - namespace: openshift-velero name: daily-full-backup-20221123112305 - namespace: openshift-velero name: daily-full-backup-20221125042537 - namespace: openshift-velero name: daily-full-backup-20221126010038 - namespace: openshift-velero name: daily-full-backup-20221127010039 - namespace: openshift-velero name: daily-full-backup-20221128010040 - namespace: openshift-velero name: daily-full-backup-20221129050847 - namespace: openshift-velero name: hourly-object-backup-20221128051740 - namespace: openshift-velero name: hourly-object-backup-20221128061740 - namespace: openshift-velero name: hourly-object-backup-20221128071740 - namespace: openshift-velero name: hourly-object-backup-20221128081740 - namespace: openshift-velero name: hourly-object-backup-20221128091740 - namespace: openshift-velero name: hourly-object-backup-20221129050852 - namespace: openshift-velero name: hourly-object-backup-20221129051747 - namespace: openshift-velero name: weekly-full-backup-20221116184315 - namespace: openshift-velero name: weekly-full-backup-20221121033854 - namespace: openshift-velero name: weekly-full-backup-20221128020040 Schedule: - namespace: openshift-velero name: daily-full-backup - namespace: openshift-velero name: hourly-object-backup - namespace: openshift-velero name: weekly-full-backup",
"apiVersion: v1 kind: ConfigMap metadata: name: ocp-namespaces namespace: openshift-monitoring data: managed_namespaces.yaml: | Resources: Namespace: - name: kube-system - name: openshift-apiserver - name: openshift-apiserver-operator - name: openshift-authentication - name: openshift-authentication-operator - name: openshift-cloud-controller-manager - name: openshift-cloud-controller-manager-operator - name: openshift-cloud-credential-operator - name: openshift-cloud-network-config-controller - name: openshift-cluster-api - name: openshift-cluster-csi-drivers - name: openshift-cluster-machine-approver - name: openshift-cluster-node-tuning-operator - name: openshift-cluster-samples-operator - name: openshift-cluster-storage-operator - name: openshift-config - name: openshift-config-managed - name: openshift-config-operator - name: openshift-console - name: openshift-console-operator - name: openshift-console-user-settings - name: openshift-controller-manager - name: openshift-controller-manager-operator - name: openshift-dns - name: openshift-dns-operator - name: openshift-etcd - name: openshift-etcd-operator - name: openshift-host-network - name: openshift-image-registry - name: openshift-ingress - name: openshift-ingress-canary - name: openshift-ingress-operator - name: openshift-insights - name: openshift-kni-infra - name: openshift-kube-apiserver - name: openshift-kube-apiserver-operator - name: openshift-kube-controller-manager - name: openshift-kube-controller-manager-operator - name: openshift-kube-scheduler - name: openshift-kube-scheduler-operator - name: openshift-kube-storage-version-migrator - name: openshift-kube-storage-version-migrator-operator - name: openshift-machine-api - name: openshift-machine-config-operator - name: openshift-marketplace - name: openshift-monitoring - name: openshift-multus - name: openshift-network-diagnostics - name: openshift-network-operator - name: openshift-nutanix-infra - name: openshift-oauth-apiserver - name: openshift-openstack-infra - name: openshift-operator-lifecycle-manager - name: openshift-operators - name: openshift-ovirt-infra - name: openshift-sdn - name: openshift-ovn-kubernetes - name: openshift-platform-operators - name: openshift-route-controller-manager - name: openshift-service-ca - name: openshift-service-ca-operator - name: openshift-user-workload-monitoring - name: openshift-vsphere-infra",
"addon-namespaces: ocs-converged-dev: openshift-storage managed-api-service-internal: redhat-rhoami-operator codeready-workspaces-operator: codeready-workspaces-operator managed-odh: redhat-ods-operator codeready-workspaces-operator-qe: codeready-workspaces-operator-qe integreatly-operator: redhat-rhmi-operator nvidia-gpu-addon: redhat-nvidia-gpu-addon integreatly-operator-internal: redhat-rhmi-operator rhoams: redhat-rhoam-operator ocs-converged: openshift-storage addon-operator: redhat-addon-operator prow-operator: prow cluster-logging-operator: openshift-logging advanced-cluster-management: redhat-open-cluster-management cert-manager-operator: redhat-cert-manager-operator dba-operator: addon-dba-operator reference-addon: redhat-reference-addon ocm-addon-test-operator: redhat-ocm-addon-test-operator",
"[ { \"webhookName\": \"clusterlogging-validation\", \"rules\": [ { \"operations\": [ \"CREATE\", \"UPDATE\" ], \"apiGroups\": [ \"logging.openshift.io\" ], \"apiVersions\": [ \"v1\" ], \"resources\": [ \"clusterloggings\" ], \"scope\": \"Namespaced\" } ], \"documentString\": \"Managed OpenShift Customers may set log retention outside the allowed range of 0-7 days\" }, { \"webhookName\": \"clusterrolebindings-validation\", \"rules\": [ { \"operations\": [ \"DELETE\" ], \"apiGroups\": [ \"rbac.authorization.k8s.io\" ], \"apiVersions\": [ \"v1\" ], \"resources\": [ \"clusterrolebindings\" ], \"scope\": \"Cluster\" } ], \"documentString\": \"Managed OpenShift Customers may not delete the cluster role bindings under the managed namespaces: (^openshift-.*|kube-system)\" }, { \"webhookName\": \"customresourcedefinitions-validation\", \"rules\": [ { \"operations\": [ \"CREATE\", \"UPDATE\", \"DELETE\" ], \"apiGroups\": [ \"apiextensions.k8s.io\" ], \"apiVersions\": [ \"*\" ], \"resources\": [ \"customresourcedefinitions\" ], \"scope\": \"Cluster\" } ], \"documentString\": \"Managed OpenShift Customers may not change CustomResourceDefinitions managed by Red Hat.\" }, { \"webhookName\": \"hiveownership-validation\", \"rules\": [ { \"operations\": [ \"UPDATE\", \"DELETE\" ], \"apiGroups\": [ \"quota.openshift.io\" ], \"apiVersions\": [ \"*\" ], \"resources\": [ \"clusterresourcequotas\" ], \"scope\": \"Cluster\" } ], \"webhookObjectSelector\": { \"matchLabels\": { \"hive.openshift.io/managed\": \"true\" } }, \"documentString\": \"Managed OpenShift customers may not edit certain managed resources. A managed resource has a \\\"hive.openshift.io/managed\\\": \\\"true\\\" label.\" }, { \"webhookName\": \"imagecontentpolicies-validation\", \"rules\": [ { \"operations\": [ \"CREATE\", \"UPDATE\" ], \"apiGroups\": [ \"config.openshift.io\" ], \"apiVersions\": [ \"*\" ], \"resources\": [ \"imagedigestmirrorsets\", \"imagetagmirrorsets\" ], \"scope\": \"Cluster\" }, { \"operations\": [ \"CREATE\", \"UPDATE\" ], \"apiGroups\": [ \"operator.openshift.io\" ], \"apiVersions\": [ \"*\" ], \"resources\": [ \"imagecontentsourcepolicies\" ], \"scope\": \"Cluster\" } ], \"documentString\": \"Managed OpenShift customers may not create ImageContentSourcePolicy, ImageDigestMirrorSet, or ImageTagMirrorSet resources that configure mirrors that would conflict with system registries (e.g. quay.io, registry.redhat.io, registry.access.redhat.com, etc). For more details, see https://docs.openshift.com/\" }, { \"webhookName\": \"ingress-config-validation\", \"rules\": [ { \"operations\": [ \"CREATE\", \"UPDATE\", \"DELETE\" ], \"apiGroups\": [ \"config.openshift.io\" ], \"apiVersions\": [ \"*\" ], \"resources\": [ \"ingresses\" ], \"scope\": \"Cluster\" } ], \"documentString\": \"Managed OpenShift customers may not modify ingress config resources because it can can degrade cluster operators and can interfere with OpenShift SRE monitoring.\" }, { \"webhookName\": \"ingresscontroller-validation\", \"rules\": [ { \"operations\": [ \"CREATE\", \"UPDATE\" ], \"apiGroups\": [ \"operator.openshift.io\" ], \"apiVersions\": [ \"*\" ], \"resources\": [ \"ingresscontroller\", \"ingresscontrollers\" ], \"scope\": \"Namespaced\" } ], \"documentString\": \"Managed OpenShift Customer may create IngressControllers without necessary taints. 
This can cause those workloads to be provisioned on infra or master nodes.\" }, { \"webhookName\": \"namespace-validation\", \"rules\": [ { \"operations\": [ \"CREATE\", \"UPDATE\", \"DELETE\" ], \"apiGroups\": [ \"\" ], \"apiVersions\": [ \"*\" ], \"resources\": [ \"namespaces\" ], \"scope\": \"Cluster\" } ], \"documentString\": \"Managed OpenShift Customers may not modify namespaces specified in the [openshift-monitoring/managed-namespaces openshift-monitoring/ocp-namespaces] ConfigMaps because customer workloads should be placed in customer-created namespaces. Customers may not create namespaces identified by this regular expression (^comUSD|^ioUSD|^inUSD) because it could interfere with critical DNS resolution. Additionally, customers may not set or change the values of these Namespace labels [managed.openshift.io/storage-pv-quota-exempt managed.openshift.io/service-lb-quota-exempt].\" }, { \"webhookName\": \"networkpolicies-validation\", \"rules\": [ { \"operations\": [ \"CREATE\", \"UPDATE\", \"DELETE\" ], \"apiGroups\": [ \"networking.k8s.io\" ], \"apiVersions\": [ \"*\" ], \"resources\": [ \"networkpolicies\" ], \"scope\": \"Namespaced\" } ], \"documentString\": \"Managed OpenShift Customers may not create NetworkPolicies in namespaces managed by Red Hat.\" }, { \"webhookName\": \"node-validation-osd\", \"rules\": [ { \"operations\": [ \"CREATE\", \"UPDATE\", \"DELETE\" ], \"apiGroups\": [ \"\" ], \"apiVersions\": [ \"*\" ], \"resources\": [ \"nodes\", \"nodes/*\" ], \"scope\": \"*\" } ], \"documentString\": \"Managed OpenShift customers may not alter Node objects.\" }, { \"webhookName\": \"pod-validation\", \"rules\": [ { \"operations\": [ \"*\" ], \"apiGroups\": [ \"v1\" ], \"apiVersions\": [ \"*\" ], \"resources\": [ \"pods\" ], \"scope\": \"Namespaced\" } ], \"documentString\": \"Managed OpenShift Customers may use tolerations on Pods that could cause those Pods to be scheduled on infra or master nodes.\" }, { \"webhookName\": \"prometheusrule-validation\", \"rules\": [ { \"operations\": [ \"CREATE\", \"UPDATE\", \"DELETE\" ], \"apiGroups\": [ \"monitoring.coreos.com\" ], \"apiVersions\": [ \"*\" ], \"resources\": [ \"prometheusrules\" ], \"scope\": \"Namespaced\" } ], \"documentString\": \"Managed OpenShift Customers may not create PrometheusRule in namespaces managed by Red Hat.\" }, { \"webhookName\": \"regular-user-validation\", \"rules\": [ { \"operations\": [ \"*\" ], \"apiGroups\": [ \"cloudcredential.openshift.io\", \"machine.openshift.io\", \"admissionregistration.k8s.io\", \"addons.managed.openshift.io\", \"cloudingress.managed.openshift.io\", \"managed.openshift.io\", \"ocmagent.managed.openshift.io\", \"splunkforwarder.managed.openshift.io\", \"upgrade.managed.openshift.io\" ], \"apiVersions\": [ \"*\" ], \"resources\": [ \"*/*\" ], \"scope\": \"*\" }, { \"operations\": [ \"*\" ], \"apiGroups\": [ \"autoscaling.openshift.io\" ], \"apiVersions\": [ \"*\" ], \"resources\": [ \"clusterautoscalers\", \"machineautoscalers\" ], \"scope\": \"*\" }, { \"operations\": [ \"*\" ], \"apiGroups\": [ \"config.openshift.io\" ], \"apiVersions\": [ \"*\" ], \"resources\": [ \"clusterversions\", \"clusterversions/status\", \"schedulers\", \"apiservers\", \"proxies\" ], \"scope\": \"*\" }, { \"operations\": [ \"CREATE\", \"UPDATE\", \"DELETE\" ], \"apiGroups\": [ \"\" ], \"apiVersions\": [ \"*\" ], \"resources\": [ \"configmaps\" ], \"scope\": \"*\" }, { \"operations\": [ \"*\" ], \"apiGroups\": [ \"machineconfiguration.openshift.io\" ], \"apiVersions\": [ \"*\" ], \"resources\": [ 
\"machineconfigs\", \"machineconfigpools\" ], \"scope\": \"*\" }, { \"operations\": [ \"*\" ], \"apiGroups\": [ \"operator.openshift.io\" ], \"apiVersions\": [ \"*\" ], \"resources\": [ \"kubeapiservers\", \"openshiftapiservers\" ], \"scope\": \"*\" }, { \"operations\": [ \"*\" ], \"apiGroups\": [ \"managed.openshift.io\" ], \"apiVersions\": [ \"*\" ], \"resources\": [ \"subjectpermissions\", \"subjectpermissions/*\" ], \"scope\": \"*\" }, { \"operations\": [ \"*\" ], \"apiGroups\": [ \"network.openshift.io\" ], \"apiVersions\": [ \"*\" ], \"resources\": [ \"netnamespaces\", \"netnamespaces/*\" ], \"scope\": \"*\" } ], \"documentString\": \"Managed OpenShift customers may not manage any objects in the following APIGroups [autoscaling.openshift.io network.openshift.io machine.openshift.io admissionregistration.k8s.io addons.managed.openshift.io cloudingress.managed.openshift.io splunkforwarder.managed.openshift.io upgrade.managed.openshift.io managed.openshift.io ocmagent.managed.openshift.io config.openshift.io machineconfiguration.openshift.io operator.openshift.io cloudcredential.openshift.io], nor may Managed OpenShift customers alter the APIServer, KubeAPIServer, OpenShiftAPIServer, ClusterVersion, Proxy or SubjectPermission objects.\" }, { \"webhookName\": \"scc-validation\", \"rules\": [ { \"operations\": [ \"UPDATE\", \"DELETE\" ], \"apiGroups\": [ \"security.openshift.io\" ], \"apiVersions\": [ \"*\" ], \"resources\": [ \"securitycontextconstraints\" ], \"scope\": \"Cluster\" } ], \"documentString\": \"Managed OpenShift Customers may not modify the following default SCCs: [anyuid hostaccess hostmount-anyuid hostnetwork hostnetwork-v2 node-exporter nonroot nonroot-v2 privileged restricted restricted-v2]\" }, { \"webhookName\": \"sdn-migration-validation\", \"rules\": [ { \"operations\": [ \"UPDATE\" ], \"apiGroups\": [ \"config.openshift.io\" ], \"apiVersions\": [ \"*\" ], \"resources\": [ \"networks\" ], \"scope\": \"Cluster\" } ], \"documentString\": \"Managed OpenShift customers may not modify the network config type because it can can degrade cluster operators and can interfere with OpenShift SRE monitoring.\" }, { \"webhookName\": \"service-mutation\", \"rules\": [ { \"operations\": [ \"CREATE\", \"UPDATE\" ], \"apiGroups\": [ \"\" ], \"apiVersions\": [ \"v1\" ], \"resources\": [ \"services\" ], \"scope\": \"Namespaced\" } ], \"documentString\": \"LoadBalancer-type services on Managed OpenShift clusters must contain an additional annotation for managed policy compliance.\" }, { \"webhookName\": \"serviceaccount-validation\", \"rules\": [ { \"operations\": [ \"DELETE\" ], \"apiGroups\": [ \"\" ], \"apiVersions\": [ \"v1\" ], \"resources\": [ \"serviceaccounts\" ], \"scope\": \"Namespaced\" } ], \"documentString\": \"Managed OpenShift Customers may not delete the service accounts under the managed namespacesγ\" }, { \"webhookName\": \"techpreviewnoupgrade-validation\", \"rules\": [ { \"operations\": [ \"CREATE\", \"UPDATE\" ], \"apiGroups\": [ \"config.openshift.io\" ], \"apiVersions\": [ \"*\" ], \"resources\": [ \"featuregates\" ], \"scope\": \"Cluster\" } ], \"documentString\": \"Managed OpenShift Customers may not use TechPreviewNoUpgrade FeatureGate that could prevent any future ability to do a y-stream upgrade to their clusters.\" } ]"
] | https://docs.redhat.com/en/documentation/openshift_dedicated/4/html-single/support/index |
Making open source more inclusive | Making open source more inclusive Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright's message . | null | https://docs.redhat.com/en/documentation/migration_toolkit_for_applications/7.2/html/user_interface_guide/making-open-source-more-inclusive |
7.38. device-mapper-multipath | 7.38. device-mapper-multipath 7.38.1. RHBA-2013:0458 - device-mapper-multipath Updated device-mapper-multipath packages that fix numerous bugs and add various enhancements are now available for Red Hat Enterprise Linux 6. The device-mapper-multipath packages provide tools for managing multipath devices using the device-mapper multipath kernel module. Bug Fixes BZ#578114 When the kpartx tool tried to delete a loop device that was previously created, and the udev utility had this loop device still open, the delete process would fail with the EBUSY error and kpartx did not attempt to retry this operation. The kpartx tool has been modified to wait for one second and then retry deleting up to three times after the EBUSY error. As a result, loop devices created by kpartx are now always deleted as expected. BZ#595692 The multipathd daemon only checked SCSI IDs when determining World Wide Identifiers ( WWIDs ) for devices. However, CCISS devices do not support SCSI IDs and could not be used by Device Mapper Multipath . With this update, multipathd checks CCISS devices for CCISS IDs properly and the devices are detected as expected. BZ#810755 Some device configurations in the /usr/share/doc/device-mapper-multipath-0.X.X/multipath.conf.defaults file were out of date. Consequently, if users copied those configurations into the /etc/multipath.conf file, their devices would be misconfigured. The multipath.conf.defaults file has been updated and users can now copy configurations from it without misconfiguring their devices. Note that copying configurations from the multipath.conf.defaults file is not recommended as the configurations in that file are built into dm-multipath by default. BZ# 810788 Previously, Device Mapper Multipath stored multiple duplicate blacklist entries, which were consequently shown when listing the device-mapper-multipath's configuration. Device Mapper Multipath has been modified to check for duplicates before storing configuration entries and to store only the unique ones. BZ#813963 Device Mapper Multipath had two Asymmetric Logical Unit Access ( ALUA ) prioritizers, which checked two different values. Certain ALUA setups were not correctly failing back to the primary path using either prioritizer because both values need to be checked and neither prioritizer checked them both. With this update, configuration options of both ALUA prioritizers now select the same prioritizer function, which checks both values as expected. BZ#816717 When removing kpartx device partitions, the multipath -f option accepted only the device name, not the full pathname. Consequently, an attempt to delete a multipath device by the full pathname failed if the device had the kpartx partitions. Device Mapper Multipath has been modified to accept the full pathname when removing kpartx device partitions, and the deletion process no longer fails in the described scenario. BZ#821885 Previously, the multipath -c option incorrectly listed SCSI devices, which were blacklisted by device type, as valid multipath path devices. As a consequence, Device Mapper Multipath could remove the partitions from SCSI devices that never ended up getting multipathed. With this update, multipath -c now checks if a SCSI device is blacklisted by device type, and reports it as invalid if it is. BZ# 822389 On reload, if a multipath device was not set to use the user_friendly_names parameter or a user-defined alias, Device Mapper Multipath would use its existing name instead of setting the WWID .
Consequently, disabling user_friendly_names did not cause the multipath device names to change back to WWIDs on reload. This bug has been fixed and Device Mapper Multipath now sets the device name to its WWID if no user_friendly_names or user-defined aliases are set. As a result, disabling user_friendly_names now allows device names to switch back to WWIDs on reload. BZ#829065 When the Redundant Disk Array Controller ( RDAC ) checker returned the DID_SOFT_ERROR error, Device Mapper Multipath did not retry running the RDAC checker. This behavior caused Device Mapper Multipath to fail paths for transient issues that may have been resolved if it retried the checker. Device Mapper Multipath has been modified to retry the RDAC checker if it receives the DID_SOFT_ERROR error and no longer fails paths due to this error. BZ#831045 When a multipath vector, which is a dynamically allocated array, was shrunk, Device Mapper Multipath was not reassigning the pointer to the array. Consequently, if the array location was changed by the shrinking, Device Mapper Multipath would corrupt its memory with unpredictable results. The underlying source code has been modified and Device Mapper Multipath now correctly reassigns the pointer after the array has been shrunk. BZ#836890 Device Mapper Multipath was occasionally assigning a WWID with a white space for AIX VDASD devices. As a consequence, there was no single blacklist WWID entry that could blacklist the device on all machines. With this update, Device Mapper Multipath assigns WWIDs without any white space characters for AIX VDASD devices, so that all machines assign the same WWID to an AIX VDASD device and the user is always able to blacklist the device on all machines. BZ#841732 If two multipath devices had their aliases swapped, Device Mapper Multipath switched their tables. Consequently, if the user switched aliases on two devices, any application using the device would be pointed to the incorrect Logical Unit Number ( LUN ). Device Mapper Multipath has been modified to check if the device's new alias matches a different multipath device, and if so, to not switch to it. BZ# 860748 Previously, Device Mapper Multipath did not check the device type and WWID blacklists as soon as this information was available for a path device. Device Mapper Multipath has been modified to check the device type and WWID blacklists as soon as this information is available. As a result, Device Mapper Multipath no longer waits before blacklisting invalid paths. BZ# 869253 Previously, the multipathd daemon and the kpartx tool did not instruct the libdevmapper utility to skip the device creation process and let udev create it. As a consequence, sometimes libdevmapper created a block device in the /dev/mapper/ directory, and sometimes udev created a symbolic link in the same directory. With this update, multipathd and kpartx prevent libdevmapper from creating a block device and udev always creates a symbolic link in the /dev/mapper/ directory as expected. Enhancements BZ# 619173 This enhancement adds a built-in configuration for SUN StorageTek 6180 to Device Mapper Multipath . BZ#735459 To set up persistent reservations on multipath devices, it was necessary to set them up on all of the path devices. If a path device was added later, the user had to manually add reservations to that path. This enhancement adds the ability to set up and manage SCSI persistent reservations using device-mapper devices with the mpathpersist utility.
As a result, when path devices are added, persistent reservations are set up as well. BZ#810989 This enhancement updates the multipathd init script to load the dm-multipath module, so that users do not have to do this manually in cases when no /etc/multipath.conf file is present during boot. Note that it is recommended to create the multipath.conf file by running the mpathconf --enable command, which also loads the dm-multipath module. BZ#818367 When the RDAC path device is in service mode, it is unable to handle I/O requests. With this enhancement, Device Mapper Multipath puts an RDAC path device into a failed state if it is in the service mode. BZ#839386 This update adds two new options to the defaults and devices sections of the multipath.conf file; the retain_attached_hw_handler option and the detect_prio option. By default, both of these options are set to no in the defaults section of the multipath.conf file. However, they are set to yes in the NETAPP/LUN device configuration file. If retain_attached_hw_handler is set to yes and the SCSI layer has attached a hardware handler to the device, Device Mapper Multipath retains that handler instead of setting the one from its configuration. If detect_prio is set to yes , Device Mapper Multipath will check if the device supports ALUA . If so, it automatically sets the prioritizer to the alua value. If the device does not support ALUA , Device Mapper Multipath sets the prioritizer as usual. This behavior allows NETAPP devices to work in ALUA or non-ALUA mode without requiring users to change the built-in configuration. In order for retain_attached_hw_handler to work, the SCSI layer must have already attached the device handler. To do this, the appropriate scsi_dh_XXX module, for instance scsi_dh_alua , must be loaded before the SCSI layer discovers the devices. To guarantee this, add the following parameter to the kernel command line: 7.38.2. RHBA-2013:1127 - device-mapper-multipath bug fix update Updated device-mapper-multipath packages that fix one bug are now available for Red Hat Enterprise Linux 6. The device-mapper-multipath packages provide tools for managing multipath devices using the device-mapper multipath kernel module. Bug Fix BZ# 988704 Prior to this update, Device Mapper Multipath did not check for NULL pointers before dereferencing them in the sysfs functions. As a result, the multipathd daemon could crash if a multipath device was resized while a path device was being removed. With this update, Device Mapper Multipath checks for NULL pointers in sysfs functions and no longer crashes when a multipath device is resized at the same time as a path device is removed. Users of device-mapper-multipath are advised to upgrade to these updated packages, which fix this bug. 7.38.3. RHEA-2013:1176 - device-mapper-multipath enhancement update Updated device-mapper-multipath packages that add one enhancement are now available for Red Hat Enterprise Linux 6. The device-mapper-multipath packages provide tools for managing multipath devices using the device-mapper multipath kernel module. Enhancement BZ# 993545 This update adds a new /etc/multipath.conf file default keyword, "reload_readwrite". If set to "yes", multipathd will listen to path device change events, and if the device has read-write access, it will reload the multipath device. This allows multipath devices to automatically have read-write permissions, as soon as the path devices have read-write access, instead of requiring manual intervention.
Thus, when all the path devices belonging to a multipath device have read-write access, the multipath device will automatically allow read-write permissions. Users of device-mapper-multipath are advised to upgrade to these updated packages, which add this enhancement. | [
"rdloaddriver=scsi_dh_XXX"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.4_technical_notes/device-mapper-multipath |
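The errata above introduce three new /etc/multipath.conf keywords: retain_attached_hw_handler, detect_prio, and reload_readwrite. The snippet below is a minimal sketch, assuming you want to opt in to all three at the defaults level; the yes values are illustrative rather than recommended settings, and the NETAPP/LUN built-in configuration already enables the first two.

defaults {
    retain_attached_hw_handler yes
    detect_prio                yes
    reload_readwrite           yes
}

After editing the file, the running daemon can pick up the change with multipathd -k"reconfigure" or a restart of the multipathd service.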
function::task_stime_tid | function::task_stime_tid Name function::task_stime_tid - System time of the given task Synopsis Arguments tid Thread id of the given task Description Returns the system time of the given task in cputime, or zero if the task doesn't exist. Does not include any time used by other tasks in this process, nor does it include any time of the children of this task. | [
"task_stime_tid:long(tid:long)"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/systemtap_tapset_reference/api-task-stime-tid |
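A usage sketch for this tapset function: the probe below samples the system time of a monitored thread every five seconds. The five-second interval and the <PID> placeholder are assumptions; run the script with stap -x <PID> so that target() resolves to the thread of interest.

probe timer.s(5) {
    # task_stime_tid() returns cputime units, or 0 if the thread no longer exists
    printf("tid %d system time (cputime): %d\n", target(), task_stime_tid(target()))
}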
function::task_nice | function::task_nice Name function::task_nice - The nice value of the task Synopsis Arguments task task_struct pointer Description This function returns the nice value of the given task. | [
"task_nice:long(task:long)"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/systemtap_tapset_reference/api-task-nice |
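A usage sketch, assuming you simply want to observe the nice value of whatever task happens to be running when a timer fires; task_current() from the task tapset supplies the required task_struct pointer.

probe timer.s(5) {
    # print the executable name, PID, and nice value of the current task
    printf("%s (pid %d) nice=%d\n", execname(), pid(), task_nice(task_current()))
}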
Chapter 3. Enabling Linux control group version 1 (cgroup v1) | Chapter 3. Enabling Linux control group version 1 (cgroup v1) As of OpenShift Container Platform 4.14, OpenShift Container Platform uses Linux control group version 2 (cgroup v2) in your cluster. If you are using cgroup v1 on OpenShift Container Platform 4.13 or earlier, migrating to OpenShift Container Platform 4.14 will not automatically update your cgroup configuration to version 2. A fresh installation of OpenShift Container Platform 4.14 will use cgroup v2 by default. However, you can enable Linux control group version 1 (cgroup v1) upon installation. Enabling cgroup v1 in OpenShift Container Platform disables all cgroup v2 controllers and hierarchies in your cluster. cgroup v2 is the current version of the Linux cgroup API. cgroup v2 offers several improvements over cgroup v1, including a unified hierarchy, safer sub-tree delegation, new features such as Pressure Stall Information , and enhanced resource management and isolation. However, cgroup v2 has different CPU, memory, and I/O management characteristics than cgroup v1. Therefore, some workloads might experience slight differences in memory or CPU usage on clusters that run cgroup v2. You can switch between cgroup v1 and cgroup v2, as needed, by editing the node.config object. For more information, see "Configuring the Linux cgroup on your nodes" in the "Additional resources" of this section. 3.1. Enabling Linux cgroup v1 during installation You can enable Linux control group version 1 (cgroup v1) when you install a cluster by creating installation manifests. Procedure Create or edit the node.config object to specify the v1 cgroup: apiVersion: config.openshift.io/v1 kind: Node metadata: name: cluster spec: cgroupMode: "v1" Proceed with the installation as usual. Additional resources OpenShift Container Platform installation overview Configuring the Linux cgroup on your nodes | [
"apiVersion: config.openshift.io/v1 kind: Node metadata: name: cluster spec: cgroupMode: \"v1\""
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.14/html/installation_configuration/enabling-cgroup-v1 |
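After installation you can confirm which cgroup version a node actually booted with. The command below is a sketch that assumes cluster-admin access and a real node name in place of <node_name>; an output of cgroup2fs indicates cgroup v2, while tmpfs indicates cgroup v1.

oc debug node/<node_name> -- chroot /host stat -fc %T /sys/fs/cgroup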
function::kernel_char | function::kernel_char Name function::kernel_char - Retrieves a char value stored in kernel memory. Synopsis Arguments addr The kernel address to retrieve the char from. General Syntax kernel_char:long(addr:long) Description Returns the char value from a given kernel memory address. Reports an error when reading from the given address fails. | [
"function kernel_char:long(addr:long)"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/systemtap_tapset_reference/api-kernel-char |
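A usage sketch: reading the first byte of a dentry name, which lives in kernel memory. The chosen probe point and the availability of the $dentry context variable are assumptions that depend on the running kernel and on matching kernel debuginfo being installed.

probe kernel.function("vfs_unlink") {
    # kernel_char() reads one byte from the kernel address of the name string
    printf("first character of the name: %c\n", kernel_char($dentry->d_name->name))
}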
Chapter 26. Headless Systems | Chapter 26. Headless Systems When installing headless systems, you can only choose between an automated Kickstart installation and an interactive VNC installation using Connect Mode. For more information about automated Kickstart installation, see Section 27.3.1, "Kickstart Commands and Options" . The general process for interactive VNC installation is described below. Set up a network boot server to start the installation. Information about installing and performing basic configuration of a network boot server can be found in Chapter 24, Preparing for a Network Installation . Configure the server to use the boot options for a Connect Mode VNC installation. For information on these boot options, see Section 25.2.2, "Installing in VNC Connect Mode" . Follow the procedure for VNC Installation using Connect Mode as described in Procedure 25.2, "Starting VNC in Connect Mode" . However, when directed to boot the system, boot it from the network server instead of local media. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/installation_guide/chap-headless-installations
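As a concrete illustration of the boot-option step above, a PXE menu entry for a Connect Mode installation might look like the sketch below. The label, repository URL, and addresses are placeholder assumptions; inst.vncconnect points at the workstation that is already running a listening VNC viewer (vncviewer -listen, port 5500 by default).

label rhel7-headless-vnc
  kernel vmlinuz
  append initrd=initrd.img inst.repo=http://192.0.2.10/rhel7 inst.vnc inst.vncconnect=192.0.2.20:5500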
Red Hat build of OpenTelemetry | Red Hat build of OpenTelemetry OpenShift Container Platform 4.15 Configuring and using the Red Hat build of OpenTelemetry in OpenShift Container Platform Red Hat OpenShift Documentation Team | [
"apiVersion: monitoring.coreos.com/v1 kind: ServiceMonitor metadata: name: opentelemetry-operator-controller-manager-metrics-service namespace: openshift-opentelemetry-operator spec: endpoints: - bearerTokenFile: /var/run/secrets/kubernetes.io/serviceaccount/token path: /metrics port: https scheme: https tlsConfig: insecureSkipVerify: true selector: matchLabels: app.kubernetes.io/name: opentelemetry-operator control-plane: controller-manager --- apiVersion: rbac.authorization.k8s.io/v1 kind: Role metadata: name: otel-operator-prometheus namespace: openshift-opentelemetry-operator annotations: include.release.openshift.io/self-managed-high-availability: \"true\" include.release.openshift.io/single-node-developer: \"true\" rules: - apiGroups: - \"\" resources: - services - endpoints - pods verbs: - get - list - watch --- apiVersion: rbac.authorization.k8s.io/v1 kind: RoleBinding metadata: name: otel-operator-prometheus namespace: openshift-opentelemetry-operator annotations: include.release.openshift.io/self-managed-high-availability: \"true\" include.release.openshift.io/single-node-developer: \"true\" roleRef: apiGroup: rbac.authorization.k8s.io kind: Role name: otel-operator-prometheus subjects: - kind: ServiceAccount name: prometheus-k8s namespace: openshift-monitoring",
"spec: mode: deployment config: | exporters: jaeger: endpoint: jaeger-production-collector-headless.tracing-system.svc:14250 ca_file: \"/var/run/secrets/kubernetes.io/serviceaccount/service-ca.crt\"",
"spec: mode: deployment config: | exporters: jaeger: endpoint: jaeger-production-collector-headless.tracing-system.svc:14250 tls: ca_file: \"/var/run/secrets/kubernetes.io/serviceaccount/service-ca.crt\"",
"apiVersion: opentelemetry.io/v1beta1 kind: OpenTelemetryCollector metadata: name: otel namespace: <project_of_opentelemetry_collector_instance> spec: mode: deployment config: receivers: 1 otlp: protocols: grpc: http: jaeger: protocols: grpc: {} thrift_binary: {} thrift_compact: {} thrift_http: {} zipkin: {} processors: 2 batch: {} memory_limiter: check_interval: 1s limit_percentage: 50 spike_limit_percentage: 30 exporters: 3 debug: {} service: pipelines: traces: receivers: [otlp,jaeger,zipkin] processors: [memory_limiter,batch] exporters: [debug]",
"oc login --username=<your_username>",
"oc apply -f - << EOF apiVersion: project.openshift.io/v1 kind: Project metadata: labels: kubernetes.io/metadata.name: openshift-opentelemetry-operator openshift.io/cluster-monitoring: \"true\" name: openshift-opentelemetry-operator EOF",
"oc apply -f - << EOF apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: openshift-opentelemetry-operator namespace: openshift-opentelemetry-operator spec: upgradeStrategy: Default EOF",
"oc apply -f - << EOF apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: opentelemetry-product namespace: openshift-opentelemetry-operator spec: channel: stable installPlanApproval: Automatic name: opentelemetry-product source: redhat-operators sourceNamespace: openshift-marketplace EOF",
"oc get csv -n openshift-opentelemetry-operator",
"oc new-project <project_of_opentelemetry_collector_instance>",
"oc apply -f - << EOF apiVersion: project.openshift.io/v1 kind: Project metadata: name: <project_of_opentelemetry_collector_instance> EOF",
"apiVersion: opentelemetry.io/v1beta1 kind: OpenTelemetryCollector metadata: name: otel namespace: <project_of_opentelemetry_collector_instance> spec: mode: deployment config: receivers: 1 otlp: protocols: grpc: http: jaeger: protocols: grpc: {} thrift_binary: {} thrift_compact: {} thrift_http: {} zipkin: {} processors: 2 batch: {} memory_limiter: check_interval: 1s limit_percentage: 50 spike_limit_percentage: 30 exporters: 3 debug: {} service: pipelines: traces: receivers: [otlp,jaeger,zipkin] processors: [memory_limiter,batch] exporters: [debug]",
"oc apply -f - << EOF <OpenTelemetryCollector_custom_resource> EOF",
"oc get pod -l app.kubernetes.io/managed-by=opentelemetry-operator,app.kubernetes.io/instance=<namespace>.<instance_name> -o yaml",
"oc get service -l app.kubernetes.io/managed-by=opentelemetry-operator,app.kubernetes.io/instance=<namespace>.<instance_name>",
"apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: generate-processors-rbac rules: - apiGroups: - rbac.authorization.k8s.io resources: - clusterrolebindings - clusterroles verbs: - create - delete - get - list - patch - update - watch --- apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: generate-processors-rbac roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: generate-processors-rbac subjects: - kind: ServiceAccount name: opentelemetry-operator-controller-manager namespace: openshift-opentelemetry-operator",
"apiVersion: opentelemetry.io/v1beta1 kind: OpenTelemetryCollector metadata: name: cluster-collector namespace: tracing-system spec: mode: deployment observability: metrics: enableMetrics: true config: receivers: otlp: protocols: grpc: {} http: {} processors: {} exporters: otlp: endpoint: otel-collector-headless.tracing-system.svc:4317 tls: ca_file: \"/var/run/secrets/kubernetes.io/serviceaccount/service-ca.crt\" prometheus: endpoint: 0.0.0.0:8889 resource_to_telemetry_conversion: enabled: true # by default resource attributes are dropped service: 1 pipelines: traces: receivers: [otlp] processors: [] exporters: [otlp] metrics: receivers: [otlp] processors: [] exporters: [prometheus]",
"receivers:",
"processors:",
"exporters:",
"connectors:",
"extensions:",
"service: pipelines:",
"service: pipelines: traces: receivers:",
"service: pipelines: traces: processors:",
"service: pipelines: traces: exporters:",
"service: pipelines: metrics: receivers:",
"service: pipelines: metrics: processors:",
"service: pipelines: metrics: exporters:",
"apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: generate-processors-rbac rules: - apiGroups: - rbac.authorization.k8s.io resources: - clusterrolebindings - clusterroles verbs: - create - delete - get - list - patch - update - watch --- apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: generate-processors-rbac roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: generate-processors-rbac subjects: - kind: ServiceAccount name: opentelemetry-operator-controller-manager namespace: openshift-opentelemetry-operator",
"config: receivers: otlp: protocols: grpc: endpoint: 0.0.0.0:4317 1 tls: 2 ca_file: ca.pem cert_file: cert.pem key_file: key.pem client_ca_file: client.pem 3 reload_interval: 1h 4 http: endpoint: 0.0.0.0:4318 5 tls: {} 6 service: pipelines: traces: receivers: [otlp] metrics: receivers: [otlp]",
"config: receivers: jaeger: protocols: grpc: endpoint: 0.0.0.0:14250 1 thrift_http: endpoint: 0.0.0.0:14268 2 thrift_compact: endpoint: 0.0.0.0:6831 3 thrift_binary: endpoint: 0.0.0.0:6832 4 tls: {} 5 service: pipelines: traces: receivers: [jaeger]",
"apiVersion: v1 kind: ServiceAccount metadata: name: otel-hostfs-daemonset namespace: <namespace> --- apiVersion: security.openshift.io/v1 kind: SecurityContextConstraints allowHostDirVolumePlugin: true allowHostIPC: false allowHostNetwork: false allowHostPID: true allowHostPorts: false allowPrivilegeEscalation: true allowPrivilegedContainer: true allowedCapabilities: null defaultAddCapabilities: - SYS_ADMIN fsGroup: type: RunAsAny groups: [] metadata: name: otel-hostmetrics readOnlyRootFilesystem: true runAsUser: type: RunAsAny seLinuxContext: type: RunAsAny supplementalGroups: type: RunAsAny users: - system:serviceaccount:<namespace>:otel-hostfs-daemonset volumes: - configMap - emptyDir - hostPath - projected --- apiVersion: opentelemetry.io/v1beta1 kind: OpenTelemetryCollector metadata: name: otel namespace: <namespace> spec: serviceAccount: otel-hostfs-daemonset mode: daemonset volumeMounts: - mountPath: /hostfs name: host readOnly: true volumes: - hostPath: path: / name: host config: receivers: hostmetrics: collection_interval: 10s 1 initial_delay: 1s 2 root_path: / 3 scrapers: 4 cpu: {} memory: {} disk: {} service: pipelines: metrics: receivers: [hostmetrics]",
"apiVersion: v1 kind: ServiceAccount metadata: name: otel-k8sobj namespace: <namespace> --- apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: otel-k8sobj namespace: <namespace> rules: - apiGroups: - \"\" resources: - events - pods verbs: - get - list - watch - apiGroups: - \"events.k8s.io\" resources: - events verbs: - watch - list --- apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: otel-k8sobj subjects: - kind: ServiceAccount name: otel-k8sobj namespace: <namespace> roleRef: kind: ClusterRole name: otel-k8sobj apiGroup: rbac.authorization.k8s.io --- apiVersion: opentelemetry.io/v1beta1 kind: OpenTelemetryCollector metadata: name: otel-k8s-obj namespace: <namespace> spec: serviceAccount: otel-k8sobj mode: deployment config: receivers: k8sobjects: auth_type: serviceAccount objects: - name: pods 1 mode: pull 2 interval: 30s 3 label_selector: 4 field_selector: 5 namespaces: [<namespace>,...] 6 - name: events mode: watch exporters: debug: service: pipelines: logs: receivers: [k8sobjects] exporters: [debug]",
"config: receivers: kubeletstats: collection_interval: 20s auth_type: \"serviceAccount\" endpoint: \"https://USD{env:K8S_NODE_NAME}:10250\" insecure_skip_verify: true service: pipelines: metrics: receivers: [kubeletstats] env: - name: K8S_NODE_NAME 1 valueFrom: fieldRef: fieldPath: spec.nodeName",
"apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: otel-collector rules: - apiGroups: [''] resources: ['nodes/stats'] verbs: ['get', 'watch', 'list'] - apiGroups: [\"\"] resources: [\"nodes/proxy\"] 1 verbs: [\"get\"]",
"config: receivers: prometheus: config: scrape_configs: 1 - job_name: 'my-app' 2 scrape_interval: 5s 3 static_configs: - targets: ['my-app.example.svc.cluster.local:8888'] 4 service: pipelines: metrics: receivers: [prometheus]",
"config: otlpjsonfile: include: - \"/var/log/*.log\" 1 exclude: - \"/var/log/test.log\" 2",
"config: receivers: zipkin: endpoint: 0.0.0.0:9411 1 tls: {} 2 service: pipelines: traces: receivers: [zipkin]",
"config: receivers: kafka: brokers: [\"localhost:9092\"] 1 protocol_version: 2.0.0 2 topic: otlp_spans 3 auth: plain_text: 4 username: example password: example tls: 5 ca_file: ca.pem cert_file: cert.pem key_file: key.pem insecure: false 6 server_name_override: kafka.example.corp 7 service: pipelines: traces: receivers: [kafka]",
"config: receivers: k8s_cluster: distribution: openshift collection_interval: 10s exporters: debug: {} service: pipelines: metrics: receivers: [k8s_cluster] exporters: [debug] logs/entity_events: receivers: [k8s_cluster] exporters: [debug]",
"apiVersion: v1 kind: ServiceAccount metadata: labels: app: otelcontribcol name: otelcontribcol",
"apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: otelcontribcol labels: app: otelcontribcol rules: - apiGroups: - quota.openshift.io resources: - clusterresourcequotas verbs: - get - list - watch - apiGroups: - \"\" resources: - events - namespaces - namespaces/status - nodes - nodes/spec - pods - pods/status - replicationcontrollers - replicationcontrollers/status - resourcequotas - services verbs: - get - list - watch - apiGroups: - apps resources: - daemonsets - deployments - replicasets - statefulsets verbs: - get - list - watch - apiGroups: - extensions resources: - daemonsets - deployments - replicasets verbs: - get - list - watch - apiGroups: - batch resources: - jobs - cronjobs verbs: - get - list - watch - apiGroups: - autoscaling resources: - horizontalpodautoscalers verbs: - get - list - watch",
"apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: otelcontribcol labels: app: otelcontribcol roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: otelcontribcol subjects: - kind: ServiceAccount name: otelcontribcol namespace: default",
"config: receivers: opencensus: endpoint: 0.0.0.0:9411 1 tls: 2 cors_allowed_origins: 3 - https://*.<example>.com service: pipelines: traces: receivers: [opencensus]",
"config: receivers: filelog: include: [ /simple.log ] 1 operators: 2 - type: regex_parser regex: '^(?P<time>\\d{4}-\\d{2}-\\d{2} \\d{2}:\\d{2}:\\d{2}) (?P<sev>[A-Z]*) (?P<msg>.*)USD' timestamp: parse_from: attributes.time layout: '%Y-%m-%d %H:%M:%S' severity: parse_from: attributes.sev",
"apiVersion: v1 kind: Namespace metadata: name: otel-journald labels: security.openshift.io/scc.podSecurityLabelSync: \"false\" pod-security.kubernetes.io/enforce: \"privileged\" pod-security.kubernetes.io/audit: \"privileged\" pod-security.kubernetes.io/warn: \"privileged\" --- apiVersion: v1 kind: ServiceAccount metadata: name: privileged-sa namespace: otel-journald --- apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: otel-journald-binding roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: system:openshift:scc:privileged subjects: - kind: ServiceAccount name: privileged-sa namespace: otel-journald --- apiVersion: opentelemetry.io/v1beta1 kind: OpenTelemetryCollector metadata: name: otel-journald-logs namespace: otel-journald spec: mode: daemonset serviceAccount: privileged-sa securityContext: allowPrivilegeEscalation: false capabilities: drop: - CHOWN - DAC_OVERRIDE - FOWNER - FSETID - KILL - NET_BIND_SERVICE - SETGID - SETPCAP - SETUID readOnlyRootFilesystem: true seLinuxOptions: type: spc_t seccompProfile: type: RuntimeDefault config: receivers: journald: files: /var/log/journal/*/* priority: info 1 units: 2 - kubelet - crio - init.scope - dnsmasq all: true 3 retry_on_failure: enabled: true 4 initial_interval: 1s 5 max_interval: 30s 6 max_elapsed_time: 5m 7 processors: exporters: debug: {} service: pipelines: logs: receivers: [journald] exporters: [debug] volumeMounts: - name: journal-logs mountPath: /var/log/journal/ readOnly: true volumes: - name: journal-logs hostPath: path: /var/log/journal tolerations: - key: node-role.kubernetes.io/master operator: Exists effect: NoSchedule",
"apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: otel-collector labels: app: otel-collector rules: - apiGroups: - \"\" resources: - events - namespaces - namespaces/status - nodes - nodes/spec - pods - pods/status - replicationcontrollers - replicationcontrollers/status - resourcequotas - services verbs: - get - list - watch - apiGroups: - apps resources: - daemonsets - deployments - replicasets - statefulsets verbs: - get - list - watch - apiGroups: - extensions resources: - daemonsets - deployments - replicasets verbs: - get - list - watch - apiGroups: - batch resources: - jobs - cronjobs verbs: - get - list - watch - apiGroups: - autoscaling resources: - horizontalpodautoscalers verbs: - get - list - watch",
"serviceAccount: otel-collector 1 config: receivers: k8s_events: namespaces: [project1, project2] 2 service: pipelines: logs: receivers: [k8s_events]",
"config: processors: batch: timeout: 5s send_batch_max_size: 10000 service: pipelines: traces: processors: [batch] metrics: processors: [batch]",
"config: processors: memory_limiter: check_interval: 1s limit_mib: 4000 spike_limit_mib: 800 service: pipelines: traces: processors: [batch] metrics: processors: [batch]",
"kind: ClusterRole metadata: name: otel-collector rules: - apiGroups: [\"config.openshift.io\"] resources: [\"infrastructures\", \"infrastructures/status\"] verbs: [\"get\", \"watch\", \"list\"]",
"config: processors: resourcedetection: detectors: [openshift] override: true service: pipelines: traces: processors: [resourcedetection] metrics: processors: [resourcedetection]",
"config: processors: resourcedetection/env: detectors: [env] 1 timeout: 2s override: false",
"config: processors: attributes/example: actions: - key: db.table action: delete - key: redacted_span value: true action: upsert - key: copy_key from_attribute: key_original action: update - key: account_id value: 2245 action: insert - key: account_password action: delete - key: account_email action: hash - key: http.status_code action: convert converted_type: int",
"config: processors: attributes: - key: cloud.availability_zone value: \"zone-1\" action: upsert - key: k8s.cluster.name from_attribute: k8s-cluster action: insert - key: redundant-attribute action: delete",
"config: processors: span: name: from_attributes: [<key1>, <key2>, ...] 1 separator: <value> 2",
"config: processors: span/to_attributes: name: to_attributes: rules: - ^\\/api\\/v1\\/document\\/(?P<documentId>.*)\\/updateUSD 1",
"config: processors: span/set_status: status: code: Error description: \"<error_description>\"",
"kind: ClusterRole metadata: name: otel-collector rules: - apiGroups: [''] resources: ['pods', 'namespaces'] verbs: ['get', 'watch', 'list']",
"config: processors: k8sattributes: filter: node_from_env_var: KUBE_NODE_NAME",
"config: processors: filter/ottl: error_mode: ignore 1 traces: span: - 'attributes[\"container.name\"] == \"app_container_1\"' 2 - 'resource.attributes[\"host.name\"] == \"localhost\"' 3",
"config: processors: routing: from_attribute: X-Tenant 1 default_exporters: 2 - jaeger table: 3 - value: acme exporters: [jaeger/acme] exporters: jaeger: endpoint: localhost:14250 jaeger/acme: endpoint: localhost:24250",
"config: processors: cumulativetodelta: include: 1 match_type: strict 2 metrics: 3 - <metric_1_name> - <metric_2_name> exclude: 4 match_type: regexp metrics: - \"<regular_expression_for_metric_names>\"",
"config: processors: groupbyattrs: keys: 1 - <key1> 2 - <key2>",
"config: processors: transform: error_mode: ignore 1 <trace|metric|log>_statements: 2 - context: <string> 3 conditions: 4 - <string> - <string> statements: 5 - <string> - <string> - <string> - context: <string> statements: - <string> - <string> - <string>",
"config: transform: error_mode: ignore trace_statements: 1 - context: resource statements: - keep_keys(attributes, [\"service.name\", \"service.namespace\", \"cloud.region\", \"process.command_line\"]) 2 - replace_pattern(attributes[\"process.command_line\"], \"password\\\\=[^\\\\s]*(\\\\s?)\", \"password=***\") 3 - limit(attributes, 100, []) - truncate_all(attributes, 4096) - context: span 4 statements: - set(status.code, 1) where attributes[\"http.path\"] == \"/health\" - set(name, attributes[\"http.route\"]) - replace_match(attributes[\"http.target\"], \"/user/*/list/*\", \"/user/{userId}/list/{listId}\") - limit(attributes, 100, []) - truncate_all(attributes, 4096)",
"config: exporters: otlp: endpoint: tempo-ingester:4317 1 tls: 2 ca_file: ca.pem cert_file: cert.pem key_file: key.pem insecure: false 3 insecure_skip_verify: false # 4 reload_interval: 1h 5 server_name_override: <name> 6 headers: 7 X-Scope-OrgID: \"dev\" service: pipelines: traces: exporters: [otlp] metrics: exporters: [otlp]",
"config: exporters: otlphttp: endpoint: http://tempo-ingester:4318 1 tls: 2 headers: 3 X-Scope-OrgID: \"dev\" disable_keep_alives: false 4 service: pipelines: traces: exporters: [otlphttp] metrics: exporters: [otlphttp]",
"config: exporters: debug: verbosity: detailed 1 sampling_initial: 5 2 sampling_thereafter: 200 3 use_internal_logger: true 4 service: pipelines: traces: exporters: [debug] metrics: exporters: [debug]",
"config: exporters: loadbalancing: routing_key: \"service\" 1 protocol: otlp: 2 timeout: 1s resolver: 3 static: 4 hostnames: - backend-1:4317 - backend-2:4317 dns: 5 hostname: otelcol-headless.observability.svc.cluster.local k8s: 6 service: lb-svc.kube-public ports: - 15317 - 16317",
"config: exporters: prometheus: endpoint: 0.0.0.0:8889 1 tls: 2 ca_file: ca.pem cert_file: cert.pem key_file: key.pem namespace: prefix 3 const_labels: 4 label1: value1 enable_open_metrics: true 5 resource_to_telemetry_conversion: 6 enabled: true metric_expiration: 180m 7 add_metric_suffixes: false 8 service: pipelines: metrics: exporters: [prometheus]",
"config: exporters: prometheusremotewrite: endpoint: \"https://my-prometheus:7900/api/v1/push\" 1 tls: 2 ca_file: ca.pem cert_file: cert.pem key_file: key.pem target_info: true 3 export_created_metric: true 4 max_batch_size_bytes: 3000000 5 service: pipelines: metrics: exporters: [prometheusremotewrite]",
"config: exporters: kafka: brokers: [\"localhost:9092\"] 1 protocol_version: 2.0.0 2 topic: otlp_spans 3 auth: plain_text: 4 username: example password: example tls: 5 ca_file: ca.pem cert_file: cert.pem key_file: key.pem insecure: false 6 server_name_override: kafka.example.corp 7 service: pipelines: traces: exporters: [kafka]",
"config: exporters: awscloudwatchlogs: log_group_name: \"<group_name_of_amazon_cloudwatch_logs>\" 1 log_stream_name: \"<log_stream_of_amazon_cloudwatch_logs>\" 2 region: <aws_region_of_log_stream> 3 endpoint: <service_endpoint_of_amazon_cloudwatch_logs> 4 log_retention: <supported_value_in_days> 5",
"config: exporters: awsemf: log_group_name: \"<group_name_of_amazon_cloudwatch_logs>\" 1 log_stream_name: \"<log_stream_of_amazon_cloudwatch_logs>\" 2 resource_to_telemetry_conversion: 3 enabled: true region: <region> 4 endpoint: <endpoint> 5 log_retention: <supported_value_in_days> 6 namespace: <custom_namespace> 7",
"config: exporters: awsxray: region: \"<region>\" 1 endpoint: <endpoint> 2 resource_arn: \"<aws_resource_arn>\" 3 role_arn: \"<iam_role>\" 4 indexed_attributes: [ \"<indexed_attr_0>\", \"<indexed_attr_1>\" ] 5 aws_log_groups: [\"<group1>\", \"<group2>\"] 6 request_timeout_seconds: 120 7",
"config: | exporters: file: path: /data/metrics.json 1 rotation: 2 max_megabytes: 10 3 max_days: 3 4 max_backups: 3 5 localtime: true 6 format: proto 7 compression: zstd 8 flush_interval: 5 9",
"config: receivers: otlp: protocols: grpc: endpoint: 0.0.0.0:4317 exporters: prometheus: endpoint: 0.0.0.0:8889 connectors: count: {} service: pipelines: 1 traces/in: receivers: [otlp] exporters: [count] 2 metrics/out: receivers: [count] 3 exporters: [prometheus]",
"config: connectors: count: spans: 1 <custom_metric_name>: 2 description: \"<custom_metric_description>\" conditions: - 'attributes[\"env\"] == \"dev\"' - 'name == \"devevent\"'",
"config: connectors: count: logs: 1 <custom_metric_name>: 2 description: \"<custom_metric_description>\" attributes: - key: env default_value: unknown 3",
"config: connectors: routing: table: 1 - statement: route() where attributes[\"X-Tenant\"] == \"dev\" 2 pipelines: [traces/dev] 3 - statement: route() where attributes[\"X-Tenant\"] == \"prod\" pipelines: [traces/prod] default_pipelines: [traces/dev] 4 error_mode: ignore 5 match_once: false 6 service: pipelines: traces/in: receivers: [otlp] exporters: [routing] traces/dev: receivers: [routing] exporters: [otlp/dev] traces/prod: receivers: [routing] exporters: [otlp/prod]",
"config: receivers: otlp: protocols: grpc: jaeger: protocols: grpc: processors: batch: exporters: otlp: endpoint: tempo-simplest-distributor:4317 tls: insecure: true connectors: forward: {} service: pipelines: traces/regiona: receivers: [otlp] processors: [] exporters: [forward] traces/regionb: receivers: [jaeger] processors: [] exporters: [forward] traces: receivers: [forward] processors: [batch] exporters: [otlp]",
"config: connectors: spanmetrics: metrics_flush_interval: 15s 1 service: pipelines: traces: exporters: [spanmetrics] metrics: receivers: [spanmetrics]",
"config: extensions: bearertokenauth: scheme: \"Bearer\" 1 token: \"<token>\" 2 filename: \"<token_file>\" 3 receivers: otlp: protocols: http: auth: authenticator: bearertokenauth 4 exporters: otlp: auth: authenticator: bearertokenauth 5 service: extensions: [bearertokenauth] pipelines: traces: receivers: [otlp] exporters: [otlp]",
"config: extensions: oauth2client: client_id: <client_id> 1 client_secret: <client_secret> 2 endpoint_params: 3 audience: <audience> token_url: https://example.com/oauth2/default/v1/token 4 scopes: [\"api.metrics\"] 5 # tls settings for the token client tls: 6 insecure: true 7 ca_file: /var/lib/mycert.pem 8 cert_file: <cert_file> 9 key_file: <key_file> 10 timeout: 2s 11 receivers: otlp: protocols: http: {} exporters: otlp: auth: authenticator: oauth2client 12 service: extensions: [oauth2client] pipelines: traces: receivers: [otlp] exporters: [otlp]",
"config: extensions: file_storage/all_settings: directory: /var/lib/otelcol/mydir 1 timeout: 1s 2 compaction: on_start: true 3 directory: /tmp/ 4 max_transaction_size: 65_536 5 fsync: false 6 exporters: otlp: sending_queue: storage: file_storage/all_settings 7 service: extensions: [file_storage/all_settings] 8 pipelines: traces: receivers: [otlp] exporters: [otlp]",
"config: extensions: oidc: attribute: authorization 1 issuer_url: https://example.com/auth/realms/opentelemetry 2 issuer_ca_path: /var/run/tls/issuer.pem 3 audience: otel-collector 4 username_claim: email 5 receivers: otlp: protocols: grpc: auth: authenticator: oidc exporters: debug: {} service: extensions: [oidc] pipelines: traces: receivers: [otlp] exporters: [debug]",
"config: extensions: jaegerremotesampling: source: reload_interval: 30s 1 remote: endpoint: jaeger-collector:14250 2 file: /etc/otelcol/sampling_strategies.json 3 receivers: otlp: protocols: http: {} exporters: debug: {} service: extensions: [jaegerremotesampling] pipelines: traces: receivers: [otlp] exporters: [debug]",
"{ \"service_strategies\": [ { \"service\": \"foo\", \"type\": \"probabilistic\", \"param\": 0.8, \"operation_strategies\": [ { \"operation\": \"op1\", \"type\": \"probabilistic\", \"param\": 0.2 }, { \"operation\": \"op2\", \"type\": \"probabilistic\", \"param\": 0.4 } ] }, { \"service\": \"bar\", \"type\": \"ratelimiting\", \"param\": 5 } ], \"default_strategy\": { \"type\": \"probabilistic\", \"param\": 0.5, \"operation_strategies\": [ { \"operation\": \"/health\", \"type\": \"probabilistic\", \"param\": 0.0 }, { \"operation\": \"/metrics\", \"type\": \"probabilistic\", \"param\": 0.0 } ] } }",
"config: extensions: pprof: endpoint: localhost:1777 1 block_profile_fraction: 0 2 mutex_profile_fraction: 0 3 save_to_file: test.pprof 4 receivers: otlp: protocols: http: {} exporters: debug: {} service: extensions: [pprof] pipelines: traces: receivers: [otlp] exporters: [debug]",
"config: extensions: health_check: endpoint: \"0.0.0.0:13133\" 1 tls: 2 ca_file: \"/path/to/ca.crt\" cert_file: \"/path/to/cert.crt\" key_file: \"/path/to/key.key\" path: \"/health/status\" 3 check_collector_pipeline: 4 enabled: true 5 interval: \"5m\" 6 exporter_failure_threshold: 5 7 receivers: otlp: protocols: http: {} exporters: debug: {} service: extensions: [health_check] pipelines: traces: receivers: [otlp] exporters: [debug]",
"config: extensions: zpages: endpoint: \"localhost:55679\" 1 receivers: otlp: protocols: http: {} exporters: debug: {} service: extensions: [zpages] pipelines: traces: receivers: [otlp] exporters: [debug]",
"oc port-forward pod/USD(oc get pod -l app.kubernetes.io/name=instance-collector -o=jsonpath='{.items[0].metadata.name}') 55679",
"apiVersion: opentelemetry.io/v1beta1 kind: OpenTelemetryCollector metadata: name: otel namespace: observability spec: mode: statefulset 1 targetAllocator: enabled: true 2 serviceAccount: 3 prometheusCR: enabled: true 4 scrapeInterval: 10s serviceMonitorSelector: 5 name: app1 podMonitorSelector: 6 name: app2 config: receivers: prometheus: 7 config: scrape_configs: [] processors: exporters: debug: {} service: pipelines: metrics: receivers: [prometheus] processors: [] exporters: [debug]",
"apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: otel-targetallocator rules: - apiGroups: [\"\"] resources: - services - pods - namespaces verbs: [\"get\", \"list\", \"watch\"] - apiGroups: [\"monitoring.coreos.com\"] resources: - servicemonitors - podmonitors - scrapeconfigs - probes verbs: [\"get\", \"list\", \"watch\"] - apiGroups: [\"discovery.k8s.io\"] resources: - endpointslices verbs: [\"get\", \"list\", \"watch\"] --- apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: otel-targetallocator roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: otel-targetallocator subjects: - kind: ServiceAccount name: otel-targetallocator 1 namespace: observability 2",
"apiVersion: opentelemetry.io/v1alpha1 kind: Instrumentation metadata: name: java-instrumentation spec: env: - name: OTEL_EXPORTER_OTLP_TIMEOUT value: \"20\" exporter: endpoint: http://production-collector.observability.svc.cluster.local:4317 propagators: - w3c sampler: type: parentbased_traceidratio argument: \"0.25\" java: env: - name: OTEL_JAVAAGENT_DEBUG value: \"true\"",
"apiVersion: opentelemetry.io/v1alpha1 kind: Instrumentation spec exporter: endpoint: https://production-collector.observability.svc.cluster.local:4317 1 tls: configMapName: ca-bundle 2 ca_file: service-ca.crt 3",
"apiVersion: opentelemetry.io/v1alpha1 kind: Instrumentation spec exporter: endpoint: https://production-collector.observability.svc.cluster.local:4317 1 tls: secretName: serving-certs 2 ca_file: service-ca.crt 3 cert_file: tls.crt 4 key_file: tls.key 5",
"apiVersion: v1 kind: ConfigMap metadata: name: otelcol-cabundle namespace: tutorial-application annotations: service.beta.openshift.io/inject-cabundle: \"true\" --- apiVersion: opentelemetry.io/v1alpha1 kind: Instrumentation metadata: name: my-instrumentation spec: exporter: endpoint: https://simplest-collector.tracing-system.svc.cluster.local:4317 tls: configMapName: otelcol-cabundle ca: service-ca.crt",
"instrumentation.opentelemetry.io/inject-apache-httpd: \"true\"",
"instrumentation.opentelemetry.io/inject-dotnet: \"true\"",
"instrumentation.opentelemetry.io/inject-go: \"true\"",
"apiVersion: security.openshift.io/v1 kind: SecurityContextConstraints metadata: name: otel-go-instrumentation-scc allowHostDirVolumePlugin: true allowPrivilegeEscalation: true allowPrivilegedContainer: true allowedCapabilities: - \"SYS_PTRACE\" fsGroup: type: RunAsAny runAsUser: type: RunAsAny seLinuxContext: type: RunAsAny seccompProfiles: - '*' supplementalGroups: type: RunAsAny",
"oc adm policy add-scc-to-user otel-go-instrumentation-scc -z <service_account>",
"instrumentation.opentelemetry.io/inject-java: \"true\"",
"instrumentation.opentelemetry.io/inject-nodejs: \"true\" instrumentation.opentelemetry.io/otel-go-auto-target-exe: \"/path/to/container/executable\"",
"instrumentation.opentelemetry.io/inject-python: \"true\"",
"instrumentation.opentelemetry.io/container-names: \"<container_1>,<container_2>\"",
"instrumentation.opentelemetry.io/<application_language>-container-names: \"<container_1>,<container_2>\" 1",
"apiVersion: project.openshift.io/v1 kind: Project metadata: name: observability",
"apiVersion: v1 kind: ServiceAccount metadata: name: otel-collector-sidecar namespace: observability",
"apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: otel-collector rules: - apiGroups: [\"\", \"config.openshift.io\"] resources: [\"pods\", \"namespaces\", \"infrastructures\", \"infrastructures/status\"] verbs: [\"get\", \"watch\", \"list\"] --- apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: otel-collector subjects: - kind: ServiceAccount name: otel-collector-sidecar namespace: observability roleRef: kind: ClusterRole name: otel-collector apiGroup: rbac.authorization.k8s.io",
"apiVersion: opentelemetry.io/v1beta1 kind: OpenTelemetryCollector metadata: name: otel namespace: observability spec: serviceAccount: otel-collector-sidecar mode: sidecar config: serviceAccount: otel-collector-sidecar receivers: otlp: protocols: grpc: {} http: {} processors: batch: {} memory_limiter: check_interval: 1s limit_percentage: 50 spike_limit_percentage: 30 resourcedetection: detectors: [openshift] timeout: 2s exporters: otlp: endpoint: \"tempo-<example>-gateway:8090\" 1 tls: insecure: true service: pipelines: traces: receivers: [otlp] processors: [memory_limiter, resourcedetection, batch] exporters: [otlp]",
"apiVersion: project.openshift.io/v1 kind: Project metadata: name: observability",
"apiVersion: v1 kind: ServiceAccount metadata: name: otel-collector-deployment namespace: observability",
"apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: otel-collector rules: - apiGroups: [\"\", \"config.openshift.io\"] resources: [\"pods\", \"namespaces\", \"infrastructures\", \"infrastructures/status\"] verbs: [\"get\", \"watch\", \"list\"] --- apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: otel-collector subjects: - kind: ServiceAccount name: otel-collector-deployment namespace: observability roleRef: kind: ClusterRole name: otel-collector apiGroup: rbac.authorization.k8s.io",
"apiVersion: opentelemetry.io/v1beta1 kind: OpenTelemetryCollector metadata: name: otel namespace: observability spec: mode: deployment serviceAccount: otel-collector-deployment config: receivers: jaeger: protocols: grpc: {} thrift_binary: {} thrift_compact: {} thrift_http: {} opencensus: otlp: protocols: grpc: {} http: {} zipkin: {} processors: batch: {} k8sattributes: {} memory_limiter: check_interval: 1s limit_percentage: 50 spike_limit_percentage: 30 resourcedetection: detectors: [openshift] exporters: otlp: endpoint: \"tempo-<example>-distributor:4317\" 1 tls: insecure: true service: pipelines: traces: receivers: [jaeger, opencensus, otlp, zipkin] processors: [memory_limiter, k8sattributes, resourcedetection, batch] exporters: [otlp]",
"apiVersion: opentelemetry.io/v1beta1 kind: OpenTelemetryCollector spec: mode: deployment observability: metrics: enableMetrics: true 1 config: exporters: prometheus: endpoint: 0.0.0.0:8889 resource_to_telemetry_conversion: enabled: true # by default resource attributes are dropped service: telemetry: metrics: address: \":8888\" pipelines: metrics: exporters: [prometheus]",
"apiVersion: monitoring.coreos.com/v1 kind: PodMonitor metadata: name: otel-collector spec: selector: matchLabels: app.kubernetes.io/name: <cr_name>-collector 1 podMetricsEndpoints: - port: metrics 2 - port: promexporter 3 relabelings: - action: labeldrop regex: pod - action: labeldrop regex: container - action: labeldrop regex: endpoint metricRelabelings: - action: labeldrop regex: instance - action: labeldrop regex: job",
"apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: otel-collector roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: cluster-monitoring-view 1 subjects: - kind: ServiceAccount name: otel-collector namespace: observability --- kind: ConfigMap apiVersion: v1 metadata: name: cabundle namespce: observability annotations: service.beta.openshift.io/inject-cabundle: \"true\" 2 --- apiVersion: opentelemetry.io/v1beta1 kind: OpenTelemetryCollector metadata: name: otel namespace: observability spec: volumeMounts: - name: cabundle-volume mountPath: /etc/pki/ca-trust/source/service-ca readOnly: true volumes: - name: cabundle-volume configMap: name: cabundle mode: deployment config: receivers: prometheus: 3 config: scrape_configs: - job_name: 'federate' scrape_interval: 15s scheme: https tls_config: ca_file: /etc/pki/ca-trust/source/service-ca/service-ca.crt bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token honor_labels: false params: 'match[]': - '{__name__=\"<metric_name>\"}' 4 metrics_path: '/federate' static_configs: - targets: - \"prometheus-k8s.openshift-monitoring.svc.cluster.local:9091\" exporters: debug: 5 verbosity: detailed service: pipelines: metrics: receivers: [prometheus] processors: [] exporters: [debug]",
"apiVersion: v1 kind: ServiceAccount metadata: name: otel-collector-deployment",
"apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: otel-collector rules: 1 2 - apiGroups: [\"\", \"config.openshift.io\"] resources: [\"pods\", \"namespaces\", \"infrastructures\", \"infrastructures/status\"] verbs: [\"get\", \"watch\", \"list\"]",
"apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: otel-collector subjects: - kind: ServiceAccount name: otel-collector-deployment namespace: otel-collector-example roleRef: kind: ClusterRole name: otel-collector apiGroup: rbac.authorization.k8s.io",
"apiVersion: opentelemetry.io/v1beta1 kind: OpenTelemetryCollector metadata: name: otel spec: mode: deployment serviceAccount: otel-collector-deployment config: receivers: jaeger: protocols: grpc: {} thrift_binary: {} thrift_compact: {} thrift_http: {} opencensus: {} otlp: protocols: grpc: {} http: {} zipkin: {} processors: batch: {} k8sattributes: {} memory_limiter: check_interval: 1s limit_percentage: 50 spike_limit_percentage: 30 resourcedetection: detectors: [openshift] exporters: otlp: endpoint: \"tempo-simplest-distributor:4317\" 1 tls: insecure: true service: pipelines: traces: receivers: [jaeger, opencensus, otlp, zipkin] 2 processors: [memory_limiter, k8sattributes, resourcedetection, batch] exporters: [otlp]",
"apiVersion: batch/v1 kind: Job metadata: name: telemetrygen spec: template: spec: containers: - name: telemetrygen image: ghcr.io/open-telemetry/opentelemetry-collector-contrib/telemetrygen:latest args: - traces - --otlp-endpoint=otel-collector:4317 - --otlp-insecure - --duration=30s - --workers=1 restartPolicy: Never backoffLimit: 4",
"apiVersion: v1 kind: ServiceAccount metadata: name: otel-collector-deployment namespace: openshift-logging",
"apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: otel-collector-logs-writer rules: - apiGroups: [\"loki.grafana.com\"] resourceNames: [\"logs\"] resources: [\"application\"] verbs: [\"create\"] - apiGroups: [\"\"] resources: [\"pods\", \"namespaces\", \"nodes\"] verbs: [\"get\", \"watch\", \"list\"] - apiGroups: [\"apps\"] resources: [\"replicasets\"] verbs: [\"get\", \"list\", \"watch\"] - apiGroups: [\"extensions\"] resources: [\"replicasets\"] verbs: [\"get\", \"list\", \"watch\"]",
"apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: otel-collector-logs-writer roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: otel-collector-logs-writer subjects: - kind: ServiceAccount name: otel-collector-deployment namespace: openshift-logging",
"apiVersion: opentelemetry.io/v1beta1 kind: OpenTelemetryCollector metadata: name: otel namespace: openshift-logging spec: serviceAccount: otel-collector-deployment config: extensions: bearertokenauth: filename: \"/var/run/secrets/kubernetes.io/serviceaccount/token\" receivers: otlp: protocols: grpc: {} http: {} processors: k8sattributes: {} resource: attributes: 1 - key: kubernetes.namespace_name from_attribute: k8s.namespace.name action: upsert - key: kubernetes.pod_name from_attribute: k8s.pod.name action: upsert - key: kubernetes.container_name from_attribute: k8s.container.name action: upsert - key: log_type value: application action: upsert transform: log_statements: - context: log statements: - set(attributes[\"level\"], ConvertCase(severity_text, \"lower\")) exporters: otlphttp: endpoint: https://logging-loki-gateway-http.openshift-logging.svc.cluster.local:8080/api/logs/v1/application/otlp encoding: json tls: ca_file: \"/var/run/secrets/kubernetes.io/serviceaccount/service-ca.crt\" auth: authenticator: bearertokenauth debug: verbosity: detailed service: extensions: [bearertokenauth] 2 pipelines: logs: receivers: [otlp] processors: [k8sattributes, transform, resource] exporters: [otlphttp] 3 logs/test: receivers: [otlp] processors: [] exporters: [debug]",
"apiVersion: batch/v1 kind: Job metadata: name: telemetrygen spec: template: spec: containers: - name: telemetrygen image: ghcr.io/open-telemetry/opentelemetry-collector-contrib/telemetrygen:v0.106.1 args: - logs - --otlp-endpoint=otel-collector.openshift-logging.svc.cluster.local:4317 - --otlp-insecure - --duration=180s - --workers=1 - --logs=10 - --otlp-attributes=k8s.container.name=\"telemetrygen\" restartPolicy: Never backoffLimit: 4",
"apiVersion: opentelemetry.io/v1beta1 kind: OpenTelemetryCollector metadata: name: <name> spec: observability: metrics: enableMetrics: true",
"apiVersion: cert-manager.io/v1 kind: Issuer metadata: name: selfsigned-issuer spec: selfSigned: {}",
"apiVersion: cert-manager.io/v1 kind: Certificate metadata: name: ca spec: isCA: true commonName: ca subject: organizations: - <your_organization_name> organizationalUnits: - Widgets secretName: ca-secret privateKey: algorithm: ECDSA size: 256 issuerRef: name: selfsigned-issuer kind: Issuer group: cert-manager.io",
"apiVersion: cert-manager.io/v1 kind: Issuer metadata: name: test-ca-issuer spec: ca: secretName: ca-secret",
"apiVersion: cert-manager.io/v1 kind: Certificate metadata: name: server spec: secretName: server-tls isCA: false usages: - server auth - client auth dnsNames: - \"otel.observability.svc.cluster.local\" 1 issuerRef: name: ca-issuer --- apiVersion: cert-manager.io/v1 kind: Certificate metadata: name: client spec: secretName: client-tls isCA: false usages: - server auth - client auth dnsNames: - \"otel.observability.svc.cluster.local\" 2 issuerRef: name: ca-issuer",
"apiVersion: v1 kind: ServiceAccount metadata: name: otel-collector-deployment",
"apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: otel-collector rules: 1 2 - apiGroups: [\"\", \"config.openshift.io\"] resources: [\"pods\", \"namespaces\", \"infrastructures\", \"infrastructures/status\"] verbs: [\"get\", \"watch\", \"list\"]",
"apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: otel-collector subjects: - kind: ServiceAccount name: otel-collector-deployment namespace: otel-collector-<example> roleRef: kind: ClusterRole name: otel-collector apiGroup: rbac.authorization.k8s.io",
"apiVersion: opentelemetry.io/v1beta1 kind: OpenTelemetryCollector metadata: name: otel namespace: otel-collector-<example> spec: mode: daemonset serviceAccount: otel-collector-deployment config: receivers: jaeger: protocols: grpc: {} thrift_binary: {} thrift_compact: {} thrift_http: {} opencensus: otlp: protocols: grpc: {} http: {} zipkin: {} processors: batch: {} k8sattributes: {} memory_limiter: check_interval: 1s limit_percentage: 50 spike_limit_percentage: 30 resourcedetection: detectors: [openshift] exporters: otlphttp: endpoint: https://observability-cluster.com:443 1 tls: insecure: false cert_file: /certs/server.crt key_file: /certs/server.key ca_file: /certs/ca.crt service: pipelines: traces: receivers: [jaeger, opencensus, otlp, zipkin] processors: [memory_limiter, k8sattributes, resourcedetection, batch] exporters: [otlp] volumes: - name: otel-certs secret: name: otel-certs volumeMounts: - name: otel-certs mountPath: /certs",
"apiVersion: opentelemetry.io/v1beta1 kind: OpenTelemetryCollector metadata: name: otlp-receiver namespace: observability spec: mode: \"deployment\" ingress: type: route route: termination: \"passthrough\" config: receivers: otlp: protocols: http: tls: 1 cert_file: /certs/server.crt key_file: /certs/server.key client_ca_file: /certs/ca.crt exporters: otlp: endpoint: \"tempo-<simplest>-distributor:4317\" 2 tls: insecure: true service: pipelines: traces: receivers: [otlp] processors: [] exporters: [otlp] volumes: - name: otel-certs secret: name: otel-certs volumeMounts: - name: otel-certs mountPath: /certs",
"oc adm must-gather --image=ghcr.io/open-telemetry/opentelemetry-operator/must-gather -- /usr/bin/must-gather --operator-namespace <operator_namespace> 1",
"config: service: telemetry: logs: level: debug 1",
"config: service: telemetry: metrics: address: \":8888\" 1",
"oc port-forward <collector_pod>",
"apiVersion: opentelemetry.io/v1beta1 kind: OpenTelemetryCollector spec: mode: deployment observability: metrics: enableMetrics: true",
"config: exporters: debug: verbosity: detailed service: pipelines: traces: exporters: [debug] metrics: exporters: [debug] logs: exporters: [debug]",
"oc get instrumentation -n <workload_project> 1",
"oc get events -n <workload_project> 1",
"... Created container opentelemetry-auto-instrumentation ... Started container opentelemetry-auto-instrumentation",
"oc logs -l app.kubernetes.io/name=opentelemetry-operator --container manager -n openshift-opentelemetry-operator --follow",
"instrumentation.opentelemetry.io/inject-python=\"true\"",
"oc get pods -n <workload_project> -o jsonpath='{range .items[?(@.metadata.annotations[\"instrumentation.opentelemetry.io/inject-python\"]==\"true\")]}{.metadata.name}{\"\\n\"}{end}'",
"instrumentation.opentelemetry.io/inject-nodejs: \"<instrumentation_object>\"",
"instrumentation.opentelemetry.io/inject-nodejs: \"<other_namespace>/<instrumentation_object>\"",
"oc get instrumentation <instrumentation_name> -n <workload_project> -o jsonpath='{.spec.endpoint}'",
"oc logs <application_pod> -n <workload_project>",
"apiVersion: opentelemetry.io/v1beta1 kind: OpenTelemetryCollector metadata: name: otel namespace: <otel-collector-namespace> spec: mode: sidecar config: receivers: jaeger: protocols: grpc: {} thrift_binary: {} thrift_compact: {} thrift_http: {} processors: batch: {} memory_limiter: check_interval: 1s limit_percentage: 50 spike_limit_percentage: 30 resourcedetection: detectors: [openshift] timeout: 2s exporters: otlp: endpoint: \"tempo-<example>-gateway:8090\" 1 tls: insecure: true service: pipelines: traces: receivers: [jaeger] processors: [memory_limiter, resourcedetection, batch] exporters: [otlp]",
"apiVersion: v1 kind: ServiceAccount metadata: name: otel-collector-sidecar",
"apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: otel-collector-sidecar rules: 1 - apiGroups: [\"config.openshift.io\"] resources: [\"infrastructures\", \"infrastructures/status\"] verbs: [\"get\", \"watch\", \"list\"]",
"apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: otel-collector-sidecar subjects: - kind: ServiceAccount name: otel-collector-deployment namespace: otel-collector-example roleRef: kind: ClusterRole name: otel-collector apiGroup: rbac.authorization.k8s.io",
"apiVersion: project.openshift.io/v1 kind: Project metadata: name: observability",
"apiVersion: v1 kind: ServiceAccount metadata: name: otel-collector-deployment namespace: observability",
"apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: otel-collector rules: 1 2 - apiGroups: [\"\", \"config.openshift.io\"] resources: [\"pods\", \"namespaces\", \"infrastructures\", \"infrastructures/status\"] verbs: [\"get\", \"watch\", \"list\"]",
"apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: otel-collector subjects: - kind: ServiceAccount name: otel-collector-deployment namespace: observability roleRef: kind: ClusterRole name: otel-collector apiGroup: rbac.authorization.k8s.io",
"apiVersion: opentelemetry.io/v1beta1 kind: OpenTelemetryCollector metadata: name: otel namespace: observability spec: mode: deployment serviceAccount: otel-collector-deployment config: receivers: jaeger: protocols: grpc: {} thrift_binary: {} thrift_compact: {} thrift_http: {} processors: batch: {} k8sattributes: {} memory_limiter: check_interval: 1s limit_percentage: 50 spike_limit_percentage: 30 resourcedetection: detectors: [openshift] exporters: otlp: endpoint: \"tempo-example-gateway:8090\" tls: insecure: true service: pipelines: traces: receivers: [jaeger] processors: [memory_limiter, k8sattributes, resourcedetection, batch] exporters: [otlp]",
"exp, err := jaeger.New(jaeger.WithCollectorEndpoint(jaeger.WithEndpoint(url))) 1",
"oc login --username=<your_username>",
"oc get deployments -n <project_of_opentelemetry_instance>",
"oc delete opentelemetrycollectors <opentelemetry_instance_name> -n <project_of_opentelemetry_instance>",
"oc get deployments -n <project_of_opentelemetry_instance>"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.15/html-single/red_hat_build_of_opentelemetry/index |
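The OpenTelemetryCollector, Instrumentation, and RBAC snippets collected above are ordinary Kubernetes manifests, so they are applied and inspected with the standard OpenShift CLI. A minimal sketch, assuming one of the collector examples has been saved locally as otel-collector.yaml, that the observability namespace already exists, and that the custom resource is named otel (the file name is illustrative; the pod label follows the <cr_name>-collector convention shown in the PodMonitor and port-forward examples above):
oc apply -f otel-collector.yaml -n observability
oc get opentelemetrycollectors -n observability
oc logs -l app.kubernetes.io/name=otel-collector -n observability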
4.3. Configuring USB Devices | 4.3. Configuring USB Devices A virtual machine connected with the SPICE protocol can be configured to connect directly to USB devices. The USB device will only be redirected if the virtual machine is active, in focus and is run from the VM Portal. USB redirection can be manually enabled each time a device is plugged in or set to automatically redirect to active virtual machines in the Console Options window. Important Note the distinction between the client machine and guest machine. The client is the hardware from which you access a guest. The guest is the virtual desktop or virtual server which is accessed through the VM Portal or Administration Portal. USB redirection Enabled mode allows KVM/SPICE USB redirection for Linux and Windows virtual machines. Virtual (guest) machines require no guest-installed agents or drivers for native USB. On Red Hat Enterprise Linux clients, all packages required for USB redirection are provided by the virt-viewer package. On Windows clients, you must also install the usbdk package. Enabled USB mode is supported on the following clients and guests: Note If you have a 64-bit architecture PC, you must use the 64-bit version of Internet Explorer to install the 64-bit version of the USB driver. The USB redirection will not work if you install the 32-bit version on a 64-bit architecture. As long as you initially install the correct USB type, you can access USB redirection from both 32- and 64-bit browsers. 4.3.1. Using USB Devices on a Windows Client The usbdk driver must be installed on the Windows client for the USB device to be redirected to the guest. Ensure the version of usbdk matches the architecture of the client machine. For example, the 64-bit version of usbdk must be installed on 64-bit Windows machines. Note USB redirection is only supported when you open the virtual machine from the VM Portal. Using USB Devices on a Windows Client When the usbdk driver is installed, select a virtual machine that has been configured to use the SPICE protocol. Ensure USB support is set to Enabled : Click Edit . Click the Console tab. Select Enabled from the USB Support drop-down list. Click OK . Click Console Console Options . Select the Enable USB Auto-Share check box and click OK . Start the virtual machine from the VM Portal and click Console to connect to that virtual machine. Plug your USB device into the client machine to make it appear automatically on the guest machine. 4.3.2. Using USB Devices on a Red Hat Enterprise Linux Client The usbredir package enables USB redirection from Red Hat Enterprise Linux clients to virtual machines. usbredir is a dependency of the virt-viewer package, and is automatically installed together with that package. Note USB redirection is only supported when you open the virtual machine from the VM Portal. Using USB devices on a Red Hat Enterprise Linux client Click Compute Virtual Machines and select a virtual machine that has been configured to use the SPICE protocol. Ensure USB support is set to Enabled : Click Edit . Click the Console tab. Select Enabled from the USB Support drop-down list. Click OK . Click Console Console Options . Select the Enable USB Auto-Share check box and click OK . Start the virtual machine from the VM Portal and click Console to connect to that virtual machine. Plug your USB device into the client machine to make it appear automatically on the guest machine. 
| null | https://docs.redhat.com/en/documentation/red_hat_virtualization/4.3/html/virtual_machine_management_guide/sect-configuring_usb_devices |
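Before opening a SPICE console from a Red Hat Enterprise Linux client, it can be worth confirming that the redirection packages described above are actually present. A small sketch, using only the package names given in this section (the install step is needed only if the query shows they are missing):
rpm -q virt-viewer usbredir
sudo yum install -y virt-viewer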
Chapter 8. Streams for Apache Kafka Console overview | Chapter 8. Streams for Apache Kafka Console overview The Streams for Apache Kafka Console provides a user interface for administering Kafka clusters, delivering real-time insights for monitoring, managing, and optimizing each cluster. Connect a Kafka cluster managed by Streams for Apache Kafka to gain real-time insights and optimize cluster performance from the console. The console's homepage displays connected Kafka clusters, allowing you to access detailed information on components such as brokers, topics, partitions, and consumer groups. From the console, you can view the status of a Kafka cluster before drilling down into its brokers, topics, or the consumer groups connected to it. | null | https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.9/html/streams_for_apache_kafka_on_openshift_overview/con-console-overview-str
Chapter 3. Installing a cluster | Chapter 3. Installing a cluster You can install a basic OpenShift Container Platform cluster using the Agent-based Installer. For procedures that include optional customizations you can make while using the Agent-based Installer, see Installing a cluster with customizations . 3.1. Prerequisites You reviewed details about the OpenShift Container Platform installation and update processes. You read the documentation on selecting a cluster installation method and preparing it for users . If you use a firewall or proxy, you configured it to allow the sites that your cluster requires access to. 3.2. Installing OpenShift Container Platform with the Agent-based Installer The following procedures deploy a single-node OpenShift Container Platform in a disconnected environment. You can use these procedures as a basis and modify according to your requirements. 3.2.1. Downloading the Agent-based Installer Use this procedure to download the Agent-based Installer and the CLI needed for your installation. Note Currently, downloading the Agent-based Installer is not supported on the IBM Z(R) ( s390x ) architecture. The recommended method is by creating PXE assets. Procedure Log in to the OpenShift Container Platform web console using your login credentials. Navigate to Datacenter . Click Run Agent-based Installer locally . Select the operating system and architecture for the OpenShift Installer and Command line interface . Click Download Installer to download and extract the install program. Download or copy the pull secret by clicking on Download pull secret or Copy pull secret . Click Download command-line tools and place the openshift-install binary in a directory that is on your PATH . 3.2.2. Creating the configuration inputs You must create the configuration files that are used by the installation program to create the agent image. Procedure Place the openshift-install binary in a directory that is on your PATH. Create a directory to store the install configuration by running the following command: USD mkdir ~/<directory_name> Create the install-config.yaml file by running the following command: USD cat << EOF > ./my-cluster/install-config.yaml apiVersion: v1 baseDomain: test.example.com compute: - architecture: amd64 1 hyperthreading: Enabled name: worker replicas: 0 controlPlane: architecture: amd64 hyperthreading: Enabled name: master replicas: 1 metadata: name: sno-cluster 2 networking: clusterNetwork: - cidr: fd01::/48 hostPrefix: 64 machineNetwork: - cidr: fd2e:6f44:5dd8:c956::/120 networkType: OVNKubernetes 3 serviceNetwork: - fd02::/112 platform: 4 none: {} pullSecret: '<pull_secret>' 5 sshKey: '<ssh_pub_key>' 6 additionalTrustBundle: | 7 -----BEGIN CERTIFICATE----- ZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZ -----END CERTIFICATE----- imageContentSources: 8 - mirrors: - <local_registry>/<local_repository_name>/release source: quay.io/openshift-release-dev/ocp-release - mirrors: - <local_registry>/<local_repository_name>/release source: quay.io/openshift-release-dev/ocp-v4.0-art-dev EOF 1 Specify the system architecture. Valid values are amd64 , arm64 , ppc64le , and s390x . If you are using the release image with the multi payload, you can install the cluster on different architectures such as arm64 , amd64 , s390x , and ppc64le . Otherwise, you can install the cluster only on the release architecture displayed in the output of the openshift-install version command. 
For more information, see "Verifying the supported architecture for installing an Agent-based Installer cluster". 2 Required. Specify your cluster name. 3 The cluster network plugin to install. The default value OVNKubernetes is the only supported value. 4 Specify your platform. Note For bare metal platforms, host settings made in the platform section of the install-config.yaml file are used by default, unless they are overridden by configurations made in the agent-config.yaml file. 5 Specify your pull secret. 6 Specify your SSH public key. 7 Provide the contents of the certificate file that you used for your mirror registry. The certificate file can be an existing, trusted certificate authority or the self-signed certificate that you generated for the mirror registry. You must specify this parameter if you are using a disconnected mirror registry. 8 Provide the imageContentSources section according to the output of the command that you used to mirror the repository. You must specify this parameter if you are using a disconnected mirror registry. Important When using the oc adm release mirror command, use the output from the imageContentSources section. When using the oc mirror command, use the repositoryDigestMirrors section of the ImageContentSourcePolicy file that results from running the command. The ImageContentSourcePolicy resource is deprecated. Create the agent-config.yaml file by running the following command: USD cat > agent-config.yaml << EOF apiVersion: v1beta1 kind: AgentConfig metadata: name: sno-cluster rendezvousIP: fd2e:6f44:5dd8:c956::50 1 EOF 1 This IP address is used to determine which node performs the bootstrapping process as well as running the assisted-service component. You must provide the rendezvous IP address when you do not specify at least one host IP address in the networkConfig parameter. If this address is not provided, one IP address is selected from the provided host networkConfig parameter. 3.2.3. Creating and booting the agent image Use this procedure to boot the agent image on your machines. Procedure Create the agent image by running the following command: USD openshift-install --dir <install_directory> agent create image Note Red Hat Enterprise Linux CoreOS (RHCOS) supports multipathing on the primary disk, allowing stronger resilience to hardware failure to achieve higher host availability. Multipathing is enabled by default in the agent ISO image, with a default /etc/multipath.conf configuration. Boot the agent.x86_64.iso or agent.aarch64.iso image on the bare metal machines. 3.2.4. Verifying that the current installation host can pull release images After you boot the agent image and network services are made available to the host, the agent console application performs a pull check to verify that the current host can retrieve release images. If the primary pull check passes, you can quit the application to continue with the installation. If the pull check fails, the application performs additional checks, as seen in the Additional checks section of the TUI, to help you troubleshoot the problem. A failure for any of the additional checks is not necessarily critical as long as the primary pull check succeeds. If there are host network configuration issues that might cause an installation to fail, you can use the console application to make adjustments to your network configurations. 
Important If the agent console application detects host network configuration issues, the installation workflow will be halted until the user manually stops the console application and signals the intention to proceed. Procedure Wait for the agent console application to check whether or not the configured release image can be pulled from a registry. If the agent console application states that the installer connectivity checks have passed, wait for the prompt to time out to continue with the installation. Note You can still choose to view or change network configuration settings even if the connectivity checks have passed. However, if you choose to interact with the agent console application rather than letting it time out, you must manually quit the TUI to proceed with the installation. If the agent console application checks have failed, which is indicated by a red icon beside the Release image URL pull check, use the following steps to reconfigure the host's network settings: Read the Check Errors section of the TUI. This section displays error messages specific to the failed checks. Select Configure network to launch the NetworkManager TUI. Select Edit a connection and select the connection you want to reconfigure. Edit the configuration and select OK to save your changes. Select Back to return to the main screen of the NetworkManager TUI. Select Activate a Connection . Select the reconfigured network to deactivate it. Select the reconfigured network again to reactivate it. Select Back and then select Quit to return to the agent console application. Wait at least five seconds for the continuous network checks to restart using the new network configuration. If the Release image URL pull check succeeds and displays a green icon beside the URL, select Quit to exit the agent console application and continue with the installation. 3.2.5. Tracking and verifying installation progress Use the following procedure to track installation progress and to verify a successful installation. Prerequisites You have configured a DNS record for the Kubernetes API server. Procedure Optional: To know when the bootstrap host (rendezvous host) reboots, run the following command: USD ./openshift-install --dir <install_directory> agent wait-for bootstrap-complete \ 1 --log-level=info 2 1 For <install_directory> , specify the path to the directory where the agent ISO was generated. 2 To view different installation details, specify warn , debug , or error instead of info . Example output ................................................................... ................................................................... INFO Bootstrap configMap status is complete INFO cluster bootstrap is complete The command succeeds when the Kubernetes API server signals that it has been bootstrapped on the control plane machines. To track the progress and verify successful installation, run the following command: USD openshift-install --dir <install_directory> agent wait-for install-complete 1 1 For <install_directory> directory, specify the path to the directory where the agent ISO was generated. Example output ................................................................... ................................................................... INFO Cluster is installed INFO Install complete! 
INFO To access the cluster as the system:admin user when using 'oc', run INFO export KUBECONFIG=/home/core/installer/auth/kubeconfig INFO Access the OpenShift web-console here: https://console-openshift-console.apps.sno-cluster.test.example.com 3.3. Gathering log data from a failed Agent-based installation Use the following procedure to gather log data about a failed Agent-based installation to provide for a support case. Prerequisites You have configured a DNS record for the Kubernetes API server. Procedure Run the following command and collect the output: USD ./openshift-install --dir <installation_directory> agent wait-for bootstrap-complete --log-level=debug Example error message ... ERROR Bootstrap failed to complete: : bootstrap process timed out: context deadline exceeded If the output from the command indicates a failure, or if the bootstrap is not progressing, run the following command to connect to the rendezvous host and collect the output: USD ssh core@<node-ip> agent-gather -O >agent-gather.tar.xz Note Red Hat Support can diagnose most issues using the data gathered from the rendezvous host, but if some hosts are not able to register, gathering this data from every host might be helpful. If the bootstrap completes and the cluster nodes reboot, run the following command and collect the output: USD ./openshift-install --dir <install_directory> agent wait-for install-complete --log-level=debug If the output from the command indicates a failure, perform the following steps: Export the kubeconfig file to your environment by running the following command: USD export KUBECONFIG=<install_directory>/auth/kubeconfig Gather information for debugging by running the following command: USD oc adm must-gather Create a compressed file from the must-gather directory that was just created in your working directory by running the following command: USD tar cvaf must-gather.tar.gz <must_gather_directory> Excluding the /auth subdirectory, attach the installation directory used during the deployment to your support case on the Red Hat Customer Portal . Attach all other data gathered from this procedure to your support case. | [
"mkdir ~/<directory_name>",
"cat << EOF > ./my-cluster/install-config.yaml apiVersion: v1 baseDomain: test.example.com compute: - architecture: amd64 1 hyperthreading: Enabled name: worker replicas: 0 controlPlane: architecture: amd64 hyperthreading: Enabled name: master replicas: 1 metadata: name: sno-cluster 2 networking: clusterNetwork: - cidr: fd01::/48 hostPrefix: 64 machineNetwork: - cidr: fd2e:6f44:5dd8:c956::/120 networkType: OVNKubernetes 3 serviceNetwork: - fd02::/112 platform: 4 none: {} pullSecret: '<pull_secret>' 5 sshKey: '<ssh_pub_key>' 6 additionalTrustBundle: | 7 -----BEGIN CERTIFICATE----- ZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZ -----END CERTIFICATE----- imageContentSources: 8 - mirrors: - <local_registry>/<local_repository_name>/release source: quay.io/openshift-release-dev/ocp-release - mirrors: - <local_registry>/<local_repository_name>/release source: quay.io/openshift-release-dev/ocp-v4.0-art-dev EOF",
"cat > agent-config.yaml << EOF apiVersion: v1beta1 kind: AgentConfig metadata: name: sno-cluster rendezvousIP: fd2e:6f44:5dd8:c956::50 1 EOF",
"openshift-install --dir <install_directory> agent create image",
"./openshift-install --dir <install_directory> agent wait-for bootstrap-complete \\ 1 --log-level=info 2",
"................................................................ ................................................................ INFO Bootstrap configMap status is complete INFO cluster bootstrap is complete",
"openshift-install --dir <install_directory> agent wait-for install-complete 1",
"................................................................ ................................................................ INFO Cluster is installed INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run INFO export KUBECONFIG=/home/core/installer/auth/kubeconfig INFO Access the OpenShift web-console here: https://console-openshift-console.apps.sno-cluster.test.example.com",
"./openshift-install --dir <installation_directory> agent wait-for bootstrap-complete --log-level=debug",
"ERROR Bootstrap failed to complete: : bootstrap process timed out: context deadline exceeded",
"ssh core@<node-ip> agent-gather -O >agent-gather.tar.xz",
"./openshift-install --dir <install_directory> agent wait-for install-complete --log-level=debug",
"export KUBECONFIG=<install_directory>/auth/kubeconfig",
"oc adm must-gather",
"tar cvaf must-gather.tar.gz <must_gather_directory>"
] | https://docs.redhat.com/en/documentation/openshift_container_platform_installation/4.15/html/installing_an_on-premise_cluster_with_the_agent-based_installer/installing-with-agent-basic |
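The install-config.yaml shown in the procedure expects literal values for sshKey and pullSecret, and callout 1 refers to the output of openshift-install version. A brief sketch for producing those inputs on the installation host (the key file path is an assumption; the pull secret itself is downloaded from the web console as described in the download step):
./openshift-install version
ssh-keygen -t ed25519 -N '' -f ~/.ssh/agent_installer
cat ~/.ssh/agent_installer.pub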
1.3. Scheduling Policies | 1.3. Scheduling Policies A scheduling policy is a set of rules that defines the logic by which virtual machines are distributed amongst hosts in the cluster that the scheduling policy is applied to. Scheduling policies determine this logic via a combination of filters, weightings, and a load balancing policy. The filter modules apply hard enforcement and filter out hosts that do not meet the conditions specified by that filter. The weights modules apply soft enforcement, and are used to control the relative priority of factors considered when determining the hosts in a cluster on which a virtual machine can run. The Red Hat Virtualization Manager provides five default scheduling policies: Evenly_Distributed , Cluster_Maintenance , None , Power_Saving , and VM_Evenly_Distributed . You can also define new scheduling policies that provide fine-grained control over the distribution of virtual machines. Regardless of the scheduling policy, a virtual machine will not start on a host with an overloaded CPU. By default, a host's CPU is considered overloaded if it has a load of more than 80% for 5 minutes, but these values can be changed using scheduling policies. See Section 8.2.5, "Scheduling Policy Settings Explained" for more information about the properties of each scheduling policy. Figure 1.4. Evenly Distributed Scheduling Policy The Evenly_Distributed scheduling policy distributes the memory and CPU processing load evenly across all hosts in the cluster. Additional virtual machines attached to a host will not start if that host has reached the defined CpuOverCommitDurationMinutes , HighUtilization , or MaxFreeMemoryForOverUtilized . The VM_Evenly_Distributed scheduling policy distributes virtual machines evenly between hosts based on a count of the virtual machines. The cluster is considered unbalanced if any host is running more virtual machines than the HighVmCount and there is at least one host with a virtual machine count that falls outside of the MigrationThreshold . Figure 1.5. Power Saving Scheduling Policy The Power_Saving scheduling policy distributes the memory and CPU processing load across a subset of available hosts to reduce power consumption on underutilized hosts. Hosts with a CPU load below the low utilization value for longer than the defined time interval will migrate all virtual machines to other hosts so that they can be powered down. Additional virtual machines attached to a host will not start if that host has reached the defined high utilization value. Set the None policy to have no load or power sharing between hosts for running virtual machines. This is the default mode. When a virtual machine is started, the memory and CPU processing load is spread evenly across all hosts in the cluster. Additional virtual machines attached to a host will not start if that host has reached the defined CpuOverCommitDurationMinutes , HighUtilization , or MaxFreeMemoryForOverUtilized . The Cluster_Maintenance scheduling policy limits activity in a cluster during maintenance tasks. When the Cluster_Maintenance policy is set, no new virtual machines may be started, except highly available virtual machines. If host failure occurs, highly available virtual machines will restart properly and any virtual machine can migrate. 1.3.1. Creating a Scheduling Policy You can create new scheduling policies to control the logic by which virtual machines are distributed amongst a given cluster in your Red Hat Virtualization environment.
Creating a Scheduling Policy Click Administration Configure . Click the Scheduling Policies tab. Click New . Enter a Name and Description for the scheduling policy. Configure filter modules: In the Filter Modules section, drag and drop the preferred filter modules to apply to the scheduling policy from the Disabled Filters section into the Enabled Filters section. Specific filter modules can also be set as the First , to be given highest priority, or Last , to be given lowest priority, for basic optimization. To set the priority, right-click any filter module, hover the cursor over Position and select First or Last . Configure weight modules: In the Weights Modules section, drag and drop the preferred weights modules to apply to the scheduling policy from the Disabled Weights section into the Enabled Weights & Factors section. Use the + and - buttons to the left of the enabled weight modules to increase or decrease the weight of those modules. Specify a load balancing policy: From the drop-down menu in the Load Balancer section, select the load balancing policy to apply to the scheduling policy. From the drop-down menu in the Properties section, select a load balancing property to apply to the scheduling policy and use the text field to the right of that property to specify a value. Use the + and - buttons to add or remove additional properties. Click OK . 1.3.2. Explanation of Settings in the New Scheduling Policy and Edit Scheduling Policy Window The following table details the options available in the New Scheduling Policy and Edit Scheduling Policy windows. Table 1.12. New Scheduling Policy and Edit Scheduling Policy Settings Field Name Description Name The name of the scheduling policy. This is the name used to refer to the scheduling policy in the Red Hat Virtualization Manager. Description A description of the scheduling policy. This field is recommended but not mandatory. Filter Modules A set of filters for controlling the hosts on which a virtual machine in a cluster can run. Enabling a filter will filter out hosts that do not meet the conditions specified by that filter, as outlined below: CpuPinning : Hosts which do not satisfy the CPU pinning definition. Migration : Prevent migration to the same host. PinToHost : Hosts other than the host to which the virtual machine is pinned. CPU-Level : Hosts that do not meet the CPU topology of the virtual machine. CPU : Hosts with fewer CPUs than the number assigned to the virtual machine. Memory : Hosts that do not have sufficient memory to run the virtual machine. VmAffinityGroups : Hosts that do not meet the conditions specified for a virtual machine that is a member of an affinity group. For example, that virtual machines in an affinity group must run on the same host or on separate hosts. VmToHostsAffinityGroups : Group of hosts that do not meet the conditions specified for a virtual machine that is a member of an affinity group. For example, that virtual machines in an affinity group must run on one of the hosts in a group or on a separate host that is excluded from the group. InClusterUpgrade : Hosts that are running an earlier operating system than the host that the virtual machine currently runs on. HostDevice : Hosts that do not support host devices required by the virtual machine. HA : Forces the Manager virtual machine in a self-hosted engine environment to run only on hosts with a positive high availability score. Emulated-Machine : Hosts which do not have proper emulated machine support. 
Network : Hosts on which networks required by the network interface controller of a virtual machine are not installed, or on which the cluster's display network is not installed. HostedEnginesSpares : Reserves space for the Manager virtual machine on a specified number of self-hosted engine nodes. Label : Hosts that do not have the required affinity labels. Compatibility-Version : Runs virtual machines only on hosts with the correct compatibility version support. CPUOverloaded : Hosts that are CPU overloaded. Weights Modules A set of weightings for controlling the relative priority of factors considered when determining the hosts in a cluster on which a virtual machine can run. InClusterUpgrade : Weight hosts in accordance with their operating system version. The weight penalizes hosts with earlier operating systems more than hosts with the same operating system as the host that the virtual machine is currently running on. This ensures that priority is always given to hosts with later operating systems. OptimalForHaReservation : Weights hosts in accordance with their high availability score. None : Weights hosts in accordance with the even distribution module. OptimalForEvenGuestDistribution : Weights hosts in accordance with the number of virtual machines running on those hosts. VmAffinityGroups : Weights hosts in accordance with the affinity groups defined for virtual machines. This weight module determines how likely virtual machines in an affinity group are to run on the same host or on separate hosts in accordance with the parameters of that affinity group. VmToHostsAffinityGroups : Weights hosts in accordance with the affinity groups defined for virtual machines. This weight module determines how likely virtual machines in an affinity group are to run on one of the hosts in a group or on a separate host that is excluded from the group. OptimalForCPUPowerSaving : Weights hosts in accordance with their CPU usage, giving priority to hosts with higher CPU usage. OptimalForEvenCpuDistribution : Weights hosts in accordance with their CPU usage, giving priority to hosts with lower CPU usage. HA : Weights hosts in accordance with their high availability score. PreferredHosts : Preferred hosts have priority during virtual machine setup. OptimalForMemoryPowerSaving : Weights hosts in accordance with their memory usage, giving priority to hosts with lower available memory. OptimalForMemoryEvenDistribution : Weights hosts in accordance with their memory usage, giving priority to hosts with higher available memory. Load Balancer This drop-down menu allows you to select a load balancing module to apply. Load balancing modules determine the logic used to migrate virtual machines from hosts experiencing high usage to hosts experiencing lower usage. Properties This drop-down menu allows you to add or remove properties for load balancing modules, and is only available when you have selected a load balancing module for the scheduling policy. No properties are defined by default, and the properties that are available are specific to the load balancing module that is selected. Use the + and - buttons to add or remove additional properties to or from the load balancing module. | null | https://docs.redhat.com/en/documentation/red_hat_virtualization/4.3/html/administration_guide/sect-scheduling_policies |
Chapter 7. Uninstalling a cluster on Azure Stack Hub You can remove a cluster that you deployed to Azure Stack Hub. 7.1. Removing a cluster that uses installer-provisioned infrastructure You can remove a cluster that uses installer-provisioned infrastructure from your cloud. Note After uninstallation, check your cloud provider for any resources not removed properly, especially with User Provisioned Infrastructure (UPI) clusters. There might be resources that the installer did not create or that the installer is unable to access. Prerequisites You have a copy of the installation program that you used to deploy the cluster. You have the files that the installation program generated when you created your cluster. Procedure From the directory that contains the installation program on the computer that you used to install the cluster, run the following command: USD ./openshift-install destroy cluster \ --dir <installation_directory> --log-level info Here, <installation_directory> specifies the path to the directory that you stored the installation files in. To view different details, specify warn, debug, or error instead of info. Note You must specify the directory that contains the cluster definition files for your cluster. The installation program requires the metadata.json file in this directory to delete the cluster. Optional: Delete the <installation_directory> directory and the OpenShift Container Platform installation program. | [
"./openshift-install destroy cluster --dir <installation_directory> --log-level info"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.15/html/installing_on_azure_stack_hub/uninstalling-cluster-azure-stack-hub
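If the destroy run shown above appears to leave resources behind, re-running it with a more verbose log level (an option the procedure itself documents) usually shows what the installer is still trying to remove, and the Azure CLI can confirm whether any resource groups remain. The az command is a sketch that assumes the CLI is already registered against your Azure Stack Hub management endpoint, which is outside the scope of this procedure.
USD ./openshift-install destroy cluster --dir <installation_directory> --log-level debug
# Assumes the Azure CLI is configured for your Azure Stack Hub environment
USD az group list --output table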
Chapter 79. volume | Chapter 79. volume This chapter describes the commands under the volume command. 79.1. volume attachment complete Complete an attachment for a volume. Usage: Table 79.1. Positional arguments Value Summary <attachment> Id of volume attachment to mark as completed Table 79.2. Command arguments Value Summary -h, --help Show this help message and exit 79.2. volume attachment create Create an attachment for a volume. This command will only create a volume attachment in the Volume service. It will not invoke the necessary Compute service actions to actually attach the volume to the server at the hypervisor level. As a result, it should typically only be used for troubleshooting issues with an existing server in combination with other tooling. For all other use cases, the server add volume command should be preferred. Usage: Table 79.3. Positional arguments Value Summary <volume> Name or id of volume to attach to server. <server> Name or id of server to attach volume to. Table 79.4. Command arguments Value Summary -h, --help Show this help message and exit --connect Make an active connection using provided connector info --no-connect Do not make an active connection using provided connector info --initiator <initiator> Iqn of the initiator attaching to --ip <ip> Ip of the system attaching to --host <host> Name of the host attaching to --platform <platform> Platform type --os-type <ostype> Os type --multipath Use multipath --no-multipath Use multipath --mountpoint <mountpoint> Mountpoint volume will be attached at --mode <mode> Mode of volume attachment, rw, ro and null, where null indicates we want to honor any existing admin-metadata settings (supported by --os-volume-api-version 3.54 or later) Table 79.5. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns Table 79.6. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 79.7. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 79.8. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 79.3. volume attachment delete Delete an attachment for a volume. Similarly to the volume attachment create command, this command will only delete the volume attachment record in the Volume service. It will not invoke the necessary Compute service actions to actually attach the volume to the server at the hypervisor level. As a result, it should typically only be used for troubleshooting issues with an existing server in combination with other tooling. For all other use cases, the server volume remove command should be preferred. Usage: Table 79.9. Positional arguments Value Summary <attachment> Id of volume attachment to delete Table 79.10. Command arguments Value Summary -h, --help Show this help message and exit 79.4. volume attachment list Lists all volume attachments. Usage: Table 79.11. 
Command arguments Value Summary -h, --help Show this help message and exit --project <project> Filter results by project (name or id) (admin only) --project-domain <project-domain> Domain the project belongs to (name or id). this can be used in case collisions between project names exist. --all-projects Shows details for all projects (admin only). --volume-id <volume-id> Filters results by a volume id. this option is deprecated. Consider using the --filters option which was introduced in microversion 3.33 instead. --status <status> Filters results by a status. this option is deprecated. Consider using the --filters option which was introduced in microversion 3.33 instead. --marker <marker> Begin returning volume attachments that appear later in volume attachment list than that represented by this ID. --limit <limit> Maximum number of volume attachments to return. Table 79.12. Output formatter options Value Summary -f {csv,json,table,value,yaml}, --format {csv,json,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns --sort-column SORT_COLUMN Specify the column(s) to sort the data (columns specified first have a priority, non-existing columns are ignored), can be repeated --sort-ascending Sort the column(s) in ascending order --sort-descending Sort the column(s) in descending order Table 79.13. CSV formatter options Value Summary --quote {all,minimal,none,nonnumeric} When to include quotes, defaults to nonnumeric Table 79.14. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 79.15. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 79.5. volume attachment set Update an attachment for a volume. This call is designed to be more of an volume attachment completion than anything else. It expects the value of a connector object to notify the driver that the volume is going to be connected and where it's being connected to. Usage: Table 79.16. Positional arguments Value Summary <attachment> Id of volume attachment. Table 79.17. Command arguments Value Summary -h, --help Show this help message and exit --initiator <initiator> Iqn of the initiator attaching to --ip <ip> Ip of the system attaching to --host <host> Name of the host attaching to --platform <platform> Platform type --os-type <ostype> Os type --multipath Use multipath --no-multipath Use multipath --mountpoint <mountpoint> Mountpoint volume will be attached at Table 79.18. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns Table 79.19. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 79.20. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 79.21. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. 
--fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 79.6. volume attachment show Show detailed information for a volume attachment. Usage: Table 79.22. Positional arguments Value Summary <attachment> Id of volume attachment. Table 79.23. Command arguments Value Summary -h, --help Show this help message and exit Table 79.24. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns Table 79.25. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 79.26. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 79.27. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 79.7. volume backend capability show Show capability command Usage: Table 79.28. Positional arguments Value Summary <host> List capabilities of specified host (host@backend- name) Table 79.29. Command arguments Value Summary -h, --help Show this help message and exit Table 79.30. Output formatter options Value Summary -f {csv,json,table,value,yaml}, --format {csv,json,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns --sort-column SORT_COLUMN Specify the column(s) to sort the data (columns specified first have a priority, non-existing columns are ignored), can be repeated --sort-ascending Sort the column(s) in ascending order --sort-descending Sort the column(s) in descending order Table 79.31. CSV formatter options Value Summary --quote {all,minimal,none,nonnumeric} When to include quotes, defaults to nonnumeric Table 79.32. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 79.33. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 79.8. volume backend pool list List pool command Usage: Table 79.34. Command arguments Value Summary -h, --help Show this help message and exit --long Show detailed information about pools. Table 79.35. Output formatter options Value Summary -f {csv,json,table,value,yaml}, --format {csv,json,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns --sort-column SORT_COLUMN Specify the column(s) to sort the data (columns specified first have a priority, non-existing columns are ignored), can be repeated --sort-ascending Sort the column(s) in ascending order --sort-descending Sort the column(s) in descending order Table 79.36. 
CSV formatter options Value Summary --quote {all,minimal,none,nonnumeric} When to include quotes, defaults to nonnumeric Table 79.37. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 79.38. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 79.9. volume backup create Create new volume backup Usage: Table 79.39. Positional arguments Value Summary <volume> Volume to backup (name or id) Table 79.40. Command arguments Value Summary -h, --help Show this help message and exit --name <name> Name of the backup --description <description> Description of the backup --container <container> Optional backup container name --snapshot <snapshot> Snapshot to backup (name or id) --force Allow to back up an in-use volume --incremental Perform an incremental backup --no-incremental Do not perform an incremental backup --property <key=value> Set a property on this backup (repeat option to remove multiple values) (supported by --os-volume-api-version 3.43 or above) --availability-zone <zone-name> Az where the backup should be stored; by default it will be the same as the source (supported by --os- volume-api-version 3.51 or above) Table 79.41. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns Table 79.42. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 79.43. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 79.44. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 79.10. volume backup delete Delete volume backup(s) Usage: Table 79.45. Positional arguments Value Summary <backup> Backup(s) to delete (name or id) Table 79.46. Command arguments Value Summary -h, --help Show this help message and exit --force Allow delete in state other than error or available 79.11. volume backup list List volume backups Usage: Table 79.47. Command arguments Value Summary -h, --help Show this help message and exit --long List additional fields in output --name <name> Filters results by the backup name --status <status> Filters results by the backup status, one of: creating, available, deleting, error, restoring or error_restoring --volume <volume> Filters results by the volume which they backup (name or ID) --marker <volume-backup> The last backup of the page (name or id) --limit <num-backups> Maximum number of backups to display --all-projects Include all projects (admin only) Table 79.48. 
Output formatter options Value Summary -f {csv,json,table,value,yaml}, --format {csv,json,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns --sort-column SORT_COLUMN Specify the column(s) to sort the data (columns specified first have a priority, non-existing columns are ignored), can be repeated --sort-ascending Sort the column(s) in ascending order --sort-descending Sort the column(s) in descending order Table 79.49. CSV formatter options Value Summary --quote {all,minimal,none,nonnumeric} When to include quotes, defaults to nonnumeric Table 79.50. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 79.51. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 79.12. volume backup record export Export volume backup details. Backup information can be imported into a new service instance to be able to restore. Usage: Table 79.52. Positional arguments Value Summary <backup> Backup to export (name or id) Table 79.53. Command arguments Value Summary -h, --help Show this help message and exit Table 79.54. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns Table 79.55. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 79.56. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 79.57. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 79.13. volume backup record import Import volume backup details. Exported backup details contain the metadata necessary to restore to a new or rebuilt service instance Usage: Table 79.58. Positional arguments Value Summary <backup_service> Backup service containing the backup. <backup_metadata> Encoded backup metadata from export. Table 79.59. Command arguments Value Summary -h, --help Show this help message and exit Table 79.60. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns Table 79.61. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 79.62. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 79.63. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. 
implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 79.14. volume backup restore Restore volume backup Usage: Table 79.64. Positional arguments Value Summary <backup> Backup to restore (name or id) <volume> Volume to restore to (name or id for existing volume, name only for new volume) (default to None) Table 79.65. Command arguments Value Summary -h, --help Show this help message and exit --force Restore the backup to an existing volume (default to False) Table 79.66. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns Table 79.67. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 79.68. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 79.69. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 79.15. volume backup set Set volume backup properties Usage: Table 79.70. Positional arguments Value Summary <backup> Backup to modify (name or id) Table 79.71. Command arguments Value Summary -h, --help Show this help message and exit --name <name> New backup name(supported by --os-volume-api-version 3.9 or above) --description <description> New backup description (supported by --os-volume-api- version 3.9 or above) --state <state> New backup state ("available" or "error") (admin only) (This option simply changes the state of the backup in the database with no regard to actual status; exercise caution when using) --no-property Remove all properties from this backup (specify both --no-property and --property to remove the current properties before setting new properties) --property <key=value> Set a property on this backup (repeat option to set multiple values) (supported by --os-volume-api-version 3.43 or above) 79.16. volume backup show Display volume backup details Usage: Table 79.72. Positional arguments Value Summary <backup> Backup to display (name or id) Table 79.73. Command arguments Value Summary -h, --help Show this help message and exit Table 79.74. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns Table 79.75. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 79.76. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 79.77. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 79.17. 
volume backup unset Unset volume backup properties. This command requires ``--os-volume-api- version`` 3.43 or greater. Usage: Table 79.78. Positional arguments Value Summary <backup> Backup to modify (name or id) Table 79.79. Command arguments Value Summary -h, --help Show this help message and exit --property <key> Property to remove from this backup (repeat option to unset multiple values) 79.18. volume create Create new volume Usage: Table 79.80. Positional arguments Value Summary <name> Volume name Table 79.81. Command arguments Value Summary -h, --help Show this help message and exit --size <size> Volume size in gb (required unless --snapshot, --source or --backup is specified) --type <volume-type> Set the type of volume --image <image> Use <image> as source of volume (name or id) --snapshot <snapshot> Use <snapshot> as source of volume (name or id) --source <volume> Volume to clone (name or id) --backup <backup> Restore backup to a volume (name or id) (supported by --os-volume-api-version 3.47 or later) --description <description> Volume description --availability-zone <availability-zone> Create volume in <availability-zone> --consistency-group consistency-group> Consistency group where the new volume belongs to --property <key=value> Set a property to this volume (repeat option to set multiple properties) --hint <key=value> Arbitrary scheduler hint key-value pairs to help boot an instance (repeat option to set multiple hints) --bootable Mark volume as bootable --non-bootable Mark volume as non-bootable (default) --read-only Set volume to read-only access mode --read-write Set volume to read-write access mode (default) Table 79.82. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns Table 79.83. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 79.84. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 79.85. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 79.19. volume delete Delete volume(s) Usage: Table 79.86. Positional arguments Value Summary <volume> Volume(s) to delete (name or id) Table 79.87. Command arguments Value Summary -h, --help Show this help message and exit --force Attempt forced removal of volume(s), regardless of state (defaults to False) --purge Remove any snapshots along with volume(s) (defaults to false) 79.20. volume group create Create a volume group. Generic volume groups enable you to create a group of volumes and manage them together. Generic volume groups are more flexible than consistency groups. Currently volume consistency groups only support consistent group snapshot. It cannot be extended easily to serve other purposes. A project may want to put volumes used in the same application together in a group so that it is easier to manage them together, and this group of volumes may or may not support consistent group snapshot. Generic volume group solve this problem. 
By decoupling the tight relationship between the group construct and the consistency concept, generic volume groups can be extended to support other features in the future. This command requires ``--os-volume-api-version`` 3.13 or greater. Usage: Table 79.88. Command arguments Value Summary -h, --help Show this help message and exit --volume-group-type <volume_group_type> Volume group type to use (name or id) --volume-type <volume_type> Volume type(s) to use (name or id) (required with --volume-group-type) --source-group <source-group> Existing volume group to use (name or id) (supported by --os-volume-api-version 3.14 or later) --group-snapshot <group-snapshot> Existing group snapshot to use (name or id) (supported by --os-volume-api-version 3.14 or later) --name <name> Name of the volume group. --description <description> Description of a volume group. --availability-zone <availability-zone> Availability zone for volume group. (not available if creating group from source) Table 79.89. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns Table 79.90. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 79.91. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 79.92. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 79.21. volume group delete Delete a volume group. This command requires ``--os-volume-api-version`` 3.13 or greater. Usage: Table 79.93. Positional arguments Value Summary <group> Name or id of volume group to delete Table 79.94. Command arguments Value Summary -h, --help Show this help message and exit --force Delete the volume group even if it contains volumes. this will delete any remaining volumes in the group. 79.22. volume group failover Failover replication for a volume group. This command requires ``--os-volume- api-version`` 3.38 or greater. Usage: Table 79.95. Positional arguments Value Summary <group> Name or id of volume group to failover replication for. Table 79.96. Command arguments Value Summary -h, --help Show this help message and exit --allow-attached-volume Allow group with attached volumes to be failed over. --disallow-attached-volume Disallow group with attached volumes to be failed over. --secondary-backend-id <backend_id> Secondary backend id. 79.23. volume group list Lists all volume groups. This command requires ``--os-volume-api-version`` 3.13 or greater. Usage: Table 79.97. Command arguments Value Summary -h, --help Show this help message and exit --all-projects Shows details for all projects (admin only). Table 79.98. 
Output formatter options Value Summary -f {csv,json,table,value,yaml}, --format {csv,json,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns --sort-column SORT_COLUMN Specify the column(s) to sort the data (columns specified first have a priority, non-existing columns are ignored), can be repeated --sort-ascending Sort the column(s) in ascending order --sort-descending Sort the column(s) in descending order Table 79.99. CSV formatter options Value Summary --quote {all,minimal,none,nonnumeric} When to include quotes, defaults to nonnumeric Table 79.100. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 79.101. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 79.24. volume group set Update a volume group. This command requires ``--os-volume-api-version`` 3.13 or greater. Usage: Table 79.102. Positional arguments Value Summary <group> Name or id of volume group. Table 79.103. Command arguments Value Summary -h, --help Show this help message and exit --name <name> New name for group. --description <description> New description for group. --enable-replication Enable replication for group. (supported by --os- volume-api-version 3.38 or above) --disable-replication Disable replication for group. (supported by --os- volume-api-version 3.38 or above) Table 79.104. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns Table 79.105. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 79.106. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 79.107. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 79.25. volume group show Show detailed information for a volume group. This command requires ``--os- volume-api-version`` 3.13 or greater. Usage: Table 79.108. Positional arguments Value Summary <group> Name or id of volume group. Table 79.109. Command arguments Value Summary -h, --help Show this help message and exit --volumes Show volumes included in the group. (supported by --os-volume-api-version 3.25 or above) --no-volumes Do not show volumes included in the group. (supported by --os-volume-api-version 3.25 or above) --replication-targets Show replication targets for the group. (supported by --os-volume-api-version 3.38 or above) --no-replication-targets Do not show replication targets for the group. (supported by --os-volume-api-version 3.38 or above) Table 79.110. 
Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns Table 79.111. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 79.112. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 79.113. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 79.26. volume group snapshot create Create a volume group snapshot. This command requires ``--os-volume-api- version`` 3.13 or greater. Usage: Table 79.114. Positional arguments Value Summary <volume_group> Name or id of volume group to create a snapshot of. Table 79.115. Command arguments Value Summary -h, --help Show this help message and exit --name <name> Name of the volume group snapshot. --description <description> Description of a volume group snapshot. Table 79.116. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns Table 79.117. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 79.118. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 79.119. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 79.27. volume group snapshot delete Delete a volume group snapshot. This command requires ``--os-volume-api- version`` 3.14 or greater. Usage: Table 79.120. Positional arguments Value Summary <snapshot> Name or id of volume group snapshot to delete Table 79.121. Command arguments Value Summary -h, --help Show this help message and exit 79.28. volume group snapshot list Lists all volume group snapshot. This command requires ``--os-volume-api- version`` 3.14 or greater. Usage: Table 79.122. Command arguments Value Summary -h, --help Show this help message and exit --all-projects Shows details for all projects (admin only). Table 79.123. Output formatter options Value Summary -f {csv,json,table,value,yaml}, --format {csv,json,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns --sort-column SORT_COLUMN Specify the column(s) to sort the data (columns specified first have a priority, non-existing columns are ignored), can be repeated --sort-ascending Sort the column(s) in ascending order --sort-descending Sort the column(s) in descending order Table 79.124. CSV formatter options Value Summary --quote {all,minimal,none,nonnumeric} When to include quotes, defaults to nonnumeric Table 79.125. 
JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 79.126. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 79.29. volume group snapshot show Show detailed information for a volume group snapshot. This command requires ``--os-volume-api-version`` 3.14 or greater. Usage: Table 79.127. Positional arguments Value Summary <snapshot> Name or id of volume group snapshot. Table 79.128. Command arguments Value Summary -h, --help Show this help message and exit Table 79.129. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns Table 79.130. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 79.131. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 79.132. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 79.30. volume group type create Create a volume group type. This command requires ``--os-volume-api-version`` 3.11 or greater. Usage: Table 79.133. Positional arguments Value Summary <name> Name of new volume group type. Table 79.134. Command arguments Value Summary -h, --help Show this help message and exit --description <description> Description of the volume group type. --public Volume group type is available to other projects (default) --private Volume group type is not available to other projects Table 79.135. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns Table 79.136. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 79.137. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 79.138. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 79.31. volume group type delete Delete a volume group type. This command requires ``--os-volume-api-version`` 3.11 or greater. Usage: Table 79.139. Positional arguments Value Summary <group_type> Name or id of volume group type to delete Table 79.140. Command arguments Value Summary -h, --help Show this help message and exit 79.32. volume group type list Lists all volume group types. 
This command requires ``--os-volume-api- version`` 3.11 or greater. Usage: Table 79.141. Command arguments Value Summary -h, --help Show this help message and exit --default List the default volume group type. Table 79.142. Output formatter options Value Summary -f {csv,json,table,value,yaml}, --format {csv,json,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns --sort-column SORT_COLUMN Specify the column(s) to sort the data (columns specified first have a priority, non-existing columns are ignored), can be repeated --sort-ascending Sort the column(s) in ascending order --sort-descending Sort the column(s) in descending order Table 79.143. CSV formatter options Value Summary --quote {all,minimal,none,nonnumeric} When to include quotes, defaults to nonnumeric Table 79.144. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 79.145. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 79.33. volume group type set Update a volume group type. This command requires ``--os-volume-api-version`` 3.11 or greater. Usage: Table 79.146. Positional arguments Value Summary <group_type> Name or id of volume group type. Table 79.147. Command arguments Value Summary -h, --help Show this help message and exit --name <name> New name for volume group type. --description <description> New description for volume group type. --public Make volume group type available to other projects. --private Make volume group type unavailable to other projects. --no-property Remove all properties from this volume group type (specify both --no-property and --property to remove the current properties before setting new properties) --property <key=value> Property to add or modify for this volume group type (repeat option to set multiple properties) Table 79.148. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns Table 79.149. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 79.150. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 79.151. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 79.34. volume group type show Show detailed information for a volume group type. This command requires ``--os-volume-api-version`` 3.11 or greater. Usage: Table 79.152. Positional arguments Value Summary <group_type> Name or id of volume group type. Table 79.153. Command arguments Value Summary -h, --help Show this help message and exit Table 79.154. 
Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns Table 79.155. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 79.156. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 79.157. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 79.35. volume host set Set volume host properties Usage: Table 79.158. Positional arguments Value Summary <host-name> Name of volume host Table 79.159. Command arguments Value Summary -h, --help Show this help message and exit --disable Freeze and disable the specified volume host --enable Thaw and enable the specified volume host 79.36. volume list List volumes Usage: Table 79.160. Command arguments Value Summary -h, --help Show this help message and exit --project <project> Filter results by project (name or id) (admin only) --project-domain <project-domain> Domain the project belongs to (name or id). this can be used in case collisions between project names exist. --user <user> Filter results by user (name or id) (admin only) --user-domain <user-domain> Domain the user belongs to (name or id). this can be used in case collisions between user names exist. --name <name> Filter results by volume name --status <status> Filter results by status --all-projects Include all projects (admin only) --long List additional fields in output --marker <volume> The last volume id of the page --limit <num-volumes> Maximum number of volumes to display Table 79.161. Output formatter options Value Summary -f {csv,json,table,value,yaml}, --format {csv,json,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns --sort-column SORT_COLUMN Specify the column(s) to sort the data (columns specified first have a priority, non-existing columns are ignored), can be repeated --sort-ascending Sort the column(s) in ascending order --sort-descending Sort the column(s) in descending order Table 79.162. CSV formatter options Value Summary --quote {all,minimal,none,nonnumeric} When to include quotes, defaults to nonnumeric Table 79.163. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 79.164. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 79.37. volume message delete Delete a volume failure message Usage: Table 79.165. Positional arguments Value Summary <message-id> Message(s) to delete (id) Table 79.166. Command arguments Value Summary -h, --help Show this help message and exit 79.38. volume message list List volume failure messages Usage: Table 79.167. 
Command arguments Value Summary -h, --help Show this help message and exit --project <project> Filter results by project (name or id) (admin only) --project-domain <project-domain> Domain the project belongs to (name or id). this can be used in case collisions between project names exist. --marker <message-id> The last message id of the page --limit <limit> Maximum number of messages to display Table 79.168. Output formatter options Value Summary -f {csv,json,table,value,yaml}, --format {csv,json,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns --sort-column SORT_COLUMN Specify the column(s) to sort the data (columns specified first have a priority, non-existing columns are ignored), can be repeated --sort-ascending Sort the column(s) in ascending order --sort-descending Sort the column(s) in descending order Table 79.169. CSV formatter options Value Summary --quote {all,minimal,none,nonnumeric} When to include quotes, defaults to nonnumeric Table 79.170. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 79.171. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 79.39. volume message show Show a volume failure message Usage: Table 79.172. Positional arguments Value Summary <message-id> Message to show (id). Table 79.173. Command arguments Value Summary -h, --help Show this help message and exit Table 79.174. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns Table 79.175. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 79.176. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 79.177. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 79.40. volume migrate Migrate volume to a new host Usage: Table 79.178. Positional arguments Value Summary <volume> Volume to migrate (name or id) Table 79.179. Command arguments Value Summary -h, --help Show this help message and exit --host <host> Destination host (takes the form: host@backend-name#pool) --force-host-copy Enable generic host-based force-migration, which bypasses driver optimizations --lock-volume If specified, the volume state will be locked and will not allow a migration to be aborted (possibly by another operation) 79.41. volume qos associate Associate a QoS specification to a volume type Usage: Table 79.180. Positional arguments Value Summary <qos-spec> Qos specification to modify (name or id) <volume-type> Volume type to associate the qos (name or id) Table 79.181. 
Command arguments Value Summary -h, --help Show this help message and exit 79.42. volume qos create Create new QoS specification Usage: Table 79.182. Positional arguments Value Summary <name> New qos specification name Table 79.183. Command arguments Value Summary -h, --help Show this help message and exit --consumer <consumer> Consumer of the qos. valid consumers: back-end, both, front-end (defaults to both ) --property <key=value> Set a qos specification property (repeat option to set multiple properties) Table 79.184. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns Table 79.185. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 79.186. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 79.187. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 79.43. volume qos delete Delete QoS specification Usage: Table 79.188. Positional arguments Value Summary <qos-spec> Qos specification(s) to delete (name or id) Table 79.189. Command arguments Value Summary -h, --help Show this help message and exit --force Allow to delete in-use qos specification(s) 79.44. volume qos disassociate Disassociate a QoS specification from a volume type Usage: Table 79.190. Positional arguments Value Summary <qos-spec> Qos specification to modify (name or id) Table 79.191. Command arguments Value Summary -h, --help Show this help message and exit --volume-type <volume-type> Volume type to disassociate the qos from (name or id) --all Disassociate the qos from every volume type 79.45. volume qos list List QoS specifications Usage: Table 79.192. Command arguments Value Summary -h, --help Show this help message and exit Table 79.193. Output formatter options Value Summary -f {csv,json,table,value,yaml}, --format {csv,json,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns --sort-column SORT_COLUMN Specify the column(s) to sort the data (columns specified first have a priority, non-existing columns are ignored), can be repeated --sort-ascending Sort the column(s) in ascending order --sort-descending Sort the column(s) in descending order Table 79.194. CSV formatter options Value Summary --quote {all,minimal,none,nonnumeric} When to include quotes, defaults to nonnumeric Table 79.195. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 79.196. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 79.46. volume qos set Set QoS specification properties Usage: Table 79.197. 
Positional arguments Value Summary <qos-spec> Qos specification to modify (name or id) Table 79.198. Command arguments Value Summary -h, --help Show this help message and exit --property <key=value> Property to add or modify for this qos specification (repeat option to set multiple properties) 79.47. volume qos show Display QoS specification details Usage: Table 79.199. Positional arguments Value Summary <qos-spec> Qos specification to display (name or id) Table 79.200. Command arguments Value Summary -h, --help Show this help message and exit Table 79.201. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns Table 79.202. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 79.203. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 79.204. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 79.48. volume qos unset Unset QoS specification properties Usage: Table 79.205. Positional arguments Value Summary <qos-spec> Qos specification to modify (name or id) Table 79.206. Command arguments Value Summary -h, --help Show this help message and exit --property <key> Property to remove from the qos specification. (repeat option to unset multiple properties) 79.49. volume revert Revert a volume to a snapshot. Usage: Table 79.207. Positional arguments Value Summary <snapshot> Name or id of the snapshot to restore. the snapshot must be the most recent one known to cinder. Table 79.208. Command arguments Value Summary -h, --help Show this help message and exit 79.50. volume service list List service command Usage: Table 79.209. Command arguments Value Summary -h, --help Show this help message and exit --host <host> List services on specified host (name only) --service <service> List only specified service (name only) --long List additional fields in output Table 79.210. Output formatter options Value Summary -f {csv,json,table,value,yaml}, --format {csv,json,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns --sort-column SORT_COLUMN Specify the column(s) to sort the data (columns specified first have a priority, non-existing columns are ignored), can be repeated --sort-ascending Sort the column(s) in ascending order --sort-descending Sort the column(s) in descending order Table 79.211. CSV formatter options Value Summary --quote {all,minimal,none,nonnumeric} When to include quotes, defaults to nonnumeric Table 79.212. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 79.213. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. 
Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 79.51. volume service set Set volume service properties Usage: Table 79.214. Positional arguments Value Summary <host> Name of host <service> Name of service (binary name) Table 79.215. Command arguments Value Summary -h, --help Show this help message and exit --enable Enable volume service --disable Disable volume service --disable-reason <reason> Reason for disabling the service (should be used with --disable option) 79.52. volume set Set volume properties Usage: Table 79.216. Positional arguments Value Summary <volume> Volume to modify (name or id) Table 79.217. Command arguments Value Summary -h, --help Show this help message and exit --name <name> New volume name --size <size> Extend volume size in gb --description <description> New volume description --no-property Remove all properties from <volume> (specify both --no-property and --property to remove the current properties before setting new properties.) --property <key=value> Set a property on this volume (repeat option to set multiple properties) --image-property <key=value> Set an image property on this volume (repeat option to set multiple image properties) --state <state> New volume state ("available", "error", "creating", "deleting", "in-use", "attaching", "detaching", "error_deleting" or "maintenance") (admin only) (This option simply changes the state of the volume in the database with no regard to actual status, exercise caution when using) --attached Set volume attachment status to "attached" (admin only) (This option simply changes the state of the volume in the database with no regard to actual status, exercise caution when using) --detached Set volume attachment status to "detached" (admin only) (This option simply changes the state of the volume in the database with no regard to actual status, exercise caution when using) --type <volume-type> New volume type (name or id) --retype-policy <retype-policy> Migration policy while re-typing volume ("never" or "on-demand", default is "never" ) (available only when --type option is specified) --bootable Mark volume as bootable --non-bootable Mark volume as non-bootable --read-only Set volume to read-only access mode --read-write Set volume to read-write access mode 79.53. volume show Display volume details Usage: Table 79.218. Positional arguments Value Summary <volume> Volume to display (name or id) Table 79.219. Command arguments Value Summary -h, --help Show this help message and exit Table 79.220. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns Table 79.221. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 79.222. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 79.223. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 79.54. volume snapshot create Create new volume snapshot Usage: Table 79.224. 
Positional arguments Value Summary <snapshot-name> Name of the new snapshot Table 79.225. Command arguments Value Summary -h, --help Show this help message and exit --volume <volume> Volume to snapshot (name or id) (default is <snapshot- name>) --description <description> Description of the snapshot --force Create a snapshot attached to an instance. default is False --property <key=value> Set a property to this snapshot (repeat option to set multiple properties) --remote-source <key=value> The attribute(s) of the existing remote volume snapshot (admin required) (repeat option to specify multiple attributes) e.g.: --remote-source source- name=test_name --remote-source source-id=test_id Table 79.226. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns Table 79.227. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 79.228. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 79.229. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 79.55. volume snapshot delete Delete volume snapshot(s) Usage: Table 79.230. Positional arguments Value Summary <snapshot> Snapshot(s) to delete (name or id) Table 79.231. Command arguments Value Summary -h, --help Show this help message and exit --force Attempt forced removal of snapshot(s), regardless of state (defaults to False) 79.56. volume snapshot list List volume snapshots Usage: Table 79.232. Command arguments Value Summary -h, --help Show this help message and exit --all-projects Include all projects (admin only) --project <project> Filter results by project (name or id) (admin only) --project-domain <project-domain> Domain the project belongs to (name or id). this can be used in case collisions between project names exist. --long List additional fields in output --marker <volume-snapshot> The last snapshot id of the page --limit <num-snapshots> Maximum number of snapshots to display --name <name> Filters results by a name. --status <status> Filters results by a status. ( available , error , creating , deleting or error_deleting ) --volume <volume> Filters results by a volume (name or id). Table 79.233. Output formatter options Value Summary -f {csv,json,table,value,yaml}, --format {csv,json,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns --sort-column SORT_COLUMN Specify the column(s) to sort the data (columns specified first have a priority, non-existing columns are ignored), can be repeated --sort-ascending Sort the column(s) in ascending order --sort-descending Sort the column(s) in descending order Table 79.234. CSV formatter options Value Summary --quote {all,minimal,none,nonnumeric} When to include quotes, defaults to nonnumeric Table 79.235. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 79.236. 
Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 79.57. volume snapshot set Set volume snapshot properties Usage: Table 79.237. Positional arguments Value Summary <snapshot> Snapshot to modify (name or id) Table 79.238. Command arguments Value Summary -h, --help Show this help message and exit --name <name> New snapshot name --description <description> New snapshot description --no-property Remove all properties from <snapshot> (specify both --no-property and --property to remove the current properties before setting new properties.) --property <key=value> Property to add/change for this snapshot (repeat option to set multiple properties) --state <state> New snapshot state. ("available", "error", "creating", "deleting", or "error_deleting") (admin only) (This option simply changes the state of the snapshot in the database with no regard to actual status, exercise caution when using) 79.58. volume snapshot show Display volume snapshot details Usage: Table 79.239. Positional arguments Value Summary <snapshot> Snapshot to display (name or id) Table 79.240. Command arguments Value Summary -h, --help Show this help message and exit Table 79.241. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns Table 79.242. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 79.243. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 79.244. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 79.59. volume snapshot unset Unset volume snapshot properties Usage: Table 79.245. Positional arguments Value Summary <snapshot> Snapshot to modify (name or id) Table 79.246. Command arguments Value Summary -h, --help Show this help message and exit --property <key> Property to remove from snapshot (repeat option to remove multiple properties) 79.60. volume summary Show a summary of all volumes in this deployment. Usage: Table 79.247. Command arguments Value Summary -h, --help Show this help message and exit --all-projects Include all projects (admin only) Table 79.248. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns Table 79.249. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 79.250. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 79.251. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. 
you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 79.61. volume transfer request accept Accept volume transfer request. Usage: Table 79.252. Positional arguments Value Summary <transfer-request-id> Volume transfer request to accept (id only) Table 79.253. Command arguments Value Summary -h, --help Show this help message and exit --auth-key <key> Volume transfer request authentication key Table 79.254. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns Table 79.255. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 79.256. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 79.257. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 79.62. volume transfer request create Create volume transfer request. Usage: Table 79.258. Positional arguments Value Summary <volume> Volume to transfer (name or id) Table 79.259. Command arguments Value Summary -h, --help Show this help message and exit --name <name> New transfer request name (default to none) --snapshots Allow transfer volumes without snapshots (default) (supported by --os-volume-api-version 3.55 or later) --no-snapshots Disallow transfer volumes without snapshots (supported by --os-volume-api-version 3.55 or later) Table 79.260. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns Table 79.261. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 79.262. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 79.263. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 79.63. volume transfer request delete Delete volume transfer request(s). Usage: Table 79.264. Positional arguments Value Summary <transfer-request> Volume transfer request(s) to delete (name or id) Table 79.265. Command arguments Value Summary -h, --help Show this help message and exit 79.64. volume transfer request list Lists all volume transfer requests. Usage: Table 79.266. Command arguments Value Summary -h, --help Show this help message and exit --all-projects Include all projects (admin only) Table 79.267. 
Output formatter options Value Summary -f {csv,json,table,value,yaml}, --format {csv,json,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns --sort-column SORT_COLUMN Specify the column(s) to sort the data (columns specified first have a priority, non-existing columns are ignored), can be repeated --sort-ascending Sort the column(s) in ascending order --sort-descending Sort the column(s) in descending order Table 79.268. CSV formatter options Value Summary --quote {all,minimal,none,nonnumeric} When to include quotes, defaults to nonnumeric Table 79.269. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 79.270. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 79.65. volume transfer request show Show volume transfer request details. Usage: Table 79.271. Positional arguments Value Summary <transfer-request> Volume transfer request to display (name or id) Table 79.272. Command arguments Value Summary -h, --help Show this help message and exit Table 79.273. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns Table 79.274. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 79.275. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 79.276. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 79.66. volume type create Create new volume type Usage: Table 79.277. Positional arguments Value Summary <name> Volume type name Table 79.278. Command arguments Value Summary -h, --help Show this help message and exit --description <description> Volume type description --public Volume type is accessible to the public --private Volume type is not accessible to the public --property <key=value> Set a property on this volume type (repeat option to set multiple properties) --project <project> Allow <project> to access private type (name or id) (Must be used with --private option) --encryption-provider <provider> Set the encryption provider format for this volume type (e.g "luks" or "plain") (admin only) (This option is required when setting encryption type of a volume. 
Consider using other encryption options such as: "-- encryption-cipher", "--encryption-key-size" and "-- encryption-control-location") --encryption-cipher <cipher> Set the encryption algorithm or mode for this volume type (e.g "aes-xts-plain64") (admin only) --encryption-key-size <key-size> Set the size of the encryption key of this volume type (e.g "128" or "256") (admin only) --encryption-control-location <control-location> Set the notional service where the encryption is performed ("front-end" or "back-end") (admin only) (The default value for this option is "front-end" when setting encryption type of a volume. Consider using other encryption options such as: "--encryption- cipher", "--encryption-key-size" and "--encryption- provider") --project-domain <project-domain> Domain the project belongs to (name or id). this can be used in case collisions between project names exist. Table 79.279. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns Table 79.280. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 79.281. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 79.282. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 79.67. volume type delete Delete volume type(s) Usage: Table 79.283. Positional arguments Value Summary <volume-type> Volume type(s) to delete (name or id) Table 79.284. Command arguments Value Summary -h, --help Show this help message and exit 79.68. volume type list List volume types Usage: Table 79.285. Command arguments Value Summary -h, --help Show this help message and exit --long List additional fields in output --default List the default volume type --public List only public types --private List only private types (admin only) --encryption-type Display encryption information for each volume type (admin only) Table 79.286. Output formatter options Value Summary -f {csv,json,table,value,yaml}, --format {csv,json,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns --sort-column SORT_COLUMN Specify the column(s) to sort the data (columns specified first have a priority, non-existing columns are ignored), can be repeated --sort-ascending Sort the column(s) in ascending order --sort-descending Sort the column(s) in descending order Table 79.287. CSV formatter options Value Summary --quote {all,minimal,none,nonnumeric} When to include quotes, defaults to nonnumeric Table 79.288. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 79.289. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. 
Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 79.69. volume type set Set volume type properties Usage: Table 79.290. Positional arguments Value Summary <volume-type> Volume type to modify (name or id) Table 79.291. Command arguments Value Summary -h, --help Show this help message and exit --name <name> Set volume type name --description <description> Set volume type description --property <key=value> Set a property on this volume type (repeat option to set multiple properties) --project <project> Set volume type access to project (name or id) (admin only) --project-domain <project-domain> Domain the project belongs to (name or id). this can be used in case collisions between project names exist. --encryption-provider <provider> Set the encryption provider format for this volume type (e.g "luks" or "plain") (admin only) (This option is required when setting encryption type of a volume for the first time. Consider using other encryption options such as: "--encryption-cipher", "--encryption- key-size" and "--encryption-control-location") --encryption-cipher <cipher> Set the encryption algorithm or mode for this volume type (e.g "aes-xts-plain64") (admin only) --encryption-key-size <key-size> Set the size of the encryption key of this volume type (e.g "128" or "256") (admin only) --encryption-control-location <control-location> Set the notional service where the encryption is performed ("front-end" or "back-end") (admin only) (The default value for this option is "front-end" when setting encryption type of a volume for the first time. Consider using other encryption options such as: "--encryption-cipher", "--encryption-key-size" and "-- encryption-provider") 79.70. volume type show Display volume type details Usage: Table 79.292. Positional arguments Value Summary <volume-type> Volume type to display (name or id) Table 79.293. Command arguments Value Summary -h, --help Show this help message and exit --encryption-type Display encryption information of this volume type (admin only) Table 79.294. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns Table 79.295. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 79.296. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 79.297. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 79.71. volume type unset Unset volume type properties Usage: Table 79.298. Positional arguments Value Summary <volume-type> Volume type to modify (name or id) Table 79.299. Command arguments Value Summary -h, --help Show this help message and exit --property <key> Remove a property from this volume type (repeat option to remove multiple properties) --project <project> Removes volume type access to project (name or id) (admin only) --project-domain <project-domain> Domain the project belongs to (name or id). 
this can be used in case collisions between project names exist. --encryption-type Remove the encryption type for this volume type (admin only) 79.72. volume unset Unset volume properties Usage: Table 79.300. Positional arguments Value Summary <volume> Volume to modify (name or id) Table 79.301. Command arguments Value Summary -h, --help Show this help message and exit --property <key> Remove a property from volume (repeat option to remove multiple properties) --image-property <key> Remove an image property from volume (repeat option to remove multiple image properties) | [
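As a quick orientation to how the QoS, volume type, snapshot, and revert commands documented above fit together, the following illustrative sequence chains several of them. This is a minimal sketch rather than part of the upstream reference: the names gold-qos , gold-type , vol1 , and snap1 are placeholders, and the total_iops_sec QoS property key is only an example of the key=value properties your storage back end might accept.
# create a QoS specification consumed on the front end (see "volume qos create")
openstack volume qos create --consumer front-end --property total_iops_sec=500 gold-qos
# create a volume type and associate the QoS specification with it (see "volume type create" and "volume qos associate")
openstack volume type create --description "premium tier" gold-type
openstack volume qos associate gold-qos gold-type
# retype an existing volume to the new type (see "volume set")
openstack volume set --type gold-type --retype-policy on-demand vol1
# snapshot the volume, then revert it to that most recent snapshot (see "volume snapshot create" and "volume revert")
openstack volume snapshot create --volume vol1 snap1
openstack volume revert snap1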
"openstack volume attachment complete [-h] <attachment>",
"openstack volume attachment create [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] [--connect] [--no-connect] [--initiator <initiator>] [--ip <ip>] [--host <host>] [--platform <platform>] [--os-type <ostype>] [--multipath] [--no-multipath] [--mountpoint <mountpoint>] [--mode <mode>] <volume> <server>",
"openstack volume attachment delete [-h] <attachment>",
"openstack volume attachment list [-h] [-f {csv,json,table,value,yaml}] [-c COLUMN] [--quote {all,minimal,none,nonnumeric}] [--noindent] [--max-width <integer>] [--fit-width] [--print-empty] [--sort-column SORT_COLUMN] [--sort-ascending | --sort-descending] [--project <project>] [--project-domain <project-domain>] [--all-projects] [--volume-id <volume-id>] [--status <status>] [--marker <marker>] [--limit <limit>]",
"openstack volume attachment set [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] [--initiator <initiator>] [--ip <ip>] [--host <host>] [--platform <platform>] [--os-type <ostype>] [--multipath] [--no-multipath] [--mountpoint <mountpoint>] <attachment>",
"openstack volume attachment show [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] <attachment>",
"openstack volume backend capability show [-h] [-f {csv,json,table,value,yaml}] [-c COLUMN] [--quote {all,minimal,none,nonnumeric}] [--noindent] [--max-width <integer>] [--fit-width] [--print-empty] [--sort-column SORT_COLUMN] [--sort-ascending | --sort-descending] <host>",
"openstack volume backend pool list [-h] [-f {csv,json,table,value,yaml}] [-c COLUMN] [--quote {all,minimal,none,nonnumeric}] [--noindent] [--max-width <integer>] [--fit-width] [--print-empty] [--sort-column SORT_COLUMN] [--sort-ascending | --sort-descending] [--long]",
"openstack volume backup create [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] [--name <name>] [--description <description>] [--container <container>] [--snapshot <snapshot>] [--force] [--incremental] [--no-incremental] [--property <key=value>] [--availability-zone <zone-name>] <volume>",
"openstack volume backup delete [-h] [--force] <backup> [<backup> ...]",
"openstack volume backup list [-h] [-f {csv,json,table,value,yaml}] [-c COLUMN] [--quote {all,minimal,none,nonnumeric}] [--noindent] [--max-width <integer>] [--fit-width] [--print-empty] [--sort-column SORT_COLUMN] [--sort-ascending | --sort-descending] [--long] [--name <name>] [--status <status>] [--volume <volume>] [--marker <volume-backup>] [--limit <num-backups>] [--all-projects]",
"openstack volume backup record export [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] <backup>",
"openstack volume backup record import [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] <backup_service> <backup_metadata>",
"openstack volume backup restore [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] [--force] <backup> [<volume>]",
"openstack volume backup set [-h] [--name <name>] [--description <description>] [--state <state>] [--no-property] [--property <key=value>] <backup>",
"openstack volume backup show [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] <backup>",
"openstack volume backup unset [-h] [--property <key>] <backup>",
"openstack volume create [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] [--size <size>] [--type <volume-type>] [--image <image> | --snapshot <snapshot> | --source <volume> | --backup <backup>] [--description <description>] [--availability-zone <availability-zone>] [--consistency-group consistency-group>] [--property <key=value>] [--hint <key=value>] [--bootable | --non-bootable] [--read-only | --read-write] [<name>]",
"openstack volume delete [-h] [--force | --purge] <volume> [<volume> ...]",
"openstack volume group create [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] [--volume-group-type <volume_group_type>] [--volume-type <volume_type>] [--source-group <source-group>] [--group-snapshot <group-snapshot>] [--name <name>] [--description <description>] [--availability-zone <availability-zone>]",
"openstack volume group delete [-h] [--force] <group>",
"openstack volume group failover [-h] [--allow-attached-volume] [--disallow-attached-volume] [--secondary-backend-id <backend_id>] <group>",
"openstack volume group list [-h] [-f {csv,json,table,value,yaml}] [-c COLUMN] [--quote {all,minimal,none,nonnumeric}] [--noindent] [--max-width <integer>] [--fit-width] [--print-empty] [--sort-column SORT_COLUMN] [--sort-ascending | --sort-descending] [--all-projects]",
"openstack volume group set [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] [--name <name>] [--description <description>] [--enable-replication] [--disable-replication] <group>",
"openstack volume group show [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] [--volumes] [--no-volumes] [--replication-targets] [--no-replication-targets] <group>",
"openstack volume group snapshot create [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] [--name <name>] [--description <description>] <volume_group>",
"openstack volume group snapshot delete [-h] <snapshot>",
"openstack volume group snapshot list [-h] [-f {csv,json,table,value,yaml}] [-c COLUMN] [--quote {all,minimal,none,nonnumeric}] [--noindent] [--max-width <integer>] [--fit-width] [--print-empty] [--sort-column SORT_COLUMN] [--sort-ascending | --sort-descending] [--all-projects]",
"openstack volume group snapshot show [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] <snapshot>",
"openstack volume group type create [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] [--description <description>] [--public | --private] <name>",
"openstack volume group type delete [-h] <group_type>",
"openstack volume group type list [-h] [-f {csv,json,table,value,yaml}] [-c COLUMN] [--quote {all,minimal,none,nonnumeric}] [--noindent] [--max-width <integer>] [--fit-width] [--print-empty] [--sort-column SORT_COLUMN] [--sort-ascending | --sort-descending] [--default]",
"openstack volume group type set [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] [--name <name>] [--description <description>] [--public | --private] [--no-property] [--property <key=value>] <group_type>",
"openstack volume group type show [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] <group_type>",
"openstack volume host set [-h] [--disable | --enable] <host-name>",
"openstack volume list [-h] [-f {csv,json,table,value,yaml}] [-c COLUMN] [--quote {all,minimal,none,nonnumeric}] [--noindent] [--max-width <integer>] [--fit-width] [--print-empty] [--sort-column SORT_COLUMN] [--sort-ascending | --sort-descending] [--project <project>] [--project-domain <project-domain>] [--user <user>] [--user-domain <user-domain>] [--name <name>] [--status <status>] [--all-projects] [--long] [--marker <volume>] [--limit <num-volumes>]",
"openstack volume message delete [-h] <message-id> [<message-id> ...]",
"openstack volume message list [-h] [-f {csv,json,table,value,yaml}] [-c COLUMN] [--quote {all,minimal,none,nonnumeric}] [--noindent] [--max-width <integer>] [--fit-width] [--print-empty] [--sort-column SORT_COLUMN] [--sort-ascending | --sort-descending] [--project <project>] [--project-domain <project-domain>] [--marker <message-id>] [--limit <limit>]",
"openstack volume message show [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] <message-id>",
"openstack volume migrate [-h] --host <host> [--force-host-copy] [--lock-volume] <volume>",
"openstack volume qos associate [-h] <qos-spec> <volume-type>",
"openstack volume qos create [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] [--consumer <consumer>] [--property <key=value>] <name>",
"openstack volume qos delete [-h] [--force] <qos-spec> [<qos-spec> ...]",
"openstack volume qos disassociate [-h] [--volume-type <volume-type> | --all] <qos-spec>",
"openstack volume qos list [-h] [-f {csv,json,table,value,yaml}] [-c COLUMN] [--quote {all,minimal,none,nonnumeric}] [--noindent] [--max-width <integer>] [--fit-width] [--print-empty] [--sort-column SORT_COLUMN] [--sort-ascending | --sort-descending]",
"openstack volume qos set [-h] [--property <key=value>] <qos-spec>",
"openstack volume qos show [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] <qos-spec>",
"openstack volume qos unset [-h] [--property <key>] <qos-spec>",
"openstack volume revert [-h] <snapshot>",
"openstack volume service list [-h] [-f {csv,json,table,value,yaml}] [-c COLUMN] [--quote {all,minimal,none,nonnumeric}] [--noindent] [--max-width <integer>] [--fit-width] [--print-empty] [--sort-column SORT_COLUMN] [--sort-ascending | --sort-descending] [--host <host>] [--service <service>] [--long]",
"openstack volume service set [-h] [--enable | --disable] [--disable-reason <reason>] <host> <service>",
"openstack volume set [-h] [--name <name>] [--size <size>] [--description <description>] [--no-property] [--property <key=value>] [--image-property <key=value>] [--state <state>] [--attached | --detached] [--type <volume-type>] [--retype-policy <retype-policy>] [--bootable | --non-bootable] [--read-only | --read-write] <volume>",
"openstack volume show [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] <volume>",
"openstack volume snapshot create [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] [--volume <volume>] [--description <description>] [--force] [--property <key=value>] [--remote-source <key=value>] <snapshot-name>",
"openstack volume snapshot delete [-h] [--force] <snapshot> [<snapshot> ...]",
"openstack volume snapshot list [-h] [-f {csv,json,table,value,yaml}] [-c COLUMN] [--quote {all,minimal,none,nonnumeric}] [--noindent] [--max-width <integer>] [--fit-width] [--print-empty] [--sort-column SORT_COLUMN] [--sort-ascending | --sort-descending] [--all-projects] [--project <project>] [--project-domain <project-domain>] [--long] [--marker <volume-snapshot>] [--limit <num-snapshots>] [--name <name>] [--status <status>] [--volume <volume>]",
"openstack volume snapshot set [-h] [--name <name>] [--description <description>] [--no-property] [--property <key=value>] [--state <state>] <snapshot>",
"openstack volume snapshot show [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] <snapshot>",
"openstack volume snapshot unset [-h] [--property <key>] <snapshot>",
"openstack volume summary [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] [--all-projects]",
"openstack volume transfer request accept [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] --auth-key <key> <transfer-request-id>",
"openstack volume transfer request create [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] [--name <name>] [--snapshots] [--no-snapshots] <volume>",
"openstack volume transfer request delete [-h] <transfer-request> [<transfer-request> ...]",
"openstack volume transfer request list [-h] [-f {csv,json,table,value,yaml}] [-c COLUMN] [--quote {all,minimal,none,nonnumeric}] [--noindent] [--max-width <integer>] [--fit-width] [--print-empty] [--sort-column SORT_COLUMN] [--sort-ascending | --sort-descending] [--all-projects]",
"openstack volume transfer request show [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] <transfer-request>",
"openstack volume type create [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] [--description <description>] [--public | --private] [--property <key=value>] [--project <project>] [--encryption-provider <provider>] [--encryption-cipher <cipher>] [--encryption-key-size <key-size>] [--encryption-control-location <control-location>] [--project-domain <project-domain>] <name>",
"openstack volume type delete [-h] <volume-type> [<volume-type> ...]",
"openstack volume type list [-h] [-f {csv,json,table,value,yaml}] [-c COLUMN] [--quote {all,minimal,none,nonnumeric}] [--noindent] [--max-width <integer>] [--fit-width] [--print-empty] [--sort-column SORT_COLUMN] [--sort-ascending | --sort-descending] [--long] [--default | --public | --private] [--encryption-type]",
"openstack volume type set [-h] [--name <name>] [--description <description>] [--property <key=value>] [--project <project>] [--project-domain <project-domain>] [--encryption-provider <provider>] [--encryption-cipher <cipher>] [--encryption-key-size <key-size>] [--encryption-control-location <control-location>] <volume-type>",
"openstack volume type show [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] [--encryption-type] <volume-type>",
"openstack volume type unset [-h] [--property <key>] [--project <project>] [--project-domain <project-domain>] [--encryption-type] <volume-type>",
"openstack volume unset [-h] [--property <key>] [--image-property <key>] <volume>"
] | https://docs.redhat.com/en/documentation/red_hat_openstack_services_on_openshift/18.0/html/command_line_interface_reference/volume |
Chapter 11. Scaling the Ceph Storage cluster | Chapter 11. Scaling the Ceph Storage cluster 11.1. Scaling up the Ceph Storage cluster You can scale up the number of Ceph Storage nodes in your overcloud by re-running the deployment with the number of Ceph Storage nodes you need. Before doing so, ensure that you have enough nodes for the updated deployment. These nodes must be registered with the director and tagged accordingly. Registering New Ceph Storage Nodes To register new Ceph storage nodes with the director, follow these steps: Log in to the undercloud as the stack user and initialize your director configuration: Define the hardware and power management details for the new nodes in a new node definition template; for example, instackenv-scale.json . Import this file to the OpenStack director: Importing the node definition template registers each node defined there to the director. Assign the kernel and ramdisk images to all nodes: Note For more information about registering new nodes, see Section 2.2, "Registering nodes" . Manually Tagging New Nodes After you register each node, you must inspect the hardware and tag the node into a specific profile. Use profile tags to match your nodes to flavors, and then assign flavors to deployment roles. To inspect and tag new nodes, complete the following steps: Trigger hardware introspection to retrieve the hardware attributes of each node: The --all-manageable option introspects only the nodes that are in a managed state. In this example, all nodes are in a managed state. The --provide option resets all nodes to an active state after introspection. Important Ensure that this process completes successfully. This process usually takes 15 minutes for bare metal nodes. Retrieve a list of your nodes to identify their UUIDs: Add a profile option to the properties/capabilities parameter for each node to manually tag a node to a specific profile. The addition of the profile option tags the nodes into each respective profile. Note As an alternative to manual tagging, use the Automated Health Check (AHC) Tools to automatically tag larger numbers of nodes based on benchmarking data. For example, the following commands tag three additional nodes with the ceph-storage profile: Tip If the nodes you just tagged and registered use multiple disks, you can set the director to use a specific root disk on each node. See Section 2.4, "Defining the root disk for multi-disk clusters" for instructions on how to do so. Re-deploying the Overcloud with Additional Ceph Storage Nodes After registering and tagging the new nodes, you can now scale up the number of Ceph Storage nodes by re-deploying the overcloud. When you do, set the CephStorageCount parameter in the parameter_defaults of your environment file (in this case, ~/templates/storage-config.yaml ). In Section 7.1, "Assigning nodes and flavors to roles" , the overcloud is configured to deploy with 3 Ceph Storage nodes. To scale it up to 6 nodes instead, use: Upon re-deployment with this setting, the overcloud should now have 6 Ceph Storage nodes instead of 3. 11.2. Scaling down and replacing Ceph Storage nodes In some cases, you might need to scale down your Ceph cluster, or even replace a Ceph Storage node, for example, if a Ceph Storage node is faulty. In either situation, you must disable and rebalance any Ceph Storage node that you want to remove from the overcloud to avoid data loss. Note This procedure uses steps from the Red Hat Ceph Storage Administration Guide to manually remove Ceph Storage nodes. 
For more in-depth information about manual removal of Ceph Storage nodes, see Starting, stopping, and restarting Ceph daemons that run in containers and Removing a Ceph OSD using the command-line interface . Procedure Log in to a Controller node as the heat-admin user. The director's stack user has an SSH key to access the heat-admin user. List the OSD tree and find the OSDs for your node. For example, the node you want to remove might contain the following OSDs: Disable the OSDs on the Ceph Storage node. In this case, the OSD IDs are 0 and 1. The Ceph Storage cluster begins rebalancing. Wait for this process to complete. Follow the status by using the following command: After the Ceph cluster completes rebalancing, log in to the Ceph Storage node you are removing, in this case overcloud-cephstorage-0 , as the heat-admin user and stop the node. Stop the OSDs. While logged in to the Controller node, remove the OSDs from the CRUSH map so that they no longer receive data. Remove the OSD authentication key. Remove the OSD from the cluster. Leave the node and return to the undercloud as the stack user. Disable the Ceph Storage node so that director does not reprovision it. Removing a Ceph Storage node requires an update to the overcloud stack in director with the local template files. First identify the UUID of the overcloud stack: Identify the UUIDs of the Ceph Storage node you want to delete: Delete the node from the stack and update the plan accordingly: Important If you passed any extra environment files when you created the overcloud, pass them again here using the -e option to avoid making undesired changes to the overcloud. For more information, see Modifying the overcloud environment in the Director Installation and Usage guide. Wait until the stack completes its update. Use the heat stack-list --show-nested command to monitor the stack update. Add new nodes to the director node pool and deploy them as Ceph Storage nodes. Use the CephStorageCount parameter in the parameter_defaults of your environment file, in this case, ~/templates/storage-config.yaml , to define the total number of Ceph Storage nodes in the overcloud. Note For more information about how to define the number of nodes per role, see Section 7.1, "Assigning nodes and flavors to roles" . After you update your environment file, redeploy the overcloud: Director provisions the new node and updates the entire stack with the details of the new node. Log in to a Controller node as the heat-admin user and check the status of the Ceph Storage node: Confirm that the value in the osdmap section matches the number of nodes in your cluster that you want. The Ceph Storage node that you removed is replaced with a new node. 11.3. Adding an OSD to a Ceph Storage node This procedure demonstrates how to add an OSD to a node. For more information about Ceph OSDs, see Ceph OSDs in the Red Hat Ceph Storage Operations Guide . Procedure Notice the following heat template deploys Ceph Storage with three OSD devices: To add an OSD, update the node disk layout as described in Section 5.3, "Mapping the Ceph Storage node disk layout" . In this example, add /dev/sde to the template: Run openstack overcloud deploy to update the overcloud. Note This example assumes that all hosts with OSDs have a new device called /dev/sde . If you do not want all nodes to have the new device, update the heat template. 
For information about how to define hosts with a differing devices list, see Section 5.5, "Mapping the disk layout to non-homogeneous Ceph Storage nodes" . 11.4. Removing an OSD from a Ceph Storage node This procedure demonstrates how to remove an OSD from a node. It assumes the following about the environment: A server ( ceph-storage0 ) has an OSD ( ceph-osd@4 ) running on /dev/sde . The Ceph monitor service ( ceph-mon ) is running on controller0 . There are enough available OSDs to ensure the storage cluster is not at its near-full ratio. For more information about Ceph OSDs, see Ceph OSDs in the Red Hat Ceph Storage Operations Guide . Procedure SSH into ceph-storage0 and log in as root . Disable and stop the OSD service: Disconnect from ceph-storage0 . SSH into controller0 and log in as root . Identify the name of the Ceph monitor container: Use the Ceph monitor container to mark the undesired OSD as out : Note This command causes Ceph to rebalance the storage cluster and copy data to other OSDs in the cluster. The cluster temporarily leaves the active+clean state until rebalancing is complete. Run the following command and wait for the storage cluster state to become active+clean : Remove the OSD from the CRUSH map so that it no longer receives data: Remove the OSD authentication key: Remove the OSD: Disconnect from controller0 . SSH into the undercloud as the stack user and locate the heat environment file in which you defined the CephAnsibleDisksConfig parameter. Notice the heat template contains four OSDs: Modify the template to remove /dev/sde . Run openstack overcloud deploy to update the overcloud. Note This example assumes that you removed the /dev/sde device from all hosts with OSDs. If you do not remove the same device from all nodes, update the heat template. For information about how to define hosts with a differing devices list, see Section 5.5, "Mapping the disk layout to non-homogeneous Ceph Storage nodes" . | [
"source ~/stackrc",
"openstack overcloud node import ~/instackenv-scale.json",
"openstack overcloud node configure",
"openstack overcloud node introspect --all-manageable --provide",
"openstack baremetal node list",
"openstack baremetal node set 551d81f5-4df2-4e0f-93da-6c5de0b868f7 --property capabilities=\"profile:ceph-storage,boot_option:local\" openstack baremetal node set 5e735154-bd6b-42dd-9cc2-b6195c4196d7 --property capabilities=\"profile:ceph-storage,boot_option:local\" openstack baremetal node set 1a2b090c-299d-4c20-a25d-57dd21a7085b --property capabilities=\"profile:ceph-storage,boot_option:local\"",
"parameter_defaults: ControllerCount: 3 OvercloudControlFlavor: control ComputeCount: 3 OvercloudComputeFlavor: compute CephStorageCount: 6 OvercloudCephStorageFlavor: ceph-storage CephMonCount: 3 OvercloudCephMonFlavor: ceph-mon",
"-2 0.09998 host overcloud-cephstorage-0 0 0.04999 osd.0 up 1.00000 1.00000 1 0.04999 osd.1 up 1.00000 1.00000",
"[heat-admin@overcloud-controller-0 ~]USD sudo podman exec ceph-mon-<HOSTNAME> ceph osd out 0 [heat-admin@overcloud-controller-0 ~]USD sudo podman exec ceph-mon-<HOSTNAME> ceph osd out 1",
"[heat-admin@overcloud-controller-0 ~]USD sudo podman exec ceph-mon-<HOSTNAME> ceph -w",
"[heat-admin@overcloud-cephstorage-0 ~]USD sudo systemctl disable ceph-osd@0 [heat-admin@overcloud-cephstorage-0 ~]USD sudo systemctl disable ceph-osd@1",
"[heat-admin@overcloud-cephstorage-0 ~]USD sudo systemctl stop ceph-osd@0 [heat-admin@overcloud-cephstorage-0 ~]USD sudo systemctl stop ceph-osd@1",
"[heat-admin@overcloud-controller-0 ~]USD sudo podman exec ceph-mon-<HOSTNAME> ceph osd crush remove osd.0 [heat-admin@overcloud-controller-0 ~]USD sudo podman exec ceph-mon-<HOSTNAME> ceph osd crush remove osd.1",
"[heat-admin@overcloud-controller-0 ~]USD sudo podman exec ceph-mon-<HOSTNAME> ceph auth del osd.0 [heat-admin@overcloud-controller-0 ~]USD sudo podman exec ceph-mon-<HOSTNAME> ceph auth del osd.1",
"[heat-admin@overcloud-controller-0 ~]USD sudo podman exec ceph-mon-<HOSTNAME> ceph osd rm 0 [heat-admin@overcloud-controller-0 ~]USD sudo podman exec ceph-mon-<HOSTNAME> ceph osd rm 1",
"[heat-admin@overcloud-controller-0 ~]USD exit [stack@director ~]USD",
"[stack@director ~]USD openstack baremetal node list [stack@director ~]USD openstack baremetal node maintenance set UUID",
"openstack stack list",
"openstack server list",
"openstack overcloud node delete --stack overcloud <NODE_UUID>",
"parameter_defaults: ControllerCount: 3 OvercloudControlFlavor: control ComputeCount: 3 OvercloudComputeFlavor: compute CephStorageCount: 3 OvercloudCephStorageFlavor: ceph-storage CephMonCount: 3 OvercloudCephMonFlavor: ceph-mon",
"openstack overcloud deploy --templates -e <ENVIRONMENT_FILE>",
"[heat-admin@overcloud-controller-0 ~]USD sudo ceph status",
"parameter_defaults: CephAnsibleDisksConfig: devices: - /dev/sdb - /dev/sdc - /dev/sdd osd_scenario: lvm osd_objectstore: bluestore",
"parameter_defaults: CephAnsibleDisksConfig: devices: - /dev/sdb - /dev/sdc - /dev/sdd - /dev/sde osd_scenario: lvm osd_objectstore: bluestore",
"systemctl disable ceph-osd@4 systemctl stop ceph-osd@4",
"podman ps | grep ceph-mon ceph-mon-controller0",
"podman exec ceph-mon-controller0 ceph osd out 4",
"podman exec ceph-mon-controller0 ceph -w",
"podman exec ceph-mon-controller0 ceph osd crush remove osd.4",
"podman exec ceph-mon-controller0 ceph auth del osd.4",
"podman exec ceph-mon-controller0 ceph osd rm 4",
"parameter_defaults: CephAnsibleDisksConfig: devices: - /dev/sdb - /dev/sdc - /dev/sdd - /dev/sde osd_scenario: lvm osd_objectstore: bluestore",
"parameter_defaults: CephAnsibleDisksConfig: devices: - /dev/sdb - /dev/sdc - /dev/sdd osd_scenario: lvm osd_objectstore: bluestore"
] | https://docs.redhat.com/en/documentation/red_hat_openstack_platform/16.0/html/deploying_an_overcloud_with_containerized_red_hat_ceph/scaling_the_ceph_storage_cluster |
Chapter 10. Deprecated functionality | Chapter 10. Deprecated functionality Deprecated devices are fully supported, which means that they are tested and maintained, and their support status remains unchanged within Red Hat Enterprise Linux 9. However, these devices will likely not be supported in the next major version release, and are not recommended for new deployments on the current or future major versions of RHEL. For the most recent list of deprecated functionality within a particular major release, see the latest version of release documentation. For information about the length of support, see Red Hat Enterprise Linux Life Cycle and Red Hat Enterprise Linux Application Streams Life Cycle . A package can be deprecated and not recommended for further use. Under certain circumstances, a package can be removed from the product. Product documentation then identifies more recent packages that offer functionality similar or identical to, or more advanced than, the deprecated one, and provides further recommendations. For information regarding functionality that is present in RHEL 8 but has been removed in RHEL 9, see Considerations in adopting RHEL 9 . 10.1. Installer and image creation Deprecated Kickstart commands The following Kickstart commands have been deprecated: timezone --ntpservers , timezone --nontp , logging --level , %packages --excludeWeakdeps , %packages --instLangs , %anaconda , and pwpolicy . Note that where only specific options are listed, the base command and its other options are still available and not deprecated. Using the deprecated commands in Kickstart files prints a warning in the logs. You can turn the deprecated command warnings into errors with the inst.ksstrict boot option. Bugzilla:1899167 [1] User and Group customizations in the edge-commit and edge-container blueprints have been deprecated Specifying a user or group customization in the blueprints is deprecated for the edge-commit and edge-container image types, because the user customization disappears when you upgrade the image and do not specify the user in the blueprint again. Note that specifying a user or group customization in blueprints that are used to deploy an existing OSTree commit, such as the edge-raw-image , edge-installer , and edge-simplified-installer image types, remains supported. Bugzilla:2173928 The initial-setup package has now been deprecated The initial-setup package has been deprecated in Red Hat Enterprise Linux 9.3 and will be removed in the next major RHEL release. As a replacement, use gnome-initial-setup for the graphical user interface. Jira:RHELDOCS-16393 [1] The provider_hostip and provider_fedora_geoip values of the inst.geoloc boot option are deprecated The provider_hostip and provider_fedora_geoip values that specified the GeoIP API for the inst.geoloc= boot option are deprecated. As a replacement, you can use the geolocation_provider=URL option to set the required geolocation in the installation program configuration file. You can still use the inst.geoloc=0 option to disable the geolocation. Bugzilla:2127473 10.2. Security SHA-1 is deprecated for cryptographic purposes The usage of the SHA-1 message digest for cryptographic purposes has been deprecated in RHEL 9. The digest produced by SHA-1 is not considered secure because of many documented successful attacks based on finding hash collisions. The RHEL core crypto components no longer create signatures using SHA-1 by default. Applications in RHEL 9 have been updated to avoid using SHA-1 in security-relevant use cases.
Among the exceptions, the HMAC-SHA1 message authentication code and the Universal Unique Identifier (UUID) values can still be created using SHA-1 because these use cases do not currently pose security risks. SHA-1 also can be used in limited cases connected with important interoperability and compatibility concerns, such as Kerberos and WPA-2. See the List of RHEL applications using cryptography that is not compliant with FIPS 140-3 section in the RHEL 9 Security hardening document for more details. If your scenario requires the use of SHA-1 for verifying existing or third-party cryptographic signatures, you can enable it by entering the following command: Alternatively, you can switch the system-wide crypto policies to the LEGACY policy. Note that LEGACY also enables many other algorithms that are not secure. Jira:RHELPLAN-110763 [1] fapolicyd.rules is deprecated The /etc/fapolicyd/rules.d/ directory for files containing allow and deny execution rules replaces the /etc/fapolicyd/fapolicyd.rules file. The fagenrules script now merges all component rule files in this directory to the /etc/fapolicyd/compiled.rules file. Rules in /etc/fapolicyd/fapolicyd.trust are still processed by the fapolicyd framework but only for ensuring backward compatibility. Bugzilla:2054740 SCP is deprecated in RHEL 9 The secure copy protocol (SCP) is deprecated because it has known security vulnerabilities. The SCP API remains available for the RHEL 9 lifecycle but using it reduces system security. In the scp utility, SCP is replaced by the SSH File Transfer Protocol (SFTP) by default. The OpenSSH suite does not use SCP in RHEL 9. SCP is deprecated in the libssh library. Jira:RHELPLAN-99136 [1] OpenSSL requires padding for RSA encryption in FIPS mode OpenSSL no longer supports RSA encryption without padding in FIPS mode. RSA encryption without padding is uncommon and is rarely used. Note that key encapsulation with RSA (RSASVE) does not use padding but is still supported. Bugzilla:2168665 NTLM and Krb4 are deprecated in Cyrus SASL The NTLM and Kerberos 4 authentication protocols have been deprecated and might be removed in a future major version of RHEL. These protocols are no longer considered secure and have already been removed from upstream implementations. Jira:RHELDOCS-17380 [1] Digest-MD5 in SASL is deprecated The Digest-MD5 authentication mechanism in the Simple Authentication Security Layer (SASL) framework is deprecated, and it might be removed from the cyrus-sasl packages in a future major release. Bugzilla:1995600 [1] OpenSSL deprecates MD2, MD4, MDC2, Whirlpool, Blowfish, CAST, DES, IDEA, RC2, RC4, RC5, SEED, and PBKDF1 The OpenSSL project has deprecated a set of cryptographic algorithms because they are insecure, uncommonly used, or both. Red Hat also discourages the use of those algorithms, and RHEL 9 provides them for migrating encrypted data to use new algorithms. Users must not depend on those algorithms for the security of their systems. The implementations of the following algorithms have been moved to the legacy provider in OpenSSL: MD2, MD4, MDC2, Whirlpool, Blowfish, CAST, DES, IDEA, RC2, RC4, RC5, SEED, and PBKDF1. See the /etc/pki/tls/openssl.cnf configuration file for instructions on how to load the legacy provider and enable support for the deprecated algorithms. Bugzilla:1975836 /etc/system-fips is now deprecated Support for indicating FIPS mode through the /etc/system-fips file has been removed, and the file will not be included in future versions of RHEL. 
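As a minimal sketch (added for convenience, assuming a standard RHEL 9 kernel), scripts that previously tested for the presence of /etc/system-fips can query the kernel directly instead:
# Prints 1 when the kernel is running in FIPS mode, 0 otherwise
cat /proc/sys/crypto/fips_enabled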
To install RHEL in FIPS mode, add the fips=1 parameter to the kernel command line during the system installation. You can check whether RHEL operates in FIPS mode by using the fips-mode-setup --check command. Jira:RHELPLAN-103232 [1] libcrypt.so.1 is now deprecated The libcrypt.so.1 library is now deprecated, and it might be removed in a future version of RHEL. Bugzilla:2034569 10.3. Subscription management The --token option of the subscription-manager command is deprecated The --token=<TOKEN> option of the subscription-manager register command is an authentication method that helps register your system to Red Hat. This option depends on capabilities offered by the entitlement server. The default entitlement server, subscription.rhsm.redhat.com , is planning to turn off this capability. As a consequence, attempting to use subscription-manager register --token=<TOKEN> might fail with the following error message: You can continue registering your system using other authorization methods, such as including paired options --username / --password and --org / --activationkey of the subscription-manager register command. Bugzilla:2163716 10.4. Shells and command-line tools Setting the TMPDIR variable in the ReaR configuration file is deprecated Setting the TMPDIR environment variable in the /etc/rear/local.conf or /etc/rear/site.conf ReaR configuration file), by using a statement such as export TMPDIR=... , does not work and is deprecated. To specify a custom directory for ReaR temporary files, export the variable in the shell environment before executing ReaR. For example, execute the export TMPDIR=... statement and then execute the rear command in the same shell session or script. Jira:RHELDOCS-18049 The dump utility from the dump package has been deprecated The dump utility used for backup of file systems has been deprecated and will not be available in RHEL 9. In RHEL 9, Red Hat recommends using the tar , dd , or bacula , backup utility, based on type of usage, which provides full and safe backups on ext2, ext3, and ext4 file systems. Note that the restore utility from the dump package remains available and supported in RHEL 9 and is available as the restore package. Bugzilla:1997366 [1] The SQLite database backend in Bacula has been deprecated The Bacula backup system supported multiple database backends: PostgreSQL, MySQL, and SQLite. The SQLite backend has been deprecated and will become unsupported in a later release of RHEL. As a replacement, migrate to one of the other backends (PostgreSQL or MySQL) and do not use the SQLite backend in new deployments. Jira:RHEL-6856 10.5. Networking Network teams are deprecated in RHEL 9 The teamd service and the libteam library are deprecated in Red Hat Enterprise Linux 9 and will be removed in the major release. As a replacement, configure a bond instead of a network team. Red Hat focuses its efforts on kernel-based bonding to avoid maintaining two features, bonds and teams, that have similar functions. The bonding code has a high customer adoption, is robust, and has an active community development. As a result, the bonding code receives enhancements and updates. For details about how to migrate a team to a bond, see Migrating a network team configuration to network bond . Bugzilla:1935544 [1] NetworkManager connection profiles in ifcfg format are deprecated In RHEL 9.0 and later, connection profiles in ifcfg format are deprecated. The major RHEL release will remove the support for this format. 
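As a hedged sketch (assuming NetworkManager 1.36 or later; the profile name enp1s0 is only an example), existing ifcfg profiles can typically be converted to the keyfile format with the nmcli utility:
# Convert all connection profiles stored in ifcfg format to keyfile format
nmcli connection migrate
# Or convert a single profile by name
nmcli connection migrate enp1s0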
However, in RHEL 9, NetworkManager still processes and updates existing profiles in this format if you modify them. By default, NetworkManager now stores connection profiles in keyfile format in the /etc/NetworkManager/system-connections/ directory. Unlike the ifcfg format, the keyfile format supports all connection settings that NetworkManager provides. For further details about the keyfile format and how to migrate profiles, see NetworkManager connection profiles in keyfile format . Bugzilla:1894877 [1] The iptables backend in firewalld is deprecated In RHEL 9, the iptables framework is deprecated. As a consequence, the iptables backend and the direct interface in firewalld are also deprecated. Instead of the direct interface you can use the native features in firewalld to configure the required rules. Bugzilla:2089200 The PF_KEYv2 kernel API is deprecated Applications can configure the kernel's IPsec implementation by using the PF_KEYv2 API or the newer netlink API. PF_KEYv2 is not actively maintained upstream and misses important security features, such as modern ciphers, offload, and extended sequence number support. As a result, starting with RHEL 9.3, the PF_KEYv2 API is deprecated and will be removed in a future major RHEL release. If you use this kernel API in your application, migrate it to use the modern netlink API as an alternative. Jira:RHEL-1015 [1] 10.6. Kernel ATM encapsulation is deprecated in RHEL 9 Asynchronous Transfer Mode (ATM) encapsulation enables Layer-2 (Point-to-Point Protocol, Ethernet) or Layer-3 (IP) connectivity for the ATM Adaptation Layer 5 (AAL-5). Red Hat has not been providing support for ATM NIC drivers since RHEL 7. The support for ATM implementation is being dropped in RHEL 9. These protocols are currently used only in chipsets that support ADSL technology, which are being phased out by manufacturers. Therefore, ATM encapsulation is deprecated in Red Hat Enterprise Linux 9. For more information, see PPP Over AAL5 , Multiprotocol Encapsulation over ATM Adaptation Layer 5 , and Classical IP and ARP over ATM . Bugzilla:2058153 The kexec_load system call for kexec-tools has been deprecated The kexec_load system call, which loads the second kernel, will not be supported in future RHEL releases. The kexec_file_load system call replaces kexec_load and is now the default system call on all architectures. For more information, see Is kexec_load supported in RHEL9? . Bugzilla:2113873 [1] Network teams are deprecated in RHEL 9 The teamd service and the libteam library are deprecated in Red Hat Enterprise Linux 9 and will be removed in a future major release. As a replacement, configure a bond instead of a network team. Red Hat focuses its efforts on kernel-based bonding to avoid maintaining two features, bonds and teams, that have similar functions. The bonding code has high customer adoption, is robust, and has active community development. As a result, the bonding code receives enhancements and updates. For details about how to migrate a team to a bond, see Migrating a network team configuration to network bond . Bugzilla:2013884 [1] 10.7. File systems and storage lvm2-activation-generator and its generated services removed in RHEL 9.0 The lvm2-activation-generator program and its generated services lvm2-activation , lvm2-activation-early , and lvm2-activation-net are removed in RHEL 9.0. The lvm.conf event_activation setting, used to activate the services, is no longer functional. The only method for automatically activating volume groups is event-based activation.
Bugzilla:2038183 Persistent Memory Development Kit ( pmdk ) and support library have been deprecated in RHEL 9 pmdk is a collection of libraries and tools for System Administrators and Application Developers to simplify managing and accessing persistent memory devices. pmdk and support library have been deprecated in RHEL 9. This also includes the -debuginfo packages. The following list of binary packages produced by pmdk , including the nvml source package have been deprecated: libpmem libpmem-devel libpmem-debug libpmem2 libpmem2-devel libpmem2-debug libpmemblk libpmemblk-devel libpmemblk-debug libpmemlog libpmemlog-devel libpmemlog-debug libpmemobj libpmemobj-devel libpmemobj-debug libpmempool libpmempool-devel libpmempool-debug pmempool daxio pmreorder pmdk-convert libpmemobj++ libpmemobj++-devel libpmemobj++-doc Jira:RHELDOCS-16432 [1] 10.8. Dynamic programming languages, web and database servers libdb has been deprecated RHEL 8 and RHEL 9 currently provide Berkeley DB ( libdb ) version 5.3.28, which is distributed under the LGPLv2 license. The upstream Berkeley DB version 6 is available under the AGPLv3 license, which is more restrictive. The libdb package is deprecated as of RHEL 9 and might not be available in future major RHEL releases. In addition, cryptographic algorithms have been removed from libdb in RHEL 9 and multiple libdb dependencies have been removed from RHEL 9. Users of libdb are advised to migrate to a different key-value database. For more information, see the Knowledgebase article Available replacements for the deprecated Berkeley DB (libdb) in RHEL . Bugzilla:1927780 [1] , Jira:RHELPLAN-80695, Bugzilla:1974657 10.9. Compilers and development tools Smaller size of keys than 2048 are deprecated by openssl 3.0 in Go's FIPS mode Key sizes smaller than 2048 bits are deprecated by openssl 3.0 and no longer work in Go's FIPS mode. Bugzilla:2111072 Some PKCS1 v1.5 modes are now deprecated in Go's FIPS mode Some PKCS1 v1.5 modes are not approved in FIPS-140-3 for encryption and are disabled. They will no longer work in Go's FIPS mode. Bugzilla:2092016 [1] 10.10. Identity Management SHA-1 in OpenDNSSec is now deprecated OpenDNSSec supports exporting Digital Signatures and authentication records using the SHA-1 algorithm. The use of the SHA-1 algorithm is no longer supported. With the RHEL 9 release, SHA-1 in OpenDNSSec is deprecated and it might be removed in a future minor release. Additionally, OpenDNSSec support is limited to its integration with Red Hat Identity Management. OpenDNSSec is not supported standalone. Bugzilla:1979521 The SSSD implicit files provider domain is disabled by default The SSSD implicit files provider domain, which retrieves user information from local files such as /etc/shadow and group information from /etc/groups , is now disabled by default. To retrieve user and group information from local files with SSSD: Configure SSSD. Choose one of the following options: Explicitly configure a local domain with the id_provider=files option in the sssd.conf configuration file. Enable the files provider by setting enable_files_domain=true in the sssd.conf configuration file. Configure the name services switch. Jira:RHELPLAN-100639 [1] The SSSD files provider has been deprecated The SSSD files provider has been deprecated in Red Hat Enterprise Linux (RHEL) 9. The files provider might be removed from a future release of RHEL. 
Jira:RHELPLAN-139805 [1] The nsslapd-ldapimaprootdn parameter is deprecated In Directory Server, the nsslapd-ldapimaprootdn configuration parameter is used to map a system root entry to a root DN entry. Usually, the nsslapd-ldapimaprootdn parameter has the same value as the nsslapd-rootdn parameter. In addition, changing one attribute but not changing the other leads to a non-functional autobind configuration that breaks dsconf utility and access to the web console. With this update, Directory Server uses only the nsslapd-rootdn parameter to map a system root entry to a root DN entry. As a result, the nsslapd-ldapimaprootdn parameter is deprecated and the root DN change does not break dsconf utility and access to the web console. Bugzilla:2170494 The nsslapd-conntablesize configuration parameter has been removed from 389-ds-base The nsslapd-conntablesize configuration parameter has been removed from the 389-ds-base package in RHEL 9.3. Previously, the nsslapd-conntablesize configuration attribute specified the size of the connection table that managed established connections. With the introduction of the multi-listener feature, which improves the management of established connections, Directory Server now calculates the size of the connection table dynamically. This also resolves issues, when the connection table size was set too low and it affected the number of connections the server was able to support. Starting with RHEL 9.3, use only nsslapd-maxdescriptors and nsslapd-reservedescriptors attributes to manage the number of TCP/IP connections Directory Server can support. Bugzilla:2098236 The SMB1 protocol is deprecated in Samba Starting with Samba 4.11, the insecure Server Message Block version 1 (SMB1) protocol is deprecated and will be removed in a future release. To improve the security, by default, SMB1 is disabled in the Samba server and client utilities. Jira:RHELDOCS-16612 [1] 10.11. Desktop GTK 2 is now deprecated The legacy GTK 2 toolkit and the following, related packages have been deprecated: adwaita-gtk2-theme gnome-common gtk2 gtk2-immodules hexchat Several other packages currently depend on GTK 2. These have been modified so that they no longer depend on the deprecated packages in a future major RHEL release. If you maintain an application that uses GTK 2, Red Hat recommends that you port the application to GTK 4. Jira:RHELPLAN-131882 [1] LibreOffice is deprecated The LibreOffice RPM packages are now deprecated and will be removed in a future major RHEL release. LibreOffice continues to be fully supported through the entire life cycle of RHEL 7, 8, and 9. As a replacement for the RPM packages, Red Hat recommends that you install LibreOffice from either of the following sources provided by The Document Foundation: The official Flatpak package in the Flathub repository: https://flathub.org/apps/org.libreoffice.LibreOffice . The official RPM packages: https://www.libreoffice.org/download/download-libreoffice/ . Jira:RHELDOCS-16300 [1] 10.12. Graphics infrastructures Motif has been deprecated The Motif widget toolkit has been deprecated in RHEL, because development in the upstream Motif community is inactive. The following Motif packages have been deprecated, including their development and debugging variants: motif openmotif openmotif21 openmotif22 Additionally, the motif-static package has been removed. Red Hat recommends using the GTK toolkit as a replacement. GTK is more maintainable and provides new features compared to Motif. Jira:RHELPLAN-98983 [1] 10.13. 
Red Hat Enterprise Linux system roles The network system role displays a deprecation warning when configuring teams on RHEL 9 nodes The network teaming capabilities have been deprecated in RHEL 9. As a result, using the network RHEL system role on a RHEL 8 control node to configure a network team on RHEL 9 nodes, shows a warning about the deprecation. Bugzilla:1999770 10.14. Virtualization SecureBoot image verification using SHA1-based signatures is deprecated Performing SecureBoot image verification using SHA1-based signatures on UEFI (PE/COFF) executables has become deprecated. Instead, Red Hat recommends using signatures based on the SHA2 algorithm, or later. Bugzilla:1935497 [1] Limited support for virtual machine snapshots Creating snapshots of virtual machines (VMs) is currently only supported for VMs not using the UEFI firmware. In addition, during the snapshot operation, the QEMU monitor might become blocked, which negatively impacts the hypervisor performance for certain workloads. Also note that the current mechanism of creating VM snapshots has been deprecated, and Red Hat does not recommend using VM snapshots in a production environment. However, a new VM snapshot mechanism is under development and is planned to be fully implemented in a future minor release of RHEL 9. Jira:RHELDOCS-16948 [1] , Bugzilla:1621944 The virtual floppy driver has become deprecated The isa-fdc driver, which controls virtual floppy disk devices, is now deprecated, and will become unsupported in a future release of RHEL. Therefore, to ensure forward compatibility with migrated virtual machines (VMs), Red Hat discourages using floppy disk devices in VMs hosted on RHEL 9. Bugzilla:1965079 qcow2-v2 image format is deprecated With RHEL 9, the qcow2-v2 format for virtual disk images has become deprecated, and will become unsupported in a future major release of RHEL. In addition, the RHEL 9 Image Builder cannot create disk images in the qcow2-v2 format. Instead of qcow2-v2, Red Hat strongly recommends using qcow2-v3. To convert a qcow2-v2 image to a later format version, use the qemu-img amend command. Bugzilla:1951814 virt-manager has been deprecated The Virtual Machine Manager application, also known as virt-manager , has been deprecated. The RHEL web console, also known as Cockpit , is intended to become its replacement in a subsequent release. It is, therefore, recommended that you use the web console for managing virtualization in a GUI. Note, however, that some features available in virt-manager might not be yet available in the RHEL web console. Jira:RHELPLAN-10304 [1] libvirtd has become deprecated The monolithic libvirt daemon, libvirtd , has been deprecated in RHEL 9, and will be removed in a future major release of RHEL. Note that you can still use libvirtd for managing virtualization on your hypervisor, but Red Hat recommends switching to the newly introduced modular libvirt daemons. For instructions and details, see the RHEL 9 Configuring and Managing Virtualization document. Jira:RHELPLAN-113995 [1] Legacy CPU models are now deprecated A significant number of CPU models have become deprecated and will become unsupported for use in virtual machines (VMs) in a future major release of RHEL. 
The deprecated models are as follows: For Intel: models before Intel Xeon 55xx and 75xx Processor families (also known as Nehalem) For AMD: models before AMD Opteron G4 For IBM Z: models before IBM z14 To check whether your VM is using a deprecated CPU model, use the virsh dominfo utility, and look for a line similar to the following in the Messages section: Bugzilla:2060839 RDMA-based live migration is deprecated With this update, migrating running virtual machines using Remote Direct Memory Access (RDMA) has become deprecated. As a result, it is still possible to use the rdma:// migration URI to request migration over RDMA, but this feature will become unsupported in a future major release of RHEL. Jira:RHELPLAN-153267 [1] The Intel vGPU feature has been removed Previously, as a Technology Preview, it was possible to divide a physical Intel GPU device into multiple virtual devices referred to as mediated devices . These mediated devices could then be assigned to multiple virtual machines (VMs) as virtual GPUs. As a result, these VMs shared the performance of a single physical Intel GPU, however only selected Intel GPUs were compatible with this feature. Since RHEL 9.3, the Intel vGPU feature has been removed entirely. Bugzilla:2206599 [1] 10.15. Containers Running RHEL 9 containers on a RHEL 7 host is not supported Running RHEL 9 containers on a RHEL 7 host is not supported. It might work, but it is not guaranteed. For more information, see Red Hat Enterprise Linux Container Compatibility Matrix . Jira:RHELPLAN-100087 [1] SHA1 hash algorithm within Podman has been deprecated The SHA1 algorithm used to generate the filename of the rootless network namespace is no longer supported in Podman. Therefore, rootless containers started before updating to Podman 4.1.1 or later have to be restarted if they are joined to a network (and not just using slirp4netns ) to ensure they can connect to containers started after the upgrade. Bugzilla:2069279 [1] rhel9/pause has been deprecated The rhel9/pause container image has been deprecated. Bugzilla:2106816 The CNI network stack has been deprecated The Container Network Interface (CNI) network stack is deprecated and will be removed from Podman in a future minor release of RHEL. Previously, containers connected to the single Container Network Interface (CNI) plugin only via DNS. Podman v.4.0 introduced a new Netavark network stack. You can use the Netavark network stack with Podman and other Open Container Initiative (OCI) container management applications. The Netavark network stack for Podman is also compatible with advanced Docker functionalities. Containers in multiple networks can access containers on any of those networks. For more information, see Switching the network stack from CNI to Netavark . Jira:RHELDOCS-16756 [1] The Inkscape and LibreOffice Flatpak images are deprecated The rhel9/inkscape-flatpak and rhel9/libreoffice-flatpak Flatpak images, which are available as Technology Previews, have been deprecated. Red Hat recommends the following alternatives to these images: To replace rhel9/inkscape-flatpak , use the inkscape RPM package. To replace rhel9/libreoffice-flatpak , see the LibreOffice deprecation release note . Jira:RHELDOCS-17102 [1] 10.16. Deprecated packages This section lists packages that have been deprecated and will probably not be included in a future major release of Red Hat Enterprise Linux. For changes to packages between RHEL 8 and RHEL 9, see Changes to packages in the Considerations in adopting RHEL 9 document. 
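As a quick, hedged sketch (the package selection here is illustrative only, not the full list that follows below), you can check whether any of the deprecated packages are installed on a system; rpm -q reports "package ... is not installed" for absent packages:
for pkg in gtk2 motif libdb inkscape libreoffice-core; do
    rpm -q "$pkg"
done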
Important The support status of deprecated packages remains unchanged within RHEL 9. For more information about the length of support, see Red Hat Enterprise Linux Life Cycle and Red Hat Enterprise Linux Application Streams Life Cycle . The following packages have been deprecated in RHEL 9: adwaita-gtk2-theme autocorr-af autocorr-bg autocorr-ca autocorr-cs autocorr-da autocorr-de autocorr-dsb autocorr-el autocorr-en autocorr-es autocorr-fa autocorr-fi autocorr-fr autocorr-ga autocorr-hr autocorr-hsb autocorr-hu autocorr-is autocorr-it autocorr-ja autocorr-ko autocorr-lb autocorr-lt autocorr-mn autocorr-nl autocorr-pl autocorr-pt autocorr-ro autocorr-ru autocorr-sk autocorr-sl autocorr-sr autocorr-sv autocorr-tr autocorr-vi autocorr-vro autocorr-zh cheese cheese-libs clutter clutter-gst3 clutter-gtk cogl daxio dbus-glib dbus-glib-devel enchant enchant-devel eog evolution evolution-bogofilter evolution-devel evolution-help evolution-langpacks evolution-mapi evolution-mapi-langpacks evolution-pst evolution-spamassassin festival festival-data festvox-slt-arctic-hts flite flite-devel gedit gedit-plugin-bookmarks gedit-plugin-bracketcompletion gedit-plugin-codecomment gedit-plugin-colorpicker gedit-plugin-colorschemer gedit-plugin-commander gedit-plugin-drawspaces gedit-plugin-findinfiles gedit-plugin-joinlines gedit-plugin-multiedit gedit-plugin-sessionsaver gedit-plugin-smartspaces gedit-plugin-synctex gedit-plugin-terminal gedit-plugin-textsize gedit-plugin-translate gedit-plugin-wordcompletion gedit-plugins gedit-plugins-data gnome-common gnome-photos gnome-photos-tests gnome-screenshot gnome-themes-extra gtk2 gtk2-devel gtk2-devel-docs gtk2-immodule-xim gtk2-immodules highcontrast-icon-theme inkscape inkscape-docs inkscape-view iptables-devel iptables-libs iptables-nft iptables-nft-services iptables-utils libdb libgdata libgdata-devel libpmem libpmem-debug libpmem-devel libpmem2 libpmem2-debug libpmem2-devel libpmemblk libpmemblk-debug libpmemblk-devel libpmemlog libpmemlog-debug libpmemlog-devel libpmemobj libpmemobj-debug libpmemobj-devel libpmempool libpmempool-debug libpmempool-devel libreoffice libreoffice-base libreoffice-calc libreoffice-core libreoffice-data libreoffice-draw libreoffice-emailmerge libreoffice-filters libreoffice-gdb-debug-support libreoffice-graphicfilter libreoffice-gtk3 libreoffice-help-ar libreoffice-help-bg libreoffice-help-bn libreoffice-help-ca libreoffice-help-cs libreoffice-help-da libreoffice-help-de libreoffice-help-dz libreoffice-help-el libreoffice-help-en libreoffice-help-eo libreoffice-help-es libreoffice-help-et libreoffice-help-eu libreoffice-help-fi libreoffice-help-fr libreoffice-help-gl libreoffice-help-gu libreoffice-help-he libreoffice-help-hi libreoffice-help-hr libreoffice-help-hu libreoffice-help-id libreoffice-help-it libreoffice-help-ja libreoffice-help-ko libreoffice-help-lt libreoffice-help-lv libreoffice-help-nb libreoffice-help-nl libreoffice-help-nn libreoffice-help-pl libreoffice-help-pt-BR libreoffice-help-pt-PT libreoffice-help-ro libreoffice-help-ru libreoffice-help-si libreoffice-help-sk libreoffice-help-sl libreoffice-help-sv libreoffice-help-ta libreoffice-help-tr libreoffice-help-uk libreoffice-help-zh-Hans libreoffice-help-zh-Hant libreoffice-impress libreoffice-langpack-af libreoffice-langpack-ar libreoffice-langpack-as libreoffice-langpack-bg libreoffice-langpack-bn libreoffice-langpack-br libreoffice-langpack-ca libreoffice-langpack-cs libreoffice-langpack-cy libreoffice-langpack-da libreoffice-langpack-de 
libreoffice-langpack-dz libreoffice-langpack-el libreoffice-langpack-en libreoffice-langpack-eo libreoffice-langpack-es libreoffice-langpack-et libreoffice-langpack-eu libreoffice-langpack-fa libreoffice-langpack-fi libreoffice-langpack-fr libreoffice-langpack-fy libreoffice-langpack-ga libreoffice-langpack-gl libreoffice-langpack-gu libreoffice-langpack-he libreoffice-langpack-hi libreoffice-langpack-hr libreoffice-langpack-hu libreoffice-langpack-id libreoffice-langpack-it libreoffice-langpack-ja libreoffice-langpack-kk libreoffice-langpack-kn libreoffice-langpack-ko libreoffice-langpack-lt libreoffice-langpack-lv libreoffice-langpack-mai libreoffice-langpack-ml libreoffice-langpack-mr libreoffice-langpack-nb libreoffice-langpack-nl libreoffice-langpack-nn libreoffice-langpack-nr libreoffice-langpack-nso libreoffice-langpack-or libreoffice-langpack-pa libreoffice-langpack-pl libreoffice-langpack-pt-BR libreoffice-langpack-pt-PT libreoffice-langpack-ro libreoffice-langpack-ru libreoffice-langpack-si libreoffice-langpack-sk libreoffice-langpack-sl libreoffice-langpack-sr libreoffice-langpack-ss libreoffice-langpack-st libreoffice-langpack-sv libreoffice-langpack-ta libreoffice-langpack-te libreoffice-langpack-th libreoffice-langpack-tn libreoffice-langpack-tr libreoffice-langpack-ts libreoffice-langpack-uk libreoffice-langpack-ve libreoffice-langpack-xh libreoffice-langpack-zh-Hans libreoffice-langpack-zh-Hant libreoffice-langpack-zu libreoffice-math libreoffice-ogltrans libreoffice-opensymbol-fonts libreoffice-pdfimport libreoffice-pyuno libreoffice-sdk libreoffice-sdk-doc libreoffice-ure libreoffice-ure-common libreoffice-wiki-publisher libreoffice-writer libreoffice-x11 libreoffice-xsltfilter libreofficekit libsoup libsoup-devel libuser libuser-devel libwpe libwpe-devel mcpp mod_auth_mellon motif motif-devel pmdk-convert pmempool python3-pytz qt5 qt5-assistant qt5-designer qt5-devel qt5-doctools qt5-linguist qt5-qdbusviewer qt5-qt3d qt5-qt3d-devel qt5-qt3d-doc qt5-qt3d-examples qt5-qtbase qt5-qtbase-common qt5-qtbase-devel qt5-qtbase-doc qt5-qtbase-examples qt5-qtbase-gui qt5-qtbase-mysql qt5-qtbase-odbc qt5-qtbase-postgresql qt5-qtbase-private-devel qt5-qtbase-static qt5-qtconnectivity qt5-qtconnectivity-devel qt5-qtconnectivity-doc qt5-qtconnectivity-examples qt5-qtdeclarative qt5-qtdeclarative-devel qt5-qtdeclarative-doc qt5-qtdeclarative-examples qt5-qtdeclarative-static qt5-qtdoc qt5-qtgraphicaleffects qt5-qtgraphicaleffects-doc qt5-qtimageformats qt5-qtimageformats-doc qt5-qtlocation qt5-qtlocation-devel qt5-qtlocation-doc qt5-qtlocation-examples qt5-qtmultimedia qt5-qtmultimedia-devel qt5-qtmultimedia-doc qt5-qtmultimedia-examples qt5-qtquickcontrols qt5-qtquickcontrols-doc qt5-qtquickcontrols-examples qt5-qtquickcontrols2 qt5-qtquickcontrols2-devel qt5-qtquickcontrols2-doc qt5-qtquickcontrols2-examples qt5-qtscript qt5-qtscript-devel qt5-qtscript-doc qt5-qtscript-examples qt5-qtsensors qt5-qtsensors-devel qt5-qtsensors-doc qt5-qtsensors-examples qt5-qtserialbus qt5-qtserialbus-devel qt5-qtserialbus-doc qt5-qtserialbus-examples qt5-qtserialport qt5-qtserialport-devel qt5-qtserialport-doc qt5-qtserialport-examples qt5-qtsvg qt5-qtsvg-devel qt5-qtsvg-doc qt5-qtsvg-examples qt5-qttools qt5-qttools-common qt5-qttools-devel qt5-qttools-doc qt5-qttools-examples qt5-qttools-libs-designer qt5-qttools-libs-designercomponents qt5-qttools-libs-help qt5-qttools-static qt5-qttranslations qt5-qtwayland qt5-qtwayland-devel qt5-qtwayland-doc qt5-qtwayland-examples qt5-qtwebchannel 
qt5-qtwebchannel-devel qt5-qtwebchannel-doc qt5-qtwebchannel-examples qt5-qtwebsockets qt5-qtwebsockets-devel qt5-qtwebsockets-doc qt5-qtwebsockets-examples qt5-qtx11extras qt5-qtx11extras-devel qt5-qtx11extras-doc qt5-qtxmlpatterns qt5-qtxmlpatterns-devel qt5-qtxmlpatterns-doc qt5-qtxmlpatterns-examples qt5-rpm-macros qt5-srpm-macros webkit2gtk3 webkit2gtk3-devel webkit2gtk3-jsc webkit2gtk3-jsc-devel wpebackend-fdo wpebackend-fdo-devel xorg-x11-server-Xorg | [
"update-crypto-policies --set DEFAULT:SHA1",
"Token authentication not supported by the entitlement server",
"[domain/local] id_provider=files",
"[sssd] enable_files_domain = true",
"authselect enable-feature with-files-provider",
"tainted: use of deprecated configuration settings deprecated configuration: CPU model 'i486'"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/9.3_release_notes/deprecated-functionality |
Chapter 33. Managing user passwords in IdM | Chapter 33. Managing user passwords in IdM 33.1. Who can change IdM user passwords and how Regular users without the permission to change other users' passwords can change only their own personal password. The new password must meet the IdM password policies applicable to the groups of which the user is a member. For details on configuring password policies, see Defining IdM password policies . Administrators and users with password change rights can set initial passwords for new users and reset passwords for existing users. These passwords: Do not have to meet the IdM password policies. Expire after the first successful login. When this happens, IdM prompts the user to change the expired password immediately. To disable this behavior, see Enabling password reset in IdM without prompting the user for a password change at the login . Note The LDAP Directory Manager (DM) user can change user passwords using LDAP tools. The new password can override any IdM password policies. Passwords set by DM do not expire after the first login. 33.2. Changing your user password in the IdM Web UI As an Identity Management (IdM) user, you can change your user password in the IdM Web UI. Prerequisites You are logged in to the IdM Web UI. Procedure In the upper right corner, click User name Change password . Figure 33.1. Resetting Password Enter the current and new passwords. 33.3. Resetting another user's password in the IdM Web UI As an administrative user of Identity Management (IdM), you can change passwords for other users in the IdM Web UI. Prerequisites You are logged in to the IdM Web UI as an administrative user. Procedure Select Identity Users . Click the name of the user to edit. Click Actions Reset password . Figure 33.2. Resetting Password Enter the new password, and click Reset Password . Figure 33.3. Confirming New Password 33.4. Resetting the Directory Manager user password If you lose the Identity Management (IdM) Directory Manager password, you can reset it. Prerequisites You have root access to an IdM server. Procedure Generate a new password hash by using the pwdhash command. For example: By specifying the path to the Directory Server configuration, you automatically use the password storage scheme set in the nsslapd-rootpwstoragescheme attribute to encrypt the new password. On every IdM server in your topology, execute the following steps: Stop all IdM services installed on the server: Edit the /etc/dirsrv/IDM-EXAMPLE-COM/dse.ldif file and set the nsslapd-rootpw attribute to the value generated by the pwdhash command: Start all IdM services installed on the server: 33.5. Changing your user password or resetting another user's password in IdM CLI You can change your user password using the Identity Management (IdM) command-line interface (CLI). If you are an administrative user, you can use the CLI to reset another user's password. Prerequisites You have obtained a ticket-granting ticket (TGT) for an IdM user. If you are resetting another user's password, you must have obtained a TGT for an administrative user in IdM. Procedure Enter the ipa user-mod command with the name of the user and the --password option. The command will prompt you for the new password. Note You can also use the ipa passwd idm_user command instead of ipa user-mod . 33.6. 
Enabling password reset in IdM without prompting the user for a password change at the login By default, when an administrator resets another user's password, the password expires after the first successful login. As IdM Directory Manager, you can specify the following privileges for individual IdM administrators: They can perform password change operations without requiring users to change their passwords subsequently on their first login. They can bypass the password policy so that no strength or history enforcement is applied. Warning Bypassing the password policy can be a security threat. Exercise caution when selecting users to whom you grant these additional privileges. Prerequisites You know the Directory Manager password. Procedure On every Identity Management (IdM) server in the domain, make the following changes: Enter the ldapmodify command to modify LDAP entries. Specify the name of the IdM server and the 389 port and press Enter: Enter the Directory Manager password. Enter the distinguished name for the ipa_pwd_extop password synchronization entry and press Enter: Specify the modify type of change and press Enter: Specify what type of modification you want LDAP to execute and to which attribute. Press Enter: Specify the administrative user accounts in the passSyncManagersDNs attribute. The attribute is multi-valued. For example, to grant the admin user the password resetting powers of Directory Manager: Press Enter twice to stop editing the entry. The whole procedure looks as follows: The admin user, listed under passSyncManagerDNs , now has the additional privileges. 33.7. Checking if an IdM user's account is locked As an Identity Management (IdM) administrator, you can check if an IdM user's account is locked. For that, you must compare a user's maximum allowed number of failed login attempts with the number of the user's actual failed logins. Prerequisites You have obtained the ticket-granting ticket (TGT) of an administrative user in IdM. Procedure Display the status of the user account to see the number of failed logins: Display the number of allowed login attempts for a particular user: Log in to the IdM Web UI as IdM administrator. Open the Identity Users Active users tab. Click the user name to open the user settings. In the Password policy section, locate the Max failures item. Compare the number of failed logins as displayed in the output of the ipa user-status command with the Max failures number displayed in the IdM Web UI. If the number of failed logins equals that of maximum allowed login attempts, the user account is locked. Additional resources Unlocking user accounts after password failures in IdM 33.8. Unlocking user accounts after password failures in IdM If a user attempts to log in using an incorrect password a certain number of times, Identity Management (IdM) locks the user account, which prevents the user from logging in. For security reasons, IdM does not display any warning message that the user account has been locked. Instead, the CLI prompt might continue asking the user for a password again and again. IdM automatically unlocks the user account after a specified amount of time has passed. Alternatively, you can unlock the user account manually with the following procedure. Prerequisites You have obtained the ticket-granting ticket of an IdM administrative user. Procedure To unlock a user account, use the ipa user-unlock command. After this, the user can log in again. Additional resources Checking if an IdM user's account is locked 33.9. 
Enabling the tracking of last successful Kerberos authentication for users in IdM For performance reasons, Identity Management (IdM) running in Red Hat Enterprise Linux 8 does not store the time stamp of the last successful Kerberos authentication of a user. As a consequence, certain commands, such as ipa user-status , do not display the time stamp. Prerequisites You have obtained the ticket-granting ticket (TGT) of an administrative user in IdM. You have root access to the IdM server on which you are executing the procedure. Procedure Display the currently enabled password plug-in features: The output shows that the KDC:Disable Last Success plug-in is enabled. The plug-in hides the last successful Kerberos authentication attempt from being visible in the ipa user-status output. Add the --ipaconfigstring= feature parameter for every feature to the ipa config-mod command that is currently enabled, except for KDC:Disable Last Success : This command enables only the AllowNThash plug-in. To enable multiple features, specify the --ipaconfigstring= feature parameter separately for each feature. Restart IdM: | [
"pwdhash -D /etc/dirsrv/slapd-IDM-EXAMPLE-COM password {PBKDF2_SHA256}AAAgABU0bKhyjY53NcxY33ueoPjOUWtl4iyYN5uW",
"ipactl stop",
"nsslapd-rootpw: {PBKDF2_SHA256}AAAgABU0bKhyjY53NcxY33ueoPjOUWtl4iyYN5uW",
"ipactl start",
"ipa user-mod idm_user --password Password: Enter Password again to verify: -------------------- Modified user \"idm_user\" --------------------",
"ldapmodify -x -D \"cn=Directory Manager\" -W -h server.idm.example.com -p 389 Enter LDAP Password:",
"dn: cn=ipa_pwd_extop,cn=plugins,cn=config",
"changetype: modify",
"add: passSyncManagersDNs",
"passSyncManagersDNs: uid=admin,cn=users,cn=accounts,dc=example,dc=com",
"ldapmodify -x -D \"cn=Directory Manager\" -W -h server.idm.example.com -p 389 Enter LDAP Password: dn: cn=ipa_pwd_extop,cn=plugins,cn=config changetype: modify add: passSyncManagersDNs passSyncManagersDNs: uid=admin,cn=users,cn=accounts,dc=example,dc=com",
"ipa user-status example_user ----------------------- Account disabled: False ----------------------- Server: idm.example.com Failed logins: 8 Last successful authentication: N/A Last failed authentication: 20220229080317Z Time now: 2022-02-29T08:04:46Z ---------------------------- Number of entries returned 1 ----------------------------",
"ipa user-unlock idm_user ----------------------- Unlocked account \"idm_user\" -----------------------",
"ipa config-show | grep \"Password plugin features\" Password plugin features: AllowNThash , KDC:Disable Last Success",
"ipa config-mod --ipaconfigstring='AllowNThash'",
"ipactl restart"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/configuring_and_managing_identity_management/managing-user-passwords-in-idm_configuring-and-managing-idm |
Chapter 4. Installing and configuring the logs service | Chapter 4. Installing and configuring the logs service Red Hat OpenStack Platform (RHOSP) writes informational messages to specific log files; you can use these messages for troubleshooting and monitoring system events. The log collection agent Rsyslog collects logs on the client side and sends these logs to an instance of Rsyslog that is running on the server side. The server-side Rsyslog instance redirects log records to Elasticsearch for storage. Note You do not need to attach the individual log files to your support cases manually. The sosreport utility gathers the required logs automatically. 4.1. The centralized log system architecture and components Monitoring tools use a client-server model with the client deployed onto the Red Hat OpenStack Platform (RHOSP) overcloud nodes. The Rsyslog service provides client-side centralized logging (CL). All RHOSP services generate and update log files. These log files record actions, errors, warnings, and other events. In a distributed environment like OpenStack, collecting these logs in a central location simplifies debugging and administration. With centralized logging, there is one central place to view logs across your entire RHOSP environment. These logs come from the operating system, such as syslog and audit log files, infrastructure components, such as RabbitMQ and MariaDB, and OpenStack services such as Identity, Compute, and others. The centralized logging toolchain consists of the following components: Log Collection Agent (Rsyslog) Data Store (ElasticSearch) API/Presentation Layer (Grafana) Note Red Hat OpenStack Platform director does not deploy the server-side components for centralized logging. Red Hat does not support the server-side components, including the Elasticsearch database and Grafana. 4.2. Enabling centralized logging with Elasticsearch To enable centralized logging, you must specify the implementation of the OS::TripleO::Services::Rsyslog composable service. Note The Rsyslog service uses only Elasticsearch as a data store for centralized logging. Prerequisites Elasticsearch is installed on the server side. Procedure Add the file path of the logging environment file to the overcloud deployment command with any other environment files that are relevant to your environment and deploy, as shown in the following example: Replace <existing_overcloud_environment_files> with the list of environment files that are part of your existing deployment. 4.3. Configuring logging features To configure logging features, modify the RsyslogElasticsearchSetting parameter in the logging-environment-rsyslog.yaml file. Procedure Copy the tripleo-heat-templates/environments/logging-environment-rsyslog.yaml file to your home directory. Create entries in the RsyslogElasticsearchSetting parameter to suit your environment. The following snippet is an example configuration of the RsyslogElasticsearchSetting parameter: Additional resources For more information about the configurable parameters, see Section 4.3.1, "Configurable logging parameters" . 4.3.1. Configurable logging parameters This table contains descriptions of logging parameters that you use to configure logging features in Red Hat OpenStack Platform (RHOSP). You can find these parameters in the tripleo-heat-templates/deployment/logging/rsyslog-container-puppet.yaml file. Table 4.1. Configurable logging parameters Parameter Description RsyslogElasticsearchSetting Configuration for rsyslog-elasticsearch plugin. 
For more information, see https://www.rsyslog.com/doc/v8-stable/configuration/modules/omelasticsearch.html . RsyslogElasticsearchTlsCACert Contains the content of the CA cert for the CA that issued the Elasticsearch server cert. RsyslogElasticsearchTlsClientCert Contains the content of the client cert for doing client cert authorization against Elasticsearch. RsyslogElasticsearchTlsClientKey Contains the content of the private key corresponding to the cert RsyslogElasticsearchTlsClientCert . 4.4. Managing logs The containerized service log files are stored in /var/log/containers/<service> , for example /var/log/containers/cinder . Log files from services running inside containers are stored locally. The available logs can be different based on enabled and disabled services. The following example forces the logrotate task to create a new log file when it reaches 10 megabytes and retains the log file for 14 days. To customize log rotation parameters include these parameter_defaults in the environment template, then deploy the overcloud. Verification: On any overcloud node, ensure that logrotate_crond is updated: In the following example, that the nova-compute.log file has been rotated once. The log file rotation process occurs in the logrotate_crond container. The /var/spool/cron/root configuration file is read and the configuration sent to the process. Verification: Ensure that the configuration exists on any controller node: The /var/lib/config-data/puppet-generated/crond/etc/logrotate-crond.conf file is bound to /etc/logrotate-crond.conf inside the logrotate_crond container. Note Old configuration files are left in place for historical reasons, but they are not used. 4.5. Overriding the default path for a log file If you modify the default containers and the modification includes the path to the service log file, you must also modify the default log file path. Every composable service has a <service_name>LoggingSource parameter. For example, for the nova-compute service, the parameter is NovaComputeLoggingSource . Procedure To override the default path for the nova-compute service, add the path to the NovaComputeLoggingSource parameter in your configuration file: Note For each service, define the tag and file . Other values are derived by default. You can modify the format for a specific service. This passes directly to the Rsyslog configuration. The default format for the LoggingDefaultFormat parameter is /(?<time>\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}.\d+) (?<pid>\d+) (?<priority>\S+) (?<message>.*)USD/ Use the following syntax: The following snippet is an example of a more complex transformation: 4.6. Modifying the format of a log record You can modify the format of the start of the log record for a specific service. This passes directly to the Rsyslog configuration. The default format for the Red Hat OpenStack Platform (RHOSP) log record is ('^[0-9]{4}-[0-9]{2}-[0-9]{2} [0-9]{2}:[0-9]{2}:[0-9]{2}(.[0-9]+ [0-9]+)? (DEBUG|INFO|WARNING|ERROR) '). Procedure To add a different regular expression for parsing the start of log records, add startmsg.regex to the configuration: 4.7. Testing the connection between Rsyslog and Elasticsearch On the client side, you can verify communication between Rsyslog and Elasticsearch. Procedure Navigate to the Elasticsearch connection log file, /var/log/rsyslog/omelasticsearch.log in the Rsyslog container or /var/log/containers/rsyslog/omelasticsearch.log on the host. 
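For example, the following minimal sketch inspects the connection log from the host and from inside the container; the container name rsyslog is an assumption and can differ between deployments:
# Host-side path on the overcloud node
sudo tail -n 50 /var/log/containers/rsyslog/omelasticsearch.log
# Container-side path (assumes the Rsyslog container is named rsyslog)
sudo podman exec -it rsyslog tail -n 50 /var/log/rsyslog/omelasticsearch.log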
If this log file does not exist or if the log file exists but does not contain logs, there is no connection problem. If the log file is present and contains logs, Rsyslog has not connected successfully. Note To test the connection from the server side, view the Elasticsearch logs for connection issues. 4.8. Server-side logging If you have an Elasticsearch cluster running, you must configure the RsyslogElasticsearchSetting parameter in the logging-environment-rsyslog.yaml file to connect Rsyslog that is running on overcloud nodes. To configure the RsyslogElasticsearchSetting parameter, see https://www.rsyslog.com/doc/v8-stable/configuration/modules/omelasticsearch.html 4.9. Tracebacks When you encounter an issue and you start troubleshooting, you can use a traceback log to diagnose the issue. In log files, tracebacks usually have several lines of information, all relating to the same issue. Rsyslog provides a regular expression to define how a log record starts. Each log record usually starts with a timestamp and the first line of the traceback is the only line that contains this information. Rsyslog bundles the indented records with the first line and sends them as one log record. For that behaviour configuration option startmsg.regex in <Service>LoggingSource is used. The following regular expression is the default value for all <service>LoggingSource parameters in director: When this default does not match log records of your added or modified LoggingSource , you must change startmsg.regex accordingly. 4.10. Location of log files for OpenStack services Each OpenStack component has a separate logging directory containing files specific to a running service. 4.10.1. Bare Metal Provisioning (ironic) log files Service Service name Log path OpenStack Ironic API openstack-ironic-api.service /var/log/containers/ironic/ironic-api.log OpenStack Ironic Conductor openstack-ironic-conductor.service /var/log/containers/ironic/ironic-conductor.log 4.10.2. Block Storage (cinder) log files Service Service name Log path Block Storage API openstack-cinder-api.service /var/log/containers/cinder-api.log Block Storage Backup openstack-cinder-backup.service /var/log/containers/cinder/backup.log Informational messages The cinder-manage command /var/log/containers/cinder/cinder-manage.log Block Storage Scheduler openstack-cinder-scheduler.service /var/log/containers/cinder/scheduler.log Block Storage Volume openstack-cinder-volume.service /var/log/containers/cinder/volume.log 4.10.3. Compute (nova) log files Service Service name Log path OpenStack Compute API service openstack-nova-api.service /var/log/containers/nova/nova-api.log OpenStack Compute certificate server openstack-nova-cert.service /var/log/containers/nova/nova-cert.log OpenStack Compute service openstack-nova-compute.service /var/log/containers/nova/nova-compute.log OpenStack Compute Conductor service openstack-nova-conductor.service /var/log/containers/nova/nova-conductor.log OpenStack Compute VNC console authentication server openstack-nova-consoleauth.service /var/log/containers/nova/nova-consoleauth.log Informational messages nova-manage command /var/log/containers/nova/nova-manage.log OpenStack Compute NoVNC Proxy service openstack-nova-novncproxy.service /var/log/containers/nova/nova-novncproxy.log OpenStack Compute Scheduler service openstack-nova-scheduler.service /var/log/containers/nova/nova-scheduler.log 4.10.4. 
Dashboard (horizon) log files Service Service name Log path Log of certain user interactions Dashboard interface /var/log/containers/horizon/horizon.log The Apache HTTP server uses several additional log files for the Dashboard web interface, which you can access by using a web browser or command-line client, for example, keystone and nova. The log files in the following table can be helpful in tracking the use of the Dashboard and diagnosing faults: Purpose Log path All processed HTTP requests /var/log/containers/httpd/horizon_access.log HTTP errors /var/log/containers/httpd/horizon_error.log Admin-role API requests /var/log/containers/httpd/keystone_wsgi_admin_access.log Admin-role API errors /var/log/containers/httpd/keystone_wsgi_admin_error.log Member-role API requests /var/log/containers/httpd/keystone_wsgi_main_access.log Member-role API errors /var/log/containers/httpd/keystone_wsgi_main_error.log Note There is also /var/log/containers/httpd/default_error.log , which stores errors reported by other web services that are running on the same host. 4.10.5. Identity Service (keystone) log files Service Service name Log Path OpenStack Identity Service openstack-keystone.service /var/log/containers/keystone/keystone.log 4.10.6. Image Service (glance) log files Service Service name Log path OpenStack Image Service API server openstack-glance-api.service /var/log/containers/glance/api.log OpenStack Image Service Registry server openstack-glance-registry.service /var/log/containers/glance/registry.log 4.10.7. Networking (neutron) log files Service Service name Log path OpenStack Neutron DHCP Agent neutron-dhcp-agent.service /var/log/containers/neutron/dhcp-agent.log OpenStack Networking Layer 3 Agent neutron-l3-agent.service /var/log/containers/neutron/l3-agent.log Metadata agent service neutron-metadata-agent.service /var/log/containers/neutron/metadata-agent.log Metadata namespace proxy n/a /var/log/containers/neutron/neutron-ns-metadata-proxy- UUID .log Open vSwitch agent neutron-openvswitch-agent.service /var/log/containers/neutron/openvswitch-agent.log OpenStack Networking service neutron-server.service /var/log/containers/neutron/server.log 4.10.8. Object Storage (swift) log files OpenStack Object Storage sends logs to the system logging facility only. Note By default, all Object Storage log files go to /var/log/containers/swift/swift.log , using the local0, local1, and local2 syslog facilities. The log messages of Object Storage are classified into two broad categories: those by REST API services and those by background daemons. The API service messages contain one line per API request, in a manner similar to popular HTTP servers; both the frontend (Proxy) and backend (Account, Container, Object) services post such messages. The daemon messages are less structured and typically contain human-readable information about daemons performing their periodic tasks. However, regardless of which part of Object Storage produces the message, the source identity is always at the beginning of the line. Here is an example of a proxy message: Here is an example of ad-hoc messages from background daemons: 4.10.9. Orchestration (heat) log files Service Service name Log path OpenStack Heat API Service openstack-heat-api.service /var/log/containers/heat/heat-api.log OpenStack Heat Engine Service openstack-heat-engine.service /var/log/containers/heat/heat-engine.log Orchestration service events n/a /var/log/containers/heat/heat-manage.log 4.10.10. 
Shared Filesystem Service (manila) log files Service Service name Log path OpenStack Manila API Server openstack-manila-api.service /var/log/containers/manila/api.log OpenStack Manila Scheduler openstack-manila-scheduler.service /var/log/containers/manila/scheduler.log OpenStack Manila Share Service openstack-manila-share.service /var/log/containers/manila/share.log Note Some information from the Manila Python library can also be logged in /var/log/containers/manila/manila-manage.log . 4.10.11. Telemetry (ceilometer) log files Service Service name Log path OpenStack ceilometer notification agent ceilometer_agent_notification /var/log/containers/ceilometer/agent-notification.log OpenStack ceilometer central agent ceilometer_agent_central /var/log/containers/ceilometer/central.log OpenStack ceilometer collection openstack-ceilometer-collector.service /var/log/containers/ceilometer/collector.log OpenStack ceilometer compute agent ceilometer_agent_compute /var/log/containers/ceilometer/compute.log 4.10.12. Log files for supporting services The following services are used by the core OpenStack components and have their own log directories and files. Service Service name Log path Message broker (RabbitMQ) rabbitmq-server.service /var/log/rabbitmq/rabbit@ short_hostname .log /var/log/rabbitmq/rabbit@ short_hostname -sasl.log (for Simple Authentication and Security Layer related log messages) Database server (MariaDB) mariadb.service /var/log/mariadb/mariadb.log Virtual network switch (Open vSwitch) openvswitch-nonetwork.service /var/log/openvswitch/ovsdb-server.log /var/log/openvswitch/ovs-vswitchd.log 4.10.13. aodh (alarming service) log files Service Container name Log path Alarming API aodh_api /var/log/containers/httpd/aodh-api/aodh_wsgi_access.log Alarm evaluator log aodh_evaluator /var/log/containers/aodh/aodh-evaluator.log Alarm listener aodh_listener /var/log/containers/aodh/aodh-listener.log Alarm notification aodh_notifier /var/log/containers/aodh/aodh-notifier.log 4.10.14. gnocchi (metric storage) log files Service Container name Log path Gnocchi API gnocchi_api /var/log/containers/httpd/gnocchi-api/gnocchi_wsgi_access.log Gnocchi metricd gnocchi_metricd /var/log/containers/gnocchi/gnocchi-metricd.log Gnocchi statsd gnocchi_statsd /var/log/containers/gnocchi/gnocchi-statsd.log | [
"openstack overcloud deploy <existing_overcloud_environment_files> -e /usr/share/openstack-tripleo-heat-templates/environments/logging-environment-rsyslog.yaml",
"parameter_defaults: RsyslogElasticsearchSetting: uid: \"elastic\" pwd: \"yourownpassword\" skipverifyhost: \"on\" allowunsignedcerts: \"on\" server: \"https://log-store-service-telemetry.apps.stfcloudops1.lab.upshift.rdu2.redhat.com\" serverport: 443",
"parameter_defaults LogrotateRotate: '14' LogrotatePurgeAfterDays: '14' LogrotateMaxsize: '10M'",
"openstack overcloud deploy --timeout 100 --templates /usr/share/openstack-tripleo-heat-templates ... -e /home/stack/templates/rotate.yaml --log-file overcloud_deployment_90.log",
"podman exec -it logrotate_crond cat /etc/logrotate-crond.conf /var/log/containers/*/*log /var/log/containers/*/*/*log /var/log/containers/*/*err { daily rotate 14 maxage 14 # minsize 1 is required for GDPR compliance, all files in # /var/log/containers not managed with logrotate will be purged! minsize 1 # Do not use size as it's not compatible with time-based rotation rules # required for GDPR compliance. maxsize 10M missingok notifempty copytruncate delaycompress compress }",
"ls -lah /var/log/containers/nova/ total 48K drwxr-x---. 2 42436 42436 79 May 12 09:01 . drwxr-x---. 7 root root 82 Jan 21 2021 .. -rw-r--r--. 1 42436 42436 12K May 12 14:00 nova-compute.log -rw-r--r--. 1 42436 42436 33K May 12 09:01 nova-compute.log.1 -rw-r--r--. 1 42436 42436 0 Jan 21 2021 nova-manage.log",
"podman exec -it logrotate_crond /bin/sh ()[root@9410925fded9 /]USD cat /var/spool/cron/root HEADER: This file was autogenerated at 2021-01-21 16:47:27 +0000 by puppet. HEADER: While it can still be managed manually, it is definitely not recommended. HEADER: Note particularly that the comments starting with 'Puppet Name' should HEADER: not be deleted, as doing so could cause duplicate cron jobs. Puppet Name: logrotate-crond PATH=/bin:/usr/bin:/usr/sbin SHELL=/bin/sh 0 * * * * sleep `expr USD{RANDOM} \\% 90`; /usr/sbin/logrotate -s /var/lib/logrotate/logrotate-crond.status /etc/logrotate-crond.conf 2>&1|logger -t logrotate-crond",
"NovaComputeLoggingSource: tag: openstack.nova.compute file: /some/other/path/nova-compute.log",
"<service_name>LoggingSource: tag: <service_name>.tag path: <service_name>.path format: <service_name>.format",
"ServiceLoggingSource: tag: openstack.Service path: /var/log/containers/service/service.log format: multiline format_firstline: '/^\\d{4}-\\d{2}-\\d{2} \\d{2}:\\d{2}:\\d{2}.\\d{3} \\d+ \\S+ \\S+ \\[(req-\\S+ \\S+ \\S+ \\S+ \\S+ \\S+|-)\\]/' format1: '/^(?<Timestamp>\\S+ \\S+) (?<Pid>\\d+) (?<log_level>\\S+) (?<python_module>\\S+) (\\[(req-(?<request_id>\\S+) (?<user_id>\\S+) (?<tenant_id>\\S+) (?<domain_id>\\S+) (?<user_domain>\\S+) (?<project_domain>\\S+)|-)\\])? (?<Payload>.*)?USD/'",
"NovaComputeLoggingSource: tag: openstack.nova.compute file: /some/other/path/nova-compute.log startmsg.regex: \"^[0-9]{4}-[0-9]{2}-[0-9]{2} [0-9]{2}:[0-9]{2}:[0-9]{2}(.[0-9]+ \\\\+[0-9]+)? [A-Z]+ \\\\([a-z]+\\\\)",
"startmsg.regex='^[0-9]{4}-[0-9]{2}-[0-9]{2} [0-9]{2}:[0-9]{2}:[0-9]{2}(.[0-9]+ [0-9]+)? (DEBUG|INFO|WARNING|ERROR) '",
"Apr 20 15:20:34 rhev-a24c-01 proxy-server: 127.0.0.1 127.0.0.1 20/Apr/2015/19/20/34 GET /v1/AUTH_zaitcev%3Fformat%3Djson%26marker%3Dtestcont HTTP/1.0 200 - python-swiftclient-2.1.0 AUTH_tk737d6... - 2 - txc454fa8ea4844d909820a-0055355182 - 0.0162 - - 1429557634.806570053 1429557634.822791100",
"Apr 27 17:08:15 rhev-a24c-02 object-auditor: Object audit (ZBF). Since Mon Apr 27 21:08:15 2015: Locally: 1 passed, 0 quarantined, 0 errors files/sec: 4.34 , bytes/sec: 0.00, Total time: 0.23, Auditing time: 0.00, Rate: 0.00 Apr 27 17:08:16 rhev-a24c-02 object-auditor: Object audit (ZBF) \"forever\" mode completed: 0.56s. Total quarantined: 0, Total errors: 0, Total files/sec: 14.31, Total bytes/sec: 0.00, Auditing time: 0.02, Rate: 0.04 Apr 27 17:08:16 rhev-a24c-02 account-replicator: Beginning replication run Apr 27 17:08:16 rhev-a24c-02 account-replicator: Replication run OVER Apr 27 17:08:16 rhev-a24c-02 account-replicator: Attempted to replicate 5 dbs in 0.12589 seconds (39.71876/s) Apr 27 17:08:16 rhev-a24c-02 account-replicator: Removed 0 dbs Apr 27 17:08:16 rhev-a24c-02 account-replicator: 10 successes, 0 failures"
] | https://docs.redhat.com/en/documentation/red_hat_openstack_platform/17.0/html/operational_measurements/installing-and-configuring-the-logs-service_assembly |
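Because the Object Storage proxy log format shown above places the source identity near the start of each line and the HTTP status code in a fixed position, simple shell filtering is often enough for a first look. The following is a minimal sketch (not part of the product documentation) that counts proxy-server requests per status code; it assumes the default consolidated log path and the syslog prefix shown in the example, where the status code is the 12th whitespace-separated field:
# Sketch: count Swift proxy-server requests per HTTP status code.
# Assumes the default log path and the syslog prefix shown in the example above.
sudo awk '/ proxy-server: /{codes[$12]++} END{for (c in codes) print c, codes[c]}' /var/log/containers/swift/swift.log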
13.2.28. Managing the SSSD Cache | 13.2.28. Managing the SSSD Cache SSSD can define multiple domains of the same type and different types of domain. SSSD maintains a separate database file for each domain, meaning each domain has its own cache. These cache files are stored in the /var/lib/sss/db/ directory. Purging the SSSD Cache As LDAP updates are made to the identity provider for the domains, it can be necessary to clear the cache to reload the new information quickly. The cache purge utility, sss_cache , invalidates records in the SSSD cache for a user, a domain, or a group. Invalidating the current records forces the cache to retrieve the updated records from the identity provider, so changes can be realized quickly. Note This utility is included with SSSD in the sssd package. Most commonly, this is used to clear the cache and update all records: The sss_cache command can also clear all cached entries for a particular domain: If the administrator knows that a specific record (user, group, or netgroup) has been updated, then sss_cache can purge the records for that specific account and leave the rest of the cache intact: Table 13.12. Common sss_cache Options Short Argument Long Argument Description -E --everything Invalidates all cached entries with the exception of sudo rules. -d name --domain name Invalidates cache entries for users, groups, and other entries only within the specified domain. -G --groups Invalidates all group records. If -g is also used, -G takes precedence and -g is ignored. -g name --group name Invalidates the cache entry for the specified group. -N --netgroups Invalidates cache entries for all netgroup cache records. If -n is also used, -N takes precedence and -n is ignored. -n name --netgroup name Invalidates the cache entry for the specified netgroup. -U --users Invalidates cache entries for all user records. If the -u option is also used, -U takes precedence and -u is ignored. -u name --user name Invalidates the cache entry for the specified user. Deleting Domain Cache Files All cache files are named for the domain. For example, for a domain named exampleldap , the cache file is named cache_exampleldap.ldb . Be careful when you delete a cache file. This operation has significant effects: Deleting the cache file deletes all user data, both identification and cached credentials. Consequently, do not delete a cache file unless the system is online and can authenticate with a user name against the domain's servers. Without a credentials cache, offline authentication will fail. If the configuration is changed to reference a different identity provider, SSSD will recognize users from both providers until the cached entries from the original provider time out. It is possible to avoid this by purging the cache, but the better option is to use a different domain name for the new provider. When SSSD is restarted, it creates a new cache file with the new name and the old file is ignored. | [
"~]# sss_cache -E",
"~]# sss_cache -Ed LDAP1",
"~]# sss_cache -u jsmith"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/deployment_guide/sssd-cache |
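As a minimal sketch of the domain cache-file removal described above (assuming the example domain name exampleldap, and that the system is online and can authenticate against the domain's servers), you would stop SSSD before deleting the file and start it again afterwards:
~]# service sssd stop
~]# rm /var/lib/sss/db/cache_exampleldap.ldb
~]# service sssd start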
Chapter 5. Configuring the JAVA_HOME environment variable on RHEL | Chapter 5. Configuring the JAVA_HOME environment variable on RHEL Some applications require you to set the JAVA_HOME environment variable so that they can find the Red Hat build of OpenJDK installation. Prerequisites You know where you installed Red Hat build of OpenJDK on your system. For example, /opt/jdk/11 . Procedure Set the value of JAVA_HOME . Verify that JAVA_HOME is set correctly. Note You can make the value of JAVA_HOME persistent by exporting the environment variable in ~/.bashrc for single users or /etc/bashrc for system-wide settings. Persistent means that if you close your terminal or reboot your computer, you do not need to reset a value for the JAVA_HOME environment variable. The following example demonstrates using a text editor to enter commands for exporting JAVA_HOME in ~/.bashrc for a single user: Additional resources Be aware of the exact meaning of JAVA_HOME . For more information, see Changes/Decouple system java setting from java command setting . | [
"export JAVA_HOME=/opt/jdk/11",
"printenv | grep JAVA_HOME JAVA_HOME=/opt/jdk/11",
"> vi ~/.bash_profile export JAVA_HOME=/opt/jdk/11 export PATH=\"USDJAVA_HOME/bin:USDPATH\""
] | https://docs.redhat.com/en/documentation/red_hat_build_of_openjdk/8/html/configuring_red_hat_build_of_openjdk_8_for_rhel/configuring-javahome-environment-variable-on-rhel |
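As a minimal sketch of the persistent, single-user configuration described above (assuming the same /opt/jdk/11 example path), you can append the export lines to ~/.bashrc without opening an editor and then reload the shell configuration:
# Sketch: persist JAVA_HOME for the current user using the example path.
echo 'export JAVA_HOME=/opt/jdk/11' >> ~/.bashrc
echo 'export PATH="$JAVA_HOME/bin:$PATH"' >> ~/.bashrc
source ~/.bashrc
printenv | grep JAVA_HOME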
7.129. libwacom | 7.129. libwacom 7.129.1. RHEA-2013:0333 - libwacom enhancement update Updated libwacom packages that add one enhancement are now available for Red Hat Enterprise Linux 6. The libwacom packages contain a library that provides access to a tablet model database. The libwacom packages expose the contents of this database to applications, allowing for tablet-specific user interfaces. The libwacom packages allow the GNOME tools to automatically configure screen mappings and calibrations, and provide device-specific configurations. Enhancement BZ#857073 Previously, the Wacom Cintiq 22HD graphics tablet was not supported by the libwacom library. Consequently, this specific type of graphics tablet was not recognized by the system. This update adds the support for Wacom Cintiq 22HD, which can be now used without complications. All users of libwacom are advised to upgrade to these updated packages, which add this enhancement. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.4_technical_notes/libwacom |
Chapter 11. Expanding the cluster You can expand a cluster installed with the Assisted Installer by adding hosts using the user interface or the API. Additional resources API connectivity failure when adding nodes to a cluster Configuring multi-architecture compute machines on an OpenShift cluster 11.1. Checking for multi-architecture support You must check that your cluster can support multiple architectures before you add a node with a different architecture. Procedure Log in to the cluster using the CLI. Check that your cluster uses the architecture payload by running the following command: USD oc adm release info -o json | jq .metadata.metadata Verification If you see the following output, your cluster supports multiple architectures: { "release.openshift.io/architecture": "multi" } 11.2. Installing a multi-architecture cluster A cluster with an x86_64 control plane can support worker nodes that have two different CPU architectures. Mixed-architecture clusters combine the strengths of each architecture and support a variety of workloads. For example, you can add arm64, IBM Power(R), or IBM Z(R) worker nodes to an existing OpenShift Container Platform cluster with an x86_64 control plane. The main steps of the installation are as follows: Create and register a multi-architecture cluster. Create an x86_64 infrastructure environment, download the ISO discovery image for x86_64, and add the control plane. The control plane must have the x86_64 architecture. Create an arm64, IBM Power(R), or IBM Z(R) infrastructure environment, download the ISO discovery images for arm64, IBM Power(R), or IBM Z(R), and add the worker nodes. Supported platforms The following table lists the platforms that support a mixed-architecture cluster for each OpenShift Container Platform version. Use the appropriate platforms for the version you are installing. OpenShift Container Platform version Supported platforms Day 1 control plane architecture Day 2 node architecture 4.12.0 Microsoft Azure (TP) x86_64 arm64 4.13.0 Microsoft Azure Amazon Web Services Bare metal (TP) x86_64 x86_64 x86_64 arm64 arm64 arm64 4.14.0 Microsoft Azure Amazon Web Services Bare metal Google Cloud Platform IBM Power(R) IBM Z(R) x86_64 x86_64 x86_64 x86_64 x86_64 x86_64 arm64 arm64 arm64 arm64 ppc64le s390x Important Technology Preview (TP) features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . Main steps Start the procedure for installing OpenShift Container Platform using the API. For details, see Installing with the Assisted Installer API in the Additional Resources section.
When you reach the "Registering a new cluster" step of the installation, register the cluster as a multi-architecture cluster: USD curl -s -X POST https://api.openshift.com/api/assisted-install/v2/clusters \ -H "Authorization: Bearer USD{API_TOKEN}" \ -H "Content-Type: application/json" \ -d "USD(jq --null-input \ --slurpfile pull_secret ~/Downloads/pull-secret.txt ' { "name": "testcluster", "openshift_version": "<version-number>-multi", 1 "cpu_architecture" : "multi" 2 "control_plane_count": "<number>" 3 "base_dns_domain": "example.com", "pull_secret": USDpull_secret[0] | tojson } ')" | jq '.id' Note 1 Use the multi- option for the OpenShift Container Platform version number; for example, "4.12-multi" . 2 Set the CPU architecture to "multi" . 3 Set the number of control plane nodes to "3", "4", or "5". Single-node OpenShift is not supported for a multi-architecture cluster. When you reach the "Registering a new infrastructure environment" step of the installation, set cpu_architecture to x86_64: USD curl https://api.openshift.com/api/assisted-install/v2/infra-envs \ -H "Authorization: Bearer USD{API_TOKEN}" \ -H "Content-Type: application/json" \ -d "USD(jq --null-input \ --slurpfile pull_secret ~/Downloads/pull-secret.txt \ --arg cluster_id USD{CLUSTER_ID} ' { "name": "testcluster-infra-env", "image_type":"full-iso", "cluster_id": USDcluster_id, "cpu_architecture" : "x86_64" "pull_secret": USDpull_secret[0] | tojson } ')" | jq '.id' When you reach the "Adding hosts" step of the installation, set host_role to master : Note For more information, see Assigning Roles to Hosts in Additional Resources . USD curl https://api.openshift.com/api/assisted-install/v2/infra-envs/USD{INFRA_ENV_ID}/hosts/<host_id> \ -X PATCH \ -H "Authorization: Bearer USD{API_TOKEN}" \ -H "Content-Type: application/json" \ -d ' { "host_role":"master" } ' | jq Download the discovery image for the x86_64 architecture. Boot the x86_64 architecture hosts using the generated discovery image. Start the installation and wait for the cluster to be fully installed. Repeat the "Registering a new infrastructure environment" step of the installation. This time, set cpu_architecture to one of the following: ppc64le (for IBM Power(R)), s390x (for IBM Z(R)), or arm64 . For example: USD curl -s -X POST https://api.openshift.com/api/assisted-install/v2/clusters \ -H "Authorization: Bearer USD{API_TOKEN}" \ -H "Content-Type: application/json" \ -d "USD(jq --null-input \ --slurpfile pull_secret ~/Downloads/pull-secret.txt ' { "name": "testcluster", "openshift_version": "4.12", "cpu_architecture" : "arm64" "control_plane_count": "3" "base_dns_domain": "example.com", "pull_secret": USDpull_secret[0] | tojson } ')" | jq '.id' Repeat the "Adding hosts" step of the installation. This time, set host_role to worker : Note For more details, see Assigning Roles to Hosts in Additional Resources . USD curl https://api.openshift.com/api/assisted-install/v2/infra-envs/USD{INFRA_ENV_ID}/hosts/<host_id> \ -X PATCH \ -H "Authorization: Bearer USD{API_TOKEN}" \ -H "Content-Type: application/json" \ -d ' { "host_role":"worker" } ' | jq Download the discovery image for the arm64, ppc64le or s390x architecture. Boot the architecture hosts using the generated discovery image. Start the installation and wait for the cluster to be fully installed. Verification View the arm64, ppc64le, or s390x worker nodes in the cluster by running the following command: USD oc get nodes -o wide 11.3.
Adding hosts with the web console You can add hosts to clusters that were created using the Assisted Installer . Important Adding hosts to Assisted Installer clusters is only supported for clusters running OpenShift Container Platform version 4.11 and later. When adding a control plane node during Day 2 operations, ensure that the new node shares the same subnet as the Day 1 network. The subnet is specified in the machineNetwork field of the install-config.yaml file. This requirement applies to cluster-managed networks such as bare metal or vSphere, and not to user-managed networks. Procedure Log in to OpenShift Cluster Manager and click the cluster that you want to expand. Click Add hosts and download the discovery ISO for the new host, adding an SSH public key and configuring cluster-wide proxy settings as needed. Optional: Modify ignition files as needed. Boot the target host using the discovery ISO, and wait for the host to be discovered in the console. Select the host role. It can be either a worker or a control plane host. Start the installation. As the installation proceeds, the installation generates pending certificate signing requests (CSRs) for the host. When prompted, approve the pending CSRs to complete the installation. When the host is successfully installed, it is listed as a host in the cluster web console. Important New hosts will be encrypted using the same method as the original cluster. 11.4. Adding hosts with the API You can add hosts to clusters using the Assisted Installer REST API. Prerequisites Install the Red Hat OpenShift Cluster Manager CLI ( ocm ). Log in to Red Hat OpenShift Cluster Manager as a user with cluster creation privileges. Install jq . Ensure that all the required DNS records exist for the cluster that you want to expand. Important When adding a control plane node during Day 2 operations, ensure that the new node shares the same subnet as the Day 1 network. The subnet is specified in the machineNetwork field of the install-config.yaml file. This requirement applies to cluster-managed networks such as bare metal or vSphere, and not to user-managed networks. Procedure Authenticate against the Assisted Installer REST API and generate an API token for your session. The generated token is valid for 15 minutes only. Set the USDAPI_URL variable by running the following command: USD export API_URL=<api_url> 1 1 Replace <api_url> with the Assisted Installer API URL, for example, https://api.openshift.com Import the cluster by running the following commands: Set the USDCLUSTER_ID variable: Log in to the cluster and run the following command: USD export CLUSTER_ID=USD(oc get clusterversion -o jsonpath='{.items[].spec.clusterID}') Display the USDCLUSTER_ID variable output: USD echo USD{CLUSTER_ID} Set the USDCLUSTER_REQUEST variable that is used to import the cluster: USD export CLUSTER_REQUEST=USD(jq --null-input --arg openshift_cluster_id "USDCLUSTER_ID" \ '{ "api_vip_dnsname": "<api_vip>", 1 "openshift_cluster_id": "<cluster_id>", 2 "name": "<openshift_cluster_name>" 3 }') 1 Replace <api_vip> with the hostname for the cluster's API server. This can be the DNS domain for the API server or the IP address of the single node which the host can reach. For example, api.compute-1.example.com . 2 Replace <cluster_id> with the USDCLUSTER_ID output from the substep. 3 Replace <openshift_cluster_name> with the plain text name for the cluster. The cluster name should match the cluster name that was set during the Day 1 cluster installation. 
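Before posting the request, you can optionally confirm that the body you just built is valid JSON and contains the expected values. This is a sketch that relies only on the jq prerequisite listed above; it is not part of the documented procedure:
# Optional sketch: pretty-print the request body from the previous step to
# confirm the API VIP, cluster ID, and cluster name before importing.
echo "${CLUSTER_REQUEST}" | jq .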
Import the cluster and set the USDCLUSTER_ID variable. Run the following command: USD CLUSTER_ID=USD(curl "USDAPI_URL/api/assisted-install/v2/clusters/import" -H "Authorization: Bearer USD{API_TOKEN}" -H 'accept: application/json' -H 'Content-Type: application/json' \ -d "USDCLUSTER_REQUEST" | tee /dev/stderr | jq -r '.id') Generate the InfraEnv resource for the cluster and set the USDINFRA_ENV_ID variable by running the following commands: Download the pull secret file from Red Hat OpenShift Cluster Manager at console.redhat.com . Set the USDINFRA_ENV_REQUEST variable: export INFRA_ENV_REQUEST=USD(jq --null-input \ --slurpfile pull_secret <path_to_pull_secret_file> \ 1 --arg ssh_pub_key "USD(cat <path_to_ssh_pub_key>)" \ 2 --arg cluster_id "USDCLUSTER_ID" '{ "name": "<infraenv_name>", 3 "pull_secret": USDpull_secret[0] | tojson, "cluster_id": USDcluster_id, "ssh_authorized_key": USDssh_pub_key, "image_type": "<iso_image_type>" 4 }') 1 Replace <path_to_pull_secret_file> with the path to the local file containing the downloaded pull secret from Red Hat OpenShift Cluster Manager at console.redhat.com . 2 Replace <path_to_ssh_pub_key> with the path to the public SSH key required to access the host. If you do not set this value, you cannot access the host while in discovery mode. 3 Replace <infraenv_name> with the plain text name for the InfraEnv resource. 4 Replace <iso_image_type> with the ISO image type, either full-iso or minimal-iso . Post the USDINFRA_ENV_REQUEST to the /v2/infra-envs API and set the USDINFRA_ENV_ID variable: USD INFRA_ENV_ID=USD(curl "USDAPI_URL/api/assisted-install/v2/infra-envs" -H "Authorization: Bearer USD{API_TOKEN}" -H 'accept: application/json' -H 'Content-Type: application/json' -d "USDINFRA_ENV_REQUEST" | tee /dev/stderr | jq -r '.id') Get the URL of the discovery ISO for the cluster host by running the following command: USD curl -s "USDAPI_URL/api/assisted-install/v2/infra-envs/USDINFRA_ENV_ID" -H "Authorization: Bearer USD{API_TOKEN}" | jq -r '.download_url' Example output https://api.openshift.com/api/assisted-images/images/41b91e72-c33e-42ee-b80f-b5c5bbf6431a?arch=x86_64&image_token=eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJleHAiOjE2NTYwMjYzNzEsInN1YiI6IjQxYjkxZTcyLWMzM2UtNDJlZS1iODBmLWI1YzViYmY2NDMxYSJ9.1EX_VGaMNejMhrAvVRBS7PDPIQtbOOc8LtG8OukE1a4&type=minimal-iso&version=4.12 Download the ISO: USD curl -L -s '<iso_url>' --output rhcos-live-minimal.iso 1 1 Replace <iso_url> with the URL for the ISO from the step. Boot the new worker host from the downloaded rhcos-live-minimal.iso . Get the list of hosts in the cluster that are not installed. Keep running the following command until the new host shows up: USD curl -s "USDAPI_URL/api/assisted-install/v2/clusters/USDCLUSTER_ID" -H "Authorization: Bearer USD{API_TOKEN}" | jq -r '.hosts[] | select(.status != "installed").id' Example output 2294ba03-c264-4f11-ac08-2f1bb2f8c296 Set the USDHOST_ID variable for the new host, for example: USD HOST_ID=<host_id> 1 1 Replace <host_id> with the host ID from the step. Check that the host is ready to install by running the following command: Note Ensure that you copy the entire command including the complete jq expression. 
USD curl -s USDAPI_URL/api/assisted-install/v2/clusters/USDCLUSTER_ID -H "Authorization: Bearer USD{API_TOKEN}" | jq ' def host_name(USDhost): if (.suggested_hostname // "") == "" then if (.inventory // "") == "" then "Unknown hostname, please wait" else .inventory | fromjson | .hostname end else .suggested_hostname end; def is_notable(USDvalidation): ["failure", "pending", "error"] | any(. == USDvalidation.status); def notable_validations(USDvalidations_info): [ USDvalidations_info // "{}" | fromjson | to_entries[].value[] | select(is_notable(.)) ]; { "Hosts validations": { "Hosts": [ .hosts[] | select(.status != "installed") | { "id": .id, "name": host_name(.), "status": .status, "notable_validations": notable_validations(.validations_info) } ] }, "Cluster validations info": { "notable_validations": notable_validations(.validations_info) } } ' -r Example output { "Hosts validations": { "Hosts": [ { "id": "97ec378c-3568-460c-bc22-df54534ff08f", "name": "localhost.localdomain", "status": "insufficient", "notable_validations": [ { "id": "ntp-synced", "status": "failure", "message": "Host couldn't synchronize with any NTP server" }, { "id": "api-domain-name-resolved-correctly", "status": "error", "message": "Parse error for domain name resolutions result" }, { "id": "api-int-domain-name-resolved-correctly", "status": "error", "message": "Parse error for domain name resolutions result" }, { "id": "apps-domain-name-resolved-correctly", "status": "error", "message": "Parse error for domain name resolutions result" } ] } ] }, "Cluster validations info": { "notable_validations": [] } } When the command shows that the host is ready, start the installation using the /v2/infra-envs/{infra_env_id}/hosts/{host_id}/actions/install API by running the following command: USD curl -X POST -s "USDAPI_URL/api/assisted-install/v2/infra-envs/USDINFRA_ENV_ID/hosts/USDHOST_ID/actions/install" -H "Authorization: Bearer USD{API_TOKEN}" As the installation proceeds, the installation generates pending certificate signing requests (CSRs) for the host. Important You must approve the CSRs to complete the installation. Keep running the following API call to monitor the cluster installation: USD curl -s "USDAPI_URL/api/assisted-install/v2/clusters/USDCLUSTER_ID" -H "Authorization: Bearer USD{API_TOKEN}" | jq '{ "Cluster day-2 hosts": [ .hosts[] | select(.status != "installed") | {id, requested_hostname, status, status_info, progress, status_updated_at, updated_at, infra_env_id, cluster_id, created_at} ] }' Example output { "Cluster day-2 hosts": [ { "id": "a1c52dde-3432-4f59-b2ae-0a530c851480", "requested_hostname": "control-plane-1", "status": "added-to-existing-cluster", "status_info": "Host has rebooted and no further updates will be posted. 
Please check console for progress and to possibly approve pending CSRs", "progress": { "current_stage": "Done", "installation_percentage": 100, "stage_started_at": "2022-07-08T10:56:20.476Z", "stage_updated_at": "2022-07-08T10:56:20.476Z" }, "status_updated_at": "2022-07-08T10:56:20.476Z", "updated_at": "2022-07-08T10:57:15.306369Z", "infra_env_id": "b74ec0c3-d5b5-4717-a866-5b6854791bd3", "cluster_id": "8f721322-419d-4eed-aa5b-61b50ea586ae", "created_at": "2022-07-06T22:54:57.161614Z" } ] } Optional: Run the following command to see all the events for the cluster: USD curl -s "USDAPI_URL/api/assisted-install/v2/events?cluster_id=USDCLUSTER_ID" -H "Authorization: Bearer USD{API_TOKEN}" | jq -c '.[] | {severity, message, event_time, host_id}' Example output {"severity":"info","message":"Host compute-0: updated status from insufficient to known (Host is ready to be installed)","event_time":"2022-07-08T11:21:46.346Z","host_id":"9d7b3b44-1125-4ad0-9b14-76550087b445"} {"severity":"info","message":"Host compute-0: updated status from known to installing (Installation is in progress)","event_time":"2022-07-08T11:28:28.647Z","host_id":"9d7b3b44-1125-4ad0-9b14-76550087b445"} {"severity":"info","message":"Host compute-0: updated status from installing to installing-in-progress (Starting installation)","event_time":"2022-07-08T11:28:52.068Z","host_id":"9d7b3b44-1125-4ad0-9b14-76550087b445"} {"severity":"info","message":"Uploaded logs for host compute-0 cluster 8f721322-419d-4eed-aa5b-61b50ea586ae","event_time":"2022-07-08T11:29:47.802Z","host_id":"9d7b3b44-1125-4ad0-9b14-76550087b445"} {"severity":"info","message":"Host compute-0: updated status from installing-in-progress to added-to-existing-cluster (Host has rebooted and no further updates will be posted. Please check console for progress and to possibly approve pending CSRs)","event_time":"2022-07-08T11:29:48.259Z","host_id":"9d7b3b44-1125-4ad0-9b14-76550087b445"} {"severity":"info","message":"Host: compute-0, reached installation stage Rebooting","event_time":"2022-07-08T11:29:48.261Z","host_id":"9d7b3b44-1125-4ad0-9b14-76550087b445"} Log in to the cluster and approve the pending CSRs to complete the installation. Verification Check that the new host was successfully added to the cluster with a status of Ready : USD oc get nodes Example output NAME STATUS ROLES AGE VERSION control-plane-1.example.com Ready master,worker 56m v1.25.0 compute-1.example.com Ready worker 11m v1.25.0 11.5. Replacing a control plane node in a healthy cluster You can replace a control plane (master) node in a healthy OpenShift Container Platform cluster that has three to five control plane nodes, by adding a new control plane node and removing an existing control plane node. If the cluster is unhealthy, you must perform additional operations before you can manage the control plane nodes. See Replacing a control plane node in an unhealthy cluster for more information. 11.5.1. Adding a new control plane node Add the new control plane node, and verify that it is healthy. In the example below, the new node is node-5 . Prerequisites You are using OpenShift Container Platform 4.11 or later. You have installed a healthy cluster with at least three control plane nodes. You have created a single control plane node to be added to the cluster for Day 2.
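Before starting the procedure, you can optionally confirm these prerequisites with a quick check of the control plane nodes and the etcd Operator. This is a sketch, not part of the documented procedure; the node label shown is the standard control plane label for this OpenShift Container Platform version:
# Optional sketch: confirm at least three Ready control plane nodes and a
# healthy etcd Operator before replacing a node.
oc get nodes -l node-role.kubernetes.io/master
oc get clusteroperator etcd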
Procedure Retrieve pending Certificate Signing Requests (CSRs) for the new Day 2 control plane node: USD oc get csr | grep Pending Example output csr-5sd59 8m19s kubernetes.io/kube-apiserver-client-kubelet system:serviceaccount:openshift-machine-config-operator:node-bootstrapper <none> Pending csr-xzqts 10s kubernetes.io/kubelet-serving system:node:node-5 <none> Pending Approve all pending CSRs for the new node ( node-5 in this example): USD oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve Important You must approve the CSRs to complete the installation. Confirm that the new control plane node is in Ready status: USD oc get nodes Example output NAME STATUS ROLES AGE VERSION node-0 Ready master 4h42m v1.24.0+3882f8f node-1 Ready master 4h27m v1.24.0+3882f8f node-2 Ready master 4h43m v1.24.0+3882f8f node-3 Ready worker 4h29m v1.24.0+3882f8f node-4 Ready worker 4h30m v1.24.0+3882f8f node-5 Ready master 105s v1.24.0+3882f8f Note The etcd operator requires a Machine custom resource (CR) that references the new node when the cluster runs with a Machine API. The machine API is automatically activated when the cluster has three or more control plane nodes. Create the BareMetalHost and Machine CRs and link them to the new control plane's Node CR. Create the BareMetalHost CR with a unique .metadata.name value: apiVersion: metal3.io/v1alpha1 kind: BareMetalHost metadata: name: node-5 namespace: openshift-machine-api spec: automatedCleaningMode: metadata bootMACAddress: 00:00:00:00:00:02 bootMode: UEFI customDeploy: method: install_coreos externallyProvisioned: true online: true userData: name: master-user-data-managed namespace: openshift-machine-api Apply the BareMetalHost CR: USD oc apply -f <filename> 1 1 Replace <filename> with the name of the BareMetalHost CR. Create the Machine CR using the unique .metadata.name value: apiVersion: machine.openshift.io/v1beta1 kind: Machine metadata: annotations: machine.openshift.io/instance-state: externally provisioned metal3.io/BareMetalHost: openshift-machine-api/node-5 finalizers: - machine.machine.openshift.io labels: machine.openshift.io/cluster-api-cluster: <cluster_name> 1 machine.openshift.io/cluster-api-machine-role: master machine.openshift.io/cluster-api-machine-type: master name: node-5 namespace: openshift-machine-api spec: metadata: {} providerSpec: value: apiVersion: baremetal.cluster.k8s.io/v1alpha1 customDeploy: method: install_coreos hostSelector: {} image: checksum: "" url: "" kind: BareMetalMachineProviderSpec metadata: creationTimestamp: null userData: name: master-user-data-managed 1 Replace <cluster_name> with the name of the specific cluster, for example, test-day2-1-6qv96 . To get the cluster name, run the following command: USD oc get infrastructure cluster -o=jsonpath='{.status.infrastructureName}{"\n"}' Apply the Machine CR: USD oc apply -f <filename> 1 1 Replace <filename> with the name of the Machine CR. Link BareMetalHost , Machine , and Node by running the link-machine-and-node.sh script: Copy the link-machine-and-node.sh script below to a local machine: #!/bin/bash # Credit goes to # https://bugzilla.redhat.com/show_bug.cgi?id=1801238. # This script will link Machine object # and Node object. This is needed # in order to have IP address of # the Node present in the status of the Machine. 
set -e machine="USD1" node="USD2" if [ -z "USDmachine" ] || [ -z "USDnode" ]; then echo "Usage: USD0 MACHINE NODE" exit 1 fi node_name=USD(echo "USD{node}" | cut -f2 -d':') oc proxy & proxy_pid=USD! function kill_proxy { kill USDproxy_pid } trap kill_proxy EXIT SIGINT HOST_PROXY_API_PATH="http://localhost:8001/apis/metal3.io/v1alpha1/namespaces/openshift-machine-api/baremetalhosts" function print_nics() { local ips local eob declare -a ips readarray -t ips < <(echo "USD{1}" \ | jq '.[] | select(. | .type == "InternalIP") | .address' \ | sed 's/"//g') eob=',' for (( i=0; i<USD{#ips[@]}; i++ )); do if [ USD((i+1)) -eq USD{#ips[@]} ]; then eob="" fi cat <<- EOF { "ip": "USD{ips[USDi]}", "mac": "00:00:00:00:00:00", "model": "unknown", "speedGbps": 10, "vlanId": 0, "pxe": true, "name": "eth1" }USD{eob} EOF done } function wait_for_json() { local name local url local curl_opts local timeout local start_time local curr_time local time_diff name="USD1" url="USD2" timeout="USD3" shift 3 curl_opts="USD@" echo -n "Waiting for USDname to respond" start_time=USD(date +%s) until curl -g -X GET "USDurl" "USD{curl_opts[@]}" 2> /dev/null | jq '.' 2> /dev/null > /dev/null; do echo -n "." curr_time=USD(date +%s) time_diff=USD((curr_time - start_time)) if [[ USDtime_diff -gt USDtimeout ]]; then printf '\nTimed out waiting for %s' "USD{name}" return 1 fi sleep 5 done echo " Success!" return 0 } wait_for_json oc_proxy "USD{HOST_PROXY_API_PATH}" 10 -H "Accept: application/json" -H "Content-Type: application/json" addresses=USD(oc get node -n openshift-machine-api "USD{node_name}" -o json | jq -c '.status.addresses') machine_data=USD(oc get machines.machine.openshift.io -n openshift-machine-api -o json "USD{machine}") host=USD(echo "USDmachine_data" | jq '.metadata.annotations["metal3.io/BareMetalHost"]' | cut -f2 -d/ | sed 's/"//g') if [ -z "USDhost" ]; then echo "Machine USDmachine is not linked to a host yet." 1>&2 exit 1 fi # The address structure on the host doesn't match the node, so extract # the values we want into separate variables so we can build the patch # we need. hostname=USD(echo "USD{addresses}" | jq '.[] | select(. | .type == "Hostname") | .address' | sed 's/"//g') set +e read -r -d '' host_patch << EOF { "status": { "hardware": { "hostname": "USD{hostname}", "nics": [ USD(print_nics "USD{addresses}") ], "systemVendor": { "manufacturer": "Red Hat", "productName": "product name", "serialNumber": "" }, "firmware": { "bios": { "date": "04/01/2014", "vendor": "SeaBIOS", "version": "1.11.0-2.el7" } }, "ramMebibytes": 0, "storage": [], "cpu": { "arch": "x86_64", "model": "Intel(R) Xeon(R) CPU E5-2630 v4 @ 2.20GHz", "clockMegahertz": 2199.998, "count": 4, "flags": [] } } } } EOF set -e echo "PATCHING HOST" echo "USD{host_patch}" | jq . curl -s \ -X PATCH \ "USD{HOST_PROXY_API_PATH}/USD{host}/status" \ -H "Content-type: application/merge-patch+json" \ -d "USD{host_patch}" oc get baremetalhost -n openshift-machine-api -o yaml "USD{host}" Make the script executable: USD chmod +x link-machine-and-node.sh Run the script: USD bash link-machine-and-node.sh node-5 node-5 Note The first node-5 instance represents the machine, and the second represents the node. 
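Because the script exists to copy the node's IP addresses into the Machine status, a quick way to confirm that it worked is to read that status back. This is an optional sketch, not a documented step:
# Optional sketch: confirm the Machine for node-5 now reports the node's addresses.
oc get machines.machine.openshift.io -n openshift-machine-api node-5 -o jsonpath='{.status.addresses}{"\n"}'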
Confirm members of etcd by executing into one of the pre-existing control plane nodes: Open a remote shell session to the control plane node: USD oc rsh -n openshift-etcd etcd-node-0 List etcd members: # etcdctl member list -w table Example output +--------+---------+--------+--------------+--------------+---------+ | ID | STATUS | NAME | PEER ADDRS | CLIENT ADDRS | LEARNER | +--------+---------+--------+--------------+--------------+---------+ |76ae1d00| started |node-0 |192.168.111.24|192.168.111.24| false | |2c18942f| started |node-1 |192.168.111.26|192.168.111.26| false | |61e2a860| started |node-2 |192.168.111.25|192.168.111.25| false | |ead5f280| started |node-5 |192.168.111.28|192.168.111.28| false | +--------+---------+--------+--------------+--------------+---------+ Monitor the etcd operator configuration process until completion: USD oc get clusteroperator etcd Example output (upon completion) NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE MESSAGE etcd 4.11.5 True False False 5h54m Confirm etcd health by running the following commands: Open a remote shell session to the control plane node: USD oc rsh -n openshift-etcd etcd-node-0 Check endpoint health: # etcdctl endpoint health Example output 192.168.111.24 is healthy: committed proposal: took = 10.383651ms 192.168.111.26 is healthy: committed proposal: took = 11.297561ms 192.168.111.25 is healthy: committed proposal: took = 13.892416ms 192.168.111.28 is healthy: committed proposal: took = 11.870755ms Verify that all nodes are ready: USD oc get nodes Example output NAME STATUS ROLES AGE VERSION node-0 Ready master 6h20m v1.24.0+3882f8f node-1 Ready master 6h20m v1.24.0+3882f8f node-2 Ready master 6h4m v1.24.0+3882f8f node-3 Ready worker 6h7m v1.24.0+3882f8f node-4 Ready worker 6h7m v1.24.0+3882f8f node-5 Ready master 99m v1.24.0+3882f8f Verify that the cluster Operators are all available: USD oc get ClusterOperators Example output NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE MSG authentication 4.11.5 True False False 5h57m baremetal 4.11.5 True False False 6h19m cloud-controller-manager 4.11.5 True False False 6h20m cloud-credential 4.11.5 True False False 6h23m cluster-autoscaler 4.11.5 True False False 6h18m config-operator 4.11.5 True False False 6h19m console 4.11.5 True False False 6h4m csi-snapshot-controller 4.11.5 True False False 6h19m dns 4.11.5 True False False 6h18m etcd 4.11.5 True False False 6h17m image-registry 4.11.5 True False False 6h7m ingress 4.11.5 True False False 6h6m insights 4.11.5 True False False 6h12m kube-apiserver 4.11.5 True False False 6h16m kube-controller-manager 4.11.5 True False False 6h16m kube-scheduler 4.11.5 True False False 6h16m kube-storage-version-migrator 4.11.5 True False False 6h19m machine-api 4.11.5 True False False 6h15m machine-approver 4.11.5 True False False 6h19m machine-config 4.11.5 True False False 6h18m marketplace 4.11.5 True False False 6h18m monitoring 4.11.5 True False False 6h4m network 4.11.5 True False False 6h20m node-tuning 4.11.5 True False False 6h18m openshift-apiserver 4.11.5 True False False 6h8m openshift-controller-manager 4.11.5 True False False 6h7m openshift-samples 4.11.5 True False False 6h12m operator-lifecycle-manager 4.11.5 True False False 6h18m operator-lifecycle-manager-catalog 4.11.5 True False False 6h19m operator-lifecycle-manager-pkgsvr 4.11.5 True False False 6h12m service-ca 4.11.5 True False False 6h19m storage 4.11.5 True False False 6h19m Verify that the cluster version is correct: USD oc get ClusterVersion Example output NAME 
VERSION AVAILABLE PROGRESSING SINCE STATUS version 4.11.5 True False 5h57m Cluster version is 4.11.5 11.5.2. Removing the existing control plane node Remove the control plane node that you are replacing. This is node-0 in the example below. Prerequisites You have added a new healthy control plane node. Procedure Delete the BareMetalHost CR of the pre-existing control plane node: USD oc delete bmh -n openshift-machine-api node-0 Confirm that the machine is unhealthy: USD oc get machine -A Example output NAMESPACE NAME PHASE AGE openshift-machine-api node-0 Failed 20h openshift-machine-api node-1 Running 20h openshift-machine-api node-2 Running 20h openshift-machine-api node-3 Running 19h openshift-machine-api node-4 Running 19h openshift-machine-api node-5 Running 14h Delete the Machine CR: USD oc delete machine -n openshift-machine-api node-0 machine.machine.openshift.io "node-0" deleted Confirm removal of the Node CR: USD oc get nodes Example output NAME STATUS ROLES AGE VERSION node-1 Ready master 20h v1.24.0+3882f8f node-2 Ready master 19h v1.24.0+3882f8f node-3 Ready worker 19h v1.24.0+3882f8f node-4 Ready worker 19h v1.24.0+3882f8f node-5 Ready master 15h v1.24.0+3882f8f Check etcd-operator logs to confirm status of the etcd cluster: USD oc logs -n openshift-etcd-operator etcd-operator-8668df65d-lvpjf Example output E0927 07:53:10.597523 1 base_controller.go:272] ClusterMemberRemovalController reconciliation failed: cannot remove member: 192.168.111.23 because it is reported as healthy but it doesn't have a machine nor a node resource Remove the physical machine to allow the etcd operator to reconcile the cluster members: Open a remote shell session to the control plane node: USD oc rsh -n openshift-etcd etcd-node-1 Monitor the progress of etcd operator reconciliation by checking members and endpoint health: # etcdctl member list -w table; etcdctl endpoint health Example output +--------+---------+--------+--------------+--------------+---------+ | ID | STATUS | NAME | PEER ADDRS | CLIENT ADDRS | LEARNER | +--------+---------+--------+--------------+--------------+---------+ |2c18942f| started | node-1 |192.168.111.26|192.168.111.26| false | |61e2a860| started | node-2 |192.168.111.25|192.168.111.25| false | |ead4f280| started | node-5 |192.168.111.28|192.168.111.28| false | +--------+---------+--------+--------------+--------------+---------+ 192.168.111.26 is healthy: committed proposal: took = 10.458132ms 192.168.111.25 is healthy: committed proposal: took = 11.047349ms 192.168.111.28 is healthy: committed proposal: took = 11.414402ms 11.6. Replacing a control plane node in an unhealthy cluster You can replace an unhealthy control plane (master) node in an OpenShift Container Platform cluster that has three to five control plane nodes, by removing the unhealthy control plane node and adding a new one. For details on replacing a control plane node in a healthy cluster, see Replacing a control plane node in a healthy cluster . 11.6.1. Removing an unhealthy control plane node Remove the unhealthy control plane node from the cluster. This is node-0 in the example below. Prerequisites You have installed a cluster with at least three control plane nodes. At least one of the control plane nodes is not ready. 
Procedure Check the node status to confirm that a control plane node is not ready: USD oc get nodes Example output NAME STATUS ROLES AGE VERSION node-0 NotReady master 20h v1.24.0+3882f8f node-1 Ready master 20h v1.24.0+3882f8f node-2 Ready master 20h v1.24.0+3882f8f node-3 Ready worker 20h v1.24.0+3882f8f node-4 Ready worker 20h v1.24.0+3882f8f Confirm in the etcd-operator logs that the cluster is unhealthy: USD oc logs -n openshift-etcd-operator etcd-operator deployment/etcd-operator Example output E0927 08:24:23.983733 1 base_controller.go:272] DefragController reconciliation failed: cluster is unhealthy: 2 of 3 members are available, node-0 is unhealthy Confirm the etcd members by running the following commands: Open a remote shell session to the control plane node: USD oc rsh -n openshift-etcd node-1 List the etcdctl members: # etcdctl member list -w table Example output +--------+---------+--------+--------------+--------------+---------+ | ID | STATUS | NAME | PEER ADDRS | CLIENT ADDRS | LEARNER | +--------+---------+--------+--------------+--------------+---------+ |61e2a860| started | node-0 |192.168.111.25|192.168.111.25| false | |2c18942f| started | node-1 |192.168.111.26|192.168.111.26| false | |ead4f280| started | node-2 |192.168.111.28|192.168.111.28| false | +--------+---------+--------+--------------+--------------+---------+ Confirm that etcdctl endpoint health reports an unhealthy member of the cluster: # etcdctl endpoint health Example output {"level":"warn","ts":"2022-09-27T08:25:35.953Z","logger":"client","caller":"v3/retry_interceptor.go:62","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc000680380/192.168.111.25","attempt":0,"error":"rpc error: code = DeadlineExceeded desc = latest balancer error: last connection error: connection error: desc = \"transport: Error while dialing dial tcp 192.168.111.25: connect: no route to host\""} 192.168.111.28 is healthy: committed proposal: took = 12.465641ms 192.168.111.26 is healthy: committed proposal: took = 12.297059ms 192.168.111.25 is unhealthy: failed to commit proposal: context deadline exceeded Error: unhealthy cluster Remove the unhealthy control plane by deleting the Machine custom resource (CR): USD oc delete machine -n openshift-machine-api node-0 Note The Machine and Node CRs might not be deleted because they are protected by finalizers. If this occurs, you must delete the Machine CR manually by removing all finalizers. 
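One way to remove the finalizers manually is with oc patch. This is a general Kubernetes technique rather than a documented step of this procedure, so treat it as a sketch and confirm that forcing deletion is appropriate for your cluster before using it:
# Sketch: clear the finalizers so the stuck Machine CR can be deleted
# (general Kubernetes approach; use with care).
oc patch machine node-0 -n openshift-machine-api --type=merge -p '{"metadata":{"finalizers":null}}'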
Verify in the etcd-operator logs whether the unhealthy machine has been removed: USD oc logs -n openshift-etcd-operator etcd-operator deployment/etcd-operator Example output I0927 08:58:41.249222 1 machinedeletionhooks.go:135] skip removing the deletion hook from machine node-0 since its member is still present with any of: [{InternalIP } {InternalIP 192.168.111.25}] If you see that removal has been skipped, as in the above log example, manually remove the unhealthy etcdctl member: Open a remote shell session to the control plane node: USD oc rsh -n openshift-etcd node-1 List the etcdctl members: # etcdctl member list -w table Example output +--------+---------+--------+--------------+--------------+---------+ | ID | STATUS | NAME | PEER ADDRS | CLIENT ADDRS | LEARNER | +--------+---------+--------+--------------+--------------+---------+ |61e2a860| started | node-0 |192.168.111.25|192.168.111.25| false | |2c18942f| started | node-1 |192.168.111.26|192.168.111.26| false | |ead4f280| started | node-2 |192.168.111.28|192.168.111.28| false | +--------+---------+--------+--------------+--------------+---------+ Confirm that etcdctl endpoint health reports an unhealthy member of the cluster: # etcdctl endpoint health Example output {"level":"warn","ts":"2022-09-27T10:31:07.227Z","logger":"client","caller":"v3/retry_interceptor.go:62","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc0000d6e00/192.168.111.25","attempt":0,"error":"rpc error: code = DeadlineExceeded desc = latest balancer error: last connection error: connection error: desc = \"transport: Error while dialing dial tcp 192.168.111.25: connect: no route to host\""} 192.168.111.28 is healthy: committed proposal: took = 13.038278ms 192.168.111.26 is healthy: committed proposal: took = 12.950355ms 192.168.111.25 is unhealthy: failed to commit proposal: context deadline exceeded Error: unhealthy cluster Remove the unhealthy etcdctl member from the cluster: # etcdctl member remove 61e2a86084aafa62 Example output Member 61e2a86084aafa62 removed from cluster 6881c977b97990d7 Verify that the unhealthy etcdctl member was removed by running the following command: # etcdctl member list -w table Example output +----------+---------+--------+--------------+--------------+-------+ | ID | STATUS | NAME | PEER ADDRS | CLIENT ADDRS |LEARNER| +----------+---------+--------+--------------+--------------+-------+ | 2c18942f | started | node-1 |192.168.111.26|192.168.111.26| false | | ead4f280 | started | node-2 |192.168.111.28|192.168.111.28| false | +----------+---------+--------+--------------+--------------+-------+ 11.6.2. Adding a new control plane node Add a new control plane node to replace the unhealthy node that you removed. In the example below, the new node is node-5 . Prerequisites You have installed a control plane node for Day 2. For more information, see Adding hosts with the web console or Adding hosts with the API .
Procedure Retrieve pending Certificate Signing Requests (CSRs) for the new Day 2 control plane node: USD oc get csr | grep Pending Example output csr-5sd59 8m19s kubernetes.io/kube-apiserver-client-kubelet system:serviceaccount:openshift-machine-config-operator:node-bootstrapper <none> Pending csr-xzqts 10s kubernetes.io/kubelet-serving system:node:node-5 <none> Pending Approve all pending CSRs for the new node ( node-5 in this example): USD oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve Note You must approve the CSRs to complete the installation. Confirm that the control plane node is in Ready status: USD oc get nodes Example output NAME STATUS ROLES AGE VERSION node-1 Ready master 20h v1.24.0+3882f8f node-2 Ready master 20h v1.24.0+3882f8f node-3 Ready worker 20h v1.24.0+3882f8f node-4 Ready worker 20h v1.24.0+3882f8f node-5 Ready master 2m52s v1.24.0+3882f8f The etcd operator requires a Machine CR referencing the new node when the cluster runs with a Machine API. The machine API is automatically activated when the cluster has three control plane nodes. Create the BareMetalHost and Machine CRs and link them to the new control plane's Node CR. Important Boot-it-yourself will not create BareMetalHost and Machine CRs, so you must create them. Failure to create the BareMetalHost and Machine CRs will generate errors in the etcd operator. Create the BareMetalHost CR with a unique .metadata.name value: apiVersion: metal3.io/v1alpha1 kind: BareMetalHost metadata: name: node-5 namespace: openshift-machine-api spec: automatedCleaningMode: metadata bootMACAddress: 00:00:00:00:00:02 bootMode: UEFI customDeploy: method: install_coreos externallyProvisioned: true online: true userData: name: master-user-data-managed namespace: openshift-machine-api Apply the BareMetalHost CR: USD oc apply -f <filename> 1 1 Replace <filename> with the name of the BareMetalHost CR. Create the Machine CR using the unique .metadata.name value: apiVersion: machine.openshift.io/v1beta1 kind: Machine metadata: annotations: machine.openshift.io/instance-state: externally provisioned metal3.io/BareMetalHost: openshift-machine-api/node-5 finalizers: - machine.machine.openshift.io labels: machine.openshift.io/cluster-api-cluster: test-day2-1-6qv96 machine.openshift.io/cluster-api-machine-role: master machine.openshift.io/cluster-api-machine-type: master name: node-5 namespace: openshift-machine-api spec: metadata: {} providerSpec: value: apiVersion: baremetal.cluster.k8s.io/v1alpha1 customDeploy: method: install_coreos hostSelector: {} image: checksum: "" url: "" kind: BareMetalMachineProviderSpec metadata: creationTimestamp: null userData: name: master-user-data-managed Apply the Machine CR: USD oc apply -f <filename> 1 1 Replace <filename> with the name of the Machine CR. Link BareMetalHost , Machine , and Node by running the link-machine-and-node.sh script: Copy the link-machine-and-node.sh script below to a local machine: #!/bin/bash # Credit goes to # https://bugzilla.redhat.com/show_bug.cgi?id=1801238. # This script will link Machine object # and Node object. This is needed # in order to have IP address of # the Node present in the status of the Machine. set -e machine="USD1" node="USD2" if [ -z "USDmachine" ] || [ -z "USDnode" ]; then echo "Usage: USD0 MACHINE NODE" exit 1 fi node_name=USD(echo "USD{node}" | cut -f2 -d':') oc proxy & proxy_pid=USD! 
function kill_proxy { kill USDproxy_pid } trap kill_proxy EXIT SIGINT HOST_PROXY_API_PATH="http://localhost:8001/apis/metal3.io/v1alpha1/namespaces/openshift-machine-api/baremetalhosts" function print_nics() { local ips local eob declare -a ips readarray -t ips < <(echo "USD{1}" \ | jq '.[] | select(. | .type == "InternalIP") | .address' \ | sed 's/"//g') eob=',' for (( i=0; i<USD{#ips[@]}; i++ )); do if [ USD((i+1)) -eq USD{#ips[@]} ]; then eob="" fi cat <<- EOF { "ip": "USD{ips[USDi]}", "mac": "00:00:00:00:00:00", "model": "unknown", "speedGbps": 10, "vlanId": 0, "pxe": true, "name": "eth1" }USD{eob} EOF done } function wait_for_json() { local name local url local curl_opts local timeout local start_time local curr_time local time_diff name="USD1" url="USD2" timeout="USD3" shift 3 curl_opts="USD@" echo -n "Waiting for USDname to respond" start_time=USD(date +%s) until curl -g -X GET "USDurl" "USD{curl_opts[@]}" 2> /dev/null | jq '.' 2> /dev/null > /dev/null; do echo -n "." curr_time=USD(date +%s) time_diff=USD((curr_time - start_time)) if [[ USDtime_diff -gt USDtimeout ]]; then printf '\nTimed out waiting for %s' "USD{name}" return 1 fi sleep 5 done echo " Success!" return 0 } wait_for_json oc_proxy "USD{HOST_PROXY_API_PATH}" 10 -H "Accept: application/json" -H "Content-Type: application/json" addresses=USD(oc get node -n openshift-machine-api "USD{node_name}" -o json | jq -c '.status.addresses') machine_data=USD(oc get machines.machine.openshift.io -n openshift-machine-api -o json "USD{machine}") host=USD(echo "USDmachine_data" | jq '.metadata.annotations["metal3.io/BareMetalHost"]' | cut -f2 -d/ | sed 's/"//g') if [ -z "USDhost" ]; then echo "Machine USDmachine is not linked to a host yet." 1>&2 exit 1 fi # The address structure on the host doesn't match the node, so extract # the values we want into separate variables so we can build the patch # we need. hostname=USD(echo "USD{addresses}" | jq '.[] | select(. | .type == "Hostname") | .address' | sed 's/"//g') set +e read -r -d '' host_patch << EOF { "status": { "hardware": { "hostname": "USD{hostname}", "nics": [ USD(print_nics "USD{addresses}") ], "systemVendor": { "manufacturer": "Red Hat", "productName": "product name", "serialNumber": "" }, "firmware": { "bios": { "date": "04/01/2014", "vendor": "SeaBIOS", "version": "1.11.0-2.el7" } }, "ramMebibytes": 0, "storage": [], "cpu": { "arch": "x86_64", "model": "Intel(R) Xeon(R) CPU E5-2630 v4 @ 2.20GHz", "clockMegahertz": 2199.998, "count": 4, "flags": [] } } } } EOF set -e echo "PATCHING HOST" echo "USD{host_patch}" | jq . curl -s \ -X PATCH \ "USD{HOST_PROXY_API_PATH}/USD{host}/status" \ -H "Content-type: application/merge-patch+json" \ -d "USD{host_patch}" oc get baremetalhost -n openshift-machine-api -o yaml "USD{host}" Make the script executable: USD chmod +x link-machine-and-node.sh Run the script: USD bash link-machine-and-node.sh node-5 node-5 Note The first node-5 instance represents the machine, and the second represents the node. 
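Before confirming etcd membership, you can optionally check that the CRs you created in the previous steps are present. This is a sketch, not a documented step:
# Optional sketch: confirm the BareMetalHost and Machine CRs for node-5 exist.
oc get bmh -n openshift-machine-api node-5
oc get machines.machine.openshift.io -n openshift-machine-api node-5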
Confirm members of etcd by running the following commands: Open a remote shell session to the control plane node: USD oc rsh -n openshift-etcd node-1 List the etcdctl members: # etcdctl member list -w table Example output +---------+-------+--------+--------------+--------------+-------+ | ID | STATUS| NAME | PEER ADDRS | CLIENT ADDRS |LEARNER| +---------+-------+--------+--------------+--------------+-------+ | 2c18942f|started| node-1 |192.168.111.26|192.168.111.26| false | | ead4f280|started| node-2 |192.168.111.28|192.168.111.28| false | | 79153c5a|started| node-5 |192.168.111.29|192.168.111.29| false | +---------+-------+--------+--------------+--------------+-------+ Monitor the etcd operator configuration process until completion: USD oc get clusteroperator etcd Example output (upon completion) NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE etcd 4.11.5 True False False 22h Confirm etcdctl health by running the following commands: Open a remote shell session to the control plane node: USD oc rsh -n openshift-etcd node-1 Check endpoint health: # etcdctl endpoint health Example output 192.168.111.26 is healthy: committed proposal: took = 9.105375ms 192.168.111.28 is healthy: committed proposal: took = 9.15205ms 192.168.111.29 is healthy: committed proposal: took = 10.277577ms Confirm the health of the nodes: USD oc get Nodes Example output NAME STATUS ROLES AGE VERSION node-1 Ready master 20h v1.24.0+3882f8f node-2 Ready master 20h v1.24.0+3882f8f node-3 Ready worker 20h v1.24.0+3882f8f node-4 Ready worker 20h v1.24.0+3882f8f node-5 Ready master 40m v1.24.0+3882f8f Verify that the cluster Operators are all available: USD oc get ClusterOperators Example output NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE authentication 4.11.5 True False False 150m baremetal 4.11.5 True False False 22h cloud-controller-manager 4.11.5 True False False 22h cloud-credential 4.11.5 True False False 22h cluster-autoscaler 4.11.5 True False False 22h config-operator 4.11.5 True False False 22h console 4.11.5 True False False 145m csi-snapshot-controller 4.11.5 True False False 22h dns 4.11.5 True False False 22h etcd 4.11.5 True False False 22h image-registry 4.11.5 True False False 22h ingress 4.11.5 True False False 22h insights 4.11.5 True False False 22h kube-apiserver 4.11.5 True False False 22h kube-controller-manager 4.11.5 True False False 22h kube-scheduler 4.11.5 True False False 22h kube-storage-version-migrator 4.11.5 True False False 148m machine-api 4.11.5 True False False 22h machine-approver 4.11.5 True False False 22h machine-config 4.11.5 True False False 110m marketplace 4.11.5 True False False 22h monitoring 4.11.5 True False False 22h network 4.11.5 True False False 22h node-tuning 4.11.5 True False False 22h openshift-apiserver 4.11.5 True False False 163m openshift-controller-manager 4.11.5 True False False 22h openshift-samples 4.11.5 True False False 22h operator-lifecycle-manager 4.11.5 True False False 22h operator-lifecycle-manager-catalog 4.11.5 True False False 22h operator-lifecycle-manager-pkgsvr 4.11.5 True False False 22h service-ca 4.11.5 True False False 22h storage 4.11.5 True False False 22h Verify that the cluster version is correct: USD oc get ClusterVersion Example output NAME VERSION AVAILABLE PROGRESSING SINCE STATUS version 4.11.5 True False 22h Cluster version is 4.11.5 11.7. Additional resources Authenticating with the REST API | [
"oc adm release info -o json | jq .metadata.metadata",
"{ \"release.openshift.io/architecture\": \"multi\" }",
"curl -s -X POST https://api.openshift.com/api/assisted-install/v2/clusters -H \"Authorization: Bearer USD{API_TOKEN}\" -H \"Content-Type: application/json\" -d \"USD(jq --null-input --slurpfile pull_secret ~/Downloads/pull-secret.txt ' { \"name\": \"testcluster\", \"openshift_version\": \"<version-number>-multi\", 1 \"cpu_architecture\" : \"multi\" 2 \"control_plane_count\": \"<number>\" 3 \"base_dns_domain\": \"example.com\", \"pull_secret\": USDpull_secret[0] | tojson } ')\" | jq '.id'",
"curl https://api.openshift.com/api/assisted-install/v2/infra-envs -H \"Authorization: Bearer USD{API_TOKEN}\" -H \"Content-Type: application/json\" -d \"USD(jq --null-input --slurpfile pull_secret ~/Downloads/pull-secret.txt --arg cluster_id USD{CLUSTER_ID} ' { \"name\": \"testcluster-infra-env\", \"image_type\":\"full-iso\", \"cluster_id\": USDcluster_id, \"cpu_architecture\" : \"x86_64\" \"pull_secret\": USDpull_secret[0] | tojson } ')\" | jq '.id'",
"curl https://api.openshift.com/api/assisted-install/v2/infra-envs/USD{INFRA_ENV_ID}/hosts/<host_id> -X PATCH -H \"Authorization: Bearer USD{API_TOKEN}\" -H \"Content-Type: application/json\" -d ' { \"host_role\":\"master\" } ' | jq",
"curl -s -X POST https://api.openshift.com/api/assisted-install/v2/clusters -H \"Authorization: Bearer USD{API_TOKEN}\" -H \"Content-Type: application/json\" -d \"USD(jq --null-input --slurpfile pull_secret ~/Downloads/pull-secret.txt ' { \"name\": \"testcluster\", \"openshift_version\": \"4.12\", \"cpu_architecture\" : \"arm64\" \"control_plane_count\": \"3\" \"base_dns_domain\": \"example.com\", \"pull_secret\": USDpull_secret[0] | tojson } ')\" | jq '.id'",
"curl https://api.openshift.com/api/assisted-install/v2/infra-envs/USD{INFRA_ENV_ID}/hosts/<host_id> -X PATCH -H \"Authorization: Bearer USD{API_TOKEN}\" -H \"Content-Type: application/json\" -d ' { \"host_role\":\"worker\" } ' | jq",
"oc get nodes -o wide",
"export API_URL=<api_url> 1",
"export CLUSTER_ID=USD(oc get clusterversion -o jsonpath='{.items[].spec.clusterID}')",
"echo USD{CLUSTER_ID}",
"export CLUSTER_REQUEST=USD(jq --null-input --arg openshift_cluster_id \"USDCLUSTER_ID\" '{ \"api_vip_dnsname\": \"<api_vip>\", 1 \"openshift_cluster_id\": \"<cluster_id>\", 2 \"name\": \"<openshift_cluster_name>\" 3 }')",
"CLUSTER_ID=USD(curl \"USDAPI_URL/api/assisted-install/v2/clusters/import\" -H \"Authorization: Bearer USD{API_TOKEN}\" -H 'accept: application/json' -H 'Content-Type: application/json' -d \"USDCLUSTER_REQUEST\" | tee /dev/stderr | jq -r '.id')",
"export INFRA_ENV_REQUEST=USD(jq --null-input --slurpfile pull_secret <path_to_pull_secret_file> \\ 1 --arg ssh_pub_key \"USD(cat <path_to_ssh_pub_key>)\" \\ 2 --arg cluster_id \"USDCLUSTER_ID\" '{ \"name\": \"<infraenv_name>\", 3 \"pull_secret\": USDpull_secret[0] | tojson, \"cluster_id\": USDcluster_id, \"ssh_authorized_key\": USDssh_pub_key, \"image_type\": \"<iso_image_type>\" 4 }')",
"INFRA_ENV_ID=USD(curl \"USDAPI_URL/api/assisted-install/v2/infra-envs\" -H \"Authorization: Bearer USD{API_TOKEN}\" -H 'accept: application/json' -H 'Content-Type: application/json' -d \"USDINFRA_ENV_REQUEST\" | tee /dev/stderr | jq -r '.id')",
"curl -s \"USDAPI_URL/api/assisted-install/v2/infra-envs/USDINFRA_ENV_ID\" -H \"Authorization: Bearer USD{API_TOKEN}\" | jq -r '.download_url'",
"https://api.openshift.com/api/assisted-images/images/41b91e72-c33e-42ee-b80f-b5c5bbf6431a?arch=x86_64&image_token=eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJleHAiOjE2NTYwMjYzNzEsInN1YiI6IjQxYjkxZTcyLWMzM2UtNDJlZS1iODBmLWI1YzViYmY2NDMxYSJ9.1EX_VGaMNejMhrAvVRBS7PDPIQtbOOc8LtG8OukE1a4&type=minimal-iso&version=4.12",
"curl -L -s '<iso_url>' --output rhcos-live-minimal.iso 1",
"curl -s \"USDAPI_URL/api/assisted-install/v2/clusters/USDCLUSTER_ID\" -H \"Authorization: Bearer USD{API_TOKEN}\" | jq -r '.hosts[] | select(.status != \"installed\").id'",
"2294ba03-c264-4f11-ac08-2f1bb2f8c296",
"HOST_ID=<host_id> 1",
"curl -s USDAPI_URL/api/assisted-install/v2/clusters/USDCLUSTER_ID -H \"Authorization: Bearer USD{API_TOKEN}\" | jq ' def host_name(USDhost): if (.suggested_hostname // \"\") == \"\" then if (.inventory // \"\") == \"\" then \"Unknown hostname, please wait\" else .inventory | fromjson | .hostname end else .suggested_hostname end; def is_notable(USDvalidation): [\"failure\", \"pending\", \"error\"] | any(. == USDvalidation.status); def notable_validations(USDvalidations_info): [ USDvalidations_info // \"{}\" | fromjson | to_entries[].value[] | select(is_notable(.)) ]; { \"Hosts validations\": { \"Hosts\": [ .hosts[] | select(.status != \"installed\") | { \"id\": .id, \"name\": host_name(.), \"status\": .status, \"notable_validations\": notable_validations(.validations_info) } ] }, \"Cluster validations info\": { \"notable_validations\": notable_validations(.validations_info) } } ' -r",
"{ \"Hosts validations\": { \"Hosts\": [ { \"id\": \"97ec378c-3568-460c-bc22-df54534ff08f\", \"name\": \"localhost.localdomain\", \"status\": \"insufficient\", \"notable_validations\": [ { \"id\": \"ntp-synced\", \"status\": \"failure\", \"message\": \"Host couldn't synchronize with any NTP server\" }, { \"id\": \"api-domain-name-resolved-correctly\", \"status\": \"error\", \"message\": \"Parse error for domain name resolutions result\" }, { \"id\": \"api-int-domain-name-resolved-correctly\", \"status\": \"error\", \"message\": \"Parse error for domain name resolutions result\" }, { \"id\": \"apps-domain-name-resolved-correctly\", \"status\": \"error\", \"message\": \"Parse error for domain name resolutions result\" } ] } ] }, \"Cluster validations info\": { \"notable_validations\": [] } }",
"curl -X POST -s \"USDAPI_URL/api/assisted-install/v2/infra-envs/USDINFRA_ENV_ID/hosts/USDHOST_ID/actions/install\" -H \"Authorization: Bearer USD{API_TOKEN}\"",
"curl -s \"USDAPI_URL/api/assisted-install/v2/clusters/USDCLUSTER_ID\" -H \"Authorization: Bearer USD{API_TOKEN}\" | jq '{ \"Cluster day-2 hosts\": [ .hosts[] | select(.status != \"installed\") | {id, requested_hostname, status, status_info, progress, status_updated_at, updated_at, infra_env_id, cluster_id, created_at} ] }'",
"{ \"Cluster day-2 hosts\": [ { \"id\": \"a1c52dde-3432-4f59-b2ae-0a530c851480\", \"requested_hostname\": \"control-plane-1\", \"status\": \"added-to-existing-cluster\", \"status_info\": \"Host has rebooted and no further updates will be posted. Please check console for progress and to possibly approve pending CSRs\", \"progress\": { \"current_stage\": \"Done\", \"installation_percentage\": 100, \"stage_started_at\": \"2022-07-08T10:56:20.476Z\", \"stage_updated_at\": \"2022-07-08T10:56:20.476Z\" }, \"status_updated_at\": \"2022-07-08T10:56:20.476Z\", \"updated_at\": \"2022-07-08T10:57:15.306369Z\", \"infra_env_id\": \"b74ec0c3-d5b5-4717-a866-5b6854791bd3\", \"cluster_id\": \"8f721322-419d-4eed-aa5b-61b50ea586ae\", \"created_at\": \"2022-07-06T22:54:57.161614Z\" } ] }",
"curl -s \"USDAPI_URL/api/assisted-install/v2/events?cluster_id=USDCLUSTER_ID\" -H \"Authorization: Bearer USD{API_TOKEN}\" | jq -c '.[] | {severity, message, event_time, host_id}'",
"{\"severity\":\"info\",\"message\":\"Host compute-0: updated status from insufficient to known (Host is ready to be installed)\",\"event_time\":\"2022-07-08T11:21:46.346Z\",\"host_id\":\"9d7b3b44-1125-4ad0-9b14-76550087b445\"} {\"severity\":\"info\",\"message\":\"Host compute-0: updated status from known to installing (Installation is in progress)\",\"event_time\":\"2022-07-08T11:28:28.647Z\",\"host_id\":\"9d7b3b44-1125-4ad0-9b14-76550087b445\"} {\"severity\":\"info\",\"message\":\"Host compute-0: updated status from installing to installing-in-progress (Starting installation)\",\"event_time\":\"2022-07-08T11:28:52.068Z\",\"host_id\":\"9d7b3b44-1125-4ad0-9b14-76550087b445\"} {\"severity\":\"info\",\"message\":\"Uploaded logs for host compute-0 cluster 8f721322-419d-4eed-aa5b-61b50ea586ae\",\"event_time\":\"2022-07-08T11:29:47.802Z\",\"host_id\":\"9d7b3b44-1125-4ad0-9b14-76550087b445\"} {\"severity\":\"info\",\"message\":\"Host compute-0: updated status from installing-in-progress to added-to-existing-cluster (Host has rebooted and no further updates will be posted. Please check console for progress and to possibly approve pending CSRs)\",\"event_time\":\"2022-07-08T11:29:48.259Z\",\"host_id\":\"9d7b3b44-1125-4ad0-9b14-76550087b445\"} {\"severity\":\"info\",\"message\":\"Host: compute-0, reached installation stage Rebooting\",\"event_time\":\"2022-07-08T11:29:48.261Z\",\"host_id\":\"9d7b3b44-1125-4ad0-9b14-76550087b445\"}",
"oc get nodes",
"NAME STATUS ROLES AGE VERSION control-plane-1.example.com Ready master,worker 56m v1.25.0 compute-1.example.com Ready worker 11m v1.25.0",
"oc get csr | grep Pending",
"csr-5sd59 8m19s kubernetes.io/kube-apiserver-client-kubelet system:serviceaccount:openshift-machine-config-operator:node-bootstrapper <none> Pending csr-xzqts 10s kubernetes.io/kubelet-serving system:node:node-5 <none> Pending",
"oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve",
"oc get nodes",
"NAME STATUS ROLES AGE VERSION node-0 Ready master 4h42m v1.24.0+3882f8f node-1 Ready master 4h27m v1.24.0+3882f8f node-2 Ready master 4h43m v1.24.0+3882f8f node-3 Ready worker 4h29m v1.24.0+3882f8f node-4 Ready worker 4h30m v1.24.0+3882f8f node-5 Ready master 105s v1.24.0+3882f8f",
"apiVersion: metal3.io/v1alpha1 kind: BareMetalHost metadata: name: node-5 namespace: openshift-machine-api spec: automatedCleaningMode: metadata bootMACAddress: 00:00:00:00:00:02 bootMode: UEFI customDeploy: method: install_coreos externallyProvisioned: true online: true userData: name: master-user-data-managed namespace: openshift-machine-api",
"oc apply -f <filename> 1",
"apiVersion: machine.openshift.io/v1beta1 kind: Machine metadata: annotations: machine.openshift.io/instance-state: externally provisioned metal3.io/BareMetalHost: openshift-machine-api/node-5 finalizers: - machine.machine.openshift.io labels: machine.openshift.io/cluster-api-cluster: <cluster_name> 1 machine.openshift.io/cluster-api-machine-role: master machine.openshift.io/cluster-api-machine-type: master name: node-5 namespace: openshift-machine-api spec: metadata: {} providerSpec: value: apiVersion: baremetal.cluster.k8s.io/v1alpha1 customDeploy: method: install_coreos hostSelector: {} image: checksum: \"\" url: \"\" kind: BareMetalMachineProviderSpec metadata: creationTimestamp: null userData: name: master-user-data-managed",
"oc get infrastructure cluster -o=jsonpath='{.status.infrastructureName}{\"\\n\"}'",
"oc apply -f <filename> 1",
"#!/bin/bash Credit goes to https://bugzilla.redhat.com/show_bug.cgi?id=1801238. This script will link Machine object and Node object. This is needed in order to have IP address of the Node present in the status of the Machine. set -e machine=\"USD1\" node=\"USD2\" if [ -z \"USDmachine\" ] || [ -z \"USDnode\" ]; then echo \"Usage: USD0 MACHINE NODE\" exit 1 fi node_name=USD(echo \"USD{node}\" | cut -f2 -d':') proxy & proxy_pid=USD! function kill_proxy { kill USDproxy_pid } trap kill_proxy EXIT SIGINT HOST_PROXY_API_PATH=\"http://localhost:8001/apis/metal3.io/v1alpha1/namespaces/openshift-machine-api/baremetalhosts\" function print_nics() { local ips local eob declare -a ips readarray -t ips < <(echo \"USD{1}\" | jq '.[] | select(. | .type == \"InternalIP\") | .address' | sed 's/\"//g') eob=',' for (( i=0; i<USD{#ips[@]}; i++ )); do if [ USD((i+1)) -eq USD{#ips[@]} ]; then eob=\"\" fi cat <<- EOF { \"ip\": \"USD{ips[USDi]}\", \"mac\": \"00:00:00:00:00:00\", \"model\": \"unknown\", \"speedGbps\": 10, \"vlanId\": 0, \"pxe\": true, \"name\": \"eth1\" }USD{eob} EOF done } function wait_for_json() { local name local url local curl_opts local timeout local start_time local curr_time local time_diff name=\"USD1\" url=\"USD2\" timeout=\"USD3\" shift 3 curl_opts=\"USD@\" echo -n \"Waiting for USDname to respond\" start_time=USD(date +%s) until curl -g -X GET \"USDurl\" \"USD{curl_opts[@]}\" 2> /dev/null | jq '.' 2> /dev/null > /dev/null; do echo -n \".\" curr_time=USD(date +%s) time_diff=USD((curr_time - start_time)) if [[ USDtime_diff -gt USDtimeout ]]; then printf '\\nTimed out waiting for %s' \"USD{name}\" return 1 fi sleep 5 done echo \" Success!\" return 0 } wait_for_json oc_proxy \"USD{HOST_PROXY_API_PATH}\" 10 -H \"Accept: application/json\" -H \"Content-Type: application/json\" addresses=USD(oc get node -n openshift-machine-api \"USD{node_name}\" -o json | jq -c '.status.addresses') machine_data=USD(oc get machines.machine.openshift.io -n openshift-machine-api -o json \"USD{machine}\") host=USD(echo \"USDmachine_data\" | jq '.metadata.annotations[\"metal3.io/BareMetalHost\"]' | cut -f2 -d/ | sed 's/\"//g') if [ -z \"USDhost\" ]; then echo \"Machine USDmachine is not linked to a host yet.\" 1>&2 exit 1 fi The address structure on the host doesn't match the node, so extract the values we want into separate variables so we can build the patch we need. hostname=USD(echo \"USD{addresses}\" | jq '.[] | select(. | .type == \"Hostname\") | .address' | sed 's/\"//g') set +e read -r -d '' host_patch << EOF { \"status\": { \"hardware\": { \"hostname\": \"USD{hostname}\", \"nics\": [ USD(print_nics \"USD{addresses}\") ], \"systemVendor\": { \"manufacturer\": \"Red Hat\", \"productName\": \"product name\", \"serialNumber\": \"\" }, \"firmware\": { \"bios\": { \"date\": \"04/01/2014\", \"vendor\": \"SeaBIOS\", \"version\": \"1.11.0-2.el7\" } }, \"ramMebibytes\": 0, \"storage\": [], \"cpu\": { \"arch\": \"x86_64\", \"model\": \"Intel(R) Xeon(R) CPU E5-2630 v4 @ 2.20GHz\", \"clockMegahertz\": 2199.998, \"count\": 4, \"flags\": [] } } } } EOF set -e echo \"PATCHING HOST\" echo \"USD{host_patch}\" | jq . curl -s -X PATCH \"USD{HOST_PROXY_API_PATH}/USD{host}/status\" -H \"Content-type: application/merge-patch+json\" -d \"USD{host_patch}\" get baremetalhost -n openshift-machine-api -o yaml \"USD{host}\"",
"chmod +x link-machine-and-node.sh",
"bash link-machine-and-node.sh node-5 node-5",
"oc rsh -n openshift-etcd etcd-node-0",
"etcdctl member list -w table",
"+--------+---------+--------+--------------+--------------+---------+ | ID | STATUS | NAME | PEER ADDRS | CLIENT ADDRS | LEARNER | +--------+---------+--------+--------------+--------------+---------+ |76ae1d00| started |node-0 |192.168.111.24|192.168.111.24| false | |2c18942f| started |node-1 |192.168.111.26|192.168.111.26| false | |61e2a860| started |node-2 |192.168.111.25|192.168.111.25| false | |ead5f280| started |node-5 |192.168.111.28|192.168.111.28| false | +--------+---------+--------+--------------+--------------+---------+",
"oc get clusteroperator etcd",
"NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE MESSAGE etcd 4.11.5 True False False 5h54m",
"oc rsh -n openshift-etcd etcd-node-0",
"etcdctl endpoint health",
"192.168.111.24 is healthy: committed proposal: took = 10.383651ms 192.168.111.26 is healthy: committed proposal: took = 11.297561ms 192.168.111.25 is healthy: committed proposal: took = 13.892416ms 192.168.111.28 is healthy: committed proposal: took = 11.870755ms",
"oc get nodes",
"NAME STATUS ROLES AGE VERSION node-0 Ready master 6h20m v1.24.0+3882f8f node-1 Ready master 6h20m v1.24.0+3882f8f node-2 Ready master 6h4m v1.24.0+3882f8f node-3 Ready worker 6h7m v1.24.0+3882f8f node-4 Ready worker 6h7m v1.24.0+3882f8f node-5 Ready master 99m v1.24.0+3882f8f",
"oc get ClusterOperators",
"NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE MSG authentication 4.11.5 True False False 5h57m baremetal 4.11.5 True False False 6h19m cloud-controller-manager 4.11.5 True False False 6h20m cloud-credential 4.11.5 True False False 6h23m cluster-autoscaler 4.11.5 True False False 6h18m config-operator 4.11.5 True False False 6h19m console 4.11.5 True False False 6h4m csi-snapshot-controller 4.11.5 True False False 6h19m dns 4.11.5 True False False 6h18m etcd 4.11.5 True False False 6h17m image-registry 4.11.5 True False False 6h7m ingress 4.11.5 True False False 6h6m insights 4.11.5 True False False 6h12m kube-apiserver 4.11.5 True False False 6h16m kube-controller-manager 4.11.5 True False False 6h16m kube-scheduler 4.11.5 True False False 6h16m kube-storage-version-migrator 4.11.5 True False False 6h19m machine-api 4.11.5 True False False 6h15m machine-approver 4.11.5 True False False 6h19m machine-config 4.11.5 True False False 6h18m marketplace 4.11.5 True False False 6h18m monitoring 4.11.5 True False False 6h4m network 4.11.5 True False False 6h20m node-tuning 4.11.5 True False False 6h18m openshift-apiserver 4.11.5 True False False 6h8m openshift-controller-manager 4.11.5 True False False 6h7m openshift-samples 4.11.5 True False False 6h12m operator-lifecycle-manager 4.11.5 True False False 6h18m operator-lifecycle-manager-catalog 4.11.5 True False False 6h19m operator-lifecycle-manager-pkgsvr 4.11.5 True False False 6h12m service-ca 4.11.5 True False False 6h19m storage 4.11.5 True False False 6h19m",
"oc get ClusterVersion",
"NAME VERSION AVAILABLE PROGRESSING SINCE STATUS version 4.11.5 True False 5h57m Cluster version is 4.11.5",
"oc delete bmh -n openshift-machine-api node-0",
"oc get machine -A",
"NAMESPACE NAME PHASE AGE openshift-machine-api node-0 Failed 20h openshift-machine-api node-1 Running 20h openshift-machine-api node-2 Running 20h openshift-machine-api node-3 Running 19h openshift-machine-api node-4 Running 19h openshift-machine-api node-5 Running 14h",
"oc delete machine -n openshift-machine-api node-0 machine.machine.openshift.io \"node-0\" deleted",
"oc get nodes",
"NAME STATUS ROLES AGE VERSION node-1 Ready master 20h v1.24.0+3882f8f node-2 Ready master 19h v1.24.0+3882f8f node-3 Ready worker 19h v1.24.0+3882f8f node-4 Ready worker 19h v1.24.0+3882f8f node-5 Ready master 15h v1.24.0+3882f8f",
"oc logs -n openshift-etcd-operator etcd-operator-8668df65d-lvpjf",
"E0927 07:53:10.597523 1 base_controller.go:272] ClusterMemberRemovalController reconciliation failed: cannot remove member: 192.168.111.23 because it is reported as healthy but it doesn't have a machine nor a node resource",
"oc rsh -n openshift-etcd etcd-node-1",
"etcdctl member list -w table; etcdctl endpoint health",
"+--------+---------+--------+--------------+--------------+---------+ | ID | STATUS | NAME | PEER ADDRS | CLIENT ADDRS | LEARNER | +--------+---------+--------+--------------+--------------+---------+ |2c18942f| started | node-1 |192.168.111.26|192.168.111.26| false | |61e2a860| started | node-2 |192.168.111.25|192.168.111.25| false | |ead4f280| started | node-5 |192.168.111.28|192.168.111.28| false | +--------+---------+--------+--------------+--------------+---------+ 192.168.111.26 is healthy: committed proposal: took = 10.458132ms 192.168.111.25 is healthy: committed proposal: took = 11.047349ms 192.168.111.28 is healthy: committed proposal: took = 11.414402ms",
"oc get nodes",
"NAME STATUS ROLES AGE VERSION node-0 NotReady master 20h v1.24.0+3882f8f node-1 Ready master 20h v1.24.0+3882f8f node-2 Ready master 20h v1.24.0+3882f8f node-3 Ready worker 20h v1.24.0+3882f8f node-4 Ready worker 20h v1.24.0+3882f8f",
"oc logs -n openshift-etcd-operator etcd-operator deployment/etcd-operator",
"E0927 08:24:23.983733 1 base_controller.go:272] DefragController reconciliation failed: cluster is unhealthy: 2 of 3 members are available, node-0 is unhealthy",
"oc rsh -n openshift-etcd node-1",
"etcdctl member list -w table",
"+--------+---------+--------+--------------+--------------+---------+ | ID | STATUS | NAME | PEER ADDRS | CLIENT ADDRS | LEARNER | +--------+---------+--------+--------------+--------------+---------+ |61e2a860| started | node-0 |192.168.111.25|192.168.111.25| false | |2c18942f| started | node-1 |192.168.111.26|192.168.111.26| false | |ead4f280| started | node-2 |192.168.111.28|192.168.111.28| false | +--------+---------+--------+--------------+--------------+---------+",
"etcdctl endpoint health",
"{\"level\":\"warn\",\"ts\":\"2022-09-27T08:25:35.953Z\",\"logger\":\"client\",\"caller\":\"v3/retry_interceptor.go:62\",\"msg\":\"retrying of unary invoker failed\",\"target\":\"etcd-endpoints://0xc000680380/192.168.111.25\",\"attempt\":0,\"error\":\"rpc error: code = DeadlineExceeded desc = latest balancer error: last connection error: connection error: desc = \\\"transport: Error while dialing dial tcp 192.168.111.25: connect: no route to host\\\"\"} 192.168.111.28 is healthy: committed proposal: took = 12.465641ms 192.168.111.26 is healthy: committed proposal: took = 12.297059ms 192.168.111.25 is unhealthy: failed to commit proposal: context deadline exceeded Error: unhealthy cluster",
"oc delete machine -n openshift-machine-api node-0",
"oc logs -n openshift-etcd-operator etcd-operator deployment/ettcd-operator",
"I0927 08:58:41.249222 1 machinedeletionhooks.go:135] skip removing the deletion hook from machine node-0 since its member is still present with any of: [{InternalIP } {InternalIP 192.168.111.25}]",
"oc rsh -n openshift-etcd node-1",
"etcdctl member list -w table",
"+--------+---------+--------+--------------+--------------+---------+ | ID | STATUS | NAME | PEER ADDRS | CLIENT ADDRS | LEARNER | +--------+---------+--------+--------------+--------------+---------+ |61e2a860| started | node-0 |192.168.111.25|192.168.111.25| false | |2c18942f| started | node-1 |192.168.111.26|192.168.111.26| false | |ead4f280| started | node-2 |192.168.111.28|192.168.111.28| false | +--------+---------+--------+--------------+--------------+---------+",
"etcdctl endpoint health",
"{\"level\":\"warn\",\"ts\":\"2022-09-27T10:31:07.227Z\",\"logger\":\"client\",\"caller\":\"v3/retry_interceptor.go:62\",\"msg\":\"retrying of unary invoker failed\",\"target\":\"etcd-endpoints://0xc0000d6e00/192.168.111.25\",\"attempt\":0,\"error\":\"rpc error: code = DeadlineExceeded desc = latest balancer error: last connection error: connection error: desc = \\\"transport: Error while dialing dial tcp 192.168.111.25: connect: no route to host\\\"\"} 192.168.111.28 is healthy: committed proposal: took = 13.038278ms 192.168.111.26 is healthy: committed proposal: took = 12.950355ms 192.168.111.25 is unhealthy: failed to commit proposal: context deadline exceeded Error: unhealthy cluster",
"etcdctl member remove 61e2a86084aafa62",
"Member 61e2a86084aafa62 removed from cluster 6881c977b97990d7",
"etcdctl member list -w table",
"+----------+---------+--------+--------------+--------------+-------+ | ID | STATUS | NAME | PEER ADDRS | CLIENT ADDRS |LEARNER| +----------+---------+--------+--------------+--------------+-------+ | 2c18942f | started | node-1 |192.168.111.26|192.168.111.26| false | | ead4f280 | started | node-2 |192.168.111.28|192.168.111.28| false | +----------+---------+--------+--------------+--------------+-------+",
"oc get csr | grep Pending",
"csr-5sd59 8m19s kubernetes.io/kube-apiserver-client-kubelet system:serviceaccount:openshift-machine-config-operator:node-bootstrapper <none> Pending csr-xzqts 10s kubernetes.io/kubelet-serving system:node:node-5 <none> Pending",
"oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve",
"oc get nodes",
"NAME STATUS ROLES AGE VERSION node-1 Ready master 20h v1.24.0+3882f8f node-2 Ready master 20h v1.24.0+3882f8f node-3 Ready worker 20h v1.24.0+3882f8f node-4 Ready worker 20h v1.24.0+3882f8f node-5 Ready master 2m52s v1.24.0+3882f8f",
"apiVersion: metal3.io/v1alpha1 kind: BareMetalHost metadata: name: node-5 namespace: openshift-machine-api spec: automatedCleaningMode: metadata bootMACAddress: 00:00:00:00:00:02 bootMode: UEFI customDeploy: method: install_coreos externallyProvisioned: true online: true userData: name: master-user-data-managed namespace: openshift-machine-api",
"oc apply -f <filename> 1",
"apiVersion: machine.openshift.io/v1beta1 kind: Machine metadata: annotations: machine.openshift.io/instance-state: externally provisioned metal3.io/BareMetalHost: openshift-machine-api/node-5 finalizers: - machine.machine.openshift.io labels: machine.openshift.io/cluster-api-cluster: test-day2-1-6qv96 machine.openshift.io/cluster-api-machine-role: master machine.openshift.io/cluster-api-machine-type: master name: node-5 namespace: openshift-machine-api spec: metadata: {} providerSpec: value: apiVersion: baremetal.cluster.k8s.io/v1alpha1 customDeploy: method: install_coreos hostSelector: {} image: checksum: \"\" url: \"\" kind: BareMetalMachineProviderSpec metadata: creationTimestamp: null userData: name: master-user-data-managed",
"oc apply -f <filename> 1",
"#!/bin/bash Credit goes to https://bugzilla.redhat.com/show_bug.cgi?id=1801238. This script will link Machine object and Node object. This is needed in order to have IP address of the Node present in the status of the Machine. set -e machine=\"USD1\" node=\"USD2\" if [ -z \"USDmachine\" ] || [ -z \"USDnode\" ]; then echo \"Usage: USD0 MACHINE NODE\" exit 1 fi node_name=USD(echo \"USD{node}\" | cut -f2 -d':') proxy & proxy_pid=USD! function kill_proxy { kill USDproxy_pid } trap kill_proxy EXIT SIGINT HOST_PROXY_API_PATH=\"http://localhost:8001/apis/metal3.io/v1alpha1/namespaces/openshift-machine-api/baremetalhosts\" function print_nics() { local ips local eob declare -a ips readarray -t ips < <(echo \"USD{1}\" | jq '.[] | select(. | .type == \"InternalIP\") | .address' | sed 's/\"//g') eob=',' for (( i=0; i<USD{#ips[@]}; i++ )); do if [ USD((i+1)) -eq USD{#ips[@]} ]; then eob=\"\" fi cat <<- EOF { \"ip\": \"USD{ips[USDi]}\", \"mac\": \"00:00:00:00:00:00\", \"model\": \"unknown\", \"speedGbps\": 10, \"vlanId\": 0, \"pxe\": true, \"name\": \"eth1\" }USD{eob} EOF done } function wait_for_json() { local name local url local curl_opts local timeout local start_time local curr_time local time_diff name=\"USD1\" url=\"USD2\" timeout=\"USD3\" shift 3 curl_opts=\"USD@\" echo -n \"Waiting for USDname to respond\" start_time=USD(date +%s) until curl -g -X GET \"USDurl\" \"USD{curl_opts[@]}\" 2> /dev/null | jq '.' 2> /dev/null > /dev/null; do echo -n \".\" curr_time=USD(date +%s) time_diff=USD((curr_time - start_time)) if [[ USDtime_diff -gt USDtimeout ]]; then printf '\\nTimed out waiting for %s' \"USD{name}\" return 1 fi sleep 5 done echo \" Success!\" return 0 } wait_for_json oc_proxy \"USD{HOST_PROXY_API_PATH}\" 10 -H \"Accept: application/json\" -H \"Content-Type: application/json\" addresses=USD(oc get node -n openshift-machine-api \"USD{node_name}\" -o json | jq -c '.status.addresses') machine_data=USD(oc get machines.machine.openshift.io -n openshift-machine-api -o json \"USD{machine}\") host=USD(echo \"USDmachine_data\" | jq '.metadata.annotations[\"metal3.io/BareMetalHost\"]' | cut -f2 -d/ | sed 's/\"//g') if [ -z \"USDhost\" ]; then echo \"Machine USDmachine is not linked to a host yet.\" 1>&2 exit 1 fi The address structure on the host doesn't match the node, so extract the values we want into separate variables so we can build the patch we need. hostname=USD(echo \"USD{addresses}\" | jq '.[] | select(. | .type == \"Hostname\") | .address' | sed 's/\"//g') set +e read -r -d '' host_patch << EOF { \"status\": { \"hardware\": { \"hostname\": \"USD{hostname}\", \"nics\": [ USD(print_nics \"USD{addresses}\") ], \"systemVendor\": { \"manufacturer\": \"Red Hat\", \"productName\": \"product name\", \"serialNumber\": \"\" }, \"firmware\": { \"bios\": { \"date\": \"04/01/2014\", \"vendor\": \"SeaBIOS\", \"version\": \"1.11.0-2.el7\" } }, \"ramMebibytes\": 0, \"storage\": [], \"cpu\": { \"arch\": \"x86_64\", \"model\": \"Intel(R) Xeon(R) CPU E5-2630 v4 @ 2.20GHz\", \"clockMegahertz\": 2199.998, \"count\": 4, \"flags\": [] } } } } EOF set -e echo \"PATCHING HOST\" echo \"USD{host_patch}\" | jq . curl -s -X PATCH \"USD{HOST_PROXY_API_PATH}/USD{host}/status\" -H \"Content-type: application/merge-patch+json\" -d \"USD{host_patch}\" get baremetalhost -n openshift-machine-api -o yaml \"USD{host}\"",
"chmod +x link-machine-and-node.sh",
"bash link-machine-and-node.sh node-5 node-5",
"oc rsh -n openshift-etcd node-1",
"etcdctl member list -w table",
"+---------+-------+--------+--------------+--------------+-------+ | ID | STATUS| NAME | PEER ADDRS | CLIENT ADDRS |LEARNER| +---------+-------+--------+--------------+--------------+-------+ | 2c18942f|started| node-1 |192.168.111.26|192.168.111.26| false | | ead4f280|started| node-2 |192.168.111.28|192.168.111.28| false | | 79153c5a|started| node-5 |192.168.111.29|192.168.111.29| false | +---------+-------+--------+--------------+--------------+-------+",
"oc get clusteroperator etcd",
"NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE etcd 4.11.5 True False False 22h",
"oc rsh -n openshift-etcd node-1",
"etcdctl endpoint health",
"192.168.111.26 is healthy: committed proposal: took = 9.105375ms 192.168.111.28 is healthy: committed proposal: took = 9.15205ms 192.168.111.29 is healthy: committed proposal: took = 10.277577ms",
"oc get Nodes",
"NAME STATUS ROLES AGE VERSION node-1 Ready master 20h v1.24.0+3882f8f node-2 Ready master 20h v1.24.0+3882f8f node-3 Ready worker 20h v1.24.0+3882f8f node-4 Ready worker 20h v1.24.0+3882f8f node-5 Ready master 40m v1.24.0+3882f8f",
"oc get ClusterOperators",
"NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE authentication 4.11.5 True False False 150m baremetal 4.11.5 True False False 22h cloud-controller-manager 4.11.5 True False False 22h cloud-credential 4.11.5 True False False 22h cluster-autoscaler 4.11.5 True False False 22h config-operator 4.11.5 True False False 22h console 4.11.5 True False False 145m csi-snapshot-controller 4.11.5 True False False 22h dns 4.11.5 True False False 22h etcd 4.11.5 True False False 22h image-registry 4.11.5 True False False 22h ingress 4.11.5 True False False 22h insights 4.11.5 True False False 22h kube-apiserver 4.11.5 True False False 22h kube-controller-manager 4.11.5 True False False 22h kube-scheduler 4.11.5 True False False 22h kube-storage-version-migrator 4.11.5 True False False 148m machine-api 4.11.5 True False False 22h machine-approver 4.11.5 True False False 22h machine-config 4.11.5 True False False 110m marketplace 4.11.5 True False False 22h monitoring 4.11.5 True False False 22h network 4.11.5 True False False 22h node-tuning 4.11.5 True False False 22h openshift-apiserver 4.11.5 True False False 163m openshift-controller-manager 4.11.5 True False False 22h openshift-samples 4.11.5 True False False 22h operator-lifecycle-manager 4.11.5 True False False 22h operator-lifecycle-manager-catalog 4.11.5 True False False 22h operator-lifecycle-manager-pkgsvr 4.11.5 True False False 22h service-ca 4.11.5 True False False 22h storage 4.11.5 True False False 22h",
"oc get ClusterVersion",
"NAME VERSION AVAILABLE PROGRESSING SINCE STATUS version 4.11.5 True False 22h Cluster version is 4.11.5"
] | https://docs.redhat.com/en/documentation/openshift_container_platform_installation/4.16/html/installing_openshift_container_platform_with_the_assisted_installer/expanding-the-cluster |
Chapter 3. Post-installation machine configuration tasks | Chapter 3. Post-installation machine configuration tasks There are times when you need to make changes to the operating systems running on OpenShift Container Platform nodes. This can include changing settings for network time service, adding kernel arguments, or configuring journaling in a specific way. Aside from a few specialized features, most changes to operating systems on OpenShift Container Platform nodes can be done by creating what are referred to as MachineConfig objects that are managed by the Machine Config Operator. Tasks in this section describe how to use features of the Machine Config Operator to configure operating system features on OpenShift Container Platform nodes. 3.1. Understanding the Machine Config Operator 3.1.1. Machine Config Operator Purpose The Machine Config Operator manages and applies configuration and updates of the base operating system and container runtime, including everything between the kernel and kubelet. There are four components: machine-config-server : Provides Ignition configuration to new machines joining the cluster. machine-config-controller : Coordinates the upgrade of machines to the desired configurations defined by a MachineConfig object. Options are provided to control the upgrade for sets of machines individually. machine-config-daemon : Applies new machine configuration during update. Validates and verifies the state of the machine to the requested machine configuration. machine-config : Provides a complete source of machine configuration at installation, first start up, and updates for a machine. Project openshift-machine-config-operator 3.1.2. Machine config overview The Machine Config Operator (MCO) manages updates to systemd, CRI-O and Kubelet, the kernel, Network Manager and other system features. It also offers a MachineConfig CRD that can write configuration files onto the host (see machine-config-operator ). Understanding what MCO does and how it interacts with other components is critical to making advanced, system-level changes to an OpenShift Container Platform cluster. Here are some things you should know about MCO, machine configs, and how they are used: A machine config can make a specific change to a file or service on the operating system of each system representing a pool of OpenShift Container Platform nodes. MCO applies changes to operating systems in pools of machines. All OpenShift Container Platform clusters start with worker and control plane node (also known as the master node) pools. By adding more role labels, you can configure custom pools of nodes. For example, you can set up a custom pool of worker nodes that includes particular hardware features needed by an application. However, examples in this section focus on changes to the default pool types. Important A node can have multiple labels applied that indicate its type, such as master or worker , however it can be a member of only a single machine config pool. Some machine configuration must be in place before OpenShift Container Platform is installed to disk. In most cases, this can be accomplished by creating a machine config that is injected directly into the OpenShift Container Platform installer process, instead of running as a post-installation machine config. 
In other cases, you might need to do bare metal installation where you pass kernel arguments at OpenShift Container Platform installer startup, to do such things as setting per-node individual IP addresses or advanced disk partitioning. MCO manages items that are set in machine configs. Manual changes you do to your systems will not be overwritten by MCO, unless MCO is explicitly told to manage a conflicting file. In other words, MCO only makes specific updates you request, it does not claim control over the whole node. Manual changes to nodes are strongly discouraged. If you need to decommission a node and start a new one, those direct changes would be lost. MCO is only supported for writing to files in /etc and /var directories, although there are symbolic links to some directories that can be writeable by being symbolically linked to one of those areas. The /opt and /usr/local directories are examples. Ignition is the configuration format used in MachineConfigs. See the Ignition Configuration Specification v3.2.0 for details. Although Ignition config settings can be delivered directly at OpenShift Container Platform installation time, and are formatted in the same way that MCO delivers Ignition configs, MCO has no way of seeing what those original Ignition configs are. Therefore, you should wrap Ignition config settings into a machine config before deploying them. When a file managed by MCO changes outside of MCO, the Machine Config Daemon (MCD) sets the node as degraded . It will not overwrite the offending file, however, and should continue to operate in a degraded state. A key reason for using a machine config is that it will be applied when you spin up new nodes for a pool in your OpenShift Container Platform cluster. The machine-api-operator provisions a new machine and MCO configures it. MCO uses Ignition as the configuration format. OpenShift Container Platform 4.6 moved from Ignition config specification version 2 to version 3. 3.1.2.1. What can you change with machine configs? The kinds of components that MCO can change include: config : Create Ignition config objects (see the Ignition configuration specification ) to do things like modify files, systemd services, and other features on OpenShift Container Platform machines, including: Configuration files : Create or overwrite files in the /var or /etc directory. systemd units : Create and set the status of a systemd service or add to an existing systemd service by dropping in additional settings. users and groups : Change SSH keys in the passwd section post-installation. Important Changing SSH keys via machine configs is only supported for the core user. kernelArguments : Add arguments to the kernel command line when OpenShift Container Platform nodes boot. kernelType : Optionally identify a non-standard kernel to use instead of the standard kernel. Use realtime to use the RT kernel (for RAN). This is only supported on select platforms. fips : Enable FIPS mode. FIPS should be set at installation-time setting and not a post-installation procedure. Important The use of FIPS Validated / Modules in Process cryptographic libraries is only supported on OpenShift Container Platform deployments on the x86_64 architecture. extensions : Extend RHCOS features by adding selected pre-packaged software. For this feature, available extensions include usbguard and kernel modules. 
Custom resources (for ContainerRuntime and Kubelet ) : Outside of machine configs, MCO manages two special custom resources for modifying CRI-O container runtime settings ( ContainerRuntime CR) and the Kubelet service ( Kubelet CR). The MCO is not the only Operator that can change operating system components on OpenShift Container Platform nodes. Other Operators can modify operating system-level features as well. One example is the Node Tuning Operator, which allows you to do node-level tuning through Tuned daemon profiles. Tasks for the MCO configuration that can be done post-installation are included in the following procedures. See descriptions of RHCOS bare metal installation for system configuration tasks that must be done during or before OpenShift Container Platform installation. 3.1.2.2. Project See the openshift-machine-config-operator GitHub site for details. 3.1.3. Checking machine config pool status To see the status of the Machine Config Operator (MCO), its sub-components, and the resources it manages, use the following oc commands: Procedure To see the number of MCO-managed nodes available on your cluster for each machine config pool (MCP), run the following command: USD oc get machineconfigpool Example output NAME CONFIG UPDATED UPDATING DEGRADED MACHINECOUNT READYMACHINECOUNT UPDATEDMACHINECOUNT DEGRADEDMACHINECOUNT AGE master rendered-master-06c9c4... True False False 3 3 3 0 4h42m worker rendered-worker-f4b64... False True False 3 2 2 0 4h42m where: UPDATED The True status indicates that the MCO has applied the current machine config to the nodes in that MCP. The current machine config is specified in the STATUS field in the oc get mcp output. The False status indicates a node in the MCP is updating. UPDATING The True status indicates that the MCO is applying the desired machine config, as specified in the MachineConfigPool custom resource, to at least one of the nodes in that MCP. The desired machine config is the new, edited machine config. Nodes that are updating might not be available for scheduling. The False status indicates that all nodes in the MCP are updated. DEGRADED A True status indicates the MCO is blocked from applying the current or desired machine config to at least one of the nodes in that MCP, or the configuration is failing. Nodes that are degraded might not be available for scheduling. A False status indicates that all nodes in the MCP are ready. MACHINECOUNT Indicates the total number of machines in that MCP. READYMACHINECOUNT Indicates the total number of machines in that MCP that are ready for scheduling. UPDATEDMACHINECOUNT Indicates the total number of machines in that MCP that have the current machine config. DEGRADEDMACHINECOUNT Indicates the total number of machines in that MCP that are marked as degraded or unreconcilable. In the output, there are three control plane (master) nodes and three worker nodes. The control plane MCP and the associated nodes are updated to the current machine config. The nodes in the worker MCP are being updated to the desired machine config. Two of the nodes in the worker MCP are updated and one is still updating, as indicated by the UPDATEDMACHINECOUNT being 2 . There are no issues, as indicated by the DEGRADEDMACHINECOUNT being 0 and DEGRADED being False . While the nodes in the MCP are updating, the machine config listed under CONFIG is the current machine config, which the MCP is being updated from. 
When the update is complete, the listed machine config is the desired machine config, which the MCP was updated to. Note If a node is being cordoned, that node is not included in the READYMACHINECOUNT , but is included in the MACHINECOUNT . Also, the MCP status is set to UPDATING . Because the node has the current machine config, it is counted in the UPDATEDMACHINECOUNT total: Example output NAME CONFIG UPDATED UPDATING DEGRADED MACHINECOUNT READYMACHINECOUNT UPDATEDMACHINECOUNT DEGRADEDMACHINECOUNT AGE master rendered-master-06c9c4... True False False 3 3 3 0 4h42m worker rendered-worker-c1b41a... False True False 3 2 3 0 4h42m To check the status of the nodes in an MCP by examining the MachineConfigPool custom resource, run the following command: : USD oc describe mcp worker Example output ... Degraded Machine Count: 0 Machine Count: 3 Observed Generation: 2 Ready Machine Count: 3 Unavailable Machine Count: 0 Updated Machine Count: 3 Events: <none> Note If a node is being cordoned, the node is not included in the Ready Machine Count . It is included in the Unavailable Machine Count : Example output ... Degraded Machine Count: 0 Machine Count: 3 Observed Generation: 2 Ready Machine Count: 2 Unavailable Machine Count: 1 Updated Machine Count: 3 To see each existing MachineConfig object, run the following command: USD oc get machineconfigs Example output NAME GENERATEDBYCONTROLLER IGNITIONVERSION AGE 00-master 2c9371fbb673b97a6fe8b1c52... 3.2.0 5h18m 00-worker 2c9371fbb673b97a6fe8b1c52... 3.2.0 5h18m 01-master-container-runtime 2c9371fbb673b97a6fe8b1c52... 3.2.0 5h18m 01-master-kubelet 2c9371fbb673b97a6fe8b1c52... 3.2.0 5h18m ... rendered-master-dde... 2c9371fbb673b97a6fe8b1c52... 3.2.0 5h18m rendered-worker-fde... 2c9371fbb673b97a6fe8b1c52... 3.2.0 5h18m Note that the MachineConfig objects listed as rendered are not meant to be changed or deleted. To view the contents of a particular machine config (in this case, 01-master-kubelet ), run the following command: USD oc describe machineconfigs 01-master-kubelet The output from the command shows that this MachineConfig object contains both configuration files ( cloud.conf and kubelet.conf ) and a systemd service (Kubernetes Kubelet): Example output Name: 01-master-kubelet ... Spec: Config: Ignition: Version: 3.2.0 Storage: Files: Contents: Source: data:, Mode: 420 Overwrite: true Path: /etc/kubernetes/cloud.conf Contents: Source: data:,kind%3A%20KubeletConfiguration%0AapiVersion%3A%20kubelet.config.k8s.io%2Fv1beta1%0Aauthentication%3A%0A%20%20x509%3A%0A%20%20%20%20clientCAFile%3A%20%2Fetc%2Fkubernetes%2Fkubelet-ca.crt%0A%20%20anonymous... Mode: 420 Overwrite: true Path: /etc/kubernetes/kubelet.conf Systemd: Units: Contents: [Unit] Description=Kubernetes Kubelet Wants=rpc-statd.service network-online.target crio.service After=network-online.target crio.service ExecStart=/usr/bin/hyperkube \ kubelet \ --config=/etc/kubernetes/kubelet.conf \ ... If something goes wrong with a machine config that you apply, you can always back out that change. For example, if you had run oc create -f ./myconfig.yaml to apply a machine config, you could remove that machine config by running the following command: USD oc delete -f ./myconfig.yaml If that was the only problem, the nodes in the affected pool should return to a non-degraded state. This actually causes the rendered configuration to roll back to its previously rendered state. 
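To follow an individual node while a change is rolled out or rolled back, one convenient check (a suggested addition, not a command taken from this procedure) is to read the annotations that the MCO keeps on each node, which record its current and desired rendered configs and its overall state:

USD oc get node <node_name> -o jsonpath='{.metadata.annotations.machineconfiguration\.openshift\.io/state}{"\n"}'

The state is reported as Working while the node is applying a config and returns to Done once the currentConfig and desiredConfig annotations on the node match again.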
If you add your own machine configs to your cluster, you can use the commands shown in the example to check their status and the related status of the pool to which they are applied. 3.2. Using MachineConfig objects to configure nodes You can use the tasks in this section to create MachineConfig objects that modify files, systemd unit files, and other operating system features running on OpenShift Container Platform nodes. For more ideas on working with machine configs, see content related to updating SSH authorized keys, verifying image signatures , enabling SCTP , and configuring iSCSI initiatornames for OpenShift Container Platform. OpenShift Container Platform version 4.7 supports Ignition specification version 3.2 . All new machine configs you create going forward should be based on Ignition specification version 3.2. If you are upgrading your OpenShift Container Platform cluster, any existing Ignition specification version 2.x machine configs will be translated automatically to specification version 3.2. Tip Use the following "Configuring chrony time service" procedure as a model for how to go about adding other configuration files to OpenShift Container Platform nodes. 3.2.1. Configuring chrony time service You can set the time server and related settings used by the chrony time service ( chronyd ) by modifying the contents of the chrony.conf file and passing those contents to your nodes as a machine config. Procedure Create the contents of the chrony.conf file and encode it as base64. For example: USD cat << EOF | base64 pool 0.rhel.pool.ntp.org iburst 1 driftfile /var/lib/chrony/drift makestep 1.0 3 rtcsync logdir /var/log/chrony EOF 1 Specify any valid, reachable time source, such as the one provided by your DHCP server. Alternately, you can specify any of the following NTP servers: 1.rhel.pool.ntp.org , 2.rhel.pool.ntp.org , or 3.rhel.pool.ntp.org . Example output ICAgIHNlcnZlciBjbG9jay5yZWRoYXQuY29tIGlidXJzdAogICAgZHJpZnRmaWxlIC92YXIvbGli L2Nocm9ueS9kcmlmdAogICAgbWFrZXN0ZXAgMS4wIDMKICAgIHJ0Y3N5bmMKICAgIGxvZ2RpciAv dmFyL2xvZy9jaHJvbnkK Create the MachineConfig object file, replacing the base64 string with the one you just created. This example adds the file to master nodes. You can change it to worker or make an additional MachineConfig for the worker role. Create MachineConfig files for each type of machine that your cluster uses: USD cat << EOF > ./99-masters-chrony-configuration.yaml apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: master name: 99-masters-chrony-configuration spec: config: ignition: config: {} security: tls: {} timeouts: {} version: 3.2.0 networkd: {} passwd: {} storage: files: - contents: source: data:text/plain;charset=utf-8;base64,ICAgIHNlcnZlciBjbG9jay5yZWRoYXQuY29tIGlidXJzdAogICAgZHJpZnRmaWxlIC92YXIvbGliL2Nocm9ueS9kcmlmdAogICAgbWFrZXN0ZXAgMS4wIDMKICAgIHJ0Y3N5bmMKICAgIGxvZ2RpciAvdmFyL2xvZy9jaHJvbnkK mode: 420 1 overwrite: true path: /etc/chrony.conf osImageURL: "" EOF 1 Specify an octal value mode for the mode field in the machine config file. After creating the file and applying the changes, the mode is converted to a decimal value. You can check the YAML file with the command oc get mc <mc-name> -o yaml . Make a backup copy of the configuration files. Apply the configurations in one of two ways: If the cluster is not up yet, after you generate manifest files, add this file to the <installation_directory>/openshift directory, and then continue to create the cluster. 
If the cluster is already running, apply the file: USD oc apply -f ./99-masters-chrony-configuration.yaml 3.2.2. Adding kernel arguments to nodes In some special cases, you might want to add kernel arguments to a set of nodes in your cluster. This should only be done with caution and clear understanding of the implications of the arguments you set. Warning Improper use of kernel arguments can result in your systems becoming unbootable. Examples of kernel arguments you could set include: enforcing=0 : Configures Security Enhanced Linux (SELinux) to run in permissive mode. In permissive mode, the system acts as if SELinux is enforcing the loaded security policy, including labeling objects and emitting access denial entries in the logs, but it does not actually deny any operations. While not supported for production systems, permissive mode can be helpful for debugging. nosmt : Disables symmetric multithreading (SMT) in the kernel. Multithreading allows multiple logical threads for each CPU. You could consider nosmt in multi-tenant environments to reduce risks from potential cross-thread attacks. By disabling SMT, you essentially choose security over performance. See Kernel.org kernel parameters for a list and descriptions of kernel arguments. In the following procedure, you create a MachineConfig object that identifies: A set of machines to which you want to add the kernel argument. In this case, machines with a worker role. Kernel arguments that are appended to the end of the existing kernel arguments. A label that indicates where in the list of machine configs the change is applied. Prerequisites Have administrative privilege to a working OpenShift Container Platform cluster. Procedure List existing MachineConfig objects for your OpenShift Container Platform cluster to determine how to label your machine config: USD oc get MachineConfig Example output NAME GENERATEDBYCONTROLLER IGNITIONVERSION AGE 00-master 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 00-worker 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 01-master-container-runtime 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 01-master-kubelet 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 01-worker-container-runtime 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 01-worker-kubelet 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 99-master-generated-registries 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 99-master-ssh 3.2.0 40m 99-worker-generated-registries 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 99-worker-ssh 3.2.0 40m rendered-master-23e785de7587df95a4b517e0647e5ab7 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m rendered-worker-5d596d9293ca3ea80c896a1191735bb1 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m Create a MachineConfig object file that identifies the kernel argument (for example, 05-worker-kernelarg-selinuxpermissive.yaml ) apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: worker 1 name: 05-worker-kernelarg-selinuxpermissive 2 spec: config: ignition: version: 3.2.0 kernelArguments: - enforcing=0 3 1 Applies the new kernel argument only to worker nodes. 2 Named to identify where it fits among the machine configs (05) and what it does (adds a kernel argument to configure SELinux permissive mode). 3 Identifies the exact kernel argument as enforcing=0 . 
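Because kernelArguments is a list, a single machine config can append more than one argument. As an illustration only (this variant and its name are not used elsewhere in the procedure), the SELinux setting could be combined with the nosmt argument mentioned earlier in this section:

apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
  labels:
    machineconfiguration.openshift.io/role: worker
  name: 05-worker-kernelargs-example # illustrative name only
spec:
  config:
    ignition:
      version: 3.2.0
  kernelArguments:
    - enforcing=0
    - nosmt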
Create the new machine config: USD oc create -f 05-worker-kernelarg-selinuxpermissive.yaml Check the machine configs to see that the new one was added: USD oc get MachineConfig Example output NAME GENERATEDBYCONTROLLER IGNITIONVERSION AGE 00-master 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 00-worker 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 01-master-container-runtime 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 01-master-kubelet 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 01-worker-container-runtime 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 01-worker-kubelet 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 05-worker-kernelarg-selinuxpermissive 3.2.0 105s 99-master-generated-registries 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 99-master-ssh 3.2.0 40m 99-worker-generated-registries 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 99-worker-ssh 3.2.0 40m rendered-master-23e785de7587df95a4b517e0647e5ab7 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m rendered-worker-5d596d9293ca3ea80c896a1191735bb1 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m Check the nodes: USD oc get nodes Example output NAME STATUS ROLES AGE VERSION ip-10-0-136-161.ec2.internal Ready worker 28m v1.20.0 ip-10-0-136-243.ec2.internal Ready master 34m v1.20.0 ip-10-0-141-105.ec2.internal Ready,SchedulingDisabled worker 28m v1.20.0 ip-10-0-142-249.ec2.internal Ready master 34m v1.20.0 ip-10-0-153-11.ec2.internal Ready worker 28m v1.20.0 ip-10-0-153-150.ec2.internal Ready master 34m v1.20.0 You can see that scheduling on each worker node is disabled as the change is being applied. Check that the kernel argument worked by going to one of the worker nodes and listing the kernel command line arguments (in /proc/cmdline on the host): USD oc debug node/ip-10-0-141-105.ec2.internal Example output Starting pod/ip-10-0-141-105ec2internal-debug ... To use host binaries, run `chroot /host` sh-4.2# cat /host/proc/cmdline BOOT_IMAGE=/ostree/rhcos-... console=tty0 console=ttyS0,115200n8 rootflags=defaults,prjquota rw root=UUID=fd0... ostree=/ostree/boot.0/rhcos/16... coreos.oem.id=qemu coreos.oem.id=ec2 ignition.platform.id=ec2 enforcing=0 sh-4.2# exit You should see the enforcing=0 argument added to the other kernel arguments. 3.2.3. Enabling multipathing with kernel arguments on RHCOS RHCOS supports multipathing on the primary disk, allowing stronger resilience to hardware failure to achieve higher host availability. Important Multipathing is only supported when it is activated using the machine config as documented in the following procedure. It must be enabled after RHCOS installation. Important On IBM Z and LinuxONE, you can enable multipathing only if you configured your cluster for it during installation. For more information, see "Creating Red Hat Enterprise Linux CoreOS (RHCOS) machines" in Installing a cluster with z/VM on IBM Z and LinuxONE . Prerequisites You have a running OpenShift Container Platform cluster that uses version 4.7 or later. You are logged in to the cluster as a user with administrative privileges. 
Procedure To enable multipathing on control plane nodes (also known as the master nodes): Create a machine config file, such as 99-master-kargs-mpath.yaml , that instructs the cluster to add the master label and that identifies the multipath kernel argument, for example: apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: "master" name: 99-master-kargs-mpath spec: kernelArguments: - rd.multipath=default - root=/dev/disk/by-label/dm-mpath-root To enable multipathing on worker nodes: Create a machine config file, such as 99-worker-kargs-mpath.yaml , that instructs the cluster to add the worker label and that identifies the multipath kernel argument, for example: apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: "worker" name: 99-worker-kargs-mpath spec: kernelArguments: - rd.multipath=default - root=/dev/disk/by-label/dm-mpath-root Create the new machine config by using either the master or worker YAML file you previously created: USD oc create -f ./99-master-kargs-mpath.yaml Check the machine configs to see that the new one was added: USD oc get MachineConfig Example output NAME GENERATEDBYCONTROLLER IGNITIONVERSION AGE 00-master 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 00-worker 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 01-master-container-runtime 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 01-master-kubelet 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 01-worker-container-runtime 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 01-worker-kubelet 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 99-master-kargs-mpath 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 105s 99-master-generated-registries 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 99-master-ssh 3.2.0 40m 99-worker-generated-registries 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 99-worker-ssh 3.2.0 40m rendered-master-23e785de7587df95a4b517e0647e5ab7 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m rendered-worker-5d596d9293ca3ea80c896a1191735bb1 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m Check the nodes: USD oc get nodes Example output NAME STATUS ROLES AGE VERSION ip-10-0-136-161.ec2.internal Ready worker 28m v1.20.0 ip-10-0-136-243.ec2.internal Ready master 34m v1.20.0 ip-10-0-141-105.ec2.internal Ready,SchedulingDisabled worker 28m v1.20.0 ip-10-0-142-249.ec2.internal Ready master 34m v1.20.0 ip-10-0-153-11.ec2.internal Ready worker 28m v1.20.0 ip-10-0-153-150.ec2.internal Ready master 34m v1.20.0 You can see that scheduling on each worker node is disabled as the change is being applied. Check that the kernel argument worked by going to one of the worker nodes and listing the kernel command line arguments (in /proc/cmdline on the host): USD oc debug node/ip-10-0-141-105.ec2.internal Example output Starting pod/ip-10-0-141-105ec2internal-debug ... To use host binaries, run `chroot /host` sh-4.2# cat /host/proc/cmdline ... rd.multipath=default root=/dev/disk/by-label/dm-mpath-root ... sh-4.2# exit You should see the added kernel arguments. 3.2.4. Adding a real-time kernel to nodes Some OpenShift Container Platform workloads require a high degree of determinism.While Linux is not a real-time operating system, the Linux real-time kernel includes a preemptive scheduler that provides the operating system with real-time characteristics. 
If your OpenShift Container Platform workloads require these real-time characteristics, you can switch your machines to the Linux real-time kernel. For OpenShift Container Platform 4.7, you can make this switch using a MachineConfig object. Although making the change is as simple as changing a machine config kernelType setting to realtime , there are a few other considerations before making the change: Currently, real-time kernel is supported only on worker nodes, and only for radio access network (RAN) use. The following procedure is fully supported with bare metal installations that use systems that are certified for Red Hat Enterprise Linux for Real Time 8. Real-time support in OpenShift Container Platform is limited to specific subscriptions. The following procedure is also supported for use with Google Cloud Platform. Prerequisites Have a running OpenShift Container Platform cluster (version 4.4 or later). Log in to the cluster as a user with administrative privileges. Procedure Create a machine config for the real-time kernel: Create a YAML file (for example, 99-worker-realtime.yaml ) that contains a MachineConfig object for the realtime kernel type. This example tells the cluster to use a real-time kernel for all worker nodes: USD cat << EOF > 99-worker-realtime.yaml apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: "worker" name: 99-worker-realtime spec: kernelType: realtime EOF Add the machine config to the cluster. Type the following to add the machine config to the cluster: USD oc create -f 99-worker-realtime.yaml Check the real-time kernel: Once each impacted node reboots, log in to the cluster and run the following commands to make sure that the real-time kernel has replaced the regular kernel for the set of nodes you configured: USD oc get nodes Example output NAME STATUS ROLES AGE VERSION ip-10-0-143-147.us-east-2.compute.internal Ready worker 103m v1.20.0 ip-10-0-146-92.us-east-2.compute.internal Ready worker 101m v1.20.0 ip-10-0-169-2.us-east-2.compute.internal Ready worker 102m v1.20.0 USD oc debug node/ip-10-0-143-147.us-east-2.compute.internal Example output Starting pod/ip-10-0-143-147us-east-2computeinternal-debug ... To use host binaries, run `chroot /host` sh-4.4# uname -a Linux <worker_node> 4.18.0-147.3.1.rt24.96.el8_1.x86_64 #1 SMP PREEMPT RT Wed Nov 27 18:29:55 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux The kernel name contains rt and the text "PREEMPT RT" indicates that this is a real-time kernel. To go back to the regular kernel, delete the MachineConfig object: USD oc delete -f 99-worker-realtime.yaml 3.2.5. Configuring journald settings If you need to configure settings for the journald service on OpenShift Container Platform nodes, you can do that by modifying the appropriate configuration file and passing the file to the appropriate pool of nodes as a machine config. This procedure describes how to modify journald rate limiting settings in the /etc/systemd/journald.conf file and apply them to worker nodes. See the journald.conf man page for information on how to use that file. Prerequisites Have a running OpenShift Container Platform cluster (version 4.4 or later). Log in to the cluster as a user with administrative privileges. Procedure Create the contents of the /etc/systemd/journald.conf file and encode it as base64.
For example: USD cat > /tmp/jrnl.conf <<EOF # Disable rate limiting RateLimitInterval=1s RateLimitBurst=10000 Storage=volatile Compress=no MaxRetentionSec=30s EOF Convert the temporary journal.conf file to base64 and save it into a variable ( jrnl_cnf ): USD export jrnl_cnf=USD( cat /tmp/jrnl.conf | base64 -w0 ) USD echo USDjrnl_cnf IyBEaXNhYmxlIHJhdGUgbGltaXRpbmcKUmF0ZUxpbWl0SW50ZXJ2YWw9MXMKUmF0ZUxpbWl0QnVyc3Q9MTAwMDAKU3RvcmFnZT12b2xhdGlsZQpDb21wcmVzcz1ubwpNYXhSZXRlbnRpb25TZWM9MzBzCg== Create the machine config, including the encoded contents of journald.conf ( jrnl_cnf variable): USD cat > /tmp/40-worker-custom-journald.yaml <<EOF apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: worker name: 40-worker-custom-journald spec: config: ignition: config: {} security: tls: {} timeouts: {} version: 3.1.0 networkd: {} passwd: {} storage: files: - contents: source: data:text/plain;charset=utf-8;base64,USD{jrnl_cnf} verification: {} filesystem: root mode: 420 path: /etc/systemd/journald.conf systemd: {} osImageURL: "" EOF Apply the machine config to the pool: USD oc apply -f /tmp/40-worker-custom-journald.yaml Check that the new machine config is applied and that the nodes are not in a degraded state. It might take a few minutes. The worker pool will show the updates in progress, as each node successfully has the new machine config applied: USD oc get machineconfigpool NAME CONFIG UPDATED UPDATING DEGRADED MACHINECOUNT READYMACHINECOUNT UPDATEDMACHINECOUNT DEGRADEDMACHINECOUNT AGE master rendered-master-35 True False False 3 3 3 0 34m worker rendered-worker-d8 False True False 3 1 1 0 34m To check that the change was applied, you can log in to a worker node: USD oc get node | grep worker ip-10-0-0-1.us-east-2.compute.internal Ready worker 39m v0.0.0-master+USDFormat:%hUSD USD oc debug node/ip-10-0-0-1.us-east-2.compute.internal Starting pod/ip-10-0-141-142us-east-2computeinternal-debug ... ... sh-4.2# chroot /host sh-4.4# cat /etc/systemd/journald.conf # Disable rate limiting RateLimitInterval=1s RateLimitBurst=10000 Storage=volatile Compress=no MaxRetentionSec=30s sh-4.4# exit 3.2.6. Adding extensions to RHCOS RHCOS is a minimal container-oriented RHEL operating system, designed to provide a common set of capabilities to OpenShift Container Platform clusters across all platforms. While adding software packages to RHCOS systems is generally discouraged, the MCO provides an extensions feature you can use to add a minimal set of features to RHCOS nodes. Currently, the following extension is available: usbguard : Adding the usbguard extension protects RHCOS systems from attacks from intrusive USB devices. See USBGuard for details. The following procedure describes how to use a machine config to add one or more extensions to your RHCOS nodes. Prerequisites Have a running OpenShift Container Platform cluster (version 4.6 or later). Log in to the cluster as a user with administrative privileges. Procedure Create a machine config for extensions: Create a YAML file (for example, 80-extensions.yaml ) that contains a MachineConfig extensions object. This example tells the cluster to add the usbguard extension. USD cat << EOF > 80-extensions.yaml apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: worker name: 80-worker-extensions spec: config: ignition: version: 3.2.0 extensions: - usbguard EOF Add the machine config to the cluster. 
Type the following to add the machine config to the cluster: USD oc create -f 80-extensions.yaml This sets all worker nodes to have rpm packages for usbguard installed. Check that the extensions were applied: USD oc get machineconfig 80-worker-extensions Example output NAME GENERATEDBYCONTROLLER IGNITIONVERSION AGE 80-worker-extensions 3.2.0 57s Check that the new machine config is now applied and that the nodes are not in a degraded state. It may take a few minutes. The worker pool will show the updates in progress, as each machine successfully has the new machine config applied: USD oc get machineconfigpool Example output NAME CONFIG UPDATED UPDATING DEGRADED MACHINECOUNT READYMACHINECOUNT UPDATEDMACHINECOUNT DEGRADEDMACHINECOUNT AGE master rendered-master-35 True False False 3 3 3 0 34m worker rendered-worker-d8 False True False 3 1 1 0 34m Check the extensions. To check that the extension was applied, run: USD oc get node | grep worker Example output NAME STATUS ROLES AGE VERSION ip-10-0-169-2.us-east-2.compute.internal Ready worker 102m v1.18.3 USD oc debug node/ip-10-0-169-2.us-east-2.compute.internal Example output ... To use host binaries, run `chroot /host` sh-4.4# chroot /host sh-4.4# rpm -q usbguard usbguard-0.7.4-4.el8.x86_64.rpm 3.2.7. Loading custom firmware blobs in the machine config manifest Because the default location for firmware blobs in /usr/lib is read-only, you can locate a custom firmware blob by updating the search path. This enables you to load local firmware blobs in the machine config manifest when the blobs are not managed by RHCOS. Procedure Create a Butane config file, 98-worker-firmware-blob.bu , that updates the search path so that it is root-owned and writable to local storage. The following example places the custom blob file from your local workstation onto nodes under /var/lib/firmware . Note See "Creating machine configs with Butane" for information about Butane. Butane config file for custom firmware blob variant: openshift version: 4.9.0 metadata: labels: machineconfiguration.openshift.io/role: worker name: 98-worker-firmware-blob storage: files: - path: /var/lib/firmware/<package_name> 1 contents: local: <package_name> 2 mode: 0644 3 openshift: kernel_arguments: - 'firmware_class.path=/var/lib/firmware' 4 1 Sets the path on the node where the firmware package is copied to. 2 Specifies a file with contents that are read from a local file directory on the system running Butane. The path of the local file is relative to a files-dir directory, which must be specified by using the --files-dir option with Butane in the following step. 3 Sets the permissions for the file on the RHCOS node. It is recommended to set 0644 permissions. 4 The firmware_class.path parameter customizes the kernel search path of where to look for the custom firmware blob that was copied from your local workstation onto the root file system of the node. This example uses /var/lib/firmware as the customized path. Run Butane to generate a MachineConfig object file that uses a copy of the firmware blob on your local workstation named 98-worker-firmware-blob.yaml . The firmware blob contains the configuration to be delivered to the nodes. 
The following example uses the --files-dir option to specify the directory on your workstation where the local file or files are located: USD butane 98-worker-firmware-blob.bu -o 98-worker-firmware-blob.yaml --files-dir <directory_including_package_name> Apply the configurations to the nodes in one of two ways: If the cluster is not running yet, after you generate manifest files, add the MachineConfig object file to the <installation_directory>/openshift directory, and then continue to create the cluster. If the cluster is already running, apply the file: USD oc apply -f 98-worker-firmware-blob.yaml A MachineConfig object YAML file is created for you to finish configuring your machines. Save the Butane config in case you need to update the MachineConfig object in the future. 3.3. Configuring MCO-related custom resources Besides managing MachineConfig objects, the MCO manages two custom resources (CRs): KubeletConfig and ContainerRuntimeConfig . Those CRs let you change node-level settings impacting how the Kubelet and CRI-O container runtime services behave. 3.3.1. Creating a KubeletConfig CRD to edit kubelet parameters The kubelet configuration is currently serialized as an Ignition configuration, so it can be directly edited. However, there is also a new kubelet-config-controller added to the Machine Config Controller (MCC). This lets you use a KubeletConfig custom resource (CR) to edit the kubelet parameters. Note As the fields in the kubeletConfig object are passed directly to the kubelet from upstream Kubernetes, the kubelet validates those values directly. Invalid values in the kubeletConfig object might cause cluster nodes to become unavailable. For valid values, see the Kubernetes documentation . Consider the following guidance: Create one KubeletConfig CR for each machine config pool with all the config changes you want for that pool. If you are applying the same content to all of the pools, you need only one KubeletConfig CR for all of the pools. Edit an existing KubeletConfig CR to modify existing settings or add new settings, instead of creating a CR for each change. It is recommended that you create a CR only to modify a different machine config pool, or for changes that are intended to be temporary, so that you can revert the changes. As needed, create multiple KubeletConfig CRs with a limit of 10 per cluster. For the first KubeletConfig CR, the Machine Config Operator (MCO) creates a machine config appended with kubelet . With each subsequent CR, the controller creates another kubelet machine config with a numeric suffix. For example, if you have a kubelet machine config with a -2 suffix, the kubelet machine config is appended with -3 . If you want to delete the machine configs, delete them in reverse order to avoid exceeding the limit. For example, you delete the kubelet-3 machine config before deleting the kubelet-2 machine config. Note If you have a machine config with a kubelet-9 suffix, and you create another KubeletConfig CR, a new machine config is not created, even if there are fewer than 10 kubelet machine configs. Example KubeletConfig CR USD oc get kubeletconfig NAME AGE set-max-pods 15m Example showing a KubeletConfig machine config USD oc get mc | grep kubelet ... 99-worker-generated-kubelet-1 b5c5119de007945b6fe6fb215db3b8e2ceb12511 3.2.0 26m ... The following procedure is an example to show how to configure the maximum number of pods per node on the worker nodes. 
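Because the fields under kubeletConfig are passed through to the kubelet unchanged, the same CR shape can carry other kubelet settings alongside maxPods . The following is an illustration only, not part of the procedure below — the systemReserved values are hypothetical and should be sized for your own nodes:

apiVersion: machineconfiguration.openshift.io/v1
kind: KubeletConfig
metadata:
  name: set-max-pods
spec:
  machineConfigPoolSelector:
    matchLabels:
      custom-kubelet: set-max-pods
  kubeletConfig:
    maxPods: 500
    systemReserved:      # hypothetical values reserved for system daemons
      cpu: 500m
      memory: 1Gi

The procedure itself uses only maxPods .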
Prerequisites Obtain the label associated with the static MachineConfigPool CR for the type of node you want to configure. Perform one of the following steps: View the machine config pool: USD oc describe machineconfigpool <name> For example: USD oc describe machineconfigpool worker Example output apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfigPool metadata: creationTimestamp: 2019-02-08T14:52:39Z generation: 1 labels: custom-kubelet: set-max-pods 1 1 If a label has been added, it appears under labels . If the label is not present, add a key/value pair: USD oc label machineconfigpool worker custom-kubelet=set-max-pods Procedure View the available machine configuration objects that you can select: USD oc get machineconfig By default, the two kubelet-related configs are 01-master-kubelet and 01-worker-kubelet . Check the current value for the maximum pods per node: USD oc describe node <node_name> For example: USD oc describe node ci-ln-5grqprb-f76d1-ncnqq-worker-a-mdv94 Look for value: pods: <value> in the Allocatable stanza: Example output Allocatable: attachable-volumes-aws-ebs: 25 cpu: 3500m hugepages-1Gi: 0 hugepages-2Mi: 0 memory: 15341844Ki pods: 250 Set the maximum pods per node on the worker nodes by creating a custom resource file that contains the kubelet configuration: apiVersion: machineconfiguration.openshift.io/v1 kind: KubeletConfig metadata: name: set-max-pods spec: machineConfigPoolSelector: matchLabels: custom-kubelet: set-max-pods 1 kubeletConfig: maxPods: 500 2 1 Enter the label from the machine config pool. 2 Add the kubelet configuration. In this example, use maxPods to set the maximum pods per node. Note The rate at which the kubelet talks to the API server depends on queries per second (QPS) and burst values. The default values, 50 for kubeAPIQPS and 100 for kubeAPIBurst , are sufficient if there are limited pods running on each node. It is recommended to update the kubelet QPS and burst rates if there are enough CPU and memory resources on the node. apiVersion: machineconfiguration.openshift.io/v1 kind: KubeletConfig metadata: name: set-max-pods spec: machineConfigPoolSelector: matchLabels: custom-kubelet: set-max-pods kubeletConfig: maxPods: <pod_count> kubeAPIBurst: <burst_rate> kubeAPIQPS: <QPS> Update the machine config pool for workers with the label: USD oc label machineconfigpool worker custom-kubelet=set-max-pods Create the KubeletConfig object: USD oc create -f change-maxPods-cr.yaml Verify that the KubeletConfig object is created: USD oc get kubeletconfig Example output NAME AGE set-max-pods 15m Depending on the number of worker nodes in the cluster, wait for the worker nodes to be rebooted one by one. For a cluster with 3 worker nodes, this could take about 10 to 15 minutes. Verify that the changes are applied to the node: Check on a worker node that the maxPods value changed: USD oc describe node <node_name> Locate the Allocatable stanza: ... Allocatable: attachable-volumes-gce-pd: 127 cpu: 3500m ephemeral-storage: 123201474766 hugepages-1Gi: 0 hugepages-2Mi: 0 memory: 14225400Ki pods: 500 1 ... 1 In this example, the pods parameter should report the value you set in the KubeletConfig object.
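If you prefer to read the value straight from the kubelet rather than from the node's Allocatable stanza, the kubelet configuration can also be fetched through the API server proxy. This is a sketch only — the node name is a placeholder, and it assumes python3 is available on your workstation for pretty-printing the JSON:

USD oc get --raw "/api/v1/nodes/<node_name>/proxy/configz" | python3 -m json.tool | grep maxPods

The maxPods value reported by the kubelet should match the value you set in the KubeletConfig object.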
Verify the change in the KubeletConfig object: USD oc get kubeletconfigs set-max-pods -o yaml This should show a status: "True" and type:Success : spec: kubeletConfig: maxPods: 500 machineConfigPoolSelector: matchLabels: custom-kubelet: set-max-pods status: conditions: - lastTransitionTime: "2021-06-30T17:04:07Z" message: Success status: "True" type: Success 3.3.2. Creating a ContainerRuntimeConfig CR to edit CRI-O parameters You can change some of the settings associated with the OpenShift Container Platform CRI-O runtime for the nodes associated with a specific machine config pool (MCP). Using a ContainerRuntimeConfig custom resource (CR), you set the configuration values and add a label to match the MCP. The MCO then rebuilds the crio.conf and storage.conf configuration files on the associated nodes with the updated values. Note To revert the changes implemented by using a ContainerRuntimeConfig CR, you must delete the CR. Removing the label from the machine config pool does not revert the changes. You can modify the following settings by using a ContainerRuntimeConfig CR: PIDs limit : The pidsLimit parameter sets the CRI-O pids_limit parameter, which is maximum number of processes allowed in a container. The default is 1024 ( pids_limit = 1024 ). Log level : The logLevel parameter sets the CRI-O log_level parameter, which is the level of verbosity for log messages. The default is info ( log_level = info ). Other options include fatal , panic , error , warn , debug , and trace . Overlay size : The overlaySize parameter sets the CRI-O Overlay storage driver size parameter, which is the maximum size of a container image. Maximum log size : The logSizeMax parameter sets the CRI-O log_size_max parameter, which is the maximum size allowed for the container log file. The default is unlimited ( log_size_max = -1 ). If set to a positive number, it must be at least 8192 to not be smaller than the ConMon read buffer. ConMon is a program that monitors communications between a container manager (such as Podman or CRI-O) and the OCI runtime (such as runc or crun) for a single container. You should have one ContainerRuntimeConfig CR for each machine config pool with all the config changes you want for that pool. If you are applying the same content to all the pools, you only need one ContainerRuntimeConfig CR for all the pools. You should edit an existing ContainerRuntimeConfig CR to modify existing settings or add new settings instead of creating a new CR for each change. It is recommended to create a new ContainerRuntimeConfig CR only to modify a different machine config pool, or for changes that are intended to be temporary so that you can revert the changes. You can create multiple ContainerRuntimeConfig CRs, as needed, with a limit of 10 per cluster. For the first ContainerRuntimeConfig CR, the MCO creates a machine config appended with containerruntime . With each subsequent CR, the controller creates a new containerruntime machine config with a numeric suffix. For example, if you have a containerruntime machine config with a -2 suffix, the containerruntime machine config is appended with -3 . If you want to delete the machine configs, you should delete them in reverse order to avoid exceeding the limit. For example, you should delete the containerruntime-3 machine config before deleting the containerruntime-2 machine config. 
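Before creating another ContainerRuntimeConfig CR, it can be useful to check how close you are to the limit of 10. A simple sketch, assuming the 99-<pool>-generated-containerruntime naming shown in the example output below:

USD oc get mc --no-headers | grep generated-containerruntime | wc -l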
Note If you have a machine config with a containerruntime-9 suffix, and you create another ContainerRuntimeConfig CR, a new machine config is not created, even if there are fewer than 10 containerruntime machine configs. Example showing multiple ContainerRuntimeConfig CRs USD oc get ctrcfg Example output NAME AGE ctr-pid 24m ctr-overlay 15m ctr-level 5m45s Example showing multiple containerruntime machine configs USD oc get mc | grep container Example output ... 01-master-container-runtime b5c5119de007945b6fe6fb215db3b8e2ceb12511 3.2.0 57m ... 01-worker-container-runtime b5c5119de007945b6fe6fb215db3b8e2ceb12511 3.2.0 57m ... 99-worker-generated-containerruntime b5c5119de007945b6fe6fb215db3b8e2ceb12511 3.2.0 26m 99-worker-generated-containerruntime-1 b5c5119de007945b6fe6fb215db3b8e2ceb12511 3.2.0 17m 99-worker-generated-containerruntime-2 b5c5119de007945b6fe6fb215db3b8e2ceb12511 3.2.0 7m26s ... The following example raises the pids_limit to 2048, sets the log_level to debug , sets the overlay size to 8 GB, and sets the log_size_max to unlimited: Example ContainerRuntimeConfig CR apiVersion: machineconfiguration.openshift.io/v1 kind: ContainerRuntimeConfig metadata: name: overlay-size spec: machineConfigPoolSelector: matchLabels: pools.operator.machineconfiguration.openshift.io/worker: '' 1 containerRuntimeConfig: pidsLimit: 2048 2 logLevel: debug 3 overlaySize: 8G 4 logSizeMax: "-1" 5 1 Specifies the machine config pool label. 2 Optional: Specifies the maximum number of processes allowed in a container. 3 Optional: Specifies the level of verbosity for log messages. 4 Optional: Specifies the maximum size of a container image. 5 Optional: Specifies the maximum size allowed for the container log file. If set to a positive number, it must be at least 8192. Procedure To change CRI-O settings using the ContainerRuntimeConfig CR: Create a YAML file for the ContainerRuntimeConfig CR: apiVersion: machineconfiguration.openshift.io/v1 kind: ContainerRuntimeConfig metadata: name: overlay-size spec: machineConfigPoolSelector: matchLabels: pools.operator.machineconfiguration.openshift.io/worker: '' 1 containerRuntimeConfig: 2 pidsLimit: 2048 logLevel: debug overlaySize: 8G logSizeMax: "-1" 1 Specify a label for the machine config pool that you want to modify. 2 Set the parameters as needed. Create the ContainerRuntimeConfig CR: USD oc create -f <file_name>.yaml Verify that the CR is created: USD oc get ContainerRuntimeConfig Example output NAME AGE overlay-size 3m19s Check that a new containerruntime machine config is created: USD oc get machineconfigs | grep containerrun Example output 99-worker-generated-containerruntime 2c9371fbb673b97a6fe8b1c52691999ed3a1bfc2 3.2.0 31s Monitor the machine config pool until all are shown as ready: USD oc get mcp worker Example output NAME CONFIG UPDATED UPDATING DEGRADED MACHINECOUNT READYMACHINECOUNT UPDATEDMACHINECOUNT DEGRADEDMACHINECOUNT AGE worker rendered-worker-169 False True False 3 1 1 0 9h Verify that the settings were applied in CRI-O: Open an oc debug session to a node in the machine config pool and run chroot /host . USD oc debug node/<node_name> sh-4.4# chroot /host Verify the changes in the crio.conf file: sh-4.4# crio config | egrep 'log_level|pids_limit|log_size_max' Example output pids_limit = 2048 log_size_max = -1 log_level = "debug" Verify the changes in the `storage.conf` file: sh-4.4# head -n 7 /etc/containers/storage.conf Example output [storage] driver = "overlay" runroot = "/var/run/containers/storage" graphroot = "/var/lib/containers/storage" [storage.options] additionalimagestores = [] size = "8G" 3.3.3.
Setting the default maximum container root partition size for Overlay with CRI-O The root partition of each container shows all of the available disk space of the underlying host. Follow this guidance to set a maximum partition size for the root disk of all containers. To configure the maximum Overlay size, as well as other CRI-O options like the log level and PID limit, you can create the following ContainerRuntimeConfig custom resource definition (CRD): apiVersion: machineconfiguration.openshift.io/v1 kind: ContainerRuntimeConfig metadata: name: overlay-size spec: machineConfigPoolSelector: matchLabels: custom-crio: overlay-size containerRuntimeConfig: pidsLimit: 2048 logLevel: debug overlaySize: 8G Procedure Create the configuration object: USD oc apply -f overlaysize.yml To apply the new CRI-O configuration to your worker nodes, edit the worker machine config pool: USD oc edit machineconfigpool worker Add the custom-crio label based on the matchLabels name you set in the ContainerRuntimeConfig CRD: apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfigPool metadata: creationTimestamp: "2020-07-09T15:46:34Z" generation: 3 labels: custom-crio: overlay-size machineconfiguration.openshift.io/mco-built-in: "" Save the changes, then view the machine configs: USD oc get machineconfigs New 99-worker-generated-containerruntime and rendered-worker-xyz objects are created: Example output 99-worker-generated-containerruntime 4173030d89fbf4a7a0976d1665491a4d9a6e54f1 3.2.0 7m42s rendered-worker-xyz 4173030d89fbf4a7a0976d1665491a4d9a6e54f1 3.2.0 7m36s After those objects are created, monitor the machine config pool for the changes to be applied: USD oc get mcp worker The worker nodes show UPDATING as True , as well as the number of machines, the number updated, and other details: Example output NAME CONFIG UPDATED UPDATING DEGRADED MACHINECOUNT READYMACHINECOUNT UPDATEDMACHINECOUNT DEGRADEDMACHINECOUNT AGE worker rendered-worker-xyz False True False 3 2 2 0 20h When complete, the worker nodes transition back to UPDATING as False , and the UPDATEDMACHINECOUNT number matches the MACHINECOUNT : Example output NAME CONFIG UPDATED UPDATING DEGRADED MACHINECOUNT READYMACHINECOUNT UPDATEDMACHINECOUNT DEGRADEDMACHINECOUNT AGE worker rendered-worker-xyz True False False 3 3 3 0 20h Looking at a worker machine, you see that the new 8 GB max size configuration is applied to all of the workers: Example output head -n 7 /etc/containers/storage.conf [storage] driver = "overlay" runroot = "/var/run/containers/storage" graphroot = "/var/lib/containers/storage" [storage.options] additionalimagestores = [] size = "8G" Looking inside a container, you see that the root partition is now 8 GB: Example output ~ USD df -h Filesystem Size Used Available Use% Mounted on overlay 8.0G 8.0K 8.0G 0% / | [
"oc get machineconfigpool",
"NAME CONFIG UPDATED UPDATING DEGRADED MACHINECOUNT READYMACHINECOUNT UPDATEDMACHINECOUNT DEGRADEDMACHINECOUNT AGE master rendered-master-06c9c4... True False False 3 3 3 0 4h42m worker rendered-worker-f4b64... False True False 3 2 2 0 4h42m",
"NAME CONFIG UPDATED UPDATING DEGRADED MACHINECOUNT READYMACHINECOUNT UPDATEDMACHINECOUNT DEGRADEDMACHINECOUNT AGE master rendered-master-06c9c4... True False False 3 3 3 0 4h42m worker rendered-worker-c1b41a... False True False 3 2 3 0 4h42m",
"oc describe mcp worker",
"Degraded Machine Count: 0 Machine Count: 3 Observed Generation: 2 Ready Machine Count: 3 Unavailable Machine Count: 0 Updated Machine Count: 3 Events: <none>",
"Degraded Machine Count: 0 Machine Count: 3 Observed Generation: 2 Ready Machine Count: 2 Unavailable Machine Count: 1 Updated Machine Count: 3",
"oc get machineconfigs",
"NAME GENERATEDBYCONTROLLER IGNITIONVERSION AGE 00-master 2c9371fbb673b97a6fe8b1c52... 3.2.0 5h18m 00-worker 2c9371fbb673b97a6fe8b1c52... 3.2.0 5h18m 01-master-container-runtime 2c9371fbb673b97a6fe8b1c52... 3.2.0 5h18m 01-master-kubelet 2c9371fbb673b97a6fe8b1c52... 3.2.0 5h18m rendered-master-dde... 2c9371fbb673b97a6fe8b1c52... 3.2.0 5h18m rendered-worker-fde... 2c9371fbb673b97a6fe8b1c52... 3.2.0 5h18m",
"oc describe machineconfigs 01-master-kubelet",
"Name: 01-master-kubelet Spec: Config: Ignition: Version: 3.2.0 Storage: Files: Contents: Source: data:, Mode: 420 Overwrite: true Path: /etc/kubernetes/cloud.conf Contents: Source: data:,kind%3A%20KubeletConfiguration%0AapiVersion%3A%20kubelet.config.k8s.io%2Fv1beta1%0Aauthentication%3A%0A%20%20x509%3A%0A%20%20%20%20clientCAFile%3A%20%2Fetc%2Fkubernetes%2Fkubelet-ca.crt%0A%20%20anonymous Mode: 420 Overwrite: true Path: /etc/kubernetes/kubelet.conf Systemd: Units: Contents: [Unit] Description=Kubernetes Kubelet Wants=rpc-statd.service network-online.target crio.service After=network-online.target crio.service ExecStart=/usr/bin/hyperkube kubelet --config=/etc/kubernetes/kubelet.conf \\",
"oc delete -f ./myconfig.yaml",
"cat << EOF | base64 pool 0.rhel.pool.ntp.org iburst 1 driftfile /var/lib/chrony/drift makestep 1.0 3 rtcsync logdir /var/log/chrony EOF",
"ICAgIHNlcnZlciBjbG9jay5yZWRoYXQuY29tIGlidXJzdAogICAgZHJpZnRmaWxlIC92YXIvbGli L2Nocm9ueS9kcmlmdAogICAgbWFrZXN0ZXAgMS4wIDMKICAgIHJ0Y3N5bmMKICAgIGxvZ2RpciAv dmFyL2xvZy9jaHJvbnkK",
"cat << EOF > ./99-masters-chrony-configuration.yaml apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: master name: 99-masters-chrony-configuration spec: config: ignition: config: {} security: tls: {} timeouts: {} version: 3.2.0 networkd: {} passwd: {} storage: files: - contents: source: data:text/plain;charset=utf-8;base64,ICAgIHNlcnZlciBjbG9jay5yZWRoYXQuY29tIGlidXJzdAogICAgZHJpZnRmaWxlIC92YXIvbGliL2Nocm9ueS9kcmlmdAogICAgbWFrZXN0ZXAgMS4wIDMKICAgIHJ0Y3N5bmMKICAgIGxvZ2RpciAvdmFyL2xvZy9jaHJvbnkK mode: 420 1 overwrite: true path: /etc/chrony.conf osImageURL: \"\" EOF",
"oc apply -f ./99-masters-chrony-configuration.yaml",
"oc get MachineConfig",
"NAME GENERATEDBYCONTROLLER IGNITIONVERSION AGE 00-master 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 00-worker 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 01-master-container-runtime 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 01-master-kubelet 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 01-worker-container-runtime 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 01-worker-kubelet 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 99-master-generated-registries 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 99-master-ssh 3.2.0 40m 99-worker-generated-registries 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 99-worker-ssh 3.2.0 40m rendered-master-23e785de7587df95a4b517e0647e5ab7 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m rendered-worker-5d596d9293ca3ea80c896a1191735bb1 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m",
"apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: worker 1 name: 05-worker-kernelarg-selinuxpermissive 2 spec: config: ignition: version: 3.2.0 kernelArguments: - enforcing=0 3",
"oc create -f 05-worker-kernelarg-selinuxpermissive.yaml",
"oc get MachineConfig",
"NAME GENERATEDBYCONTROLLER IGNITIONVERSION AGE 00-master 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 00-worker 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 01-master-container-runtime 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 01-master-kubelet 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 01-worker-container-runtime 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 01-worker-kubelet 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 05-worker-kernelarg-selinuxpermissive 3.2.0 105s 99-master-generated-registries 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 99-master-ssh 3.2.0 40m 99-worker-generated-registries 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 99-worker-ssh 3.2.0 40m rendered-master-23e785de7587df95a4b517e0647e5ab7 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m rendered-worker-5d596d9293ca3ea80c896a1191735bb1 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m",
"oc get nodes",
"NAME STATUS ROLES AGE VERSION ip-10-0-136-161.ec2.internal Ready worker 28m v1.20.0 ip-10-0-136-243.ec2.internal Ready master 34m v1.20.0 ip-10-0-141-105.ec2.internal Ready,SchedulingDisabled worker 28m v1.20.0 ip-10-0-142-249.ec2.internal Ready master 34m v1.20.0 ip-10-0-153-11.ec2.internal Ready worker 28m v1.20.0 ip-10-0-153-150.ec2.internal Ready master 34m v1.20.0",
"oc debug node/ip-10-0-141-105.ec2.internal",
"Starting pod/ip-10-0-141-105ec2internal-debug To use host binaries, run `chroot /host` sh-4.2# cat /host/proc/cmdline BOOT_IMAGE=/ostree/rhcos-... console=tty0 console=ttyS0,115200n8 rootflags=defaults,prjquota rw root=UUID=fd0... ostree=/ostree/boot.0/rhcos/16 coreos.oem.id=qemu coreos.oem.id=ec2 ignition.platform.id=ec2 enforcing=0 sh-4.2# exit",
"apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: \"master\" name: 99-master-kargs-mpath spec: kernelArguments: - rd.multipath=default - root=/dev/disk/by-label/dm-mpath-root",
"apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: \"worker\" name: 99-worker-kargs-mpath spec: kernelArguments: - rd.multipath=default - root=/dev/disk/by-label/dm-mpath-root",
"oc create -f ./99-master-kargs-mpath.yaml",
"oc get MachineConfig",
"NAME GENERATEDBYCONTROLLER IGNITIONVERSION AGE 00-master 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 00-worker 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 01-master-container-runtime 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 01-master-kubelet 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 01-worker-container-runtime 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 01-worker-kubelet 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 99-master-kargs-mpath 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 105s 99-master-generated-registries 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 99-master-ssh 3.2.0 40m 99-worker-generated-registries 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 99-worker-ssh 3.2.0 40m rendered-master-23e785de7587df95a4b517e0647e5ab7 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m rendered-worker-5d596d9293ca3ea80c896a1191735bb1 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m",
"oc get nodes",
"NAME STATUS ROLES AGE VERSION ip-10-0-136-161.ec2.internal Ready worker 28m v1.20.0 ip-10-0-136-243.ec2.internal Ready master 34m v1.20.0 ip-10-0-141-105.ec2.internal Ready,SchedulingDisabled worker 28m v1.20.0 ip-10-0-142-249.ec2.internal Ready master 34m v1.20.0 ip-10-0-153-11.ec2.internal Ready worker 28m v1.20.0 ip-10-0-153-150.ec2.internal Ready master 34m v1.20.0",
"oc debug node/ip-10-0-141-105.ec2.internal",
"Starting pod/ip-10-0-141-105ec2internal-debug To use host binaries, run `chroot /host` sh-4.2# cat /host/proc/cmdline rd.multipath=default root=/dev/disk/by-label/dm-mpath-root sh-4.2# exit",
"cat << EOF > 99-worker-realtime.yaml apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: \"worker\" name: 99-worker-realtime spec: kernelType: realtime EOF",
"oc create -f 99-worker-realtime.yaml",
"oc get nodes",
"NAME STATUS ROLES AGE VERSION ip-10-0-143-147.us-east-2.compute.internal Ready worker 103m v1.20.0 ip-10-0-146-92.us-east-2.compute.internal Ready worker 101m v1.20.0 ip-10-0-169-2.us-east-2.compute.internal Ready worker 102m v1.20.0",
"oc debug node/ip-10-0-143-147.us-east-2.compute.internal",
"Starting pod/ip-10-0-143-147us-east-2computeinternal-debug To use host binaries, run `chroot /host` sh-4.4# uname -a Linux <worker_node> 4.18.0-147.3.1.rt24.96.el8_1.x86_64 #1 SMP PREEMPT RT Wed Nov 27 18:29:55 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux",
"oc delete -f 99-worker-realtime.yaml",
"cat > /tmp/jrnl.conf <<EOF Disable rate limiting RateLimitInterval=1s RateLimitBurst=10000 Storage=volatile Compress=no MaxRetentionSec=30s EOF",
"export jrnl_cnf=USD( cat /tmp/jrnl.conf | base64 -w0 ) echo USDjrnl_cnf IyBEaXNhYmxlIHJhdGUgbGltaXRpbmcKUmF0ZUxpbWl0SW50ZXJ2YWw9MXMKUmF0ZUxpbWl0QnVyc3Q9MTAwMDAKU3RvcmFnZT12b2xhdGlsZQpDb21wcmVzcz1ubwpNYXhSZXRlbnRpb25TZWM9MzBzCg==",
"cat > /tmp/40-worker-custom-journald.yaml <<EOF apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: worker name: 40-worker-custom-journald spec: config: ignition: config: {} security: tls: {} timeouts: {} version: 3.1.0 networkd: {} passwd: {} storage: files: - contents: source: data:text/plain;charset=utf-8;base64,USD{jrnl_cnf} verification: {} filesystem: root mode: 420 path: /etc/systemd/journald.conf systemd: {} osImageURL: \"\" EOF",
"oc apply -f /tmp/40-worker-custom-journald.yaml",
"oc get machineconfigpool NAME CONFIG UPDATED UPDATING DEGRADED MACHINECOUNT READYMACHINECOUNT UPDATEDMACHINECOUNT DEGRADEDMACHINECOUNT AGE master rendered-master-35 True False False 3 3 3 0 34m worker rendered-worker-d8 False True False 3 1 1 0 34m",
"oc get node | grep worker ip-10-0-0-1.us-east-2.compute.internal Ready worker 39m v0.0.0-master+USDFormat:%hUSD oc debug node/ip-10-0-0-1.us-east-2.compute.internal Starting pod/ip-10-0-141-142us-east-2computeinternal-debug sh-4.2# chroot /host sh-4.4# cat /etc/systemd/journald.conf Disable rate limiting RateLimitInterval=1s RateLimitBurst=10000 Storage=volatile Compress=no MaxRetentionSec=30s sh-4.4# exit",
"cat << EOF > 80-extensions.yaml apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: worker name: 80-worker-extensions spec: config: ignition: version: 3.2.0 extensions: - usbguard EOF",
"oc create -f 80-extensions.yaml",
"oc get machineconfig 80-worker-extensions",
"NAME GENERATEDBYCONTROLLER IGNITIONVERSION AGE 80-worker-extensions 3.2.0 57s",
"oc get machineconfigpool",
"NAME CONFIG UPDATED UPDATING DEGRADED MACHINECOUNT READYMACHINECOUNT UPDATEDMACHINECOUNT DEGRADEDMACHINECOUNT AGE master rendered-master-35 True False False 3 3 3 0 34m worker rendered-worker-d8 False True False 3 1 1 0 34m",
"oc get node | grep worker",
"NAME STATUS ROLES AGE VERSION ip-10-0-169-2.us-east-2.compute.internal Ready worker 102m v1.18.3",
"oc debug node/ip-10-0-169-2.us-east-2.compute.internal",
"To use host binaries, run `chroot /host` sh-4.4# chroot /host sh-4.4# rpm -q usbguard usbguard-0.7.4-4.el8.x86_64.rpm",
"variant: openshift version: 4.9.0 metadata: labels: machineconfiguration.openshift.io/role: worker name: 98-worker-firmware-blob storage: files: - path: /var/lib/firmware/<package_name> 1 contents: local: <package_name> 2 mode: 0644 3 openshift: kernel_arguments: - 'firmware_class.path=/var/lib/firmware' 4",
"butane 98-worker-firmware-blob.bu -o 98-worker-firmware-blob.yaml --files-dir <directory_including_package_name>",
"oc apply -f 98-worker-firmware-blob.yaml",
"oc get kubeletconfig",
"NAME AGE set-max-pods 15m",
"oc get mc | grep kubelet",
"99-worker-generated-kubelet-1 b5c5119de007945b6fe6fb215db3b8e2ceb12511 3.2.0 26m",
"oc describe machineconfigpool <name>",
"oc describe machineconfigpool worker",
"apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfigPool metadata: creationTimestamp: 2019-02-08T14:52:39Z generation: 1 labels: custom-kubelet: set-max-pods 1",
"oc label machineconfigpool worker custom-kubelet=set-max-pods",
"oc get machineconfig",
"oc describe node <node_name>",
"oc describe node ci-ln-5grqprb-f76d1-ncnqq-worker-a-mdv94",
"Allocatable: attachable-volumes-aws-ebs: 25 cpu: 3500m hugepages-1Gi: 0 hugepages-2Mi: 0 memory: 15341844Ki pods: 250",
"apiVersion: machineconfiguration.openshift.io/v1 kind: KubeletConfig metadata: name: set-max-pods spec: machineConfigPoolSelector: matchLabels: custom-kubelet: set-max-pods 1 kubeletConfig: maxPods: 500 2",
"apiVersion: machineconfiguration.openshift.io/v1 kind: KubeletConfig metadata: name: set-max-pods spec: machineConfigPoolSelector: matchLabels: custom-kubelet: set-max-pods kubeletConfig: maxPods: <pod_count> kubeAPIBurst: <burst_rate> kubeAPIQPS: <QPS>",
"oc label machineconfigpool worker custom-kubelet=large-pods",
"oc create -f change-maxPods-cr.yaml",
"oc get kubeletconfig",
"NAME AGE set-max-pods 15m",
"oc describe node <node_name>",
"Allocatable: attachable-volumes-gce-pd: 127 cpu: 3500m ephemeral-storage: 123201474766 hugepages-1Gi: 0 hugepages-2Mi: 0 memory: 14225400Ki pods: 500 1",
"oc get kubeletconfigs set-max-pods -o yaml",
"spec: kubeletConfig: maxPods: 500 machineConfigPoolSelector: matchLabels: custom-kubelet: set-max-pods status: conditions: - lastTransitionTime: \"2021-06-30T17:04:07Z\" message: Success status: \"True\" type: Success",
"oc get ctrcfg",
"NAME AGE ctr-pid 24m ctr-overlay 15m ctr-level 5m45s",
"oc get mc | grep container",
"01-master-container-runtime b5c5119de007945b6fe6fb215db3b8e2ceb12511 3.2.0 57m 01-worker-container-runtime b5c5119de007945b6fe6fb215db3b8e2ceb12511 3.2.0 57m 99-worker-generated-containerruntime b5c5119de007945b6fe6fb215db3b8e2ceb12511 3.2.0 26m 99-worker-generated-containerruntime-1 b5c5119de007945b6fe6fb215db3b8e2ceb12511 3.2.0 17m 99-worker-generated-containerruntime-2 b5c5119de007945b6fe6fb215db3b8e2ceb12511 3.2.0 7m26s",
"apiVersion: machineconfiguration.openshift.io/v1 kind: ContainerRuntimeConfig metadata: name: overlay-size spec: machineConfigPoolSelector: matchLabels: pools.operator.machineconfiguration.openshift.io/worker: '' 1 containerRuntimeConfig: pidsLimit: 2048 2 logLevel: debug 3 overlaySize: 8G 4 logSizeMax: \"-1\" 5",
"apiVersion: machineconfiguration.openshift.io/v1 kind: ContainerRuntimeConfig metadata: name: overlay-size spec: machineConfigPoolSelector: matchLabels: pools.operator.machineconfiguration.openshift.io/worker: '' 1 containerRuntimeConfig: 2 pidsLimit: 2048 logLevel: debug overlaySize: 8G logSizeMax: \"-1\"",
"oc create -f <file_name>.yaml",
"oc get ContainerRuntimeConfig",
"NAME AGE overlay-size 3m19s",
"oc get machineconfigs | grep containerrun",
"99-worker-generated-containerruntime 2c9371fbb673b97a6fe8b1c52691999ed3a1bfc2 3.2.0 31s",
"oc get mcp worker",
"NAME CONFIG UPDATED UPDATING DEGRADED MACHINECOUNT READYMACHINECOUNT UPDATEDMACHINECOUNT DEGRADEDMACHINECOUNT AGE worker rendered-worker-169 False True False 3 1 1 0 9h",
"oc debug node/<node_name>",
"sh-4.4# chroot /host",
"sh-4.4# crio config | egrep 'log_level|pids_limit|log_size_max'",
"pids_limit = 2048 log_size_max = -1 log_level = \"debug\"",
"sh-4.4# head -n 7 /etc/containers/storage.conf",
"[storage] driver = \"overlay\" runroot = \"/var/run/containers/storage\" graphroot = \"/var/lib/containers/storage\" [storage.options] additionalimagestores = [] size = \"8G\"",
"apiVersion: machineconfiguration.openshift.io/v1 kind: ContainerRuntimeConfig metadata: name: overlay-size spec: machineConfigPoolSelector: matchLabels: custom-crio: overlay-size containerRuntimeConfig: pidsLimit: 2048 logLevel: debug overlaySize: 8G",
"oc apply -f overlaysize.yml",
"oc edit machineconfigpool worker",
"apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfigPool metadata: creationTimestamp: \"2020-07-09T15:46:34Z\" generation: 3 labels: custom-crio: overlay-size machineconfiguration.openshift.io/mco-built-in: \"\"",
"oc get machineconfigs",
"99-worker-generated-containerruntime 4173030d89fbf4a7a0976d1665491a4d9a6e54f1 3.2.0 7m42s rendered-worker-xyz 4173030d89fbf4a7a0976d1665491a4d9a6e54f1 3.2.0 7m36s",
"oc get mcp worker",
"NAME CONFIG UPDATED UPDATING DEGRADED MACHINECOUNT READYMACHINECOUNT UPDATEDMACHINECOUNT DEGRADEDMACHINECOUNT AGE worker rendered-worker-xyz False True False 3 2 2 0 20h",
"NAME CONFIG UPDATED UPDATING DEGRADED MACHINECOUNT READYMACHINECOUNT UPDATEDMACHINECOUNT DEGRADEDMACHINECOUNT AGE worker rendered-worker-xyz True False False 3 3 3 0 20h",
"head -n 7 /etc/containers/storage.conf [storage] driver = \"overlay\" runroot = \"/var/run/containers/storage\" graphroot = \"/var/lib/containers/storage\" [storage.options] additionalimagestores = [] size = \"8G\"",
"~ USD df -h Filesystem Size Used Available Use% Mounted on overlay 8.0G 8.0K 8.0G 0% /"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.7/html/post-installation_configuration/post-install-machine-configuration-tasks |
3.5.7. Computing for Statistical Aggregates | 3.5.7. Computing for Statistical Aggregates Statistical aggregates are used to collect statistics on numerical values where it is important to accumulate new data quickly and in large volume (that is, storing only aggregated stream statistics). Statistical aggregates can be used in global variables or as elements in an array. To add a value to a statistical aggregate, use the operator <<< value . Example 3.19. stat-aggregates.stp In Example 3.19, "stat-aggregates.stp" , the operator <<< count stores the amount returned by count to the associated value of the corresponding execname() in the reads array. Remember, these values are stored ; they are not added to the associated values of each unique key, nor are they used to replace the current associated values. In a manner of speaking, think of it as having each unique key ( execname() ) having multiple associated values, accumulating with each probe handler run. Note In the context of Example 3.19, "stat-aggregates.stp" , count returns the amount of data read by the returned execname() from the virtual file system. To extract data collected by statistical aggregates, use the syntax format @ extractor ( variable/array index expression ) . extractor can be any of the following integer extractors: count Returns the number of all values stored into the variable/array index expression. Given the sample probe in Example 3.19, "stat-aggregates.stp" , the expression @count(reads[execname()]) will return how many values are stored in each unique key in array reads . sum Returns the sum of all values stored into the variable/array index expression. Again, given the sample probe in Example 3.19, "stat-aggregates.stp" , the expression @sum(reads[execname()]) will return the total of all values stored in each unique key in array reads . min Returns the smallest among all the values stored in the variable/array index expression. max Returns the largest among all the values stored in the variable/array index expression. avg Returns the average of all values stored in the variable/array index expression. When using statistical aggregates, you can also build array constructs that use multiple index expressions (to a maximum of 5). This is helpful in capturing additional contextual information during a probe. For example: Example 3.20. Multiple Array Indexes In Example 3.20, "Multiple Array Indexes" , the first probe tracks how many times each process performs a VFS read. What makes this different from earlier examples is that this array associates a performed read to both a process name and its corresponding process ID. The second probe in Example 3.20, "Multiple Array Indexes" demonstrates how to process and print the information collected by the array reads . Note how the foreach statement uses the same number of variables (that is var1 and var2 ) contained in the first instance of the array reads from the first probe. | [
"global reads probe vfs.read { reads[execname()] <<< count }",
"global reads probe vfs.read { reads[execname(),pid()] <<< 1 } probe timer.s(3) { foreach([var1,var2] in reads) printf(\"%s (%d) : %d \\n\", var1, var2, @count(reads[var1,var2])) }"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/systemtap_beginners_guide/arrayops-aggregates |
Appendix A. Configuring a Local Repository for Offline Red Hat Virtualization Manager Installation | Appendix A. Configuring a Local Repository for Offline Red Hat Virtualization Manager Installation To install Red Hat Virtualization Manager on a system that does not have a direct connection to the Content Delivery Network, download the required packages on a system that has internet access, then create a repository that can be shared with the offline Manager machine. The system hosting the repository must be connected to the same network as the client systems where the packages are to be installed. Prerequisites A Red Hat Enterprise Linux 8 Server installed on a system that has access to the Content Delivery Network. This system downloads all the required packages, and distributes them to your offline systems. A large amount of free disk space available. This procedure downloads a large number of packages, and requires up to 50GB of free disk space. Begin by enabling the Red Hat Virtualization Manager repositories on the online system: Enabling the Red Hat Virtualization Manager Repositories You need to log in and register the online machine with Red Hat Subscription Manager, attach the Red Hat Virtualization Manager subscription, and enable the Manager repositories. Procedure Register your system with the Content Delivery Network, entering your Customer Portal user name and password when prompted: # subscription-manager register Note If you are using an IPv6 network, use an IPv6 transition mechanism to access the Content Delivery Network and subscription manager. Find the Red Hat Virtualization Manager subscription pool and record the pool ID: # subscription-manager list --available Use the pool ID to attach the subscription to the system: # subscription-manager attach --pool= pool_id Note To view currently attached subscriptions: # subscription-manager list --consumed To list all enabled repositories: # dnf repolist Configure the repositories: # subscription-manager repos \ --disable='*' \ --enable=rhel-8-for-x86_64-baseos-eus-rpms \ --enable=rhel-8-for-x86_64-appstream-eus-rpms \ --enable=rhv-4.4-manager-for-rhel-8-x86_64-rpms \ --enable=fast-datapath-for-rhel-8-x86_64-rpms \ --enable=jb-eap-7.4-for-rhel-8-x86_64-rpms \ --enable=openstack-16.2-cinderlib-for-rhel-8-x86_64-rpms \ --enable=rhceph-4-tools-for-rhel-8-x86_64-rpms \ --enable=rhel-8-for-x86_64-appstream-tus-rpms \ --enable=rhel-8-for-x86_64-baseos-tus-rpms Set the RHEL version to 8.6: # subscription-manager release --set=8.6 Enable the pki-deps module. # dnf module -y enable pki-deps Enable version 12 of the postgresql module. # dnf module -y enable postgresql:12 Enable version 14 of the nodejs module: # dnf module -y enable nodejs:14 Synchronize installed packages to update them to the latest available versions. # dnf distro-sync --nobest Additional resources For information on modules and module streams, see the following sections in Installing, managing, and removing user-space components Module streams Selecting a stream before installation of packages Resetting module streams Switching to a later stream Configuring the Offline Repository Servers that are not connected to the Internet can access software repositories on other systems using File Transfer Protocol (FTP). 
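The procedure below publishes the downloaded packages over anonymous FTP and finishes by generating a repository file that is copied to /etc/yum.repos.d/ on the offline Manager machine. For orientation, a single generated stanza looks roughly like the following — the repository ID is taken from the directory name that reposync creates under /var/ftp/pub/rhvrepo , and the host address here is only illustrative:

[rhv-4.4-manager-for-rhel-8-x86_64-rpms]
name=rhv-4.4-manager-for-rhel-8-x86_64-rpms
baseurl=ftp://192.0.2.10/pub/rhvrepo/rhv-4.4-manager-for-rhel-8-x86_64-rpms
enabled=1
gpgcheck=0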
To create the FTP repository, install and configure vsftpd on the intended Manager machine: Install the vsftpd package: # dnf install vsftpd Enable ftp access for an anonymous user to have access to rpm files from the intended Manager machine, and to keep it secure, disable write on ftp server. Edit the /etc/vsftpd/vsftpd.conf file and change the values for anonymous_enable and write_enable as follows: anonymous_enable=YES write_enable=NO Start the vsftpd service, and ensure the service starts on boot: # systemctl start vsftpd.service # systemctl enable vsftpd.service Create a firewall rule to allow FTP service and reload the firewalld service to apply changes: # firewall-cmd --permanent --add-service=ftp # firewall-cmd --reload Red Hat Enterprise Linux 8 enforces SELinux by default, so configure SELinux to allow FTP access: # setsebool -P allow_ftpd_full_access=1 Create a sub-directory inside the /var/ftp/pub/ directory, where the downloaded packages are made available: # mkdir /var/ftp/pub/rhvrepo Download packages from all configured software repositories to the rhvrepo directory. This includes repositories for all Content Delivery Network subscription pools attached to the system, and any locally configured repositories: # reposync -p /var/ftp/pub/rhvrepo --download-metadata This command downloads a large number of packages and their metadata, and takes a long time to complete. Create a repository file, and copy it to the /etc/yum.repos.d/ directory on the intended Manager machine. You can create the configuration file manually or with a script. Run the script below on the machine hosting the repository, replacing ADDRESS in the baseurl with the IP address or FQDN of the machine hosting the repository: #!/bin/sh REPOFILE="/etc/yum.repos.d/rhev.repo" echo -e " " > USDREPOFILE for DIR in USD(find /var/ftp/pub/rhvrepo -maxdepth 1 -mindepth 1 -type d); do echo -e "[USD(basename USDDIR)]" >> USDREPOFILE echo -e "name=USD(basename USDDIR)" >> USDREPOFILE echo -e "baseurl=ftp://__ADDRESS__/pub/rhvrepo/`basename USDDIR`" >> USDREPOFILE echo -e "enabled=1" >> USDREPOFILE echo -e "gpgcheck=0" >> USDREPOFILE echo -e "\n" >> USDREPOFILE done Return to Configuring the Manager . Packages are installed from the local repository, instead of from the Content Delivery Network. Troubleshooting When running reposync , the following error message appears No available modular metadata for modular package "package_name_from_module" it cannot be installed on the system Solution Ensure you have yum-utils-4.0.8-3.el8.noarch or higher installed so reposync correctly downloads all the packages. For more information, see Create a local repo with Red Hat Enterprise Linux 8 . | [
"subscription-manager register",
"subscription-manager list --available",
"subscription-manager attach --pool= pool_id",
"subscription-manager list --consumed",
"dnf repolist",
"subscription-manager repos --disable='*' --enable=rhel-8-for-x86_64-baseos-eus-rpms --enable=rhel-8-for-x86_64-appstream-eus-rpms --enable=rhv-4.4-manager-for-rhel-8-x86_64-rpms --enable=fast-datapath-for-rhel-8-x86_64-rpms --enable=jb-eap-7.4-for-rhel-8-x86_64-rpms --enable=openstack-16.2-cinderlib-for-rhel-8-x86_64-rpms --enable=rhceph-4-tools-for-rhel-8-x86_64-rpms --enable=rhel-8-for-x86_64-appstream-tus-rpms --enable=rhel-8-for-x86_64-baseos-tus-rpms",
"subscription-manager release --set=8.6",
"dnf module -y enable pki-deps",
"dnf module -y enable postgresql:12",
"dnf module -y enable nodejs:14",
"dnf distro-sync --nobest",
"dnf install vsftpd",
"anonymous_enable=YES write_enable=NO",
"systemctl start vsftpd.service systemctl enable vsftpd.service",
"firewall-cmd --permanent --add-service=ftp firewall-cmd --reload",
"setsebool -P allow_ftpd_full_access=1",
"mkdir /var/ftp/pub/rhvrepo",
"reposync -p /var/ftp/pub/rhvrepo --download-metadata",
"#!/bin/sh REPOFILE=\"/etc/yum.repos.d/rhev.repo\" echo -e \" \" > USDREPOFILE for DIR in USD(find /var/ftp/pub/rhvrepo -maxdepth 1 -mindepth 1 -type d); do echo -e \"[USD(basename USDDIR)]\" >> USDREPOFILE echo -e \"name=USD(basename USDDIR)\" >> USDREPOFILE echo -e \"baseurl=ftp://__ADDRESS__/pub/rhvrepo/`basename USDDIR`\" >> USDREPOFILE echo -e \"enabled=1\" >> USDREPOFILE echo -e \"gpgcheck=0\" >> USDREPOFILE echo -e \"\\n\" >> USDREPOFILE done"
] | https://docs.redhat.com/en/documentation/red_hat_virtualization/4.4/html/installing_red_hat_virtualization_as_a_standalone_manager_with_local_databases/configuring_an_offline_repository_for_red_hat_virtualization_manager_installation_sm_localdb_deploy |
Chapter 3. Distributed tracing installation | Chapter 3. Distributed tracing installation 3.1. Installing distributed tracing You can install Red Hat OpenShift distributed tracing on OpenShift Container Platform in either of two ways: You can install Red Hat OpenShift distributed tracing as part of Red Hat OpenShift Service Mesh. Distributed tracing is included by default in the Service Mesh installation. To install Red Hat OpenShift distributed tracing as part of a service mesh, follow the Red Hat Service Mesh Installation instructions. You must install Red Hat OpenShift distributed tracing in the same namespace as your service mesh, that is, the ServiceMeshControlPlane and the Red Hat OpenShift distributed tracing resources must be in the same namespace. If you do not want to install a service mesh, you can use the Red Hat OpenShift distributed tracing Operators to install distributed tracing by itself. To install Red Hat OpenShift distributed tracing without a service mesh, use the following instructions. 3.1.1. Prerequisites Before you can install Red Hat OpenShift distributed tracing, review the installation activities, and ensure that you meet the prerequisites: Possess an active OpenShift Container Platform subscription on your Red Hat account. If you do not have a subscription, contact your sales representative for more information. Review the OpenShift Container Platform 4.10 overview . Install OpenShift Container Platform 4.10. Install OpenShift Container Platform 4.10 on AWS Install OpenShift Container Platform 4.10 on user-provisioned AWS Install OpenShift Container Platform 4.10 on bare metal Install OpenShift Container Platform 4.10 on vSphere Install the version of the OpenShift CLI ( oc ) that matches your OpenShift Container Platform version and add it to your path. An account with the cluster-admin role. 3.1.2. Red Hat OpenShift distributed tracing installation overview The steps for installing Red Hat OpenShift distributed tracing are as follows: Review the documentation and determine your deployment strategy. If your deployment strategy requires persistent storage, install the OpenShift Elasticsearch Operator via the OperatorHub. Install the Red Hat OpenShift distributed tracing platform Operator via the OperatorHub. Modify the custom resource YAML file to support your deployment strategy. Deploy one or more instances of Red Hat OpenShift distributed tracing platform to your OpenShift Container Platform environment. 3.1.3. Installing the OpenShift Elasticsearch Operator The default Red Hat OpenShift distributed tracing platform deployment uses in-memory storage because it is designed to be installed quickly for those evaluating Red Hat OpenShift distributed tracing, giving demonstrations, or using Red Hat OpenShift distributed tracing platform in a test environment. If you plan to use Red Hat OpenShift distributed tracing platform in production, you must install and configure a persistent storage option, in this case, Elasticsearch. Prerequisites You have access to the OpenShift Container Platform web console. You have access to the cluster as a user with the cluster-admin role. If you use Red Hat OpenShift Dedicated, you must have an account with the dedicated-admin role. Warning Do not install Community versions of the Operators. Community Operators are not supported. Note If you have already installed the OpenShift Elasticsearch Operator as part of OpenShift Logging, you do not need to install the OpenShift Elasticsearch Operator again. 
The Red Hat OpenShift distributed tracing platform Operator creates the Elasticsearch instance using the installed OpenShift Elasticsearch Operator. Procedure Log in to the OpenShift Container Platform web console as a user with the cluster-admin role. If you use Red Hat OpenShift Dedicated, you must have an account with the dedicated-admin role. Navigate to Operators OperatorHub . Type Elasticsearch into the filter box to locate the OpenShift Elasticsearch Operator. Click the OpenShift Elasticsearch Operator provided by Red Hat to display information about the Operator. Click Install . On the Install Operator page, select the stable Update Channel. This automatically updates your Operator as new versions are released. Accept the default All namespaces on the cluster (default) . This installs the Operator in the default openshift-operators-redhat project and makes the Operator available to all projects in the cluster. Note The Elasticsearch installation requires the openshift-operators-redhat namespace for the OpenShift Elasticsearch Operator. The other Red Hat OpenShift distributed tracing Operators are installed in the openshift-operators namespace. Accept the default Automatic approval strategy. By accepting the default, when a new version of this Operator is available, Operator Lifecycle Manager (OLM) automatically upgrades the running instance of your Operator without human intervention. If you select Manual updates, when a newer version of an Operator is available, OLM creates an update request. As a cluster administrator, you must then manually approve that update request to have the Operator updated to the new version. Note The Manual approval strategy requires a user with appropriate credentials to approve the Operator install and subscription process. Click Install . On the Installed Operators page, select the openshift-operators-redhat project. Wait until you see that the OpenShift Elasticsearch Operator shows a status of "InstallSucceeded" before continuing. 3.1.4. Installing the Red Hat OpenShift distributed tracing platform Operator To install Red Hat OpenShift distributed tracing platform, you use the OperatorHub to install the Red Hat OpenShift distributed tracing platform Operator. By default, the Operator is installed in the openshift-operators project. Prerequisites You have access to the OpenShift Container Platform web console. You have access to the cluster as a user with the cluster-admin role. If you use Red Hat OpenShift Dedicated, you must have an account with the dedicated-admin role. If you require persistent storage, you must also install the OpenShift Elasticsearch Operator before installing the Red Hat OpenShift distributed tracing platform Operator. Warning Do not install Community versions of the Operators. Community Operators are not supported. Procedure Log in to the OpenShift Container Platform web console as a user with the cluster-admin role. If you use Red Hat OpenShift Dedicated, you must have an account with the dedicated-admin role. Navigate to Operators OperatorHub . Type distributed tracing platform into the filter to locate the Red Hat OpenShift distributed tracing platform Operator. Click the Red Hat OpenShift distributed tracing platform Operator provided by Red Hat to display information about the Operator. Click Install . On the Install Operator page, select the stable Update Channel. This automatically updates your Operator as new versions are released. Accept the default All namespaces on the cluster (default) . 
This installs the Operator in the default openshift-operators project and makes the Operator available to all projects in the cluster. Accept the default Automatic approval strategy. By accepting the default, when a new version of this Operator is available, Operator Lifecycle Manager (OLM) automatically upgrades the running instance of your Operator without human intervention. If you select Manual updates, when a newer version of an Operator is available, OLM creates an update request. As a cluster administrator, you must then manually approve that update request to have the Operator updated to the new version. Note The Manual approval strategy requires a user with appropriate credentials to approve the Operator install and subscription process. Click Install . Navigate to Operators Installed Operators . On the Installed Operators page, select the openshift-operators project. Wait until you see that the Red Hat OpenShift distributed tracing platform Operator shows a status of "Succeeded" before continuing. 3.1.5. Installing the Red Hat OpenShift distributed tracing data collection Operator Important The Red Hat OpenShift distributed tracing data collection Operator is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . To install Red Hat OpenShift distributed tracing data collection, you use the OperatorHub to install the Red Hat OpenShift distributed tracing data collection Operator. By default, the Operator is installed in the openshift-operators project. Prerequisites You have access to the OpenShift Container Platform web console. You have access to the cluster as a user with the cluster-admin role. If you use Red Hat OpenShift Dedicated, you must have an account with the dedicated-admin role. Warning Do not install Community versions of the Operators. Community Operators are not supported. Procedure Log in to the OpenShift Container Platform web console as a user with the cluster-admin role. If you use Red Hat OpenShift Dedicated, you must have an account with the dedicated-admin role. Navigate to Operators OperatorHub . Type distributed tracing data collection into the filter to locate the Red Hat OpenShift distributed tracing data collection Operator. Click the Red Hat OpenShift distributed tracing data collection Operator provided by Red Hat to display information about the Operator. Click Install . On the Install Operator page, accept the default stable Update channel. This automatically updates your Operator as new versions are released. Accept the default All namespaces on the cluster (default) . This installs the Operator in the default openshift-operators project and makes the Operator available to all projects in the cluster. Accept the default Automatic approval strategy. By accepting the default, when a new version of this Operator is available, Operator Lifecycle Manager (OLM) automatically upgrades the running instance of your Operator without human intervention. If you select Manual updates, when a newer version of an Operator is available, OLM creates an update request. 
As a cluster administrator, you must then manually approve that update request to have the Operator updated to the new version. Note The Manual approval strategy requires a user with appropriate credentials to approve the Operator install and subscription process. Click Install . Navigate to Operators Installed Operators . On the Installed Operators page, select the openshift-operators project. Wait until you see that the Red Hat OpenShift distributed tracing data collection Operator shows a status of "Succeeded" before continuing. 3.2. Configuring and deploying distributed tracing The Red Hat OpenShift distributed tracing platform Operator uses a custom resource definition (CRD) file that defines the architecture and configuration settings to be used when creating and deploying the distributed tracing platform resources. You can either install the default configuration or modify the file to better suit your business requirements. Red Hat OpenShift distributed tracing platform has predefined deployment strategies. You specify a deployment strategy in the custom resource file. When you create a distributed tracing platform instance, the Operator uses this configuration file to create the objects necessary for the deployment. Jaeger custom resource file showing deployment strategy apiVersion: jaegertracing.io/v1 kind: Jaeger metadata: name: MyConfigFile spec: strategy: production 1 1 The Red Hat OpenShift distributed tracing platform Operator currently supports the following deployment strategies: allInOne (Default) - This strategy is intended for development, testing, and demo purposes; it is not intended for production use. The main backend components, Agent, Collector, and Query service, are all packaged into a single executable that is configured, by default, to use in-memory storage. Note In-memory storage is not persistent, which means that if the distributed tracing platform instance shuts down, restarts, or is replaced, your trace data will be lost. In-memory storage also cannot be scaled, because each pod has its own memory. For persistent storage, you must use the production or streaming strategies, which use Elasticsearch as the default storage. production - The production strategy is intended for production environments, where long-term storage of trace data is important and a more scalable, highly available architecture is required. Each of the backend components is therefore deployed separately. The Agent can be injected as a sidecar on the instrumented application. The Query and Collector services are configured with a supported storage type - currently Elasticsearch. Multiple instances of each of these components can be provisioned as required for performance and resilience purposes. streaming - The streaming strategy is designed to augment the production strategy by providing a streaming capability that effectively sits between the Collector and the Elasticsearch backend storage. This provides the benefit of reducing the pressure on the backend storage under high load situations, and enables other trace post-processing capabilities to tap into the real-time span data directly from the streaming platform ( AMQ Streams / Kafka ). Note The streaming strategy requires an additional Red Hat subscription for AMQ Streams. Note The streaming deployment strategy is currently unsupported on IBM Z. Note There are two ways to install and use Red Hat OpenShift distributed tracing, as part of a service mesh or as a standalone component.
If you have installed distributed tracing as part of Red Hat OpenShift Service Mesh, you can perform basic configuration as part of the ServiceMeshControlPlane , but for complete control you should configure a Jaeger CR and then reference your distributed tracing configuration file in the ServiceMeshControlPlane . 3.2.1. Deploying the distributed tracing default strategy from the web console The custom resource definition (CRD) defines the configuration used when you deploy an instance of Red Hat OpenShift distributed tracing. The default CR is named jaeger-all-in-one-inmemory and it is configured with minimal resources to ensure that you can successfully install it on a default OpenShift Container Platform installation. You can use this default configuration to create a Red Hat OpenShift distributed tracing platform instance that uses the AllInOne deployment strategy, or you can define your own custom resource file. Note In-memory storage is not persistent. If the Jaeger pod shuts down, restarts, or is replaced, your trace data will be lost. For persistent storage, you must use the production or streaming strategies, which use Elasticsearch as the default storage. Prerequisites The Red Hat OpenShift distributed tracing platform Operator has been installed. You have reviewed the instructions for how to customize the deployment. You have access to the cluster as a user with the cluster-admin role. Procedure Log in to the OpenShift Container Platform web console as a user with the cluster-admin role. Create a new project, for example tracing-system . Note If you are installing as part of Service Mesh, the distributed tracing resources must be installed in the same namespace as the ServiceMeshControlPlane resource, for example istio-system . Navigate to Home Projects . Click Create Project . Enter tracing-system in the Name field. Click Create . Navigate to Operators Installed Operators . If necessary, select tracing-system from the Project menu. You may have to wait a few moments for the Operators to be copied to the new project. Click the Red Hat OpenShift distributed tracing platform Operator. On the Details tab, under Provided APIs , the Operator provides a single link. Under Jaeger , click Create Instance . On the Create Jaeger page, to install using the defaults, click Create to create the distributed tracing platform instance. On the Jaegers page, click the name of the distributed tracing platform instance, for example, jaeger-all-in-one-inmemory . On the Jaeger Details page, click the Resources tab. Wait until the pod has a status of "Running" before continuing. 3.2.1.1. Deploying the distributed tracing default strategy from the CLI Follow this procedure to create an instance of distributed tracing platform from the command line. Prerequisites The Red Hat OpenShift distributed tracing platform Operator has been installed and verified. You have reviewed the instructions for how to customize the deployment. You have access to the OpenShift CLI ( oc ) that matches your OpenShift Container Platform version. You have access to the cluster as a user with the cluster-admin role. Procedure Log in to the OpenShift Container Platform CLI as a user with the cluster-admin role. USD oc login --username=<NAMEOFUSER> https://<HOSTNAME>:8443 Create a new project named tracing-system .
USD oc new-project tracing-system Create a custom resource file named jaeger.yaml that contains the following text: Example jaeger-all-in-one.yaml apiVersion: jaegertracing.io/v1 kind: Jaeger metadata: name: jaeger-all-in-one-inmemory Run the following command to deploy distributed tracing platform: USD oc create -n tracing-system -f jaeger.yaml Run the following command to watch the progress of the pods during the installation process: USD oc get pods -n tracing-system -w After the installation process has completed, you should see output similar to the following example: NAME READY STATUS RESTARTS AGE jaeger-all-in-one-inmemory-cdff7897b-qhfdx 2/2 Running 0 24s 3.2.2. Deploying the distributed tracing production strategy from the web console The production deployment strategy is intended for production environments that require a more scalable and highly available architecture, and where long-term storage of trace data is important. Prerequisites The OpenShift Elasticsearch Operator has been installed. The Red Hat OpenShift distributed tracing platform Operator has been installed. You have reviewed the instructions for how to customize the deployment. You have access to the cluster as a user with the cluster-admin role. Procedure Log in to the OpenShift Container Platform web console as a user with the cluster-admin role. Create a new project, for example tracing-system . Note If you are installing as part of Service Mesh, the distributed tracing resources must be installed in the same namespace as the ServiceMeshControlPlane resource, for example istio-system . Navigate to Home Projects . Click Create Project . Enter tracing-system in the Name field. Click Create . Navigate to Operators Installed Operators . If necessary, select tracing-system from the Project menu. You may have to wait a few moments for the Operators to be copied to the new project. Click the Red Hat OpenShift distributed tracing platform Operator. On the Overview tab, under Provided APIs , the Operator provides a single link. Under Jaeger , click Create Instance . On the Create Jaeger page, replace the default all-in-one YAML text with your production YAML configuration, for example: Example jaeger-production.yaml file with Elasticsearch apiVersion: jaegertracing.io/v1 kind: Jaeger metadata: name: jaeger-production namespace: spec: strategy: production ingress: security: oauth-proxy storage: type: elasticsearch elasticsearch: nodeCount: 3 redundancyPolicy: SingleRedundancy esIndexCleaner: enabled: true numberOfDays: 7 schedule: 55 23 * * * esRollover: schedule: '*/30 * * * *' Click Create to create the distributed tracing platform instance. On the Jaegers page, click the name of the distributed tracing platform instance, for example, jaeger-prod-elasticsearch . On the Jaeger Details page, click the Resources tab. Wait until all the pods have a status of "Running" before continuing. 3.2.2.1. Deploying the distributed tracing production strategy from the CLI Follow this procedure to create an instance of distributed tracing platform from the command line. Prerequisites The OpenShift Elasticsearch Operator has been installed. The Red Hat OpenShift distributed tracing platform Operator has been installed. You have reviewed the instructions for how to customize the deployment. You have access to the OpenShift CLI ( oc ) that matches your OpenShift Container Platform version. You have access to the cluster as a user with the cluster-admin role. 
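Before you start the procedure, you can optionally confirm that the Operator prerequisites above are in place by listing the installed ClusterServiceVersions. This is a minimal check that assumes the Operators were installed into the default namespaces described earlier in this chapter; adjust the namespaces if you installed the Operators elsewhere.
$ oc get csv -n openshift-operators-redhat   # should list the OpenShift Elasticsearch Operator with a PHASE of Succeeded
$ oc get csv -n openshift-operators          # should list the Red Hat OpenShift distributed tracing platform Operator with a PHASE of Succeeded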
Procedure Log in to the OpenShift Container Platform CLI as a user with the cluster-admin role. USD oc login --username=<NAMEOFUSER> https://<HOSTNAME>:8443 Create a new project named tracing-system . USD oc new-project tracing-system Create a custom resource file named jaeger-production.yaml that contains the text of the example file in the procedure. Run the following command to deploy distributed tracing platform: USD oc create -n tracing-system -f jaeger-production.yaml Run the following command to watch the progress of the pods during the installation process: USD oc get pods -n tracing-system -w After the installation process has completed, you should see output similar to the following example: NAME READY STATUS RESTARTS AGE elasticsearch-cdm-jaegersystemjaegerproduction-1-6676cf568gwhlw 2/2 Running 0 10m elasticsearch-cdm-jaegersystemjaegerproduction-2-bcd4c8bf5l6g6w 2/2 Running 0 10m elasticsearch-cdm-jaegersystemjaegerproduction-3-844d6d9694hhst 2/2 Running 0 10m jaeger-production-collector-94cd847d-jwjlj 1/1 Running 3 8m32s jaeger-production-query-5cbfbd499d-tv8zf 3/3 Running 3 8m32s 3.2.3. Deploying the distributed tracing streaming strategy from the web console The streaming deployment strategy is intended for production environments that require a more scalable and highly available architecture, and where long-term storage of trace data is important. The streaming strategy provides a streaming capability that sits between the Collector and the Elasticsearch storage. This reduces the pressure on the storage under high load situations, and enables other trace post-processing capabilities to tap into the real-time span data directly from the Kafka streaming platform. Note The streaming strategy requires an additional Red Hat subscription for AMQ Streams. If you do not have an AMQ Streams subscription, contact your sales representative for more information. Note The streaming deployment strategy is currently unsupported on IBM Z. Prerequisites The AMQ Streams Operator has been installed. If using version 1.4.0 or higher you can use self-provisioning. Otherwise you must create the Kafka instance. The Red Hat OpenShift distributed tracing platform Operator has been installed. You have reviewed the instructions for how to customize the deployment. You have access to the cluster as a user with the cluster-admin role. Procedure Log in to the OpenShift Container Platform web console as a user with the cluster-admin role. Create a new project, for example tracing-system . Note If you are installing as part of Service Mesh, the distributed tracing resources must be installed in the same namespace as the ServiceMeshControlPlane resource, for example istio-system . Navigate to Home Projects . Click Create Project . Enter tracing-system in the Name field. Click Create . Navigate to Operators Installed Operators . If necessary, select tracing-system from the Project menu. You may have to wait a few moments for the Operators to be copied to the new project. Click the Red Hat OpenShift distributed tracing platform Operator. On the Overview tab, under Provided APIs , the Operator provides a single link. Under Jaeger , click Create Instance . 
On the Create Jaeger page, replace the default all-in-one YAML text with your streaming YAML configuration, for example: Example jaeger-streaming.yaml file apiVersion: jaegertracing.io/v1 kind: Jaeger metadata: name: jaeger-streaming spec: strategy: streaming collector: options: kafka: producer: topic: jaeger-spans #Note: If brokers are not defined,AMQStreams 1.4.0+ will self-provision Kafka. brokers: my-cluster-kafka-brokers.kafka:9092 storage: type: elasticsearch ingester: options: kafka: consumer: topic: jaeger-spans brokers: my-cluster-kafka-brokers.kafka:9092 Click Create to create the distributed tracing platform instance. On the Jaegers page, click the name of the distributed tracing platform instance, for example, jaeger-streaming . On the Jaeger Details page, click the Resources tab. Wait until all the pods have a status of "Running" before continuing. 3.2.3.1. Deploying the distributed tracing streaming strategy from the CLI Follow this procedure to create an instance of distributed tracing platform from the command line. Prerequisites The AMQ Streams Operator has been installed. If using version 1.4.0 or higher you can use self-provisioning. Otherwise you must create the Kafka instance. The Red Hat OpenShift distributed tracing platform Operator has been installed. You have reviewed the instructions for how to customize the deployment. You have access to the OpenShift CLI ( oc ) that matches your OpenShift Container Platform version. You have access to the cluster as a user with the cluster-admin role. Procedure Log in to the OpenShift Container Platform CLI as a user with the cluster-admin role. USD oc login --username=<NAMEOFUSER> https://<HOSTNAME>:8443 Create a new project named tracing-system . USD oc new-project tracing-system Create a custom resource file named jaeger-streaming.yaml that contains the text of the example file in the procedure. Run the following command to deploy Jaeger: USD oc create -n tracing-system -f jaeger-streaming.yaml Run the following command to watch the progress of the pods during the installation process: USD oc get pods -n tracing-system -w After the installation process has completed, you should see output similar to the following example: NAME READY STATUS RESTARTS AGE elasticsearch-cdm-jaegersystemjaegerstreaming-1-697b66d6fcztcnn 2/2 Running 0 5m40s elasticsearch-cdm-jaegersystemjaegerstreaming-2-5f4b95c78b9gckz 2/2 Running 0 5m37s elasticsearch-cdm-jaegersystemjaegerstreaming-3-7b6d964576nnz97 2/2 Running 0 5m5s jaeger-streaming-collector-6f6db7f99f-rtcfm 1/1 Running 0 80s jaeger-streaming-entity-operator-6b6d67cc99-4lm9q 3/3 Running 2 2m18s jaeger-streaming-ingester-7d479847f8-5h8kc 1/1 Running 0 80s jaeger-streaming-kafka-0 2/2 Running 0 3m1s jaeger-streaming-query-65bf5bb854-ncnc7 3/3 Running 0 80s jaeger-streaming-zookeeper-0 2/2 Running 0 3m39s 3.2.4. Validating your deployment 3.2.4.1. Accessing the Jaeger console To access the Jaeger console you must have either Red Hat OpenShift Service Mesh or Red Hat OpenShift distributed tracing installed, and Red Hat OpenShift distributed tracing platform installed, configured, and deployed. The installation process creates a route to access the Jaeger console. If you know the URL for the Jaeger console, you can access it directly. If you do not know the URL, use the following directions. Procedure from OpenShift console Log in to the OpenShift Container Platform web console as a user with cluster-admin rights. 
If you use Red Hat OpenShift Dedicated, you must have an account with the dedicated-admin role. Navigate to Networking Routes . On the Routes page, select the control plane project, for example tracing-system , from the Namespace menu. The Location column displays the linked address for each route. If necessary, use the filter to find the jaeger route. Click the route Location to launch the console. Click Log In With OpenShift . Procedure from the CLI Log in to the OpenShift Container Platform CLI as a user with the cluster-admin role. If you use Red Hat OpenShift Dedicated, you must have an account with the dedicated-admin role. USD oc login --username=<NAMEOFUSER> https://<HOSTNAME>:6443 To query for details of the route using the command line, enter the following command. In this example, tracing-system is the control plane namespace. USD export JAEGER_URL=USD(oc get route -n tracing-system jaeger -o jsonpath='{.spec.host}') Launch a browser and navigate to https://<JAEGER_URL> , where <JAEGER_URL> is the route that you discovered in the step. Log in using the same user name and password that you use to access the OpenShift Container Platform console. If you have added services to the service mesh and have generated traces, you can use the filters and Find Traces button to search your trace data. If you are validating the console installation, there is no trace data to display. 3.2.5. Customizing your deployment 3.2.5.1. Deployment best practices Red Hat OpenShift distributed tracing instance names must be unique. If you want to have multiple Red Hat OpenShift distributed tracing platform instances and are using sidecar injected agents, then the Red Hat OpenShift distributed tracing platform instances should have unique names, and the injection annotation should explicitly specify the Red Hat OpenShift distributed tracing platform instance name the tracing data should be reported to. If you have a multitenant implementation and tenants are separated by namespaces, deploy a Red Hat OpenShift distributed tracing platform instance to each tenant namespace. Agent as a daemonset is not supported for multitenant installations or Red Hat OpenShift Dedicated. Agent as a sidecar is the only supported configuration for these use cases. If you are installing distributed tracing as part of Red Hat OpenShift Service Mesh, the distributed tracing resources must be installed in the same namespace as the ServiceMeshControlPlane resource. For information about configuring persistent storage, see Understanding persistent storage and the appropriate configuration topic for your chosen storage option. 3.2.5.2. Distributed tracing default configuration options The Jaeger custom resource (CR) defines the architecture and settings to be used when creating the distributed tracing platform resources. You can modify these parameters to customize your distributed tracing platform implementation to your business needs. Jaeger generic YAML example apiVersion: jaegertracing.io/v1 kind: Jaeger metadata: name: name spec: strategy: <deployment_strategy> allInOne: options: {} resources: {} agent: options: {} resources: {} collector: options: {} resources: {} sampling: options: {} storage: type: options: {} query: options: {} resources: {} ingester: options: {} resources: {} options: {} Table 3.1. Jaeger parameters Parameter Description Values Default value apiVersion: API version to use when creating the object. jaegertracing.io/v1 jaegertracing.io/v1 kind: Defines the kind of Kubernetes object to create. 
jaeger metadata: Data that helps uniquely identify the object, including a name string, UID , and optional namespace . OpenShift Container Platform automatically generates the UID and completes the namespace with the name of the project where the object is created. name: Name for the object. The name of your distributed tracing platform instance. jaeger-all-in-one-inmemory spec: Specification for the object to be created. Contains all of the configuration parameters for your distributed tracing platform instance. When a common definition for all Jaeger components is required, it is defined under the spec node. When the definition relates to an individual component, it is placed under the spec/<component> node. N/A strategy: Jaeger deployment strategy allInOne , production , or streaming allInOne allInOne: Because the allInOne image deploys the Agent, Collector, Query, Ingester, and Jaeger UI in a single pod, configuration for this deployment must nest component configuration under the allInOne parameter. agent: Configuration options that define the Agent. collector: Configuration options that define the Jaeger Collector. sampling: Configuration options that define the sampling strategies for tracing. storage: Configuration options that define the storage. All storage-related options must be placed under storage , rather than under the allInOne or other component options. query: Configuration options that define the Query service. ingester: Configuration options that define the Ingester service. The following example YAML is the minimum required to create a Red Hat OpenShift distributed tracing platform deployment using the default settings. Example minimum required dist-tracing-all-in-one.yaml apiVersion: jaegertracing.io/v1 kind: Jaeger metadata: name: jaeger-all-in-one-inmemory 3.2.5.3. Jaeger Collector configuration options The Jaeger Collector is the component responsible for receiving the spans that were captured by the tracer and writing them to persistent Elasticsearch storage when using the production strategy, or to AMQ Streams when using the streaming strategy. The Collectors are stateless and thus many instances of Jaeger Collector can be run in parallel. Collectors require almost no configuration, except for the location of the Elasticsearch cluster. Table 3.2. Parameters used by the Operator to define the Jaeger Collector Parameter Description Values Specifies the number of Collector replicas to create. Integer, for example, 5 Table 3.3. Configuration parameters passed to the Collector Parameter Description Values Configuration options that define the Jaeger Collector. The number of workers pulling from the queue. Integer, for example, 50 The size of the Collector queue. Integer, for example, 2000 The topic parameter identifies the Kafka configuration used by the Collector to produce the messages, and the Ingester to consume the messages. Label for the producer. Identifies the Kafka configuration used by the Collector to produce the messages. If brokers are not specified, and you have AMQ Streams 1.4.0+ installed, the Red Hat OpenShift distributed tracing platform Operator will self-provision Kafka. Logging level for the Collector. Possible values: debug , info , warn , error , fatal , panic . 3.2.5.4. Distributed tracing sampling configuration options The Red Hat OpenShift distributed tracing platform Operator can be used to define sampling strategies that will be supplied to tracers that have been configured to use a remote sampler. 
While all traces are generated, only a few are sampled. Sampling a trace marks the trace for further processing and storage. Note This is not relevant if a trace was started by the Envoy proxy, as the sampling decision is made there. The Jaeger sampling decision is only relevant when the trace is started by an application using the client. When a service receives a request that contains no trace context, the client starts a new trace, assigns it a random trace ID, and makes a sampling decision based on the currently installed sampling strategy. The sampling decision propagates to all subsequent requests in the trace so that other services are not making the sampling decision again. distributed tracing platform libraries support the following samplers: Probabilistic - The sampler makes a random sampling decision with the probability of sampling equal to the value of the sampling.param property. For example, using sampling.param=0.1 samples approximately 1 in 10 traces. Rate Limiting - The sampler uses a leaky bucket rate limiter to ensure that traces are sampled with a certain constant rate. For example, using sampling.param=2.0 samples requests with the rate of 2 traces per second. Table 3.4. Jaeger sampling options Parameter Description Values Default value Configuration options that define the sampling strategies for tracing. If you do not provide configuration, the Collectors will return the default probabilistic sampling policy with 0.001 (0.1%) probability for all services. Sampling strategy to use. See descriptions above. Valid values are probabilistic , and ratelimiting . probabilistic Parameters for the selected sampling strategy. Decimal and integer values (0, .1, 1, 10) 1 This example defines a default sampling strategy that is probabilistic, with a 50% chance of the trace instances being sampled. Probabilistic sampling example apiVersion: jaegertracing.io/v1 kind: Jaeger metadata: name: with-sampling spec: sampling: options: default_strategy: type: probabilistic param: 0.5 service_strategies: - service: alpha type: probabilistic param: 0.8 operation_strategies: - operation: op1 type: probabilistic param: 0.2 - operation: op2 type: probabilistic param: 0.4 - service: beta type: ratelimiting param: 5 If there are no user-supplied configurations, the distributed tracing platform uses the following settings: Default sampling spec: sampling: options: default_strategy: type: probabilistic param: 1 3.2.5.5. Distributed tracing storage configuration options You configure storage for the Collector, Ingester, and Query services under spec.storage . Multiple instances of each of these components can be provisioned as required for performance and resilience purposes. Table 3.5. General storage parameters used by the Red Hat OpenShift distributed tracing platform Operator to define distributed tracing storage Parameter Description Values Default value Type of storage to use for the deployment. memory or elasticsearch . Memory storage is only appropriate for development, testing, demonstrations, and proof of concept environments as the data does not persist if the pod is shut down. For production environments distributed tracing platform supports Elasticsearch for persistent storage. memory Name of the secret, for example tracing-secret . N/A Configuration options that define the storage. Table 3.6. Elasticsearch index cleaner parameters Parameter Description Values Default value When using Elasticsearch storage, by default a job is created to clean old traces from the index. 
This parameter enables or disables the index cleaner job. true / false true Number of days to wait before deleting an index. Integer value 7 Defines the schedule for how often to clean the Elasticsearch index. Cron expression "55 23 * * *" 3.2.5.5.1. Auto-provisioning an Elasticsearch instance When you deploy a Jaeger custom resource, the Red Hat OpenShift distributed tracing platform Operator uses the OpenShift Elasticsearch Operator to create an Elasticsearch cluster based on the configuration provided in the storage section of the custom resource file. The Red Hat OpenShift distributed tracing platform Operator will provision Elasticsearch if the following configurations are set: spec.storage:type is set to elasticsearch spec.storage.elasticsearch.doNotProvision set to false spec.storage.options.es.server-urls is not defined, that is, there is no connection to an Elasticsearch instance that was not provisioned by the Red Hat Elasticsearch Operator. When provisioning Elasticsearch, the Red Hat OpenShift distributed tracing platform Operator sets the Elasticsearch custom resource name to the value of spec.storage.elasticsearch.name from the Jaeger custom resource. If you do not specify a value for spec.storage.elasticsearch.name , the Operator uses elasticsearch . Restrictions You can have only one distributed tracing platform with self-provisioned Elasticsearch instance per namespace. The Elasticsearch cluster is meant to be dedicated for a single distributed tracing platform instance. There can be only one Elasticsearch per namespace. Note If you already have installed Elasticsearch as part of OpenShift Logging, the Red Hat OpenShift distributed tracing platform Operator can use the installed OpenShift Elasticsearch Operator to provision storage. The following configuration parameters are for a self-provisioned Elasticsearch instance, that is an instance created by the Red Hat OpenShift distributed tracing platform Operator using the OpenShift Elasticsearch Operator. You specify configuration options for self-provisioned Elasticsearch under spec:storage:elasticsearch in your configuration file. Table 3.7. Elasticsearch resource configuration parameters Parameter Description Values Default value Use to specify whether or not an Elasticsearch instance should be provisioned by the Red Hat OpenShift distributed tracing platform Operator. true / false true Name of the Elasticsearch instance. The Red Hat OpenShift distributed tracing platform Operator uses the Elasticsearch instance specified in this parameter to connect to Elasticsearch. string elasticsearch Number of Elasticsearch nodes. For high availability use at least 3 nodes. Do not use 2 nodes as "split brain" problem can happen. Integer value. For example, Proof of concept = 1, Minimum deployment =3 3 Number of central processing units for requests, based on your environment's configuration. Specified in cores or millicores, for example, 200m, 0.5, 1. For example, Proof of concept = 500m, Minimum deployment =1 1 Available memory for requests, based on your environment's configuration. Specified in bytes, for example, 200Ki, 50Mi, 5Gi. For example, Proof of concept = 1Gi, Minimum deployment = 16Gi* 16Gi Limit on number of central processing units, based on your environment's configuration. Specified in cores or millicores, for example, 200m, 0.5, 1. For example, Proof of concept = 500m, Minimum deployment =1 Available memory limit based on your environment's configuration. Specified in bytes, for example, 200Ki, 50Mi, 5Gi. 
For example, Proof of concept = 1Gi, Minimum deployment = 16Gi* Data replication policy defines how Elasticsearch shards are replicated across data nodes in the cluster. If not specified, the Red Hat OpenShift distributed tracing platform Operator automatically determines the most appropriate replication based on number of nodes. ZeroRedundancy (no replica shards), SingleRedundancy (one replica shard), MultipleRedundancy (each index is spread over half of the Data nodes), FullRedundancy (each index is fully replicated on every Data node in the cluster). Use to specify whether or not distributed tracing platform should use the certificate management feature of the Red Hat Elasticsearch Operator. This feature was added to logging subsystem for Red Hat OpenShift 5.2 in OpenShift Container Platform 4.7 and is the preferred setting for new Jaeger deployments. true / false true *Each Elasticsearch node can operate with a lower memory setting though this is NOT recommended for production deployments. For production use, you should have no less than 16Gi allocated to each pod by default, but preferably allocate as much as you can, up to 64Gi per pod. Production storage example apiVersion: jaegertracing.io/v1 kind: Jaeger metadata: name: simple-prod spec: strategy: production storage: type: elasticsearch elasticsearch: nodeCount: 3 resources: requests: cpu: 1 memory: 16Gi limits: memory: 16Gi Storage example with persistent storage: apiVersion: jaegertracing.io/v1 kind: Jaeger metadata: name: simple-prod spec: strategy: production storage: type: elasticsearch elasticsearch: nodeCount: 1 storage: 1 storageClassName: gp2 size: 5Gi resources: requests: cpu: 200m memory: 4Gi limits: memory: 4Gi redundancyPolicy: ZeroRedundancy 1 Persistent storage configuration. In this case AWS gp2 with 5Gi size. When no value is specified, distributed tracing platform uses emptyDir . The OpenShift Elasticsearch Operator provisions PersistentVolumeClaim and PersistentVolume which are not removed with distributed tracing platform instance. You can mount the same volumes if you create a distributed tracing platform instance with the same name and namespace. 3.2.5.5.2. Connecting to an existing Elasticsearch instance You can use an existing Elasticsearch cluster for storage with distributed tracing. An existing Elasticsearch cluster, also known as an external Elasticsearch instance, is an instance that was not installed by the Red Hat OpenShift distributed tracing platform Operator or by the Red Hat Elasticsearch Operator. When you deploy a Jaeger custom resource, the Red Hat OpenShift distributed tracing platform Operator will not provision Elasticsearch if the following configurations are set: spec.storage.elasticsearch.doNotProvision set to true spec.storage.options.es.server-urls has a value spec.storage.elasticsearch.name has a value, or if the Elasticsearch instance name is elasticsearch . The Red Hat OpenShift distributed tracing platform Operator uses the Elasticsearch instance specified in spec.storage.elasticsearch.name to connect to Elasticsearch. Restrictions You cannot share or reuse a OpenShift Container Platform logging Elasticsearch instance with distributed tracing platform. The Elasticsearch cluster is meant to be dedicated for a single distributed tracing platform instance. Note Red Hat does not provide support for your external Elasticsearch instance. You can review the tested integrations matrix on the Customer Portal . 
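The conditions above can be illustrated with a minimal sketch of a Jaeger custom resource that disables self-provisioning and points the storage options at an existing cluster. The instance name, server URL, and secret name below are placeholders used only for illustration; the complete, supported examples appear after the parameter tables that follow.
apiVersion: jaegertracing.io/v1
kind: Jaeger
metadata:
  name: jaeger-external-es
spec:
  strategy: production
  storage:
    type: elasticsearch
    elasticsearch:
      doNotProvision: true                                    # do not self-provision Elasticsearch
    options:
      es:
        server-urls: https://my-external-es.example.com:9200  # placeholder URL of the existing cluster
    secretName: tracing-secret                                 # assumed secret containing ES_USERNAME and ES_PASSWORD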
The following configuration parameters are for an already existing Elasticsearch instance, also known as an external Elasticsearch instance. In this case, you specify configuration options for Elasticsearch under spec:storage:options:es in your custom resource file. Table 3.8. General ES configuration parameters Parameter Description Values Default value URL of the Elasticsearch instance. The fully-qualified domain name of the Elasticsearch server. http://elasticsearch.<namespace>.svc:9200 The maximum document count to return from an Elasticsearch query. This will also apply to aggregations. If you set both es.max-doc-count and es.max-num-spans , Elasticsearch will use the smaller value of the two. 10000 [ Deprecated - Will be removed in a future release, use es.max-doc-count instead.] The maximum number of spans to fetch at a time, per query, in Elasticsearch. If you set both es.max-num-spans and es.max-doc-count , Elasticsearch will use the smaller value of the two. 10000 The maximum lookback for spans in Elasticsearch. 72h0m0s The sniffer configuration for Elasticsearch. The client uses the sniffing process to find all nodes automatically. Disabled by default. true / false false Option to enable TLS when sniffing an Elasticsearch Cluster. The client uses the sniffing process to find all nodes automatically. Disabled by default true / false false Timeout used for queries. When set to zero there is no timeout. 0s The username required by Elasticsearch. The basic authentication also loads CA if it is specified. See also es.password . The password required by Elasticsearch. See also, es.username . The major Elasticsearch version. If not specified, the value will be auto-detected from Elasticsearch. 0 Table 3.9. ES data replication parameters Parameter Description Values Default value The number of replicas per index in Elasticsearch. 1 The number of shards per index in Elasticsearch. 5 Table 3.10. ES index configuration parameters Parameter Description Values Default value Automatically create index templates at application startup when set to true . When templates are installed manually, set to false . true / false true Optional prefix for distributed tracing platform indices. For example, setting this to "production" creates indices named "production-tracing-*". Table 3.11. ES bulk processor configuration parameters Parameter Description Values Default value The number of requests that can be added to the queue before the bulk processor decides to commit updates to disk. 1000 A time.Duration after which bulk requests are committed, regardless of other thresholds. To disable the bulk processor flush interval, set this to zero. 200ms The number of bytes that the bulk requests can take up before the bulk processor decides to commit updates to disk. 5000000 The number of workers that are able to receive and commit bulk requests to Elasticsearch. 1 Table 3.12. ES TLS configuration parameters Parameter Description Values Default value Path to a TLS Certification Authority (CA) file used to verify the remote servers. Will use the system truststore by default. Path to a TLS Certificate file, used to identify this process to the remote servers. Enable transport layer security (TLS) when talking to the remote servers. Disabled by default. true / false false Path to a TLS Private Key file, used to identify this process to the remote servers. Override the expected TLS server name in the certificate of the remote servers. Path to a file containing the bearer token. 
This flag also loads the Certification Authority (CA) file if it is specified. Table 3.13. ES archive configuration parameters Parameter Description Values Default value The number of requests that can be added to the queue before the bulk processor decides to commit updates to disk. 0 A time.Duration after which bulk requests are committed, regardless of other thresholds. To disable the bulk processor flush interval, set this to zero. 0s The number of bytes that the bulk requests can take up before the bulk processor decides to commit updates to disk. 0 The number of workers that are able to receive and commit bulk requests to Elasticsearch. 0 Automatically create index templates at application startup when set to true . When templates are installed manually, set to false . true / false false Enable extra storage. true / false false Optional prefix for distributed tracing platform indices. For example, setting this to "production" creates indices named "production-tracing-*". The maximum document count to return from an Elasticsearch query. This will also apply to aggregations. 0 [ Deprecated - Will be removed in a future release, use es-archive.max-doc-count instead.] The maximum number of spans to fetch at a time, per query, in Elasticsearch. 0 The maximum lookback for spans in Elasticsearch. 0s The number of replicas per index in Elasticsearch. 0 The number of shards per index in Elasticsearch. 0 The password required by Elasticsearch. See also, es.username . The comma-separated list of Elasticsearch servers. Must be specified as fully qualified URLs, for example, http://localhost:9200 . The sniffer configuration for Elasticsearch. The client uses the sniffing process to find all nodes automatically. Disabled by default. true / false false Option to enable TLS when sniffing an Elasticsearch Cluster. The client uses the sniffing process to find all nodes automatically. Disabled by default. true / false false Timeout used for queries. When set to zero there is no timeout. 0s Path to a TLS Certification Authority (CA) file used to verify the remote servers. Will use the system truststore by default. Path to a TLS Certificate file, used to identify this process to the remote servers. Enable transport layer security (TLS) when talking to the remote servers. Disabled by default. true / false false Path to a TLS Private Key file, used to identify this process to the remote servers. Override the expected TLS server name in the certificate of the remote servers. Path to a file containing the bearer token. This flag also loads the Certification Authority (CA) file if it is specified. The username required by Elasticsearch. The basic authentication also loads CA if it is specified. See also es-archive.password . The major Elasticsearch version. If not specified, the value will be auto-detected from Elasticsearch. 0 Storage example with volume mounts apiVersion: jaegertracing.io/v1 kind: Jaeger metadata: name: simple-prod spec: strategy: production storage: type: elasticsearch options: es: server-urls: https://quickstart-es-http.default.svc:9200 index-prefix: my-prefix tls: ca: /es/certificates/ca.crt secretName: tracing-secret volumeMounts: - name: certificates mountPath: /es/certificates/ readOnly: true volumes: - name: certificates secret: secretName: quickstart-es-http-certs-public The following example shows a Jaeger CR using an external Elasticsearch cluster with TLS CA certificate mounted from a volume and user/password stored in a secret. 
External Elasticsearch example: apiVersion: jaegertracing.io/v1 kind: Jaeger metadata: name: simple-prod spec: strategy: production storage: type: elasticsearch options: es: server-urls: https://quickstart-es-http.default.svc:9200 1 index-prefix: my-prefix tls: 2 ca: /es/certificates/ca.crt secretName: tracing-secret 3 volumeMounts: 4 - name: certificates mountPath: /es/certificates/ readOnly: true volumes: - name: certificates secret: secretName: quickstart-es-http-certs-public 1 URL to Elasticsearch service running in default namespace. 2 TLS configuration. In this case only CA certificate, but it can also contain es.tls.key and es.tls.cert when using mutual TLS. 3 Secret which defines environment variables ES_PASSWORD and ES_USERNAME. Created by kubectl create secret generic tracing-secret --from-literal=ES_PASSWORD=changeme --from-literal=ES_USERNAME=elastic 4 Volume mounts and volumes which are mounted into all storage components. 3.2.5.6. Managing certificates with Elasticsearch You can create and manage certificates using the Red Hat Elasticsearch Operator. Managing certificates using the Red Hat Elasticsearch Operator also lets you use a single Elasticsearch cluster with multiple Jaeger Collectors. Important Managing certificates with Elasticsearch is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . Starting with version 2.4, the Red Hat OpenShift distributed tracing platform Operator delegates certificate creation to the Red Hat Elasticsearch Operator by using the following annotations in the Elasticsearch custom resource: logging.openshift.io/elasticsearch-cert-management: "true" logging.openshift.io/elasticsearch-cert.jaeger-<shared-es-node-name>: "user.jaeger" logging.openshift.io/elasticsearch-cert.curator-<shared-es-node-name>: "system.logging.curator" Where the <shared-es-node-name> is the name of the Elasticsearch node. For example, if you create an Elasticsearch node named custom-es , your custom resource might look like the following example. Example Elasticsearch CR showing annotations apiVersion: logging.openshift.io/v1 kind: Elasticsearch metadata: annotations: logging.openshift.io/elasticsearch-cert-management: "true" logging.openshift.io/elasticsearch-cert.jaeger-custom-es: "user.jaeger" logging.openshift.io/elasticsearch-cert.curator-custom-es: "system.logging.curator" name: custom-es spec: managementState: Managed nodeSpec: resources: limits: memory: 16Gi requests: cpu: 1 memory: 16Gi nodes: - nodeCount: 3 proxyResources: {} resources: {} roles: - master - client - data storage: {} redundancyPolicy: ZeroRedundancy Prerequisites OpenShift Container Platform 4.7 logging subsystem for Red Hat OpenShift 5.2 The Elasticsearch node and the Jaeger instances must be deployed in the same namespace. For example, tracing-system . You enable certificate management by setting spec.storage.elasticsearch.useCertManagement to true in the Jaeger custom resource. 
Example showing useCertManagement apiVersion: jaegertracing.io/v1 kind: Jaeger metadata: name: jaeger-prod spec: strategy: production storage: type: elasticsearch elasticsearch: name: custom-es doNotProvision: true useCertManagement: true The Red Hat OpenShift distributed tracing platform Operator sets the Elasticsearch custom resource name to the value of spec.storage.elasticsearch.name from the Jaeger custom resource when provisioning Elasticsearch. The certificates are provisioned by the Red Hat Elasticsearch Operator and the Red Hat OpenShift distributed tracing platform Operator injects the certificates. 3.2.5.7. Query configuration options Query is a service that retrieves traces from storage and hosts the user interface to display them. Table 3.14. Parameters used by the Red Hat OpenShift distributed tracing platform Operator to define Query Parameter Description Values Default value Specifies the number of Query replicas to create. Integer, for example, 2 Table 3.15. Configuration parameters passed to Query Parameter Description Values Default value Configuration options that define the Query service. Logging level for Query. Possible values: debug , info , warn , error , fatal , panic . The base path for all jaeger-query HTTP routes can be set to a non-root value, for example, /jaeger would cause all UI URLs to start with /jaeger . This can be useful when running jaeger-query behind a reverse proxy. /<path> Sample Query configuration apiVersion: jaegertracing.io/v1 kind: "Jaeger" metadata: name: "my-jaeger" spec: strategy: allInOne allInOne: options: log-level: debug query: base-path: /jaeger 3.2.5.8. Ingester configuration options Ingester is a service that reads from a Kafka topic and writes to the Elasticsearch storage backend. If you are using the allInOne or production deployment strategies, you do not need to configure the Ingester service. Table 3.16. Jaeger parameters passed to the Ingester Parameter Description Values Configuration options that define the Ingester service. Specifies the interval, in seconds or minutes, that the Ingester must wait for a message before terminating. The deadlock interval is disabled by default (set to 0 ), to avoid terminating the Ingester when no messages arrive during system initialization. Minutes and seconds, for example, 1m0s . Default value is 0 . The topic parameter identifies the Kafka configuration used by the collector to produce the messages, and the Ingester to consume the messages. Label for the consumer. For example, jaeger-spans . Identifies the Kafka configuration used by the Ingester to consume the messages. Label for the broker, for example, my-cluster-kafka-brokers.kafka:9092 . Logging level for the Ingester. Possible values: debug , info , warn , error , fatal , dpanic , panic . Streaming Collector and Ingester example apiVersion: jaegertracing.io/v1 kind: Jaeger metadata: name: simple-streaming spec: strategy: streaming collector: options: kafka: producer: topic: jaeger-spans brokers: my-cluster-kafka-brokers.kafka:9092 ingester: options: kafka: consumer: topic: jaeger-spans brokers: my-cluster-kafka-brokers.kafka:9092 ingester: deadlockInterval: 5 storage: type: elasticsearch options: es: server-urls: http://elasticsearch:9200 3.2.6. Injecting sidecars Red Hat OpenShift distributed tracing platform relies on a proxy sidecar within the application's pod to provide the agent. The Red Hat OpenShift distributed tracing platform Operator can inject Agent sidecars into Deployment workloads. 
You can enable automatic sidecar injection or manage it manually. 3.2.6.1. Automatically injecting sidecars The Red Hat OpenShift distributed tracing platform Operator can inject Jaeger Agent sidecars into Deployment workloads. To enable automatic injection of sidecars, add the sidecar.jaegertracing.io/inject annotation set to either the string true or to the distributed tracing platform instance name that is returned by running USD oc get jaegers . When you specify true , there should be only a single distributed tracing platform instance for the same namespace as the deployment; otherwise, the Operator cannot determine which distributed tracing platform instance to use. A specific distributed tracing platform instance name on a deployment has a higher precedence than true applied on its namespace. The following snippet shows a simple application that will inject a sidecar, with the agent pointing to the single distributed tracing platform instance available in the same namespace: Automatic sidecar injection example apiVersion: apps/v1 kind: Deployment metadata: name: myapp annotations: "sidecar.jaegertracing.io/inject": "true" 1 spec: selector: matchLabels: app: myapp template: metadata: labels: app: myapp spec: containers: - name: myapp image: acme/myapp:myversion 1 Set to either the string true or to the Jaeger instance name. When the sidecar is injected, the agent can then be accessed at its default location on localhost . 3.2.6.2. Manually injecting sidecars The Red Hat OpenShift distributed tracing platform Operator can only automatically inject Jaeger Agent sidecars into Deployment workloads. For controller types other than Deployments , such as StatefulSets and DaemonSets , you can manually define the Jaeger agent sidecar in your specification. The following snippet shows the manual definition you can include in your containers section for a Jaeger agent sidecar: Sidecar definition example for a StatefulSet apiVersion: apps/v1 kind: StatefulSet metadata: name: example-statefulset namespace: example-ns labels: app: example-app spec: spec: containers: - name: example-app image: acme/myapp:myversion ports: - containerPort: 8080 protocol: TCP - name: jaeger-agent image: registry.redhat.io/distributed-tracing/jaeger-agent-rhel7:<version> # The agent version must match the Operator version imagePullPolicy: IfNotPresent ports: - containerPort: 5775 name: zk-compact-trft protocol: UDP - containerPort: 5778 name: config-rest protocol: TCP - containerPort: 6831 name: jg-compact-trft protocol: UDP - containerPort: 6832 name: jg-binary-trft protocol: UDP - containerPort: 14271 name: admin-http protocol: TCP args: - --reporter.grpc.host-port=dns:///jaeger-collector-headless.example-ns:14250 - --reporter.type=grpc The agent can then be accessed at its default location on localhost. 3.3. Configuring and deploying distributed tracing data collection The Red Hat OpenShift distributed tracing data collection Operator uses a custom resource definition (CRD) file that defines the architecture and configuration settings to be used when creating and deploying the Red Hat OpenShift distributed tracing data collection resources. You can either install the default configuration or modify the file to better suit your business requirements. 3.3.1. OpenTelemetry Collector configuration options Important The Red Hat OpenShift distributed tracing data collection Operator is a Technology Preview feature only.
3.3. Configuring and deploying distributed tracing data collection The Red Hat OpenShift distributed tracing data collection Operator uses a custom resource definition (CRD) file that defines the architecture and configuration settings to be used when creating and deploying the Red Hat OpenShift distributed tracing data collection resources. You can either install the default configuration or modify the file to better suit your business requirements. 3.3.1. OpenTelemetry Collector configuration options Important The Red Hat OpenShift distributed tracing data collection Operator is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . The OpenTelemetry Collector consists of three components that access telemetry data: Receivers - A receiver, which can be push or pull based, is how data gets into the Collector. Generally, a receiver accepts data in a specified format, translates it into the internal format and passes it to processors and exporters defined in the applicable pipelines. By default, no receivers are configured. One or more receivers must be configured. Receivers may support one or more data sources. Processors - (Optional) Processors are run on data between being received and being exported. By default, no processors are enabled. Processors must be enabled for every data source. Not all processors support all data sources. Depending on the data source, it may be recommended that multiple processors be enabled. In addition, it is important to note that the order of processors matters. Exporters - An exporter, which can be push or pull based, is how you send data to one or more backends/destinations. By default, no exporters are configured. One or more exporters must be configured. Exporters may support one or more data sources. Exporters may come with default settings, but many require configuration to specify at least the destination and security settings. You can define multiple instances of components in a custom resource YAML file. Once configured, these components must be enabled through pipelines defined in the spec.config.service section of the YAML file. As a best practice, you should only enable the components that you need. Sample OpenTelemetry Collector custom resource file apiVersion: opentelemetry.io/v1alpha1 kind: OpenTelemetryCollector metadata: name: cluster-collector namespace: tracing-system spec: mode: deployment config: | receivers: otlp: protocols: grpc: http: processors: exporters: jaeger: endpoint: jaeger-production-collector-headless.tracing-system.svc:14250 tls: ca_file: "/var/run/secrets/kubernetes.io/serviceaccount/service-ca.crt" service: pipelines: traces: receivers: [otlp] processors: [] exporters: [jaeger] Note If a component is configured, but not defined within the service section, then it is not enabled.
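For example, to enable a processor you add it under processors and reference it in the traces pipeline. The following variation replaces the config block of the sample above and enables the OpenTelemetry batch processor, which groups spans together before they are exported. This is a sketch only and assumes the batch processor is available in the collector image shipped with the Operator.

  config: |
    receivers:
      otlp:
        protocols:
          grpc:
          http:
    processors:
      batch:
    exporters:
      jaeger:
        endpoint: jaeger-production-collector-headless.tracing-system.svc:14250
        tls:
          ca_file: "/var/run/secrets/kubernetes.io/serviceaccount/service-ca.crt"
    service:
      pipelines:
        traces:
          receivers: [otlp]
          processors: [batch]
          exporters: [jaeger]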
Table 3.17. Parameters used by the Operator to define the OpenTelemetry Collector Parameter Description Values Default A receiver is how data gets into the Collector. By default, no receivers are configured. There must be at least one enabled receiver for a configuration to be considered valid. Receivers are enabled by being added to a pipeline. otlp , jaeger None The otlp and jaeger receivers come with default settings; specifying the name of the receiver is enough to configure it. Processors run on data between being received and being exported. By default, no processors are enabled. None An exporter sends data to one or more backends/destinations. By default, no exporters are configured. There must be at least one enabled exporter for a configuration to be considered valid. Exporters are enabled by being added to a pipeline. Exporters may come with default settings, but many require configuration to specify at least the destination and security settings. logging , jaeger None The jaeger exporter's endpoint must be of the form <name>-collector-headless.<namespace>.svc , with the name and namespace of the Jaeger deployment, for a secure connection to be established. Path to the CA certificate. For a client this verifies the server certificate. For a server this verifies client certificates. If empty uses system root CA. Components are enabled by adding them to a pipeline under service.pipelines . You enable receivers for tracing by adding them under service.pipelines.traces . None You enable processors for tracing by adding them under service.pipelines.traces . None You enable exporters for tracing by adding them under service.pipelines.traces . None 3.4. Upgrading distributed tracing Operator Lifecycle Manager (OLM) controls the installation, upgrade, and role-based access control (RBAC) of Operators in a cluster. The OLM runs by default in OpenShift Container Platform. OLM queries for available Operators as well as upgrades for installed Operators. For more information about how OpenShift Container Platform handles upgrades, see the Operator Lifecycle Manager documentation. During an update, the Red Hat OpenShift distributed tracing Operators upgrade the managed distributed tracing instances to the version associated with the Operator. Whenever a new version of the Red Hat OpenShift distributed tracing platform Operator is installed, all the distributed tracing platform application instances managed by the Operator are upgraded to the Operator's version. For example, after upgrading the Operator from 1.10 to 1.11, the Operator scans for running distributed tracing platform instances and upgrades them to 1.11 as well. For specific instructions on how to update the OpenShift Elasticsearch Operator, see Updating OpenShift Logging . 3.4.1. Changing the Operator channel for 2.0 Red Hat OpenShift distributed tracing 2.0.0 made the following changes: Renamed the Red Hat OpenShift Jaeger Operator to the Red Hat OpenShift distributed tracing platform Operator. Stopped support for individual release channels. Going forward, the Red Hat OpenShift distributed tracing platform Operator will only support the stable Operator channel. Maintenance channels, for example 1.24-stable , will no longer be supported by future Operators. As part of the update to version 2.0, you must update your OpenShift Elasticsearch and Red Hat OpenShift distributed tracing platform Operator subscriptions. Prerequisites The OpenShift Container Platform version is 4.6 or later. You have updated the OpenShift Elasticsearch Operator. You have backed up the Jaeger custom resource file. An account with the cluster-admin role. If you use Red Hat OpenShift Dedicated, you must have an account with the dedicated-admin role. Important If you have not already updated your OpenShift Elasticsearch Operator as described in Updating OpenShift Logging , complete that update before updating your Red Hat OpenShift distributed tracing platform Operator. For instructions on how to update the Operator channel, see Updating installed Operators .
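If you manage subscriptions from the command line, the channel can also be inspected and switched there. The subscription name and namespace below are placeholders that depend on how the Operator was installed, so list the subscriptions first and substitute the values from your cluster.

oc get subscriptions -n openshift-operators
oc patch subscription jaeger-product -n openshift-operators --type merge --patch '{"spec":{"channel":"stable"}}'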
3.5. Removing distributed tracing The steps for removing Red Hat OpenShift distributed tracing from an OpenShift Container Platform cluster are as follows: Shut down any Red Hat OpenShift distributed tracing pods. Remove any Red Hat OpenShift distributed tracing instances. Remove the Red Hat OpenShift distributed tracing platform Operator. Remove the Red Hat OpenShift distributed tracing data collection Operator. 3.5.1. Removing a Red Hat OpenShift distributed tracing platform instance using the web console Note When deleting an instance that uses the in-memory storage, all data is permanently lost. Data stored in persistent storage such as Elasticsearch is not deleted when a Red Hat OpenShift distributed tracing platform instance is removed. Procedure Log in to the OpenShift Container Platform web console. Navigate to Operators Installed Operators . Select the name of the project where the Operators are installed from the Project menu, for example, openshift-operators . Click the Red Hat OpenShift distributed tracing platform Operator. Click the Jaeger tab. Click the Options menu next to the instance you want to delete and select Delete Jaeger . In the confirmation message, click Delete . 3.5.2. Removing a Red Hat OpenShift distributed tracing platform instance from the CLI Log in to the OpenShift Container Platform CLI. USD oc login --username=<NAMEOFUSER> To display the distributed tracing platform instances run the command: USD oc get deployments -n <jaeger-project> For example, USD oc get deployments -n openshift-operators The names of Operators have the suffix -operator . The following example shows two Operators and four distributed tracing platform instances: USD oc get deployments -n openshift-operators You should see output similar to the following: NAME READY UP-TO-DATE AVAILABLE AGE elasticsearch-operator 1/1 1 1 93m jaeger-operator 1/1 1 1 49m jaeger-test 1/1 1 1 7m23s jaeger-test2 1/1 1 1 6m48s tracing1 1/1 1 1 7m8s tracing2 1/1 1 1 35m To remove an instance of distributed tracing platform, run the following command: USD oc delete jaeger <deployment-name> -n <jaeger-project> For example: USD oc delete jaeger tracing2 -n openshift-operators To verify the deletion, run the oc get deployments command again: USD oc get deployments -n <jaeger-project> For example: USD oc get deployments -n openshift-operators You should see generated output that is similar to the following example: NAME READY UP-TO-DATE AVAILABLE AGE elasticsearch-operator 1/1 1 1 94m jaeger-operator 1/1 1 1 50m jaeger-test 1/1 1 1 8m14s jaeger-test2 1/1 1 1 7m39s tracing1 1/1 1 1 7m59s 3.5.3. Removing the Red Hat OpenShift distributed tracing Operators Procedure Follow the instructions for Deleting Operators from a cluster . Remove the Red Hat OpenShift distributed tracing platform Operator. After the Red Hat OpenShift distributed tracing platform Operator has been removed, if appropriate, remove the OpenShift Elasticsearch Operator. | [
"apiVersion: jaegertracing.io/v1 kind: Jaeger metadata: name: MyConfigFile spec: strategy: production 1",
"oc login --username=<NAMEOFUSER> https://<HOSTNAME>:8443",
"oc new-project tracing-system",
"apiVersion: jaegertracing.io/v1 kind: Jaeger metadata: name: jaeger-all-in-one-inmemory",
"oc create -n tracing-system -f jaeger.yaml",
"oc get pods -n tracing-system -w",
"NAME READY STATUS RESTARTS AGE jaeger-all-in-one-inmemory-cdff7897b-qhfdx 2/2 Running 0 24s",
"apiVersion: jaegertracing.io/v1 kind: Jaeger metadata: name: jaeger-production namespace: spec: strategy: production ingress: security: oauth-proxy storage: type: elasticsearch elasticsearch: nodeCount: 3 redundancyPolicy: SingleRedundancy esIndexCleaner: enabled: true numberOfDays: 7 schedule: 55 23 * * * esRollover: schedule: '*/30 * * * *'",
"oc login --username=<NAMEOFUSER> https://<HOSTNAME>:8443",
"oc new-project tracing-system",
"oc create -n tracing-system -f jaeger-production.yaml",
"oc get pods -n tracing-system -w",
"NAME READY STATUS RESTARTS AGE elasticsearch-cdm-jaegersystemjaegerproduction-1-6676cf568gwhlw 2/2 Running 0 10m elasticsearch-cdm-jaegersystemjaegerproduction-2-bcd4c8bf5l6g6w 2/2 Running 0 10m elasticsearch-cdm-jaegersystemjaegerproduction-3-844d6d9694hhst 2/2 Running 0 10m jaeger-production-collector-94cd847d-jwjlj 1/1 Running 3 8m32s jaeger-production-query-5cbfbd499d-tv8zf 3/3 Running 3 8m32s",
"apiVersion: jaegertracing.io/v1 kind: Jaeger metadata: name: jaeger-streaming spec: strategy: streaming collector: options: kafka: producer: topic: jaeger-spans #Note: If brokers are not defined,AMQStreams 1.4.0+ will self-provision Kafka. brokers: my-cluster-kafka-brokers.kafka:9092 storage: type: elasticsearch ingester: options: kafka: consumer: topic: jaeger-spans brokers: my-cluster-kafka-brokers.kafka:9092",
"oc login --username=<NAMEOFUSER> https://<HOSTNAME>:8443",
"oc new-project tracing-system",
"oc create -n tracing-system -f jaeger-streaming.yaml",
"oc get pods -n tracing-system -w",
"NAME READY STATUS RESTARTS AGE elasticsearch-cdm-jaegersystemjaegerstreaming-1-697b66d6fcztcnn 2/2 Running 0 5m40s elasticsearch-cdm-jaegersystemjaegerstreaming-2-5f4b95c78b9gckz 2/2 Running 0 5m37s elasticsearch-cdm-jaegersystemjaegerstreaming-3-7b6d964576nnz97 2/2 Running 0 5m5s jaeger-streaming-collector-6f6db7f99f-rtcfm 1/1 Running 0 80s jaeger-streaming-entity-operator-6b6d67cc99-4lm9q 3/3 Running 2 2m18s jaeger-streaming-ingester-7d479847f8-5h8kc 1/1 Running 0 80s jaeger-streaming-kafka-0 2/2 Running 0 3m1s jaeger-streaming-query-65bf5bb854-ncnc7 3/3 Running 0 80s jaeger-streaming-zookeeper-0 2/2 Running 0 3m39s",
"oc login --username=<NAMEOFUSER> https://<HOSTNAME>:6443",
"export JAEGER_URL=USD(oc get route -n tracing-system jaeger -o jsonpath='{.spec.host}')",
"apiVersion: jaegertracing.io/v1 kind: Jaeger metadata: name: name spec: strategy: <deployment_strategy> allInOne: options: {} resources: {} agent: options: {} resources: {} collector: options: {} resources: {} sampling: options: {} storage: type: options: {} query: options: {} resources: {} ingester: options: {} resources: {} options: {}",
"apiVersion: jaegertracing.io/v1 kind: Jaeger metadata: name: jaeger-all-in-one-inmemory",
"collector: replicas:",
"spec: collector: options: {}",
"options: collector: num-workers:",
"options: collector: queue-size:",
"options: kafka: producer: topic: jaeger-spans",
"options: kafka: producer: brokers: my-cluster-kafka-brokers.kafka:9092",
"options: log-level:",
"spec: sampling: options: {} default_strategy: service_strategy:",
"default_strategy: type: service_strategy: type:",
"default_strategy: param: service_strategy: param:",
"apiVersion: jaegertracing.io/v1 kind: Jaeger metadata: name: with-sampling spec: sampling: options: default_strategy: type: probabilistic param: 0.5 service_strategies: - service: alpha type: probabilistic param: 0.8 operation_strategies: - operation: op1 type: probabilistic param: 0.2 - operation: op2 type: probabilistic param: 0.4 - service: beta type: ratelimiting param: 5",
"spec: sampling: options: default_strategy: type: probabilistic param: 1",
"spec: storage: type:",
"storage: secretname:",
"storage: options: {}",
"storage: esIndexCleaner: enabled:",
"storage: esIndexCleaner: numberOfDays:",
"storage: esIndexCleaner: schedule:",
"elasticsearch: properties: doNotProvision:",
"elasticsearch: properties: name:",
"elasticsearch: nodeCount:",
"elasticsearch: resources: requests: cpu:",
"elasticsearch: resources: requests: memory:",
"elasticsearch: resources: limits: cpu:",
"elasticsearch: resources: limits: memory:",
"elasticsearch: redundancyPolicy:",
"elasticsearch: useCertManagement:",
"apiVersion: jaegertracing.io/v1 kind: Jaeger metadata: name: simple-prod spec: strategy: production storage: type: elasticsearch elasticsearch: nodeCount: 3 resources: requests: cpu: 1 memory: 16Gi limits: memory: 16Gi",
"apiVersion: jaegertracing.io/v1 kind: Jaeger metadata: name: simple-prod spec: strategy: production storage: type: elasticsearch elasticsearch: nodeCount: 1 storage: 1 storageClassName: gp2 size: 5Gi resources: requests: cpu: 200m memory: 4Gi limits: memory: 4Gi redundancyPolicy: ZeroRedundancy",
"es: server-urls:",
"es: max-doc-count:",
"es: max-num-spans:",
"es: max-span-age:",
"es: sniffer:",
"es: sniffer-tls-enabled:",
"es: timeout:",
"es: username:",
"es: password:",
"es: version:",
"es: num-replicas:",
"es: num-shards:",
"es: create-index-templates:",
"es: index-prefix:",
"es: bulk: actions:",
"es: bulk: flush-interval:",
"es: bulk: size:",
"es: bulk: workers:",
"es: tls: ca:",
"es: tls: cert:",
"es: tls: enabled:",
"es: tls: key:",
"es: tls: server-name:",
"es: token-file:",
"es-archive: bulk: actions:",
"es-archive: bulk: flush-interval:",
"es-archive: bulk: size:",
"es-archive: bulk: workers:",
"es-archive: create-index-templates:",
"es-archive: enabled:",
"es-archive: index-prefix:",
"es-archive: max-doc-count:",
"es-archive: max-num-spans:",
"es-archive: max-span-age:",
"es-archive: num-replicas:",
"es-archive: num-shards:",
"es-archive: password:",
"es-archive: server-urls:",
"es-archive: sniffer:",
"es-archive: sniffer-tls-enabled:",
"es-archive: timeout:",
"es-archive: tls: ca:",
"es-archive: tls: cert:",
"es-archive: tls: enabled:",
"es-archive: tls: key:",
"es-archive: tls: server-name:",
"es-archive: token-file:",
"es-archive: username:",
"es-archive: version:",
"apiVersion: jaegertracing.io/v1 kind: Jaeger metadata: name: simple-prod spec: strategy: production storage: type: elasticsearch options: es: server-urls: https://quickstart-es-http.default.svc:9200 index-prefix: my-prefix tls: ca: /es/certificates/ca.crt secretName: tracing-secret volumeMounts: - name: certificates mountPath: /es/certificates/ readOnly: true volumes: - name: certificates secret: secretName: quickstart-es-http-certs-public",
"apiVersion: jaegertracing.io/v1 kind: Jaeger metadata: name: simple-prod spec: strategy: production storage: type: elasticsearch options: es: server-urls: https://quickstart-es-http.default.svc:9200 1 index-prefix: my-prefix tls: 2 ca: /es/certificates/ca.crt secretName: tracing-secret 3 volumeMounts: 4 - name: certificates mountPath: /es/certificates/ readOnly: true volumes: - name: certificates secret: secretName: quickstart-es-http-certs-public",
"apiVersion: logging.openshift.io/v1 kind: Elasticsearch metadata: annotations: logging.openshift.io/elasticsearch-cert-management: \"true\" logging.openshift.io/elasticsearch-cert.jaeger-custom-es: \"user.jaeger\" logging.openshift.io/elasticsearch-cert.curator-custom-es: \"system.logging.curator\" name: custom-es spec: managementState: Managed nodeSpec: resources: limits: memory: 16Gi requests: cpu: 1 memory: 16Gi nodes: - nodeCount: 3 proxyResources: {} resources: {} roles: - master - client - data storage: {} redundancyPolicy: ZeroRedundancy",
"apiVersion: jaegertracing.io/v1 kind: Jaeger metadata: name: jaeger-prod spec: strategy: production storage: type: elasticsearch elasticsearch: name: custom-es doNotProvision: true useCertManagement: true",
"spec: query: replicas:",
"spec: query: options: {}",
"options: log-level:",
"options: query: base-path:",
"apiVersion: jaegertracing.io/v1 kind: \"Jaeger\" metadata: name: \"my-jaeger\" spec: strategy: allInOne allInOne: options: log-level: debug query: base-path: /jaeger",
"spec: ingester: options: {}",
"options: deadlockInterval:",
"options: kafka: consumer: topic:",
"options: kafka: consumer: brokers:",
"options: log-level:",
"apiVersion: jaegertracing.io/v1 kind: Jaeger metadata: name: simple-streaming spec: strategy: streaming collector: options: kafka: producer: topic: jaeger-spans brokers: my-cluster-kafka-brokers.kafka:9092 ingester: options: kafka: consumer: topic: jaeger-spans brokers: my-cluster-kafka-brokers.kafka:9092 ingester: deadlockInterval: 5 storage: type: elasticsearch options: es: server-urls: http://elasticsearch:9200",
"apiVersion: apps/v1 kind: Deployment metadata: name: myapp annotations: \"sidecar.jaegertracing.io/inject\": \"true\" 1 spec: selector: matchLabels: app: myapp template: metadata: labels: app: myapp spec: containers: - name: myapp image: acme/myapp:myversion",
"apiVersion: apps/v1 kind: StatefulSet metadata: name: example-statefulset namespace: example-ns labels: app: example-app spec: spec: containers: - name: example-app image: acme/myapp:myversion ports: - containerPort: 8080 protocol: TCP - name: jaeger-agent image: registry.redhat.io/distributed-tracing/jaeger-agent-rhel7:<version> # The agent version must match the Operator version imagePullPolicy: IfNotPresent ports: - containerPort: 5775 name: zk-compact-trft protocol: UDP - containerPort: 5778 name: config-rest protocol: TCP - containerPort: 6831 name: jg-compact-trft protocol: UDP - containerPort: 6832 name: jg-binary-trft protocol: UDP - containerPort: 14271 name: admin-http protocol: TCP args: - --reporter.grpc.host-port=dns:///jaeger-collector-headless.example-ns:14250 - --reporter.type=grpc",
"apiVersion: opentelemetry.io/v1alpha1 kind: OpenTelemetryCollector metadata: name: cluster-collector namespace: tracing-system spec: mode: deployment config: | receivers: otlp: protocols: grpc: http: processors: exporters: jaeger: endpoint: jaeger-production-collector-headless.tracing-system.svc:14250 tls: ca_file: \"/var/run/secrets/kubernetes.io/serviceaccount/service-ca.crt\" service: pipelines: traces: receivers: [otlp] processors: [] exporters: [jaeger]",
"receivers:",
"receivers: otlp:",
"processors:",
"exporters:",
"exporters: jaeger: endpoint:",
"exporters: jaeger: tls: ca_file:",
"service: pipelines:",
"service: pipelines: traces: receivers:",
"service: pipelines: traces: processors:",
"service: pipelines: traces: exporters:",
"oc login --username=<NAMEOFUSER>",
"oc get deployments -n <jaeger-project>",
"oc get deployments -n openshift-operators",
"oc get deployments -n openshift-operators",
"NAME READY UP-TO-DATE AVAILABLE AGE elasticsearch-operator 1/1 1 1 93m jaeger-operator 1/1 1 1 49m jaeger-test 1/1 1 1 7m23s jaeger-test2 1/1 1 1 6m48s tracing1 1/1 1 1 7m8s tracing2 1/1 1 1 35m",
"oc delete jaeger <deployment-name> -n <jaeger-project>",
"oc delete jaeger tracing2 -n openshift-operators",
"oc get deployments -n <jaeger-project>",
"oc get deployments -n openshift-operators",
"NAME READY UP-TO-DATE AVAILABLE AGE elasticsearch-operator 1/1 1 1 94m jaeger-operator 1/1 1 1 50m jaeger-test 1/1 1 1 8m14s jaeger-test2 1/1 1 1 7m39s tracing1 1/1 1 1 7m59s"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.10/html/distributed_tracing/distributed-tracing-installation |
Planning your deployment Red Hat OpenShift Data Foundation 4.14 Important considerations when deploying Red Hat OpenShift Data Foundation 4.14 Red Hat Storage Documentation Team Abstract Read this document for important considerations when planning your Red Hat OpenShift Data Foundation deployment. Making open source more inclusive Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright's message . Providing feedback on Red Hat documentation We appreciate your input on our documentation. Do let us know how we can make it better. To give feedback, create a Bugzilla ticket: Go to the Bugzilla website. In the Component section, choose documentation . Fill in the Description field with your suggestion for improvement. Include a link to the relevant part(s) of documentation. Click Submit Bug . Chapter 1. Introduction to OpenShift Data Foundation Red Hat OpenShift Data Foundation is a highly integrated collection of cloud storage and data services for Red Hat OpenShift Container Platform. It is available as part of the Red Hat OpenShift Container Platform Service Catalog, packaged as an operator to facilitate simple deployment and management. Red Hat OpenShift Data Foundation services are primarily made available to applications by way of storage classes that represent the following components: Block storage devices, catering primarily to database workloads. Prime examples include Red Hat OpenShift Container Platform logging and monitoring, and PostgreSQL. Important Block storage should be used for a workload only when it does not require sharing the data across multiple containers. Shared and distributed file system, catering primarily to software development, messaging, and data aggregation workloads. Examples include Jenkins build sources and artifacts, WordPress uploaded content, Red Hat OpenShift Container Platform registry, and messaging using JBoss AMQ. Multicloud object storage, featuring a lightweight S3 API endpoint that can abstract the storage and retrieval of data from multiple cloud object stores. On premises object storage, featuring a robust S3 API endpoint that scales to tens of petabytes and billions of objects, primarily targeting data intensive applications. Examples include the storage and access of row, columnar, and semi-structured data with applications like Spark, Presto, Red Hat AMQ Streams (Kafka), and even machine learning frameworks like TensorFlow and PyTorch. Note Running a PostgreSQL workload on a CephFS persistent volume is not supported; it is recommended to use a RADOS Block Device (RBD) volume instead. For more information, see the knowledgebase solution ODF Database Workloads Must Not Use CephFS PVs/PVCs . Red Hat OpenShift Data Foundation version 4.x integrates a collection of software projects, including: Ceph, providing block storage, a shared and distributed file system, and on-premises object storage Ceph CSI, to manage provisioning and lifecycle of persistent volumes and claims NooBaa, providing a Multicloud Object Gateway OpenShift Data Foundation, Rook-Ceph, and NooBaa operators to initialize and manage OpenShift Data Foundation services.
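As a brief illustration of how applications consume these services, a workload requests storage through a PersistentVolumeClaim that names one of the OpenShift Data Foundation storage classes. The storage class names ocs-storagecluster-ceph-rbd (block) and ocs-storagecluster-cephfs (shared file) are the typical defaults created by an internal deployment; verify the names in your cluster with oc get storageclass before using them.

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: db-data
spec:
  accessModes:
    - ReadWriteOnce          # block (RBD) volumes are typically mounted by a single pod
  resources:
    requests:
      storage: 10Gi
  storageClassName: ocs-storagecluster-ceph-rbd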
Chapter 2. Architecture of OpenShift Data Foundation Red Hat OpenShift Data Foundation provides services for, and can run internally from, Red Hat OpenShift Container Platform. Figure 2.1. Red Hat OpenShift Data Foundation architecture Red Hat OpenShift Data Foundation supports deployment into Red Hat OpenShift Container Platform clusters deployed on installer-provisioned or user-provisioned infrastructure. For details about these two approaches, see OpenShift Container Platform - Installation process . To know more about interoperability of components for Red Hat OpenShift Data Foundation and Red Hat OpenShift Container Platform, see Red Hat OpenShift Data Foundation Supportability and Interoperability Checker . For information about the architecture and lifecycle of OpenShift Container Platform, see OpenShift Container Platform architecture . Tip For IBM Power, see OpenShift Container Platform - Installation process . 2.1. About operators Red Hat OpenShift Data Foundation comprises three main operators, which codify administrative tasks and custom resources so that you can easily automate the task and resource characteristics. Administrators define the desired end state of the cluster, and the OpenShift Data Foundation operators ensure the cluster is either in that state, or approaching that state, with minimal administrator intervention. OpenShift Data Foundation operator A meta-operator that draws on other operators in specific tested ways to codify and enforce the recommendations and requirements of a supported Red Hat OpenShift Data Foundation deployment. The rook-ceph and noobaa operators provide the storage cluster resource that wraps these resources. Rook-ceph operator This operator automates the packaging, deployment, management, upgrading, and scaling of persistent storage and file, block, and object services. It creates block and file storage classes for all environments, and creates an object storage class and services Object Bucket Claims (OBCs) made against it in on-premises environments. Additionally, for internal mode clusters, it provides the ceph cluster resource, which manages the deployments and services representing the following: Object Storage Daemons (OSDs) Monitors (MONs) Manager (MGR) Metadata servers (MDS) RADOS Object Gateways (RGWs) on-premises only Multicloud Object Gateway operator This operator automates the packaging, deployment, management, upgrading, and scaling of the Multicloud Object Gateway (MCG) object service. It creates an object storage class and services the OBCs made against it. Additionally, it provides the NooBaa cluster resource, which manages the deployments and services for NooBaa core, database, and endpoint. 2.2. Storage cluster deployment approaches The growing list of operating modalities is evidence that flexibility is a core tenet of Red Hat OpenShift Data Foundation. This section provides you with information that will help you to select the most appropriate approach for your environments. You can deploy Red Hat OpenShift Data Foundation either entirely within OpenShift Container Platform (Internal approach) or make services from a cluster running outside of OpenShift Container Platform available to it (External approach). 2.2.1. Internal approach Deployment of Red Hat OpenShift Data Foundation entirely within Red Hat OpenShift Container Platform has all the benefits of operator based deployment and management.
You can use the internal-attached device approach in the graphical user interface (GUI) to deploy Red Hat OpenShift Data Foundation in internal mode using the local storage operator and local storage devices. Ease of deployment and management are the highlights of running OpenShift Data Foundation services internally on OpenShift Container Platform. There are two different deployment modalities available when Red Hat OpenShift Data Foundation is running entirely within Red Hat OpenShift Container Platform: Simple Optimized Simple deployment Red Hat OpenShift Data Foundation services run co-resident with applications. The operators in Red Hat OpenShift Container Platform manages these applications. A simple deployment is best for situations where, Storage requirements are not clear. Red Hat OpenShift Data Foundation services runs co-resident with the applications. Creating a node instance of a specific size is difficult, for example, on bare metal. In order for Red Hat OpenShift Data Foundation to run co-resident with the applications, the nodes must have local storage devices, or portable storage devices attached to them dynamically, like EBS volumes on EC2, or vSphere Virtual Volumes on VMware, or SAN volumes. Note PowerVC dynamically provisions the SAN volumes. Optimized deployment Red Hat OpenShift Data Foundation services run on dedicated infrastructure nodes. Red Hat OpenShift Container Platform manages these infrastructure nodes. An optimized approach is best for situations when, Storage requirements are clear. Red Hat OpenShift Data Foundation services run on dedicated infrastructure nodes. Creating a node instance of a specific size is easy, for example, on cloud, virtualized environment, and so on. 2.2.2. External approach Red Hat OpenShift Data Foundation exposes the Red Hat Ceph Storage services running outside of the OpenShift Container Platform cluster as storage classes. The external approach is best used when, Storage requirements are significant (600+ storage devices). Multiple OpenShift Container Platform clusters need to consume storage services from a common external cluster. Another team, Site Reliability Engineering (SRE), storage, and so on, needs to manage the external cluster providing storage services. Possibly pre-existing. 2.3. Node types Nodes run the container runtime, as well as services, to ensure that the containers are running, and maintain network communication and separation between the pods. In OpenShift Data Foundation, there are three types of nodes. Table 2.1. Types of nodes Node Type Description Master These nodes run processes that expose the Kubernetes API, watch and schedule newly created pods, maintain node health and quantity, and control interaction with underlying cloud providers. Infrastructure (Infra) Infra nodes run cluster level infrastructure services such as logging, metrics, registry, and routing. These are optional in OpenShift Container Platform clusters. In order to separate OpenShift Data Foundation layer workload from applications, ensure that you use infra nodes for OpenShift Data Foundation in virtualized and cloud environments. To create Infra nodes, you can provision new nodes labeled as infra . For more information, see How to use dedicated worker nodes for Red Hat OpenShift Data Foundation Worker Worker nodes are also known as application nodes since they run applications. When OpenShift Data Foundation is deployed in internal mode, you require a minimal cluster of 3 worker nodes. 
Make sure that the nodes are spread across 3 different racks, or availability zones, to ensure availability. In order for OpenShift Data Foundation to run on worker nodes, you need to attach the local storage devices, or portable storage devices to the worker nodes dynamically. When OpenShift Data Foundation is deployed in external mode, it runs on multiple nodes. This allows Kubernetes to reschedule on the available nodes in case of a failure. Note OpenShift Data Foundation requires the same number of subscriptions as OpenShift Container Platform. However, if OpenShift Data Foundation is running on infra nodes, OpenShift does not require OpenShift Container Platform subscription for these nodes. Therefore, the OpenShift Data Foundation control plane does not require additional OpenShift Container Platform and OpenShift Data Foundation subscriptions. For more information, see Chapter 6, Subscriptions . Chapter 3. Internal storage services Red Hat OpenShift Data Foundation service is available for consumption internally to the Red Hat OpenShift Container Platform that runs on the following infrastructure: Amazon Web Services (AWS) Bare metal VMware vSphere Microsoft Azure Google Cloud Red Hat OpenStack 13 or higher (installer-provisioned infrastructure) [Technology Preview] IBM Power IBM Z and IBM(R) LinuxONE Creation of an internal cluster resource results in the internal provisioning of the OpenShift Data Foundation base services, and makes additional storage classes available to the applications. Chapter 4. External storage services Red Hat OpenShift Data Foundation can use IBM FlashSystems or make services from an external Red Hat Ceph Storage cluster available for consumption through OpenShift Container Platform clusters running on the following platforms: VMware vSphere Bare metal Red Hat OpenStack platform (Technology Preview) IBM Power IBM Z The OpenShift Data Foundation operators create and manage services to satisfy Persistent Volume (PV) and Object Bucket Claims (OBCs) against the external services. An external cluster can serve block, file, and object storage classes for applications that run on OpenShift Container Platform. The operators do not deploy or manage the external clusters. Chapter 5. Security considerations 5.1. FIPS-140-2 The Federal Information Processing Standard Publication 140-2 (FIPS-140-2) is a standard that defines a set of security requirements for the use of cryptographic modules. Law mandates this standard for US government agencies and contractors and is also referenced in other international and industry specific standards. Red Hat OpenShift Data Foundation now uses the FIPS validated cryptographic modules. Red Hat Enterprise Linux CoreOS (RHCOS) delivers these modules. Currently, the Cryptographic Module Validation Program (CMVP) processes the cryptography modules. You can see the state of these modules at Modules in Process List . For more up-to-date information, see the Red Hat Knowledgebase solution RHEL core crypto components . Note Enable FIPS mode on OpenShift Container Platform before you install OpenShift Data Foundation. OpenShift Container Platform must run on the RHCOS nodes, as the feature does not support OpenShift Data Foundation deployment on Red Hat Enterprise Linux 7 (RHEL 7). For more information, see Installing a cluster in FIPS mode and Support for FIPS cryptography of the Installing guide in OpenShift Container Platform documentation. 5.2.
Proxy environment A proxy environment is a production environment that denies direct access to the internet and provides an available HTTP or HTTPS proxy instead. Red Hat Openshift Container Platform is configured to use a proxy by modifying the proxy object for existing clusters or by configuring the proxy settings in the install-config.yaml file for new clusters. Red Hat supports deployment of OpenShift Data Foundation in proxy environments when OpenShift Container Platform has been configured according to configuring the cluster-wide proxy . 5.3. Data encryption options Encryption lets you encode your data to make it impossible to read without the required encryption keys. This mechanism protects the confidentiality of your data in the event of a physical security breach that results in a physical media to escape your custody. The per-PV encryption also provides access protection from other namespaces inside the same OpenShift Container Platform cluster. Data is encrypted when it is written to the disk, and decrypted when it is read from the disk. Working with encrypted data might incur a small penalty to performance. Encryption is only supported for new clusters deployed using Red Hat OpenShift Data Foundation 4.6 or higher. An existing encrypted cluster that is not using an external Key Management System (KMS) cannot be migrated to use an external KMS. Previously, HashiCorp Vault was the only supported KMS for Cluster-wide and Persistent Volume encryptions. With OpenShift Data Foundation 4.7.0 and 4.7.1, only HashiCorp Vault Key/Value (KV) secret engine API, version 1 is supported. Starting with OpenShift Data Foundation 4.7.2, HashiCorp Vault KV secret engine API, versions 1 and 2 are supported. As of OpenShift Data Foundation 4.12, Thales CipherTrust Manager has been introduced as an additional supported KMS. Important KMS is required for StorageClass encryption, and is optional for cluster-wide encryption. To start with, Storage class encryption requires a valid Red Hat OpenShift Data Foundation Advanced subscription. For more information, see the knowledgebase article on OpenShift Data Foundation subscriptions . Important Red Hat works with the technology partners to provide this documentation as a service to the customers. However, Red Hat does not provide support for the Hashicorp product. For technical assistance with this product, contact Hashicorp . 5.3.1. Cluster-wide encryption Red Hat OpenShift Data Foundation supports cluster-wide encryption (encryption-at-rest) for all the disks and Multicloud Object Gateway operations in the storage cluster. OpenShift Data Foundation uses Linux Unified Key System (LUKS) version 2 based encryption with a key size of 512 bits and the aes-xts-plain64 cipher where each device has a different encryption key. The keys are stored using a Kubernetes secret or an external KMS. Both methods are mutually exclusive and you can not migrate between methods. Encryption is disabled by default for block and file storage. You can enable encryption for the cluster at the time of deployment. The MultiCloud Object Gateway supports encryption by default. See the deployment guides for more information. Cluster wide encryption is supported in OpenShift Data Foundation 4.6 without Key Management System (KMS). Starting with OpenShift Data Foundation 4.7, it supports with and without HashiCorp Vault KMS. Starting with OpenShift Data Foundation 4.12, it supports with and without both HashiCorp Vault KMS and Thales CipherTrust Manager KMS. 
Note Requires a valid Red Hat OpenShift Data Foundation Advanced subscription. To know how subscriptions for OpenShift Data Foundation work, see knowledgebase article on OpenShift Data Foundation subscriptions . Cluster wide encryption with HashiCorp Vault KMS provides two authentication methods: Token : This method allows authentication using vault tokens. A kubernetes secret containing the vault token is created in the openshift-storage namespace and is used for authentication. If this authentication method is selected, then the administrator has to provide the vault token that provides access to the backend path in Vault where the encryption keys are stored. Kubernetes : This method allows authentication with vault using serviceaccounts. If this authentication method is selected, then the administrator has to provide the name of the role configured in Vault that provides access to the backend path where the encryption keys are stored. The value of this role is then added to the ocs-kms-connection-details config map. This method is available from OpenShift Data Foundation 4.10. Currently, HashiCorp Vault is the only supported KMS. With OpenShift Data Foundation 4.7.0 and 4.7.1, only HashiCorp Vault KV secret engine, API version 1 is supported. Starting with OpenShift Data Foundation 4.7.2, HashiCorp Vault KV secret engine API, versions 1 and 2 are supported. Note OpenShift Data Foundation on IBM Cloud platform supports Hyper Protect Crypto Services (HPCS) Key Management Services (KMS) as the encryption solution in addition to HashiCorp Vault KMS. Important Red Hat works with the technology partners to provide this documentation as a service to the customers. However, Red Hat does not provide support for the Hashicorp product. For technical assistance with this product, contact Hashicorp . 5.3.2. Storage class encryption You can encrypt persistent volumes (block only) with storage class encryption using an external Key Management System (KMS) to store device encryption keys. Persistent volume encryption is only available for RADOS Block Device (RBD) persistent volumes. See how to create a storage class with persistent volume encryption . Storage class encryption is supported in OpenShift Data Foundation 4.7 or higher with HashiCorp Vault KMS. Storage class encryption is supported in OpenShift Data Foundation 4.12 or higher with both HashiCorp Vault KMS and Thales CipherTrust Manager KMS. Note Requires a valid Red Hat OpenShift Data Foundation Advanced subscription. To know how subscriptions for OpenShift Data Foundation work, see knowledgebase article on OpenShift Data Foundation subscriptions . 5.3.3. CipherTrust manager Red Hat OpenShift Data Foundation version 4.12 introduced Thales CipherTrust Manager as an additional Key Management System (KMS) provider for your deployment. Thales CipherTrust Manager provides centralized key lifecycle management. CipherTrust Manager supports Key Management Interoperability Protocol (KMIP), which enables communication between key management systems. CipherTrust Manager is enabled during deployment. 5.3.4. Data encryption in-transit via Red Hat Ceph Storage's messenger version 2 protocol Starting with OpenShift Data Foundation version 4.14, Red Hat Ceph Storage's messenger version 2 protocol can be used to encrypt data in-transit. This provides an important security requirement for your infrastructure. In-transit encryption can be enabled during deployment. 5.4. 
Encryption in Transit You need to enable IPsec so that all the network traffic between the nodes on the OVN-Kubernetes Container Network Interface (CNI) cluster network travels through an encrypted tunnel. By default, IPsec is disabled. You can enable it either during or after installing the cluster. If you need to enable IPsec after cluster installation, you must first resize your cluster MTU to account for the overhead of the IPsec ESP IP header. For more information on how to configure the IPsec encryption, see Configuring IPsec encryption of the Networking guide in OpenShift Container Platform documentation. Chapter 6. Subscriptions 6.1. Subscription offerings Red Hat OpenShift Data Foundation subscription is based on "core-pairs," similar to Red Hat OpenShift Container Platform. The Red Hat OpenShift Data Foundation 2-core subscription is based on the number of logical cores on the CPUs in the system where OpenShift Container Platform runs. As with OpenShift Container Platform: OpenShift Data Foundation subscriptions are stackable to cover larger hosts. Cores can be distributed across as many virtual machines (VMs) as needed. For example, ten 2-core subscriptions will provide 20 cores and in case of IBM Power a 2-core subscription at SMT level of 8 will provide 2 cores or 16 vCPUs that can be used across any number of VMs. OpenShift Data Foundation subscriptions are available with Premium or Standard support. 6.2. Disaster recovery subscription requirement Disaster Recovery features supported by Red Hat OpenShift Data Foundation require all of the following prerequisites to successfully implement a disaster recovery solution: A valid Red Hat OpenShift Data Foundation Advanced entitlement A valid Red Hat Advanced Cluster Management for Kubernetes subscription Any Red Hat OpenShift Data Foundation Cluster containing PVs participating in active replication, either as a source or destination, requires OpenShift Data Foundation Advanced entitlement. This subscription should be active on both source and destination clusters. To know how subscriptions for OpenShift Data Foundation work, see knowledgebase article on OpenShift Data Foundation subscriptions . 6.3. Cores versus vCPUs and hyperthreading Making a determination about whether or not a particular system consumes one or more cores is currently dependent on whether or not that system has hyperthreading available. Hyperthreading is only a feature of Intel CPUs. Visit the Red Hat Customer Portal to determine whether a particular system supports hyperthreading. For systems where hyperthreading is enabled and where one hyperthread equates to one visible system core, the calculation of cores is a ratio of 2 cores to 4 vCPUs. Therefore, a 2-core subscription covers 4 vCPUs in a hyperthreaded system. A large virtual machine (VM) might have 8 vCPUs, equating to 4 subscription cores. As subscriptions come in 2-core units, you will need two 2-core subscriptions to cover these 4 cores or 8 vCPUs. Where hyperthreading is not enabled, and where each visible system core correlates directly to an underlying physical core, the calculation of cores is a ratio of 2 cores to 2 vCPUs. 6.3.1. Cores versus vCPUs and simultaneous multithreading (SMT) for IBM Power Making a determination about whether or not a particular system consumes one or more cores is currently dependent on the level of simultaneous multithreading configured (SMT). 
IBM Power provides simultaneous multithreading levels of 1, 2, 4, or 8 for each core, which correspond to the number of vCPUs as shown in the table below. Table 6.1. Different SMT levels and their corresponding vCPUs SMT level SMT=1 SMT=2 SMT=4 SMT=8 1 Core # vCPUs=1 # vCPUs=2 # vCPUs=4 # vCPUs=8 2 Cores # vCPUs=2 # vCPUs=4 # vCPUs=8 # vCPUs=16 4 Cores # vCPUs=4 # vCPUs=8 # vCPUs=16 # vCPUs=32 For systems where SMT is configured, the calculation for the number of cores required for subscription purposes depends on the SMT level. Therefore, a 2-core subscription corresponds to 2 vCPUs on SMT level of 1, and to 4 vCPUs on SMT level of 2, and to 8 vCPUs on SMT level of 4 and to 16 vCPUs on SMT level of 8 as seen in the table above. A large virtual machine (VM) might have 16 vCPUs, which at an SMT level of 8 will require a 2-core subscription based on dividing the # of vCPUs by the SMT level (16 vCPUs / 8 for SMT-8 = 2). As subscriptions come in 2-core units, you will need one 2-core subscription to cover these 2 cores or 16 vCPUs. 6.4. Splitting cores Systems that require an odd number of cores need to consume a full 2-core subscription. For example, a system that is calculated to require only 1 core will end up consuming a full 2-core subscription once it is registered and subscribed. When a single virtual machine (VM) with 2 vCPUs uses hyperthreading resulting in 1 calculated vCPU, a full 2-core subscription is required; a single 2-core subscription may not be split across two VMs with 2 vCPUs using hyperthreading. See section Cores versus vCPUs and hyperthreading for more information. It is recommended that virtual instances be sized so that they require an even number of cores. 6.4.1. Shared Processor Pools for IBM Power IBM Power has a notion of shared processor pools. The processors in a shared processor pool can be shared across the nodes in the cluster. The aggregate compute capacity required for a Red Hat OpenShift Data Foundation deployment should be a multiple of core-pairs. 6.5. Subscription requirements Red Hat OpenShift Data Foundation components can run on either OpenShift Container Platform worker or infrastructure nodes, for which you can use either Red Hat CoreOS (RHCOS) or Red Hat Enterprise Linux (RHEL) 8.4 as the host operating system. RHEL 7 is now deprecated. OpenShift Data Foundation subscriptions are required for every OpenShift Container Platform subscribed core with a ratio of 1:1. When using infrastructure nodes, the rule to subscribe all OpenShift worker node cores for OpenShift Data Foundation applies even though they don't need any OpenShift Container Platform or any OpenShift Data Foundation subscriptions. You can use labels to state whether a node is a worker or an infrastructure node. For more information, see How to use dedicated worker nodes for Red Hat OpenShift Data Foundation in the Managing and Allocating Storage Resources guide.
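When sizing subscriptions, it can help to list the vCPU capacity that each worker node reports to Kubernetes and then apply the core-to-vCPU ratios described above. The following command is one way to do this; the worker role label shown is the OpenShift default and may differ if you use custom node roles.

oc get nodes -l node-role.kubernetes.io/worker -o custom-columns=NAME:.metadata.name,VCPUS:.status.capacity.cpu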
Chapter 7. Infrastructure requirements 7.1. Platform requirements Red Hat OpenShift Data Foundation 4.14 is supported only on OpenShift Container Platform version 4.14 and its minor versions. Bug fixes for version 4.14 of Red Hat OpenShift Data Foundation will be released as bug fix versions. For more details, see the Red Hat OpenShift Container Platform Life Cycle Policy . For external cluster subscription requirements, see the Red Hat Knowledgebase article OpenShift Data Foundation Subscription Guide . For a complete list of supported platform versions, see the Red Hat OpenShift Data Foundation Supportability and Interoperability Checker . 7.1.1. Amazon EC2 Supports internal Red Hat OpenShift Data Foundation clusters only. An Internal cluster must meet both the storage device requirements and have a storage class that provides EBS storage via the aws-ebs provisioner. OpenShift Data Foundation supports gp2-csi and gp3-csi drivers that were introduced by Amazon Web Services (AWS). These drivers offer better storage expansion capabilities and a reduced monthly price point ( gp3-csi ). You can now select the new drivers when selecting your storage class. If high throughput is required, gp3-csi is recommended when deploying OpenShift Data Foundation. If you need high input/output operations per second (IOPS), the recommended EC2 instance types are D2 or D3 . 7.1.2. Bare Metal Supports internal clusters and consuming external clusters. An internal cluster must meet both the storage device requirements and have a storage class that provides local SSD (NVMe/SATA/SAS, SAN) via the Local Storage Operator. 7.1.3. VMware vSphere Supports internal clusters and consuming external clusters. Recommended versions: vSphere 6.7, Update 2 or later vSphere 7.0 or later. For more details, see the VMware vSphere infrastructure requirements . Note If VMware ESXi does not recognize its devices as flash, mark them as flash devices. Before Red Hat OpenShift Data Foundation deployment, refer to Mark Storage Devices as Flash . Additionally, an Internal cluster must meet both the storage device requirements and have a storage class providing either vSAN or VMFS datastore via the vsphere-volume provisioner, or VMDK, RDM, or DirectPath storage devices via the Local Storage Operator. 7.1.4. Microsoft Azure Supports internal Red Hat OpenShift Data Foundation clusters only. An internal cluster must meet both the storage device requirements and have a storage class that provides an Azure disk via the azure-disk provisioner. 7.1.5. Google Cloud Supports internal Red Hat OpenShift Data Foundation clusters only. An internal cluster must meet both the storage device requirements and have a storage class that provides a GCE Persistent Disk via the gce-pd provisioner. 7.1.6. Red Hat OpenStack Platform [Technology Preview] Supports internal Red Hat OpenShift Data Foundation clusters and consuming external clusters. An internal cluster must meet both the storage device requirements and have a storage class that provides a standard disk via the Cinder provisioner. 7.1.7. IBM Power Supports internal Red Hat OpenShift Data Foundation clusters and consuming external clusters. An Internal cluster must meet both the storage device requirements and have a storage class providing local SSD (NVMe/SATA/SAS, SAN) via the Local Storage Operator. 7.1.8. IBM Z and IBM(R) LinuxONE Supports internal Red Hat OpenShift Data Foundation clusters. Also, supports external mode where Red Hat Ceph Storage is running on x86. An Internal cluster must meet both the storage device requirements and have a storage class providing local SSD (NVMe/SATA/SAS, SAN) via the Local Storage Operator. 7.2. External mode requirement 7.2.1. Red Hat Ceph Storage To check the supportability and interoperability of Red Hat Ceph Storage (RHCS) with Red Hat OpenShift Data Foundation in external mode, go to the lab Red Hat OpenShift Data Foundation Supportability and Interoperability Checker . Select Service Type as ODF as Self-Managed Service . Select the appropriate Version from the drop-down. On the Versions tab, click the Supported RHCS Compatibility tab.
For instructions regarding how to install a RHCS cluster, see the installation guide . 7.2.2. IBM FlashSystem To use IBM FlashSystem as a pluggable external storage on other providers, you need to first deploy it before you can deploy OpenShift Data Foundation, which would use the IBM FlashSystem storage class as a backing storage. For the latest supported FlashSystem storage systems and versions, see IBM ODF FlashSystem driver documentation . For instructions on how to deploy OpenShift Data Foundation, see Creating an OpenShift Data Foundation Cluster for external IBM FlashSystem storage . 7.3. Resource requirements Red Hat OpenShift Data Foundation services consist of an initial set of base services, and can be extended with additional device sets. All of these Red Hat OpenShift Data Foundation services pods are scheduled by kubernetes on OpenShift Container Platform nodes. Expanding the cluster in multiples of three, one node in each failure domain, is an easy way to satisfy the pod placement rules . Important These requirements relate to OpenShift Data Foundation services only, and not to any other services, operators or workloads that are running on these nodes. Table 7.1. Aggregate available resource requirements for Red Hat OpenShift Data Foundation only Deployment Mode Base services Additional device Set Internal 30 CPU (logical) 72 GiB memory 3 storage devices 6 CPU (logical) 15 GiB memory 3 storage devices External 4 CPU (logical) 16 GiB memory Not applicable Example: For a 3 node cluster in an internal mode deployment with a single device set, a minimum of 3 x 10 = 30 units of CPU are required. For more information, see Chapter 6, Subscriptions and CPU units . For additional guidance with designing your Red Hat OpenShift Data Foundation cluster, see the ODF Sizing Tool . CPU units In this section, 1 CPU Unit maps to the Kubernetes concept of 1 CPU unit. 1 unit of CPU is equivalent to 1 core for non-hyperthreaded CPUs. 2 units of CPU are equivalent to 1 core for hyperthreaded CPUs. Red Hat OpenShift Data Foundation core-based subscriptions always come in pairs (2 cores). Table 7.2. Aggregate minimum resource requirements for IBM Power Deployment Mode Base services Internal 48 CPU (logical) 192 GiB memory 3 storage devices, each with additional 500GB of disk External 24 CPU (logical) 48 GiB memory Example: For a 3 node cluster in an internal-attached devices mode deployment, a minimum of 3 x 16 = 48 units of CPU and 3 x 64 = 192 GB of memory is required. 7.3.1. Resource requirements for IBM Z and IBM LinuxONE infrastructure Red Hat OpenShift Data Foundation services consist of an initial set of base services, and can be extended with additional device sets. All of these Red Hat OpenShift Data Foundation services pods are scheduled by kubernetes on OpenShift Container Platform nodes . Expanding the cluster in multiples of three, one node in each failure domain, is an easy way to satisfy the pod placement rules . Table 7.3. Aggregate available resource requirements for Red Hat OpenShift Data Foundation only (IBM Z and IBM(R) LinuxONE) Deployment Mode Base services Additional device Set IBM Z and IBM(R) LinuxONE minimum hardware requirements Internal 30 CPU (logical) 3 nodes with 10 CPUs (logical) each 72 GiB memory 3 storage devices 6 CPU (logical) 15 GiB memory 3 storage devices 1 IFL External 4 CPU (logical) 16 GiB memory Not applicable Not applicable CPU Is the number of virtual cores defined in the hypervisor, IBM Z/VM, Kernel Virtual Machine (KVM), or both. 
IFL (Integrated Facility for Linux) Is the physical core for IBM Z and IBM(R) LinuxONE. Minimum system environment In order to operate a minimal cluster with 1 logical partition (LPAR), one additional IFL is required on top of the 6 IFLs. OpenShift Container Platform consumes these IFLs . 7.3.2. Minimum deployment resource requirements An OpenShift Data Foundation cluster will be deployed with minimum configuration when the standard deployment resource requirement is not met. Important These requirements relate to OpenShift Data Foundation services only, and not to any other services, operators or workloads that are running on these nodes. Table 7.4. Aggregate resource requirements for OpenShift Data Foundation only Deployment Mode Base services Internal 24 CPU (logical) 72 GiB memory 3 storage devices If you want to add additional device sets, we recommend converting your minimum deployment to standard deployment. 7.3.3. Compact deployment resource requirements Red Hat OpenShift Data Foundation can be installed on a three-node OpenShift compact bare metal cluster, where all the workloads run on three strong master nodes. There are no worker or storage nodes. Important These requirements relate to OpenShift Data Foundation services only, and not to any other services, operators, or workloads that are running on these nodes. Table 7.5. Aggregate resource requirements for OpenShift Data Foundation only Deployment Mode Base services Additional device Set Internal 24 CPU (logical) 72 GiB memory 3 storage devices 6 CPU (logical) 15 GiB memory 3 storage devices To configure OpenShift Container Platform on a compact bare metal cluster, see Configuring a three-node cluster and Delivering a Three-node Architecture for Edge Deployments . 7.3.4. Resource requirements for MCG only deployment An OpenShift Data Foundation cluster deployed only with the Multicloud Object Gateway (MCG) component provides flexibility in deployment and helps to reduce the resource consumption. Table 7.6. Aggregate resource requirements for MCG only deployment Deployment Mode Core Database (DB) Endpoint Internal 1 CPU 4 GiB memory 0.5 CPU 4 GiB memory 1 CPU 2 GiB memory Note The default auto scale is between 1 - 2. 7.3.5. Resource requirements for using Network File System You can create exports using Network File System (NFS) that can then be accessed externally from the OpenShift cluster. If you plan to use this feature, the NFS service consumes 3 CPUs and 8 GiB of RAM. NFS is optional and is disabled by default. The NFS volume can be accessed in two ways: In-cluster: by an application pod inside of the OpenShift cluster. Out of cluster: from outside of the OpenShift cluster. For more information about the NFS feature, see Creating exports using NFS . 7.4. Pod placement rules Kubernetes is responsible for pod placement based on declarative placement rules. The Red Hat OpenShift Data Foundation base service placement rules for Internal cluster can be summarized as follows: Nodes are labeled with the cluster.ocs.openshift.io/openshift-storage key Nodes are sorted into pseudo failure domains if none exist Components requiring high availability are spread across failure domains A storage device must be accessible in each failure domain This leads to the requirement that there be at least three nodes, and that nodes be in three distinct rack or zone failure domains in the case of pre-existing topology labels . For additional device sets, there must be a storage device, and sufficient resources for the pod consuming it, in each of the three failure domains. Manual placement rules can be used to override default placement rules, but generally this approach is only suitable for bare metal deployments.
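As an aside, the cluster.ocs.openshift.io/openshift-storage label mentioned above is normally applied for you when you select nodes during installation, but it can also be applied manually from the CLI, for example when preparing dedicated nodes ahead of time. The node names below are placeholders.

oc label nodes node-1 node-2 node-3 cluster.ocs.openshift.io/openshift-storage=""
oc get nodes -l cluster.ocs.openshift.io/openshift-storage -o name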
For additional device sets, there must be a storage device, and sufficient resources for the pod consuming it, in each of the three failure domains. Manual placement rules can be used to override default placement rules, but generally this approach is only suitable for bare metal deployments. 7.5. Storage device requirements Use this section to understand the different storage capacity requirements that you can consider when planning internal mode deployments and upgrades. We generally recommend 12 devices or less per node. This recommendation ensures both that nodes stay below cloud provider dynamic storage device attachment limits, and to limit the recovery time after node failures with local storage devices. Expanding the cluster in multiples of three, one node in each failure domain, is an easy way to satisfy pod placement rules . Storage nodes should have at least two disks, one for the operating system and the remaining disks for OpenShift Data Foundation components. Note You can expand the storage capacity only in the increment of the capacity selected at the time of installation. 7.5.1. Dynamic storage devices Red Hat OpenShift Data Foundation permits the selection of either 0.5 TiB, 2 TiB or 4 TiB capacities as the request size for dynamic storage device sizes. The number of dynamic storage devices that can run per node is a function of the node size, underlying provisioner limits and resource requirements . 7.5.2. Local storage devices For local storage deployment, any disk size of 16 TiB or less can be used, and all disks should be of the same size and type. The number of local storage devices that can run per node is a function of the node size and resource requirements . Expanding the cluster in multiples of three, one node in each failure domain, is an easy way to satisfy pod placement rules . Note Disk partitioning is not supported. 7.5.3. Capacity planning Always ensure that available storage capacity stays ahead of consumption. Recovery is difficult if available storage capacity is completely exhausted, and requires more intervention than simply adding capacity or deleting or migrating content. Capacity alerts are issued when cluster storage capacity reaches 75% (near-full) and 85% (full) of total capacity. Always address capacity warnings promptly, and review your storage regularly to ensure that you do not run out of storage space. When you get to 75% (near-full), either free up space or expand the cluster. When you get the 85% (full) alert, it indicates that you have run out of storage space completely and cannot free up space using standard commands. At this point, contact Red Hat Customer Support . The following tables show example node configurations for Red Hat OpenShift Data Foundation with dynamic storage devices. Table 7.7. Example initial configurations with 3 nodes Storage Device size Storage Devices per node Total capacity Usable storage capacity 0.5 TiB 1 1.5 TiB 0.5 TiB 2 TiB 1 6 TiB 2 TiB 4 TiB 1 12 TiB 4 TiB Table 7.8. Example of expanded configurations with 30 nodes (N) Storage Device size (D) Storage Devices per node (M) Total capacity (D * M * N) Usable storage capacity (D*M*N/3) 0.5 TiB 3 45 TiB 15 TiB 2 TiB 6 360 TiB 120 TiB 4 TiB 9 1080 TiB 360 TiB Chapter 8. Network requirements OpenShift Data Foundation requires that at least one network interface that is used for the cluster network to be capable of at least 10 gigabit network speeds. This section further covers different network considerations for planning deployments. 8.1. 
IPv6 support Red Hat OpenShift Data Foundation version 4.12 introduced the support of IPv6. IPv6 is supported in single stack only, and cannot be used simultaneously with IPv4. IPv6 is the default behavior in OpenShift Data Foundation when IPv6 is turned on in Openshift Container Platform. Red Hat OpenShift Data Foundation version 4.14 introduces IPv6 auto detection and configuration. Clusters using IPv6 will automatically be configured accordingly. OpenShift Container Platform dual stack with Red Hat OpenShift Data Foundation IPv4 is supported from version 4.13 and later. Dual stack on Red Hat OpenShift Data Foundation IPv6 is not supported. 8.2. Multi network plug-in (Multus) support OpenShift Data Foundation supports the ability to use multi-network plug-in Multus on bare metal infrastructures to improve security and performance by isolating the different types of network traffic. By using Multus, one or more network interfaces on hosts can be reserved for exclusive use of OpenShift Data Foundation. To use Multus, first run the Multus prerequisite validation tool. For instructions to use the tool, see OpenShift Data Foundation - Multus prerequisite validation tool . For more information about Multus networks, see Multiple networks 8.2.1. Segregating storage traffic using Multus By default, Red Hat OpenShift Data Foundation is configured to use the Red Hat OpenShift Software Defined Network (SDN). The default SDN carries the following types of traffic: Pod-to-pod traffic Pod-to-storage traffic, known as public network traffic when the storage is OpenShift Data Foundation OpenShift Data Foundation internal replication and rebalancing traffic, known as cluster network traffic There are three ways to segregate OpenShift Data Foundation from OpenShift default network: Reserve a network interface on the host for the public network of OpenShift Data Foundation Pod-to-storage and internal storage replication traffic coexist on a network that is isolated from pod-to-pod network traffic. Application pods have access to the maximum public network storage bandwidth when the OpenShift Data Foundation cluster is healthy. When the OpenShift Data Foundation cluster is recovering from failure, the application pods will have reduced bandwidth due to ongoing replication and rebalancing traffic. Reserve a network interface on the host for OpenShift Data Foundation's cluster network Pod-to-pod and pod-to-storage traffic both continue to use OpenShift's default network. Pod-to-storage bandwidth is less affected by the health of the OpenShift Data Foundation cluster. Pod-to-pod and pod-to-storage OpenShift Data Foundation traffic might contend for network bandwidth in busy OpenShift clusters. The storage internal network often has an overabundance of bandwidth that is unused, reserved for use during failures. Reserve two network interfaces on the host for OpenShift Data Foundation: one for the public network and one for the cluster network Pod-to-pod, pod-to-storage, and storage internal traffic are all isolated, and none of the traffic types will contend for resources. Service level agreements for all traffic types are more able to be ensured. During healthy runtime, more network bandwidth is reserved but unused across all three networks. Dual network interface segregated configuration schematic example: Triple network interface full segregated configuration schematic example: 8.2.2. 
When to use Multus Use Multus for OpenShift Data Foundation when you need the following: Improved latency - Multus with ODF always improves latency. Use host interfaces at near-host network speeds and bypass OpenShift's software-defined Pod network. You can also perform Linux per interface level tuning for each interface. Improved bandwidth - Dedicated interfaces for OpenShift Data Foundation client data traffic and internal data traffic. These dedicated interfaces reserve full bandwidth. Improved security - Multus isolates storage network traffic from application network traffic for added security. Bandwidth or performance might not be isolated when networks share an interface, however, you can use QoS or traffic shaping to prioritize bandwidth on shared interfaces. 8.2.3. Multus configuration To use Multus, you must create network attachment definitions (NADs) before deploying the OpenShift Data Foundation cluster, which is later attached to the cluster. For more information, see Creating network attachment definitions . To attach additional network interfaces to a pod, you must create configurations that define how the interfaces are attached. You specify each interface by using a NetworkAttachmentDefinition custom resource (CR). A Container Network Interface (CNI) configuration inside each of these CRs defines how that interface is created. OpenShift Data Foundation supports two types of drivers. The following tables describes the drivers and their features: macvlan (recommended) ipvlan Each connection gets a sub-interface of the parent interface with its own MAC address and is isolated from the host network. Each connection gets its own IP address and shares the same MAC address. Uses less CPU and provides better throughput than Linux bridge or ipvlan . L2 mode is analogous to macvlan bridge mode. Almost always require bridge mode. L3 mode is analogous to a router existing on the parent interface. L3 is useful for Border Gateway Protocol (BGP), otherwise use macvlan for reduced CPU and better throughput. Near-host performance when network interface card (NIC) supports virtual ports/virtual local area networks (VLANs) in hardware. If NIC does not support VLANs in hardware, performance might be better than macvlan . OpenShift Data Foundation supports the following two types IP address management: whereabouts DHCP Uses OpenShift/Kubernetes leases to select unique IP addresses per Pod. Does not require range field. Does not require a DHCP server to provide IPs for Pods. Network DHCP server can give out the same range to Multus Pods as well as any other hosts on the same network. Caution If there is a DHCP server, ensure Multus configured IPAM does not give out the same range so that multiple MAC addresses on the network cannot have the same IP. 8.2.4. Requirements for Multus configuration Prerequisites The interface used for the public network must have the same interface name on each OpenShift storage and worker node, and the interfaces must all be connected to the same underlying network. The interface used for the cluster network must have the same interface name on each OpenShift storage node, and the interfaces must all be connected to the same underlying network. Cluster network interfaces do not have to be present on the OpenShift worker nodes. Each network interface used for the public or cluster network must be capable of at least 10 gigabit network speeds. Each network requires a separate virtual local area network (VLAN) or subnet. 
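To make this concrete, the following is a minimal sketch of a NetworkAttachmentDefinition that combines the recommended macvlan driver in bridge mode with whereabouts IP address management. The object name ocs-public, the namespace, the parent interface eth1, and the address range are assumptions chosen for illustration only; substitute values that match your environment and see Creating network attachment definitions for the authoritative procedure.
# Create an example network attachment definition (all names, the interface, and the range are placeholders).
cat <<'EOF' | oc apply -f -
apiVersion: k8s.cni.cncf.io/v1
kind: NetworkAttachmentDefinition
metadata:
  name: ocs-public
  namespace: openshift-storage
spec:
  config: '{
      "cniVersion": "0.3.1",
      "type": "macvlan",
      "master": "eth1",
      "mode": "bridge",
      "ipam": {
        "type": "whereabouts",
        "range": "192.168.20.0/24"
      }
    }'
EOF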
See Creating Multus networks for the necessary steps to configure a Multus based configuration on bare metal. Chapter 9. Disaster Recovery Disaster Recovery (DR) helps an organization to recover and resume business critical functions or normal operations when there are disruptions or disasters. OpenShift Data Foundation provides High Availability (HA) & DR solutions for stateful apps which are broadly categorized into two broad categories: Metro-DR : Single Region and cross data center protection with no data loss. Disaster Recovery with stretch cluster : Single OpenShift Data Foundation cluster is stretched between two different locations to provide the storage infrastructure with disaster recovery capabilities. Regional-DR : Cross Region protection with minimal potential data loss. 9.1. Metro-DR Metropolitan disaster recovery (Metro-DR) is composed of Red Hat Advanced Cluster Management for Kubernetes (RHACM), Red Hat Ceph Storage and OpenShift Data Foundation components to provide application and data mobility across OpenShift Container Platform clusters. This release of Metro-DR solution provides volume persistent data and metadata replication across sites that are geographically dispersed. In the public cloud these would be similar to protecting from an Availability Zone failure. Metro-DR ensures business continuity during the unavailability of a data center with no data loss. This solution is entitled with Red Hat Advanced Cluster Management (RHACM) and OpenShift Data Foundation Advanced SKUs and related bundles. Prerequisites Disaster Recovery features supported by Red Hat OpenShift Data Foundation require all of the following prerequisites in order to successfully implement a Disaster Recovery solution: A valid Red Hat OpenShift Data Foundation Advanced entitlement A valid Red Hat Advanced Cluster Management for Kubernetes subscription To know how subscriptions for OpenShift Data Foundation work, see knowledgebase article on OpenShift Data Foundation subscriptions . For detailed solution requirements, see Metro-DR requirements , deployment requirements for Red Hat Ceph Storage stretch cluster with arbiter and RHACM requirements . 9.2. Disaster Recovery with stretch cluster In this case, a single cluster is stretched across two zones with a third zone as the location for the arbiter. This feature is currently intended for deployment in the OpenShift Container Platform on-premises and in the same location. This solution is not recommended for deployments stretching over multiple data centers. Instead, consider Metro-DR as a first option for no data loss DR solution deployed over multiple data centers with low latency networks. Note The stretch cluster solution is designed for deployments where latencies do not exceed 10 ms maximum round-trip time (RTT) between the zones containing data volumes. For Arbiter nodes follow the latency requirements specified for etcd, see Guidance for Red Hat OpenShift Container Platform Clusters - Deployments Spanning Multiple Sites(Data Centers/Regions) . Contact Red Hat Customer Support if you are planning to deploy with higher latencies. To use the stretch cluster, You must have a minimum of five nodes across three zones, where: Two nodes per zone are used for each data-center zone, and one additional zone with one node is used for arbiter zone (the arbiter can be on a master node). All the nodes must be manually labeled with the zone labels prior to cluster creation. 
For example, the zones can be labeled as: topology.kubernetes.io/zone=arbiter (master or worker node) topology.kubernetes.io/zone=datacenter1 (minimum two worker nodes) topology.kubernetes.io/zone=datacenter2 (minimum two worker nodes) For more information, see Configuring OpenShift Data Foundation for stretch cluster . To know how subscriptions for OpenShift Data Foundation work, see knowledgebase article on OpenShift Data Foundation subscriptions . 9.3. Regional-DR Regional disaster recovery (Regional-DR) is composed of Red Hat Advanced Cluster Management for Kubernetes (RHACM) and OpenShift Data Foundation components to provide application and data mobility across OpenShift Container Platform clusters. It is built on Asynchronous data replication and hence could have a potential data loss but provides the protection against a broad set of failures. Red Hat OpenShift Data Foundation is backed by Ceph as the storage provider, whose lifecycle is managed by Rook and it's enhanced with the ability to: Enable pools for mirroring. Automatically mirror images across RBD pools. Provides csi-addons to manage per Persistent Volume Claim mirroring. This release of Regional-DR supports Multi-Cluster configuration that is deployed across different regions and data centers. For example, a 2-way replication across two managed clusters located in two different regions or data centers. This solution is entitled with Red Hat Advanced Cluster Management (RHACM) and OpenShift Data Foundation Advanced SKUs and related bundles. Note Regional-DR is supported with OpenShift Data Foundation 4.14 and Red Hat Advanced Cluster Management for Kubernetes 2.9 combinations only. Prerequisites Disaster Recovery features supported by Red Hat OpenShift Data Foundation require all of the following prerequisites in order to successfully implement a Disaster Recovery solution: A valid Red Hat OpenShift Data Foundation Advanced entitlement A valid Red Hat Advanced Cluster Management for Kubernetes subscription To know how subscriptions for OpenShift Data Foundation work, see knowledgebase article on OpenShift Data Foundation subscriptions . For detailed solution requirements, see Regional-DR requirements and RHACM requirements . Chapter 10. Disconnected environment Disconnected environment is a network restricted environment where the Operator Lifecycle Manager (OLM) cannot access the default Operator Hub and image registries, which require internet connectivity. Red Hat supports deployment of OpenShift Data Foundation in disconnected environments where you have installed OpenShift Container Platform in restricted networks. To install OpenShift Data Foundation in a disconnected environment, see Using Operator Lifecycle Manager on restricted networks of the Operators guide in OpenShift Container Platform documentation. Note When you install OpenShift Data Foundation in a restricted network environment, apply a custom Network Time Protocol (NTP) configuration to the nodes, because by default, internet connectivity is assumed in OpenShift Container Platform and chronyd is configured to use the *.rhel.pool.ntp.org servers. For more information, see the Red Hat Knowledgebase solution A newly deployed OCS 4 cluster status shows as "Degraded", Why? and Configuring chrony time service of the Installing guide in OpenShift Container Platform documentation. Red Hat OpenShift Data Foundation version 4.12 introduced the Agent-based Installer for disconnected environment deployment. 
The Agent-based Installer allows you to use a mirror registry for disconnected installations. For more information, see Preparing to install with Agent-based Installer . Packages to include for OpenShift Data Foundation When you prune the redhat-operator index image, include the following list of packages for the OpenShift Data Foundation deployment: ocs-operator odf-operator mcg-operator odf-csi-addons-operator odr-cluster-operator odr-hub-operator Optional: local-storage-operator Only for local storage deployments. Optional: odf-multicluster-orchestrator Only for Regional Disaster Recovery (Regional-DR) configuration. Important Name the CatalogSource as redhat-operators . Chapter 11. Supported and Unsupported features for IBM Power and IBM Z Table 11.1. List of supported and unsupported features on IBM Power and IBM Z Features IBM Power IBM Z Compact deployment Unsupported Unsupported Dynamic storage devices Unsupported Supported Stretched Cluster - Arbiter Supported Unsupported Federal Information Processing Standard Publication (FIPS) Unsupported Unsupported Ability to view pool compression metrics Supported Unsupported Automated scaling of Multicloud Object Gateway (MCG) endpoint pods Supported Unsupported Alerts to control overprovision Supported Unsupported Alerts when Ceph Monitor runs out of space Supported Unsupported Extended OpenShift Data Foundation control plane which allows pluggable external storage such as IBM Flashsystem Unsupported Unsupported IPV6 support Unsupported Unsupported Multus Unsupported Unsupported Multicloud Object Gateway (MCG) bucket replication Supported Unsupported Quota support for object data Supported Unsupported Minimum deployment Unsupported Unsupported Regional-Disaster Recovery (Regional-DR) with Red Hat Advanced Cluster Management (RHACM) Supported Unsupported Metro-Disaster Recovery (Metro-DR) multiple clusters with RHACM Supported Supported Single Node solution for Radio Access Network (RAN) Unsupported Unsupported Support for network file system (NFS) services Supported Unsupported Ability to change Multicloud Object Gateway (MCG) account credentials Supported Unsupported Multicluster monitoring in Red Hat Advanced Cluster Management console Supported Unsupported Deletion of expired objects in Multicloud Object Gateway lifecycle Supported Unsupported Agnostic deployment of OpenShift Data Foundation on any Openshift supported platform Unsupported Unsupported Installer provisioned deployment of OpenShift Data Foundation using bare metal infrastructure Unsupported Unsupported Openshift dual stack with OpenShift Data Foundation using IPv4 Unsupported Unsupported Ability to disable Multicloud Object Gateway external service during deployment Unsupported Unsupported Ability to allow overriding of default NooBaa backing store Supported Unsupported Allowing ocs-operator to deploy two MGR pods, one active and one standby Unsupported Unsupported Chapter 12. steps To start deploying your OpenShift Data Foundation, you can use the internal mode within OpenShift Container Platform or use external mode to make available services from a cluster running outside of OpenShift Container Platform. Depending on your requirement, go to the respective deployment guides. 
Internal mode Deploying OpenShift Data Foundation using Amazon web services Deploying OpenShift Data Foundation using Bare Metal Deploying OpenShift Data Foundation using VMWare vSphere Deploying OpenShift Data Foundation using Microsoft Azure Deploying OpenShift Data Foundation using Google Cloud Deploying OpenShift Data Foundation using Red Hat OpenStack Platform [Technology Preview] Deploying OpenShift Data Foundation on IBM Power Deploying OpenShift Data Foundation on IBM Z External mode Deploying OpenShift Data Foundation in external mode | null | https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.14/html-single/planning_your_deployment/index |
Interactively installing RHEL from installation media | Interactively installing RHEL from installation media Red Hat Enterprise Linux 9 Installing RHEL on a local system using the graphical installer Red Hat Customer Content Services | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/interactively_installing_rhel_from_installation_media/index |
Chapter 15. Starting a remote installation by using VNC | Chapter 15. Starting a remote installation by using VNC 15.1. Performing a remote RHEL installation in VNC Direct mode Use this procedure to perform a remote RHEL installation in VNC Direct mode. Direct mode expects the VNC viewer to initiate a connection to the target system that is being installed with RHEL. In this procedure, the system with the VNC viewer is called the remote system. You are prompted by the RHEL installation program to initiate the connection from the VNC viewer on the remote system to the target system. Note This procedure uses TigerVNC as the VNC viewer. Specific instructions for other viewers might differ, but the general principles apply. Prerequisites You have installed a VNC viewer on a remote system as a root user. You have set up a network boot server and booted the installation on the target system. Procedure From the RHEL boot menu on the target system, press the Tab key on your keyboard to edit the boot options. Append the inst.vnc option to the end of the command line. If you want to restrict VNC access to the system that is being installed, add the inst.vncpassword=PASSWORD boot option to the end of the command line. Replace PASSWORD with the password you want to use for the installation. The VNC password must be between 6 and 8 characters long. This is a temporary password for the inst.vncpassword= option and it should not be an existing or root password. Press Enter to start the installation. The target system initializes the installation program and starts the necessary services. When the system is ready, a message is displayed providing the IP address and port number of the system. Open the VNC viewer on the remote system. Enter the IP address and the port number into the VNC server field. Click Connect . Enter the VNC password and click OK . A new window opens with the VNC connection established, displaying the RHEL installation menu. From this window, you can install RHEL on the target system using the graphical user interface. 15.2. Performing a remote RHEL installation in VNC Connect mode Use this procedure to perform a remote RHEL installation in VNC Connect mode. In Connect mode, the target system that is being installed with RHEL initiates a connect to the VNC viewer that is installed on another system. In this procedure, the system with the VNC viewer is called the remote system. Note This procedure uses TigerVNC as the VNC viewer. Specific instructions for other viewers might differ, but the general principles apply. Prerequisites You have installed a VNC viewer on a remote system as a root user. You have set up a network boot server to start the installation on the target system. You have configured the target system to use the boot options for a VNC Connect installation. You have verified that the remote system with the VNC viewer is configured to accept an incoming connection on the required port. Verification is dependent on your network and system configuration. For more information, see Security hardening and Securing networks . Procedure Start the VNC viewer on the remote system in listening mode by running the following command: Replace PORT with the port number used for the connection. The terminal displays a message indicating that it is waiting for an incoming connection from the target system. Boot the target system from the network. From the RHEL boot menu on the target system, press the Tab key on your keyboard to edit the boot options. 
Append the inst.vnc inst.vncconnect=HOST:PORT option to the end of the command line. Replace HOST with the IP address of the remote system that is running the listening VNC viewer, and PORT with the port number that the VNC viewer is listening on. Press Enter to start the installation. The system initializes the installation program and starts the necessary services. When the initialization process is finished, the installation program attempts to connect to the IP address and port provided. When the connection is successful, a new window opens with the VNC connection established, displaying the RHEL installation menu. From this window, you can install RHEL on the target system using the graphical user interface. | [
"vncviewer -listen PORT",
"TigerVNC Viewer 64-bit v1.8.0 Built on: 2017-10-12 09:20 Copyright (C) 1999-2017 TigerVNC Team and many others (see README.txt) See http://www.tigervnc.org for information about TigerVNC. Thu Jun 27 11:30:57 2019 main: Listening on port 5500"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/interactively_installing_rhel_over_the_network/starting-a-remote-installation-by-using-vnc_rhel-installer |
5.2. Using KVM virtio Drivers for New Storage Devices | 5.2. Using KVM virtio Drivers for New Storage Devices This procedure covers creating new storage devices using the KVM virtio drivers with virt-manager . Alternatively, the virsh attach-disk or virsh attach-interface commands can be used to attach devices using the virtio drivers. Important Ensure the drivers have been installed on the guest before proceeding to install new devices. If the drivers are unavailable the device will not be recognized and will not work. Procedure 5.2. Adding a storage device using the virtio storage driver Open the guest virtual machine by double clicking the name of the guest in virt-manager . Open the Show virtual hardware details tab by clicking . In the Show virtual hardware details tab, click the Add Hardware button. Select hardware type Select Storage as the Hardware type . Figure 5.1. The Add new virtual hardware wizard Select the storage device and driver Create a new disk image or select a storage pool volume. Set the Device type to Disk device and the Bus type to VirtIO to use the virtio drivers. Figure 5.2. The Add New Virtual Hardware wizard Click Finish to complete the procedure. Procedure 5.3. Adding a network device using the virtio network driver Open the guest virtual machine by double clicking the name of the guest in virt-manager . Open the Show virtual hardware details tab by clicking . In the Show virtual hardware details tab, click the Add Hardware button. Select hardware type Select Network as the Hardware type . Figure 5.3. The Add new virtual hardware wizard Select the network device and driver Set the Device model to virtio to use the virtio drivers. Choose the required Host device . Figure 5.4. The Add new virtual hardware wizard Click Finish to complete the procedure. Once all new devices are added, reboot the virtual machine. Virtual machines may not recognize the devices until the guest is rebooted. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/virtualization_deployment_and_administration_guide/sect-kvm_para_virtualized_virtio_drivers-using_kvm_virtio_drivers_for_new_devices |
Preface | Preface Preface | null | https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.9/html/getting_started_with_streams_for_apache_kafka_on_openshift/preface |
Chapter 7. Installing a cluster on Azure into an existing VNet | Chapter 7. Installing a cluster on Azure into an existing VNet In OpenShift Container Platform version 4.16, you can install a cluster into an existing Azure Virtual Network (VNet) on Microsoft Azure. The installation program provisions the rest of the required infrastructure, which you can further customize. To customize the installation, you modify parameters in the install-config.yaml file before you install the cluster. 7.1. Prerequisites You reviewed details about the OpenShift Container Platform installation and update processes. You read the documentation on selecting a cluster installation method and preparing it for users . You configured an Azure account to host the cluster and determined the tested and validated region to deploy the cluster to. If you use a firewall, you configured it to allow the sites that your cluster requires access to. If you use customer-managed encryption keys, you prepared your Azure environment for encryption . 7.2. About reusing a VNet for your OpenShift Container Platform cluster In OpenShift Container Platform 4.16, you can deploy a cluster into an existing Azure Virtual Network (VNet) in Microsoft Azure. If you do, you must also use existing subnets within the VNet and routing rules. By deploying OpenShift Container Platform into an existing Azure VNet, you might be able to avoid service limit constraints in new accounts or more easily abide by the operational constraints that your company's guidelines set. This is a good option to use if you cannot obtain the infrastructure creation permissions that are required to create the VNet. 7.2.1. Requirements for using your VNet When you deploy a cluster by using an existing VNet, you must perform additional network configuration before you install the cluster. In installer-provisioned infrastructure clusters, the installer usually creates the following components, but it does not create them when you install into an existing VNet: Subnets Route tables VNets Network Security Groups Note The installation program requires that you use the cloud-provided DNS server. Using a custom DNS server is not supported and causes the installation to fail. If you use a custom VNet, you must correctly configure it and its subnets for the installation program and the cluster to use. The installation program cannot subdivide network ranges for the cluster to use, set route tables for the subnets, or set VNet options like DHCP, so you must do so before you install the cluster. The cluster must be able to access the resource group that contains the existing VNet and subnets. While all of the resources that the cluster creates are placed in a separate resource group that it creates, some network resources are used from a separate group. Some cluster Operators must be able to access resources in both resource groups. For example, the Machine API controller attaches NICS for the virtual machines that it creates to subnets from the networking resource group. Your VNet must meet the following characteristics: The VNet's CIDR block must contain the Networking.MachineCIDR range, which is the IP address pool for cluster machines. The VNet and its subnets must belong to the same resource group, and the subnets must be configured to use Azure-assigned DHCP IP addresses instead of static IP addresses. You must provide two subnets within your VNet, one for the control plane machines and one for the compute machines. 
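If you need to create the VNet and subnets yourself before installation, the following hedged sketch shows one way to do it with the Azure CLI. The resource group, VNet, and subnet names and the CIDR ranges are example assumptions only; choose values that agree with the machine CIDR you intend to set in install-config.yaml.
# Create the VNet and the control plane subnet (all names and ranges are placeholders).
az network vnet create \
  --resource-group example-vnet-rg \
  --name example-vnet \
  --address-prefixes 10.0.0.0/16 \
  --subnet-name example-control-plane-subnet \
  --subnet-prefixes 10.0.0.0/24
# Add a second subnet for the compute machines.
az network vnet subnet create \
  --resource-group example-vnet-rg \
  --vnet-name example-vnet \
  --name example-compute-subnet \
  --address-prefixes 10.0.1.0/24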
Because Azure distributes machines in different availability zones within the region that you specify, your cluster will have high availability by default. Note By default, if you specify availability zones in the install-config.yaml file, the installation program distributes the control plane machines and the compute machines across these availability zones within a region . To ensure high availability for your cluster, select a region with at least three availability zones. If your region contains fewer than three availability zones, the installation program places more than one control plane machine in the available zones. To ensure that the subnets that you provide are suitable, the installation program confirms the following data: All the specified subnets exist. There are two private subnets, one for the control plane machines and one for the compute machines. The subnet CIDRs belong to the machine CIDR that you specified. Machines are not provisioned in availability zones that you do not provide private subnets for. If required, the installation program creates public load balancers that manage the control plane and worker nodes, and Azure allocates a public IP address to them. Note If you destroy a cluster that uses an existing VNet, the VNet is not deleted. 7.2.1.1. Network security group requirements The network security groups for the subnets that host the compute and control plane machines require specific access to ensure that the cluster communication is correct. You must create rules to allow access to the required cluster communication ports. Important The network security group rules must be in place before you install the cluster. If you attempt to install a cluster without the required access, the installation program cannot reach the Azure APIs, and installation fails. Table 7.1. Required ports Port Description Control plane Compute 80 Allows HTTP traffic x 443 Allows HTTPS traffic x 6443 Allows communication to the control plane machines x 22623 Allows internal communication to the machine config server for provisioning machines x If you are using Azure Firewall to restrict the internet access, then you can configure Azure Firewall to allow the Azure APIs . A network security group rule is not needed. Important Currently, there is no supported way to block or restrict the machine config server endpoint. The machine config server must be exposed to the network so that newly-provisioned machines, which have no existing configuration or state, are able to fetch their configuration. In this model, the root of trust is the certificate signing requests (CSR) endpoint, which is where the kubelet sends its certificate signing request for approval to join the cluster. Because of this, machine configs should not be used to distribute sensitive information, such as secrets and certificates. To ensure that the machine config server endpoints, ports 22623 and 22624, are secured in bare metal scenarios, customers must configure proper network policies. Because cluster components do not modify the user-provided network security groups, which the Kubernetes controllers update, a pseudo-network security group is created for the Kubernetes controller to modify without impacting the rest of the environment. Table 7.2. Ports used for all-machine to all-machine communications Protocol Port Description ICMP N/A Network reachability tests TCP 1936 Metrics 9000 - 9999 Host level services, including the node exporter on ports 9100 - 9101 and the Cluster Version Operator on port 9099 . 
10250 - 10259 The default ports that Kubernetes reserves UDP 4789 VXLAN 6081 Geneve 9000 - 9999 Host level services, including the node exporter on ports 9100 - 9101 . 500 IPsec IKE packets 4500 IPsec NAT-T packets 123 Network Time Protocol (NTP) on UDP port 123 If you configure an external NTP time server, you must open UDP port 123 . TCP/UDP 30000 - 32767 Kubernetes node port ESP N/A IPsec Encapsulating Security Payload (ESP) Table 7.3. Ports used for control plane machine to control plane machine communications Protocol Port Description TCP 2379 - 2380 etcd server and peer ports Additional resources About the OpenShift SDN network plugin Configuring your firewall 7.2.2. Division of permissions Starting with OpenShift Container Platform 4.3, you do not need all of the permissions that are required for an installation program-provisioned infrastructure cluster to deploy a cluster. This change mimics the division of permissions that you might have at your company: some individuals can create different resources in your clouds than others. For example, you might be able to create application-specific items, like instances, storage, and load balancers, but not networking-related components such as VNets, subnet, or ingress rules. The Azure credentials that you use when you create your cluster do not need the networking permissions that are required to make VNets and core networking components within the VNet, such as subnets, routing tables, internet gateways, NAT, and VPN. You still need permission to make the application resources that the machines within the cluster require, such as load balancers, security groups, storage accounts, and nodes. 7.2.3. Isolation between clusters Because the cluster is unable to modify network security groups in an existing subnet, there is no way to isolate clusters from each other on the VNet. 7.3. Internet access for OpenShift Container Platform In OpenShift Container Platform 4.16, you require access to the internet to install your cluster. You must have internet access to: Access OpenShift Cluster Manager to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster. Access Quay.io to obtain the packages that are required to install your cluster. Obtain the packages that are required to perform cluster updates. Important If your cluster cannot have direct internet access, you can perform a restricted network installation on some types of infrastructure that you provision. During that process, you download the required content and use it to populate a mirror registry with the installation packages. With some installation types, the environment that you install your cluster in will not require internet access. Before you update the cluster, you update the content of the mirror registry. 7.4. Generating a key pair for cluster node SSH access During an OpenShift Container Platform installation, you can provide an SSH public key to the installation program. The key is passed to the Red Hat Enterprise Linux CoreOS (RHCOS) nodes through their Ignition config files and is used to authenticate SSH access to the nodes. The key is added to the ~/.ssh/authorized_keys list for the core user on each node, which enables password-less authentication. After the key is passed to the nodes, you can use the key pair to SSH in to the RHCOS nodes as the user core . 
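For example, after the cluster is installed, you might open a session to a node with a command like the following. The node address is a placeholder, and this sketch assumes the node is reachable from your workstation:
# Connect as the core user with the private key that matches the public key you provided at install time.
ssh -i ~/.ssh/id_ed25519 core@<node_ip_address>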
To access the nodes through SSH, the private key identity must be managed by SSH for your local user. If you want to SSH in to your cluster nodes to perform installation debugging or disaster recovery, you must provide the SSH public key during the installation process. The ./openshift-install gather command also requires the SSH public key to be in place on the cluster nodes. Important Do not skip this procedure in production environments, where disaster recovery and debugging is required. Note You must use a local key, not one that you configured with platform-specific approaches such as AWS key pairs . Procedure If you do not have an existing SSH key pair on your local machine to use for authentication onto your cluster nodes, create one. For example, on a computer that uses a Linux operating system, run the following command: USD ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1 1 Specify the path and file name, such as ~/.ssh/id_ed25519 , of the new SSH key. If you have an existing key pair, ensure your public key is in the your ~/.ssh directory. Note If you plan to install an OpenShift Container Platform cluster that uses the RHEL cryptographic libraries that have been submitted to NIST for FIPS 140-2/140-3 Validation on only the x86_64 , ppc64le , and s390x architectures, do not create a key that uses the ed25519 algorithm. Instead, create a key that uses the rsa or ecdsa algorithm. View the public SSH key: USD cat <path>/<file_name>.pub For example, run the following to view the ~/.ssh/id_ed25519.pub public key: USD cat ~/.ssh/id_ed25519.pub Add the SSH private key identity to the SSH agent for your local user, if it has not already been added. SSH agent management of the key is required for password-less SSH authentication onto your cluster nodes, or if you want to use the ./openshift-install gather command. Note On some distributions, default SSH private key identities such as ~/.ssh/id_rsa and ~/.ssh/id_dsa are managed automatically. If the ssh-agent process is not already running for your local user, start it as a background task: USD eval "USD(ssh-agent -s)" Example output Agent pid 31874 Note If your cluster is in FIPS mode, only use FIPS-compliant algorithms to generate the SSH key. The key must be either RSA or ECDSA. Add your SSH private key to the ssh-agent : USD ssh-add <path>/<file_name> 1 1 Specify the path and file name for your SSH private key, such as ~/.ssh/id_ed25519 Example output Identity added: /home/<you>/<path>/<file_name> (<computer_name>) steps When you install OpenShift Container Platform, provide the SSH public key to the installation program. 7.5. Obtaining the installation program Before you install OpenShift Container Platform, download the installation file on the host you are using for installation. Prerequisites You have a computer that runs Linux or macOS, with at least 1.2 GB of local disk space. Procedure Go to the Cluster Type page on the Red Hat Hybrid Cloud Console. If you have a Red Hat account, log in with your credentials. If you do not, create an account. Tip You can also download the binaries for a specific OpenShift Container Platform release . Select your infrastructure provider from the Run it yourself section of the page. Select your host operating system and architecture from the dropdown menus under OpenShift Installer and click Download Installer . Place the downloaded file in the directory where you want to store the installation configuration files. 
Important The installation program creates several files on the computer that you use to install your cluster. You must keep the installation program and the files that the installation program creates after you finish installing the cluster. Both of the files are required to delete the cluster. Deleting the files created by the installation program does not remove your cluster, even if the cluster failed during installation. To remove your cluster, complete the OpenShift Container Platform uninstallation procedures for your specific cloud provider. Extract the installation program. For example, on a computer that uses a Linux operating system, run the following command: USD tar -xvf openshift-install-linux.tar.gz Download your installation pull secret from Red Hat OpenShift Cluster Manager . This pull secret allows you to authenticate with the services that are provided by the included authorities, including Quay.io, which serves the container images for OpenShift Container Platform components. Tip Alternatively, you can retrieve the installation program from the Red Hat Customer Portal , where you can specify a version of the installation program to download. However, you must have an active subscription to access this page. 7.6. Creating the installation configuration file You can customize the OpenShift Container Platform cluster you install on Microsoft Azure. Prerequisites You have the OpenShift Container Platform installation program and the pull secret for your cluster. You have an Azure subscription ID and tenant ID. If you are installing the cluster using a service principal, you have its application ID and password. If you are installing the cluster using a system-assigned managed identity, you have enabled it on the virtual machine that you will run the installation program from. If you are installing the cluster using a user-assigned managed identity, you have met these prerequisites: You have its client ID. You have assigned it to the virtual machine that you will run the installation program from. Procedure Optional: If you have run the installation program on this computer before, and want to use an alternative service principal or managed identity, go to the ~/.azure/ directory and delete the osServicePrincipal.json configuration file. Deleting this file prevents the installation program from automatically reusing subscription and authentication values from a installation. Create the install-config.yaml file. Change to the directory that contains the installation program and run the following command: USD ./openshift-install create install-config --dir <installation_directory> 1 1 For <installation_directory> , specify the directory name to store the files that the installation program creates. When specifying the directory: Verify that the directory has the execute permission. This permission is required to run Terraform binaries under the installation directory. Use an empty directory. Some installation assets, such as bootstrap X.509 certificates, have short expiration intervals, therefore you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OpenShift Container Platform version. At the prompts, provide the configuration details for your cloud: Optional: Select an SSH key to use to access your cluster machines. 
Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. Select azure as the platform to target. If the installation program cannot locate the osServicePrincipal.json configuration file from a installation, you are prompted for Azure subscription and authentication values. Enter the following Azure parameter values for your subscription: azure subscription id : Enter the subscription ID to use for the cluster. azure tenant id : Enter the tenant ID. Depending on the Azure identity you are using to deploy the cluster, do one of the following when prompted for the azure service principal client id : If you are using a service principal, enter its application ID. If you are using a system-assigned managed identity, leave this value blank. If you are using a user-assigned managed identity, specify its client ID. Depending on the Azure identity you are using to deploy the cluster, do one of the following when prompted for the azure service principal client secret : If you are using a service principal, enter its password. If you are using a system-assigned managed identity, leave this value blank. If you are using a user-assigned managed identity, leave this value blank. Select the region to deploy the cluster to. Select the base domain to deploy the cluster to. The base domain corresponds to the Azure DNS Zone that you created for your cluster. Enter a descriptive name for your cluster. Important All Azure resources that are available through public endpoints are subject to resource name restrictions, and you cannot create resources that use certain terms. For a list of terms that Azure restricts, see Resolve reserved resource name errors in the Azure documentation. Modify the install-config.yaml file. You can find more information about the available parameters in the "Installation configuration parameters" section. Back up the install-config.yaml file so that you can use it to install multiple clusters. Important The install-config.yaml file is consumed during the installation process. If you want to reuse the file, you must back it up now. If previously not detected, the installation program creates an osServicePrincipal.json configuration file and stores this file in the ~/.azure/ directory on your computer. This ensures that the installation program can load the profile when it is creating an OpenShift Container Platform cluster on the target platform. Additional resources Installation configuration parameters for Azure 7.6.1. Minimum resource requirements for cluster installation Each cluster machine must meet the following minimum requirements: Table 7.4. Minimum resource requirements Machine Operating System vCPU [1] Virtual RAM Storage Input/Output Per Second (IOPS) [2] Bootstrap RHCOS 4 16 GB 100 GB 300 Control plane RHCOS 4 16 GB 100 GB 300 Compute RHCOS, RHEL 8.6 and later [3] 2 8 GB 100 GB 300 One vCPU is equivalent to one physical core when simultaneous multithreading (SMT), or Hyper-Threading, is not enabled. When enabled, use the following formula to calculate the corresponding ratio: (threads per core x cores) x sockets = vCPUs. OpenShift Container Platform and Kubernetes are sensitive to disk performance, and faster storage is recommended, particularly for etcd on the control plane nodes which require a 10 ms p99 fsync duration. 
Note that on many cloud platforms, storage size and IOPS scale together, so you might need to over-allocate storage volume to obtain sufficient performance. As with all user-provisioned installations, if you choose to use RHEL compute machines in your cluster, you take responsibility for all operating system life cycle management and maintenance, including performing system updates, applying patches, and completing all other required tasks. Use of RHEL 7 compute machines is deprecated and has been removed in OpenShift Container Platform 4.10 and later. Note As of OpenShift Container Platform version 4.13, RHCOS is based on RHEL version 9.2, which updates the micro-architecture requirements. The following list contains the minimum instruction set architectures (ISA) that each architecture requires: x86-64 architecture requires x86-64-v2 ISA ARM64 architecture requires ARMv8.0-A ISA IBM Power architecture requires Power 9 ISA s390x architecture requires z14 ISA For more information, see Architectures (RHEL documentation). Important You are required to use Azure virtual machines that have the premiumIO parameter set to true . If an instance type for your platform meets the minimum requirements for cluster machines, it is supported to use in OpenShift Container Platform. Additional resources Optimizing storage 7.6.2. Tested instance types for Azure The following Microsoft Azure instance types have been tested with OpenShift Container Platform. Example 7.1. Machine types based on 64-bit x86 architecture standardBSFamily standardBsv2Family standardDADSv5Family standardDASv4Family standardDASv5Family standardDCACCV5Family standardDCADCCV5Family standardDCADSv5Family standardDCASv5Family standardDCSv3Family standardDCSv2Family standardDDCSv3Family standardDDSv4Family standardDDSv5Family standardDLDSv5Family standardDLSv5Family standardDSFamily standardDSv2Family standardDSv2PromoFamily standardDSv3Family standardDSv4Family standardDSv5Family standardEADSv5Family standardEASv4Family standardEASv5Family standardEBDSv5Family standardEBSv5Family standardECACCV5Family standardECADCCV5Family standardECADSv5Family standardECASv5Family standardEDSv4Family standardEDSv5Family standardEIADSv5Family standardEIASv4Family standardEIASv5Family standardEIBDSv5Family standardEIBSv5Family standardEIDSv5Family standardEISv3Family standardEISv5Family standardESv3Family standardESv4Family standardESv5Family standardFXMDVSFamily standardFSFamily standardFSv2Family standardGSFamily standardHBrsv2Family standardHBSFamily standardHBv4Family standardHCSFamily standardHXFamily standardLASv3Family standardLSFamily standardLSv2Family standardLSv3Family standardMDSMediumMemoryv2Family standardMDSMediumMemoryv3Family standardMIDSMediumMemoryv2Family standardMISMediumMemoryv2Family standardMSFamily standardMSMediumMemoryv2Family standardMSMediumMemoryv3Family StandardNCADSA100v4Family Standard NCASv3_T4 Family standardNCSv3Family standardNDSv2Family standardNPSFamily StandardNVADSA10v5Family standardNVSv3Family standardXEISv4Family 7.6.3. Tested instance types for Azure on 64-bit ARM infrastructures The following Microsoft Azure ARM64 instance types have been tested with OpenShift Container Platform. Example 7.2. Machine types based on 64-bit ARM architecture standardBpsv2Family standardDPSv5Family standardDPDSv5Family standardDPLDSv5Family standardDPLSv5Family standardEPSv5Family standardEPDSv5Family 7.6.4. 
Enabling trusted launch for Azure VMs You can enable two trusted launch features when installing your cluster on Azure: secure boot and virtualized Trusted Platform Modules . See the Azure documentation about virtual machine sizes to learn what sizes of virtual machines support these features. Important Trusted launch is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . Prerequisites You have created an install-config.yaml file. Procedure Use a text editor to edit the install-config.yaml file prior to deploying your cluster and add the following stanza: controlPlane: 1 platform: azure: settings: securityType: TrustedLaunch 2 trustedLaunch: uefiSettings: secureBoot: Enabled 3 virtualizedTrustedPlatformModule: Enabled 4 1 Specify controlPlane.platform.azure or compute.platform.azure to enable trusted launch on only control plane or compute nodes respectively. Specify platform.azure.defaultMachinePlatform to enable trusted launch on all nodes. 2 Enable trusted launch features. 3 Enable secure boot. For more information, see the Azure documentation about secure boot . 4 Enable the virtualized Trusted Platform Module. For more information, see the Azure documentation about virtualized Trusted Platform Modules . 7.6.5. Enabling confidential VMs You can enable confidential VMs when installing your cluster. You can enable confidential VMs for compute nodes, control plane nodes, or all nodes. Important Using confidential VMs is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . You can use confidential VMs with the following VM sizes: DCasv5-series DCadsv5-series ECasv5-series ECadsv5-series Important Confidential VMs are currently not supported on 64-bit ARM architectures. Prerequisites You have created an install-config.yaml file. Procedure Use a text editor to edit the install-config.yaml file prior to deploying your cluster and add the following stanza: controlPlane: 1 platform: azure: settings: securityType: ConfidentialVM 2 confidentialVM: uefiSettings: secureBoot: Enabled 3 virtualizedTrustedPlatformModule: Enabled 4 osDisk: securityProfile: securityEncryptionType: VMGuestStateOnly 5 1 Specify controlPlane.platform.azure or compute.platform.azure to deploy confidential VMs on only control plane or compute nodes respectively. Specify platform.azure.defaultMachinePlatform to deploy confidential VMs on all nodes. 2 Enable confidential VMs. 3 Enable secure boot. For more information, see the Azure documentation about secure boot . 4 Enable the virtualized Trusted Platform Module. 
For more information, see the Azure documentation about virtualized Trusted Platform Modules . 5 Specify VMGuestStateOnly to encrypt the VM guest state. 7.6.6. Sample customized install-config.yaml file for Azure You can customize the install-config.yaml file to specify more details about your OpenShift Container Platform cluster's platform or modify the values of the required parameters. Important This sample YAML file is provided for reference only. You must obtain your install-config.yaml file by using the installation program and modify it. apiVersion: v1 baseDomain: example.com 1 controlPlane: 2 hyperthreading: Enabled 3 4 name: master platform: azure: encryptionAtHost: true ultraSSDCapability: Enabled osDisk: diskSizeGB: 1024 5 diskType: Premium_LRS diskEncryptionSet: resourceGroup: disk_encryption_set_resource_group name: disk_encryption_set_name subscriptionId: secondary_subscription_id osImage: publisher: example_publisher_name offer: example_image_offer sku: example_offer_sku version: example_image_version type: Standard_D8s_v3 replicas: 3 compute: 6 - hyperthreading: Enabled 7 name: worker platform: azure: ultraSSDCapability: Enabled type: Standard_D2s_v3 encryptionAtHost: true osDisk: diskSizeGB: 512 8 diskType: Standard_LRS diskEncryptionSet: resourceGroup: disk_encryption_set_resource_group name: disk_encryption_set_name subscriptionId: secondary_subscription_id osImage: publisher: example_publisher_name offer: example_image_offer sku: example_offer_sku version: example_image_version zones: 9 - "1" - "2" - "3" replicas: 5 metadata: name: test-cluster 10 networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 networkType: OVNKubernetes 11 serviceNetwork: - 172.30.0.0/16 platform: azure: defaultMachinePlatform: osImage: 12 publisher: example_publisher_name offer: example_image_offer sku: example_offer_sku version: example_image_version ultraSSDCapability: Enabled baseDomainResourceGroupName: resource_group 13 region: centralus 14 resourceGroupName: existing_resource_group 15 networkResourceGroupName: vnet_resource_group 16 virtualNetwork: vnet 17 controlPlaneSubnet: control_plane_subnet 18 computeSubnet: compute_subnet 19 outboundType: Loadbalancer cloudName: AzurePublicCloud pullSecret: '{"auths": ...}' 20 fips: false 21 sshKey: ssh-ed25519 AAAA... 22 1 10 14 20 Required. The installation program prompts you for this value. 2 6 If you do not provide these parameters and values, the installation program provides the default value. 3 7 The controlPlane section is a single mapping, but the compute section is a sequence of mappings. To meet the requirements of the different data structures, the first line of the compute section must begin with a hyphen, - , and the first line of the controlPlane section must not. Only one control plane pool is used. 4 Whether to enable or disable simultaneous multithreading, or hyperthreading . By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. You can disable it by setting the parameter value to Disabled . If you disable simultaneous multithreading in some cluster machines, you must disable it in all cluster machines. Important If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. Use larger virtual machine types, such as Standard_D8s_v3 , for your machines if you disable simultaneous multithreading. 5 8 You can specify the size of the disk to use in GB. 
Minimum recommendation for control plane nodes is 1024 GB. 9 Specify a list of zones to deploy your machines to. For high availability, specify at least two zones. 11 The cluster network plugin to install. The default value OVNKubernetes is the only supported value. 12 Optional: A custom Red Hat Enterprise Linux CoreOS (RHCOS) image that should be used to boot control plane and compute machines. The publisher , offer , sku , and version parameters under platform.azure.defaultMachinePlatform.osImage apply to both control plane and compute machines. If the parameters under controlPlane.platform.azure.osImage or compute.platform.azure.osImage are set, they override the platform.azure.defaultMachinePlatform.osImage parameters. 13 Specify the name of the resource group that contains the DNS zone for your base domain. 15 Specify the name of an already existing resource group to install your cluster to. If undefined, a new resource group is created for the cluster. 16 If you use an existing VNet, specify the name of the resource group that contains it. 17 If you use an existing VNet, specify its name. 18 If you use an existing VNet, specify the name of the subnet to host the control plane machines. 19 If you use an existing VNet, specify the name of the subnet to host the compute machines. 21 Whether to enable or disable FIPS mode. By default, FIPS mode is not enabled. If FIPS mode is enabled, the Red Hat Enterprise Linux CoreOS (RHCOS) machines that OpenShift Container Platform runs on bypass the default Kubernetes cryptography suite and use the cryptography modules that are provided with RHCOS instead. Important To enable FIPS mode for your cluster, you must run the installation program from a Red Hat Enterprise Linux (RHEL) computer configured to operate in FIPS mode. For more information about configuring FIPS mode on RHEL, see Switching RHEL to FIPS mode . When running Red Hat Enterprise Linux (RHEL) or Red Hat Enterprise Linux CoreOS (RHCOS) booted in FIPS mode, OpenShift Container Platform core components use the RHEL cryptographic libraries that have been submitted to NIST for FIPS 140-2/140-3 Validation on only the x86_64, ppc64le, and s390x architectures. 22 You can optionally provide the sshKey value that you use to access the machines in your cluster. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. 7.6.7. Configuring the cluster-wide proxy during installation Production environments can deny direct access to the internet and instead have an HTTP or HTTPS proxy available. You can configure a new OpenShift Container Platform cluster to use a proxy by configuring the proxy settings in the install-config.yaml file. Prerequisites You have an existing install-config.yaml file. You reviewed the sites that your cluster requires access to and determined whether any of them need to bypass the proxy. By default, all cluster egress traffic is proxied, including calls to hosting cloud provider APIs. You added sites to the Proxy object's spec.noProxy field to bypass the proxy if necessary. Note The Proxy object status.noProxy field is populated with the values of the networking.machineNetwork[].cidr , networking.clusterNetwork[].cidr , and networking.serviceNetwork[] fields from your installation configuration. 
For installations on Amazon Web Services (AWS), Google Cloud Platform (GCP), Microsoft Azure, and Red Hat OpenStack Platform (RHOSP), the Proxy object status.noProxy field is also populated with the instance metadata endpoint ( 169.254.169.254 ). Procedure Edit your install-config.yaml file and add the proxy settings. For example: apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5 1 A proxy URL to use for creating HTTP connections outside the cluster. The URL scheme must be http . 2 A proxy URL to use for creating HTTPS connections outside the cluster. 3 A comma-separated list of destination domain names, IP addresses, or other network CIDRs to exclude from proxying. Preface a domain with . to match subdomains only. For example, .y.com matches x.y.com , but not y.com . Use * to bypass the proxy for all destinations. 4 If provided, the installation program generates a config map that is named user-ca-bundle in the openshift-config namespace that contains one or more additional CA certificates that are required for proxying HTTPS connections. The Cluster Network Operator then creates a trusted-ca-bundle config map that merges these contents with the Red Hat Enterprise Linux CoreOS (RHCOS) trust bundle, and this config map is referenced in the trustedCA field of the Proxy object. The additionalTrustBundle field is required unless the proxy's identity certificate is signed by an authority from the RHCOS trust bundle. 5 Optional: The policy to determine the configuration of the Proxy object to reference the user-ca-bundle config map in the trustedCA field. The allowed values are Proxyonly and Always . Use Proxyonly to reference the user-ca-bundle config map only when http/https proxy is configured. Use Always to always reference the user-ca-bundle config map. The default value is Proxyonly . Note The installation program does not support the proxy readinessEndpoints field. Note If the installer times out, restart and then complete the deployment by using the wait-for command of the installer. For example: USD ./openshift-install wait-for install-complete --log-level debug Save the file and reference it when installing OpenShift Container Platform. The installation program creates a cluster-wide proxy that is named cluster that uses the proxy settings in the provided install-config.yaml file. If no proxy settings are provided, a cluster Proxy object is still created, but it will have a nil spec . Note Only the Proxy object named cluster is supported, and no additional proxies can be created. Additional resources For more details about Accelerated Networking, see Accelerated Networking for Microsoft Azure VMs . 7.7. Installing the OpenShift CLI You can install the OpenShift CLI ( oc ) to interact with OpenShift Container Platform from a command-line interface. You can install oc on Linux, Windows, or macOS. Important If you installed an earlier version of oc , you cannot use it to complete all of the commands in OpenShift Container Platform 4.16. Download and install the new version of oc . Installing the OpenShift CLI on Linux You can install the OpenShift CLI ( oc ) binary on Linux by using the following procedure. 
Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the architecture from the Product Variant drop-down list. Select the appropriate version from the Version drop-down list. Click Download Now to the OpenShift v4.16 Linux Clients entry and save the file. Unpack the archive: USD tar xvf <file> Place the oc binary in a directory that is on your PATH . To check your PATH , execute the following command: USD echo USDPATH Verification After you install the OpenShift CLI, it is available using the oc command: USD oc <command> Installing the OpenShift CLI on Windows You can install the OpenShift CLI ( oc ) binary on Windows by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version from the Version drop-down list. Click Download Now to the OpenShift v4.16 Windows Client entry and save the file. Unzip the archive with a ZIP program. Move the oc binary to a directory that is on your PATH . To check your PATH , open the command prompt and execute the following command: C:\> path Verification After you install the OpenShift CLI, it is available using the oc command: C:\> oc <command> Installing the OpenShift CLI on macOS You can install the OpenShift CLI ( oc ) binary on macOS by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version from the Version drop-down list. Click Download Now to the OpenShift v4.16 macOS Clients entry and save the file. Note For macOS arm64, choose the OpenShift v4.16 macOS arm64 Client entry. Unpack and unzip the archive. Move the oc binary to a directory on your PATH. To check your PATH , open a terminal and execute the following command: USD echo USDPATH Verification Verify your installation by using an oc command: USD oc <command> 7.8. Alternatives to storing administrator-level secrets in the kube-system project By default, administrator secrets are stored in the kube-system project. If you configured the credentialsMode parameter in the install-config.yaml file to Manual , you must use one of the following alternatives: To manage long-term cloud credentials manually, follow the procedure in Manually creating long-term credentials . To implement short-term credentials that are managed outside the cluster for individual components, follow the procedures in Configuring an Azure cluster to use short-term credentials . 7.8.1. Manually creating long-term credentials The Cloud Credential Operator (CCO) can be put into manual mode prior to installation in environments where the cloud identity and access management (IAM) APIs are not reachable, or the administrator prefers not to store an administrator-level credential secret in the cluster kube-system namespace. Procedure If you did not set the credentialsMode parameter in the install-config.yaml configuration file to Manual , modify the value as shown: Sample configuration file snippet apiVersion: v1 baseDomain: example.com credentialsMode: Manual # ... If you have not previously created installation manifest files, do so by running the following command: USD openshift-install create manifests --dir <installation_directory> where <installation_directory> is the directory in which the installation program creates files. 
Set a USDRELEASE_IMAGE variable with the release image from your installation file by running the following command: USD RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}') Extract the list of CredentialsRequest custom resources (CRs) from the OpenShift Container Platform release image by running the following command: USD oc adm release extract \ --from=USDRELEASE_IMAGE \ --credentials-requests \ --included \ 1 --install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \ 2 --to=<path_to_directory_for_credentials_requests> 3 1 The --included parameter includes only the manifests that your specific cluster configuration requires. 2 Specify the location of the install-config.yaml file. 3 Specify the path to the directory where you want to store the CredentialsRequest objects. If the specified directory does not exist, this command creates it. This command creates a YAML file for each CredentialsRequest object. Sample CredentialsRequest object apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: name: <component_credentials_request> namespace: openshift-cloud-credential-operator ... spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: AzureProviderSpec roleBindings: - role: Contributor ... Create YAML files for secrets in the openshift-install manifests directory that you generated previously. The secrets must be stored using the namespace and secret name defined in the spec.secretRef for each CredentialsRequest object. Sample CredentialsRequest object with secrets apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: name: <component_credentials_request> namespace: openshift-cloud-credential-operator ... spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: AzureProviderSpec roleBindings: - role: Contributor ... secretRef: name: <component_secret> namespace: <component_namespace> ... Sample Secret object apiVersion: v1 kind: Secret metadata: name: <component_secret> namespace: <component_namespace> data: azure_subscription_id: <base64_encoded_azure_subscription_id> azure_client_id: <base64_encoded_azure_client_id> azure_client_secret: <base64_encoded_azure_client_secret> azure_tenant_id: <base64_encoded_azure_tenant_id> azure_resource_prefix: <base64_encoded_azure_resource_prefix> azure_resourcegroup: <base64_encoded_azure_resourcegroup> azure_region: <base64_encoded_azure_region> Important Before upgrading a cluster that uses manually maintained credentials, you must ensure that the CCO is in an upgradeable state. 7.8.2. Configuring an Azure cluster to use short-term credentials To install a cluster that uses Microsoft Entra Workload ID, you must configure the Cloud Credential Operator utility and create the required Azure resources for your cluster. 7.8.2.1. Configuring the Cloud Credential Operator utility To create and manage cloud credentials from outside of the cluster when the Cloud Credential Operator (CCO) is operating in manual mode, extract and prepare the CCO utility ( ccoctl ) binary. Note The ccoctl utility is a Linux binary that must run in a Linux environment. Prerequisites You have access to an OpenShift Container Platform account with cluster administrator access. You have installed the OpenShift CLI ( oc ). You have created a global Microsoft Azure account for the ccoctl utility to use with the following permissions: Example 7.3. 
Required Azure permissions Microsoft.Resources/subscriptions/resourceGroups/read Microsoft.Resources/subscriptions/resourceGroups/write Microsoft.Resources/subscriptions/resourceGroups/delete Microsoft.Authorization/roleAssignments/read Microsoft.Authorization/roleAssignments/delete Microsoft.Authorization/roleAssignments/write Microsoft.Authorization/roleDefinitions/read Microsoft.Authorization/roleDefinitions/write Microsoft.Authorization/roleDefinitions/delete Microsoft.Storage/storageAccounts/listkeys/action Microsoft.Storage/storageAccounts/delete Microsoft.Storage/storageAccounts/read Microsoft.Storage/storageAccounts/write Microsoft.Storage/storageAccounts/blobServices/containers/write Microsoft.Storage/storageAccounts/blobServices/containers/delete Microsoft.Storage/storageAccounts/blobServices/containers/read Microsoft.ManagedIdentity/userAssignedIdentities/delete Microsoft.ManagedIdentity/userAssignedIdentities/read Microsoft.ManagedIdentity/userAssignedIdentities/write Microsoft.ManagedIdentity/userAssignedIdentities/federatedIdentityCredentials/read Microsoft.ManagedIdentity/userAssignedIdentities/federatedIdentityCredentials/write Microsoft.ManagedIdentity/userAssignedIdentities/federatedIdentityCredentials/delete Microsoft.Storage/register/action Microsoft.ManagedIdentity/register/action Procedure Set a variable for the OpenShift Container Platform release image by running the following command: USD RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}') Obtain the CCO container image from the OpenShift Container Platform release image by running the following command: USD CCO_IMAGE=USD(oc adm release info --image-for='cloud-credential-operator' USDRELEASE_IMAGE -a ~/.pull-secret) Note Ensure that the architecture of the USDRELEASE_IMAGE matches the architecture of the environment in which you will use the ccoctl tool. Extract the ccoctl binary from the CCO container image within the OpenShift Container Platform release image by running the following command: USD oc image extract USDCCO_IMAGE \ --file="/usr/bin/ccoctl.<rhel_version>" \ 1 -a ~/.pull-secret 1 For <rhel_version> , specify the value that corresponds to the version of Red Hat Enterprise Linux (RHEL) that the host uses. If no value is specified, ccoctl.rhel8 is used by default. The following values are valid: rhel8 : Specify this value for hosts that use RHEL 8. rhel9 : Specify this value for hosts that use RHEL 9. Change the permissions to make ccoctl executable by running the following command: USD chmod 775 ccoctl.<rhel_version> Verification To verify that ccoctl is ready to use, display the help file. Use a relative file name when you run the command, for example: USD ./ccoctl.rhel9 Example output OpenShift credentials provisioning tool Usage: ccoctl [command] Available Commands: aws Manage credentials objects for AWS cloud azure Manage credentials objects for Azure gcp Manage credentials objects for Google cloud help Help about any command ibmcloud Manage credentials objects for IBM Cloud nutanix Manage credentials objects for Nutanix Flags: -h, --help help for ccoctl Use "ccoctl [command] --help" for more information about a command. 7.8.2.2. Creating Azure resources with the Cloud Credential Operator utility You can use the ccoctl azure create-all command to automate the creation of Azure resources. Note By default, ccoctl creates objects in the directory in which the commands are run. To create the objects in a different directory, use the --output-dir flag. 
This procedure uses <path_to_ccoctl_output_dir> to refer to this directory. Prerequisites You must have: Extracted and prepared the ccoctl binary. Access to your Microsoft Azure account by using the Azure CLI. Procedure Set a USDRELEASE_IMAGE variable with the release image from your installation file by running the following command: USD RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}') Extract the list of CredentialsRequest objects from the OpenShift Container Platform release image by running the following command: USD oc adm release extract \ --from=USDRELEASE_IMAGE \ --credentials-requests \ --included \ 1 --install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \ 2 --to=<path_to_directory_for_credentials_requests> 3 1 The --included parameter includes only the manifests that your specific cluster configuration requires. 2 Specify the location of the install-config.yaml file. 3 Specify the path to the directory where you want to store the CredentialsRequest objects. If the specified directory does not exist, this command creates it. Note This command might take a few moments to run. To enable the ccoctl utility to detect your Azure credentials automatically, log in to the Azure CLI by running the following command: USD az login Use the ccoctl tool to process all CredentialsRequest objects by running the following command: USD ccoctl azure create-all \ --name=<azure_infra_name> \ 1 --output-dir=<ccoctl_output_dir> \ 2 --region=<azure_region> \ 3 --subscription-id=<azure_subscription_id> \ 4 --credentials-requests-dir=<path_to_credentials_requests_directory> \ 5 --dnszone-resource-group-name=<azure_dns_zone_resource_group_name> \ 6 --tenant-id=<azure_tenant_id> 7 1 Specify the user-defined name for all created Azure resources used for tracking. 2 Optional: Specify the directory in which you want the ccoctl utility to create objects. By default, the utility creates objects in the directory in which the commands are run. 3 Specify the Azure region in which cloud resources will be created. 4 Specify the Azure subscription ID to use. 5 Specify the directory containing the files for the component CredentialsRequest objects. 6 Specify the name of the resource group containing the cluster's base domain Azure DNS zone. 7 Specify the Azure tenant ID to use. Note If your cluster uses Technology Preview features that are enabled by the TechPreviewNoUpgrade feature set, you must include the --enable-tech-preview parameter. To see additional optional parameters and explanations of how to use them, run the azure create-all --help command. 
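For illustration only, a fully filled-in invocation might look like the following; every value here is a hypothetical placeholder rather than a value taken from this procedure: ccoctl azure create-all --name=mycluster-wi --output-dir=./ccoctl-output --region=centralus --subscription-id=00000000-0000-0000-0000-000000000000 --credentials-requests-dir=./credrequests --dnszone-resource-group-name=example-dns-rg --tenant-id=11111111-1111-1111-1111-111111111111 All of the flags shown are the ones documented above; only the values are invented for the sake of the example.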
Verification To verify that the OpenShift Container Platform secrets are created, list the files in the <path_to_ccoctl_output_dir>/manifests directory: USD ls <path_to_ccoctl_output_dir>/manifests Example output azure-ad-pod-identity-webhook-config.yaml cluster-authentication-02-config.yaml openshift-cloud-controller-manager-azure-cloud-credentials-credentials.yaml openshift-cloud-network-config-controller-cloud-credentials-credentials.yaml openshift-cluster-api-capz-manager-bootstrap-credentials-credentials.yaml openshift-cluster-csi-drivers-azure-disk-credentials-credentials.yaml openshift-cluster-csi-drivers-azure-file-credentials-credentials.yaml openshift-image-registry-installer-cloud-credentials-credentials.yaml openshift-ingress-operator-cloud-credentials-credentials.yaml openshift-machine-api-azure-cloud-credentials-credentials.yaml You can verify that the Microsoft Entra ID service accounts are created by querying Azure. For more information, refer to Azure documentation on listing Entra ID service accounts. 7.8.2.3. Incorporating the Cloud Credential Operator utility manifests To implement short-term security credentials managed outside the cluster for individual components, you must move the manifest files that the Cloud Credential Operator utility ( ccoctl ) created to the correct directories for the installation program. Prerequisites You have configured an account with the cloud platform that hosts your cluster. You have configured the Cloud Credential Operator utility ( ccoctl ). You have created the cloud provider resources that are required for your cluster with the ccoctl utility. Procedure If you did not set the credentialsMode parameter in the install-config.yaml configuration file to Manual , modify the value as shown: Sample configuration file snippet apiVersion: v1 baseDomain: example.com credentialsMode: Manual # ... If you used the ccoctl utility to create a new Azure resource group instead of using an existing resource group, modify the resourceGroupName parameter in the install-config.yaml as shown: Sample configuration file snippet apiVersion: v1 baseDomain: example.com # ... platform: azure: resourceGroupName: <azure_infra_name> 1 # ... 1 This value must match the user-defined name for Azure resources that was specified with the --name argument of the ccoctl azure create-all command. If you have not previously created installation manifest files, do so by running the following command: USD openshift-install create manifests --dir <installation_directory> where <installation_directory> is the directory in which the installation program creates files. Copy the manifests that the ccoctl utility generated to the manifests directory that the installation program created by running the following command: USD cp /<path_to_ccoctl_output_dir>/manifests/* ./manifests/ Copy the tls directory that contains the private key to the installation directory: USD cp -a /<path_to_ccoctl_output_dir>/tls . 7.9. Deploying the cluster You can install OpenShift Container Platform on a compatible cloud platform. Important You can run the create cluster command of the installation program only once, during initial installation. Prerequisites You have configured an account with the cloud platform that hosts your cluster. You have the OpenShift Container Platform installation program and the pull secret for your cluster. You have an Azure subscription ID and tenant ID. 
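Note If you want to confirm the subscription ID and tenant ID values before you start the deployment (an optional pre-check, not part of the documented prerequisites), an authenticated Azure CLI session can print both: run az account show and read the id field for the subscription ID and the tenantId field for the tenant ID.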
Procedure Change to the directory that contains the installation program and initialize the cluster deployment: USD ./openshift-install create cluster --dir <installation_directory> \ 1 --log-level=info 2 1 For <installation_directory> , specify the location of your customized ./install-config.yaml file. 2 To view different installation details, specify warn , debug , or error instead of info . Verification When the cluster deployment completes successfully: The terminal displays directions for accessing your cluster, including a link to the web console and credentials for the kubeadmin user. Credential information also outputs to <installation_directory>/.openshift_install.log . Important Do not delete the installation program or the files that the installation program creates. Both are required to delete the cluster. Example output ... INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: "kubeadmin", and password: "password" INFO Time elapsed: 36m22s Important The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information. It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation. Additional resources See Accessing the web console for more details about accessing and understanding the OpenShift Container Platform web console. 7.10. Telemetry access for OpenShift Container Platform In OpenShift Container Platform 4.16, the Telemetry service, which runs by default to provide metrics about cluster health and the success of updates, requires internet access. If your cluster is connected to the internet, Telemetry runs automatically, and your cluster is registered to OpenShift Cluster Manager . After you confirm that your OpenShift Cluster Manager inventory is correct, either maintained automatically by Telemetry or manually by using OpenShift Cluster Manager, use subscription watch to track your OpenShift Container Platform subscriptions at the account or multi-cluster level. Additional resources See About remote health monitoring for more information about the Telemetry service 7.11. steps Customize your cluster . If necessary, you can opt out of remote health reporting . | [
"ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1",
"cat <path>/<file_name>.pub",
"cat ~/.ssh/id_ed25519.pub",
"eval \"USD(ssh-agent -s)\"",
"Agent pid 31874",
"ssh-add <path>/<file_name> 1",
"Identity added: /home/<you>/<path>/<file_name> (<computer_name>)",
"tar -xvf openshift-install-linux.tar.gz",
"./openshift-install create install-config --dir <installation_directory> 1",
"controlPlane: 1 platform: azure: settings: securityType: TrustedLaunch 2 trustedLaunch: uefiSettings: secureBoot: Enabled 3 virtualizedTrustedPlatformModule: Enabled 4",
"controlPlane: 1 platform: azure: settings: securityType: ConfidentialVM 2 confidentialVM: uefiSettings: secureBoot: Enabled 3 virtualizedTrustedPlatformModule: Enabled 4 osDisk: securityProfile: securityEncryptionType: VMGuestStateOnly 5",
"apiVersion: v1 baseDomain: example.com 1 controlPlane: 2 hyperthreading: Enabled 3 4 name: master platform: azure: encryptionAtHost: true ultraSSDCapability: Enabled osDisk: diskSizeGB: 1024 5 diskType: Premium_LRS diskEncryptionSet: resourceGroup: disk_encryption_set_resource_group name: disk_encryption_set_name subscriptionId: secondary_subscription_id osImage: publisher: example_publisher_name offer: example_image_offer sku: example_offer_sku version: example_image_version type: Standard_D8s_v3 replicas: 3 compute: 6 - hyperthreading: Enabled 7 name: worker platform: azure: ultraSSDCapability: Enabled type: Standard_D2s_v3 encryptionAtHost: true osDisk: diskSizeGB: 512 8 diskType: Standard_LRS diskEncryptionSet: resourceGroup: disk_encryption_set_resource_group name: disk_encryption_set_name subscriptionId: secondary_subscription_id osImage: publisher: example_publisher_name offer: example_image_offer sku: example_offer_sku version: example_image_version zones: 9 - \"1\" - \"2\" - \"3\" replicas: 5 metadata: name: test-cluster 10 networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 networkType: OVNKubernetes 11 serviceNetwork: - 172.30.0.0/16 platform: azure: defaultMachinePlatform: osImage: 12 publisher: example_publisher_name offer: example_image_offer sku: example_offer_sku version: example_image_version ultraSSDCapability: Enabled baseDomainResourceGroupName: resource_group 13 region: centralus 14 resourceGroupName: existing_resource_group 15 networkResourceGroupName: vnet_resource_group 16 virtualNetwork: vnet 17 controlPlaneSubnet: control_plane_subnet 18 computeSubnet: compute_subnet 19 outboundType: Loadbalancer cloudName: AzurePublicCloud pullSecret: '{\"auths\": ...}' 20 fips: false 21 sshKey: ssh-ed25519 AAAA... 22",
"apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5",
"./openshift-install wait-for install-complete --log-level debug",
"tar xvf <file>",
"echo USDPATH",
"oc <command>",
"C:\\> path",
"C:\\> oc <command>",
"echo USDPATH",
"oc <command>",
"apiVersion: v1 baseDomain: example.com credentialsMode: Manual",
"openshift-install create manifests --dir <installation_directory>",
"RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}')",
"oc adm release extract --from=USDRELEASE_IMAGE --credentials-requests --included \\ 1 --install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \\ 2 --to=<path_to_directory_for_credentials_requests> 3",
"apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: name: <component_credentials_request> namespace: openshift-cloud-credential-operator spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: AzureProviderSpec roleBindings: - role: Contributor",
"apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: name: <component_credentials_request> namespace: openshift-cloud-credential-operator spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: AzureProviderSpec roleBindings: - role: Contributor secretRef: name: <component_secret> namespace: <component_namespace>",
"apiVersion: v1 kind: Secret metadata: name: <component_secret> namespace: <component_namespace> data: azure_subscription_id: <base64_encoded_azure_subscription_id> azure_client_id: <base64_encoded_azure_client_id> azure_client_secret: <base64_encoded_azure_client_secret> azure_tenant_id: <base64_encoded_azure_tenant_id> azure_resource_prefix: <base64_encoded_azure_resource_prefix> azure_resourcegroup: <base64_encoded_azure_resourcegroup> azure_region: <base64_encoded_azure_region>",
"RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}')",
"CCO_IMAGE=USD(oc adm release info --image-for='cloud-credential-operator' USDRELEASE_IMAGE -a ~/.pull-secret)",
"oc image extract USDCCO_IMAGE --file=\"/usr/bin/ccoctl.<rhel_version>\" \\ 1 -a ~/.pull-secret",
"chmod 775 ccoctl.<rhel_version>",
"./ccoctl.rhel9",
"OpenShift credentials provisioning tool Usage: ccoctl [command] Available Commands: aws Manage credentials objects for AWS cloud azure Manage credentials objects for Azure gcp Manage credentials objects for Google cloud help Help about any command ibmcloud Manage credentials objects for IBM Cloud nutanix Manage credentials objects for Nutanix Flags: -h, --help help for ccoctl Use \"ccoctl [command] --help\" for more information about a command.",
"RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}')",
"oc adm release extract --from=USDRELEASE_IMAGE --credentials-requests --included \\ 1 --install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \\ 2 --to=<path_to_directory_for_credentials_requests> 3",
"az login",
"ccoctl azure create-all --name=<azure_infra_name> \\ 1 --output-dir=<ccoctl_output_dir> \\ 2 --region=<azure_region> \\ 3 --subscription-id=<azure_subscription_id> \\ 4 --credentials-requests-dir=<path_to_credentials_requests_directory> \\ 5 --dnszone-resource-group-name=<azure_dns_zone_resource_group_name> \\ 6 --tenant-id=<azure_tenant_id> 7",
"ls <path_to_ccoctl_output_dir>/manifests",
"azure-ad-pod-identity-webhook-config.yaml cluster-authentication-02-config.yaml openshift-cloud-controller-manager-azure-cloud-credentials-credentials.yaml openshift-cloud-network-config-controller-cloud-credentials-credentials.yaml openshift-cluster-api-capz-manager-bootstrap-credentials-credentials.yaml openshift-cluster-csi-drivers-azure-disk-credentials-credentials.yaml openshift-cluster-csi-drivers-azure-file-credentials-credentials.yaml openshift-image-registry-installer-cloud-credentials-credentials.yaml openshift-ingress-operator-cloud-credentials-credentials.yaml openshift-machine-api-azure-cloud-credentials-credentials.yaml",
"apiVersion: v1 baseDomain: example.com credentialsMode: Manual",
"apiVersion: v1 baseDomain: example.com platform: azure: resourceGroupName: <azure_infra_name> 1",
"openshift-install create manifests --dir <installation_directory>",
"cp /<path_to_ccoctl_output_dir>/manifests/* ./manifests/",
"cp -a /<path_to_ccoctl_output_dir>/tls .",
"./openshift-install create cluster --dir <installation_directory> \\ 1 --log-level=info 2",
"INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: \"kubeadmin\", and password: \"password\" INFO Time elapsed: 36m22s"
] | https://docs.redhat.com/en/documentation/openshift_container_platform_installation/4.16/html/installing_on_azure/installing-azure-vnet |
13.2.6. Configuring Services: PAM | 13.2.6. Configuring Services: PAM Warning A mistake in the PAM configuration file can lock users out of the system completely. Always back up the configuration files before performing any changes, and keep a session open so that any changes can be reverted. SSSD provides a PAM module, sssd_pam , which instructs the system to use SSSD to retrieve user information. The PAM configuration must include a reference to the SSSD module, and then the SSSD configuration sets how SSSD interacts with PAM. Procedure 13.3. Configuring PAM Use authconfig to enable SSSD for system authentication. This automatically updates the PAM configuration to reference all of the SSSD modules: These modules can be set to include statements, as necessary. Open the sssd.conf file. Make sure that PAM is listed as one of the services that works with SSSD. In the [pam] section, change any of the PAM parameters. These are listed in Table 13.3, "SSSD [pam] Configuration Parameters" . Restart SSSD. Table 13.3. SSSD [pam] Configuration Parameters Parameter Value Format Description offline_credentials_expiration integer Sets how long, in days, to allow cached logins if the authentication provider is offline. This value is measured from the last successful online login. If not specified, this defaults to zero ( 0 ), which is unlimited. offline_failed_login_attempts integer Sets how many failed login attempts are allowed if the authentication provider is offline. If not specified, this defaults to zero ( 0 ), which is unlimited. offline_failed_login_delay integer Sets how long to prevent login attempts if a user hits the failed login attempt limit. If set to zero ( 0 ), the user cannot authenticate while the provider is offline once he hits the failed attempt limit. Only a successful online authentication can re-enable offline authentication. If not specified, this defaults to five ( 5 ). | [
"authconfig --update --enablesssd --enablesssdauth",
"#%PAM-1.0 This file is auto-generated. User changes will be destroyed the next time authconfig is run. auth required pam_env.so auth sufficient pam_unix.so nullok try_first_pass auth requisite pam_succeed_if.so uid >= 500 quiet auth sufficient pam_sss.so use_first_pass auth required pam_deny.so account required pam_unix.so account sufficient pam_localuser.so account sufficient pam_succeed_if.so uid < 500 quiet account [default=bad success=ok user_unknown=ignore] pam_sss.so account required pam_permit.so password requisite pam_cracklib.so try_first_pass retry=3 password sufficient pam_unix.so sha512 shadow nullok try_first_pass use_authtok password sufficient pam_sss.so use_authtok password required pam_deny.so session optional pam_keyinit.so revoke session required pam_limits.so session [success=1 default=ignore] pam_succeed_if.so service in crond quiet use_uid session sufficient pam_sss.so session required pam_unix.so",
"vim /etc/sssd/sssd.conf",
"[sssd] config_file_version = 2 reconnection_retries = 3 sbus_timeout = 30 services = nss, pam",
"[pam] reconnection_retries = 3 offline_credentials_expiration = 2 offline_failed_login_attempts = 3 offline_failed_login_delay = 5",
"~]# service sssd restart"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/deployment_guide/configuration_options-pam_configuration_options |
7.6. authconfig | 7.6. authconfig 7.6.1. RHBA-2013:0486 - authconfig bug fix update Updated authconfig packages that fix two bugs are now available for Red Hat Enterprise Linux 6. The authconfig packages provide a command line utility and a GUI application that can configure a workstation to be a client for certain network user information and authentication schemes, and other user information and authentication related options. Bug Fixes BZ# 862195 Prior to this update, the authconfig utility used old syntax for configuring the idmap mapping in the smb.conf file when started with the "--smbidmapuid" and "--smbidmapgid" command line options. Consequently, Samba 3.6 ignored the configuration. This update adapts authconfig to use the new syntax of the idmap range configuration so that Samba 3.6 can read it. BZ# 874527 Prior to this update, the authconfig utility could write an incomplete sssd.conf file when using the options "--enablesssd" or "--enablesssdauth". As a consequence, the sssd daemon did not start. With this update, authconfig no longer tries to create the sssd.conf file without complete information, and the sssd daemon can now start as expected. All users of authconfig are advised to upgrade to these updated packages, which fix these bugs. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.4_technical_notes/authconfig |
Chapter 3. Bug fixes | Chapter 3. Bug fixes 3.1. Extension 'ms-python.python' CANNOT use API proposal: terminalShellIntegration Before this release, installing the latest Python extension (v2024.14.0) would fail with the following error message: "Extension 'ms-python.python' CANNOT use API proposal: terminalShellIntegration". With this release, the issue has been fixed. Additional resources CRW-7201 3.2. Opening links not possible in the Visual Studio Code - Open Source ("Code - OSS") Before this release, it was not possible to open links in Visual Studio Code - Open Source ("Code - OSS"). With this release, the issue has been fixed. Additional resources CRW-7247 | null | https://docs.redhat.com/en/documentation/red_hat_openshift_dev_spaces/3.16/html/3.16.1_release_notes_and_known_issues/bug-fixes |
1.3. Fencing Configuration | 1.3. Fencing Configuration You must configure a fencing device for each node in the cluster. For information about the fence configuration commands and options, see the Red Hat Enterprise Linux 7 High Availability Add-On Reference . For general information on fencing and its importance in a Red Hat High Availability cluster, see Fencing in a Red Hat High Availability Cluster . Note When configuring a fencing device, attention should be given to whether that device shares power with any nodes or devices in the cluster. If a node and its fence device do share power, then the cluster may be at risk of being unable to fence that node if the power to it and its fence device should be lost. Such a cluster should either have redundant power supplies for fence devices and nodes, or redundant fence devices that do not share power. Alternative methods of fencing such as SBD or storage fencing may also bring redundancy in the event of isolated power losses. This example uses the APC power switch with a host name of zapc.example.com to fence the nodes, and it uses the fence_apc_snmp fencing agent. Because both nodes will be fenced by the same fencing agent, you can configure both fencing devices as a single resource, using the pcmk_host_map and pcmk_host_list options. You create a fencing device by configuring the device as a stonith resource with the pcs stonith create command. The following command configures a stonith resource named myapc that uses the fence_apc_snmp fencing agent for nodes z1.example.com and z2.example.com . The pcmk_host_map option maps z1.example.com to port 1, and z2.example.com to port 2. The login value and password for the APC device are both apc . By default, this device will use a monitor interval of sixty seconds for each node. Note that you can use an IP address when specifying the host name for the nodes. Note When you create a fence_apc_snmp stonith device, you may see the following warning message, which you can safely ignore: The following command displays the parameters of an existing STONITH device. After configuring your fence device, you should test the device. For information on testing a fence device, see Fencing: Configuring Stonith in the High Availability Add-On Reference . Note Do not test your fence device by disabling the network interface, as this will not properly test fencing. Note Once fencing is configured and a cluster has been started, a network restart will trigger fencing for the node which restarts the network even when the timeout is not exceeded. For this reason, do not restart the network service while the cluster service is running because it will trigger unintentional fencing on the node. | [
"pcs stonith create myapc fence_apc_snmp ipaddr=\"zapc.example.com\" pcmk_host_map=\"z1.example.com:1;z2.example.com:2\" pcmk_host_check=\"static-list\" pcmk_host_list=\"z1.example.com,z2.example.com\" login=\"apc\" passwd=\"apc\"",
"Warning: missing required option(s): 'port, action' for resource type: stonith:fence_apc_snmp",
"pcs stonith show myapc Resource: myapc (class=stonith type=fence_apc_snmp) Attributes: ipaddr=zapc.example.com pcmk_host_map=z1.example.com:1;z2.example.com:2 pcmk_host_check=static-list pcmk_host_list=z1.example.com,z2.example.com login=apc passwd=apc Operations: monitor interval=60s (myapc-monitor-interval-60s)"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/high_availability_add-on_administration/s1-fenceconfig-haaa |
Chapter 5. Important changes to external kernel parameters | Chapter 5. Important changes to external kernel parameters This chapter provides system administrators with a summary of significant changes in the kernel distributed with Red Hat Enterprise Linux 9.5. These changes could include, for example, added or updated proc entries, sysctl , and sysfs default values, boot parameters, kernel configuration options, or any noticeable behavior changes. New kernel parameters numa_cma=<node>:nn[MG][,<node>:nn[MG]] [KNL,CMA] Sets the size of kernel numa memory area for contiguous memory allocations. It will reserve CMA area for the specified node. With numa CMA enabled, DMA users on node nid will first try to allocate buffer from the numa area which is located in node nid, if the allocation fails, they will fallback to the global default memory area. reg_file_data_sampling= [x86] Controls mitigation for Register File Data Sampling (RFDS) vulnerability. RFDS is a CPU vulnerability which might allow userspace to infer kernel data values previously stored in floating point registers, vector registers, or integer registers. RFDS only affects Intel Atom processors. Values: on :: Turns ON the mitigation off :: Turns OFF the mitigation This parameter overrides the compile time default set by CONFIG_MITIGATION_RFDS. Mitigation cannot be disabled when other VERW based mitigations (such as MDS) are enabled. To disable RFDS mitigation all VERW based mitigations need to be disabled. For details see: Documentation/admin-guide/hw-vuln/reg-file-data-sampling.rst locktorture.acq_writer_lim= [KNL] Set the time limit in jiffies for a lock acquisition. Acquisitions exceeding this limit will result in a splat once they do complete. locktorture.bind_readers= [KNL] Specify the list of CPUs to which the readers are to be bound. locktorture.bind_writers= [KNL] Specify the list of CPUs to which the writers are to be bound. locktorture.call_rcu_chains= [KNL] Specify the number of self-propagating call_rcu() chains to set up. These are used to ensure that there is a high probability of an RCU grace period in progress at any given time. Defaults to 0, which disables these call_rcu() chains. locktorture.long_hold= [KNL] Specify the duration in milliseconds for the occasional long-duration lock hold time. Defaults to 100 milliseconds. Select 0 to disable. locktorture.nested_locks= [KNL] Specify the maximum lock nesting depth that locktorture is to exercise, up to a limit of 8 (MAX_NESTED_LOCKS). Specify zero to disable. Note that this parameter is ineffective on types of locks that do not support nested acquisition. workqueue.default_affinity_scope= Select the default affinity scope to use for unbound work queues. Can be one of "cpu", "smt", "cache", "numa" and "system". Default is "cache". For more information, see the Affinity Scopes section in Documentation/core-api/workqueue.rst . This can be changed after boot by writing to the matching /sys/module/workqueue/parameters file. All work queues with the "default" affinity scope will be updated accordingly. locktorture.rt_boost= [KNL] Do periodic testing of real-time lock priority boosting. Select 0 to disable, 1 to boost only rt_mutex, and 2 to boost unconditionally. Defaults to 2 , which might seem to be an odd choice, but which should be harmless for non-real-time spinlocks, due to their disabling of preemption. Note that non-realtime mutexes disable boosting. locktorture.writer_fifo= [KNL] Run the write-side locktorture kthreads at sched_set_fifo() real-time priority. 
locktorture.rt_boost_factor= [KNL] Number that determines how often and for how long priority boosting is exercised. This is scaled down by the number of writers, so that the number of boosts per unit time remains roughly constant as the number of writers increases. However, the duration of each boost increases with the number of writers. microcode.force_minrev= [X86] Format: <bool> Enable or disable the microcode minimal revision enforcement for the runtime microcode loader. module.async_probe=<bool> [KNL] When set to true, modules will use async probing by default. To enable or disable async probing for a specific module, use the module specific control that is documented under <module>.async_probe . When both module.async_probe and <module>.async_probe are specified, <module>.async_probe takes precedence for the specific module. module.enable_dups_trace [KNL] When CONFIG_MODULE_DEBUG_AUTOLOAD_DUPS is set, this means that duplicate request_module() calls will trigger a WARN_ON() instead of a pr_warn() . Note that if MODULE_DEBUG_AUTOLOAD_DUPS_TRACE is set, WARN_ON() will always be issued and this option does nothing. nfs.delay_retrans= [NFS] Specifies the number of times the NFSv4 client retries the request before returning an EAGAIN error, after a reply of NFS4ERR_DELAY from the server. Only applies if the softerr mount option is enabled, and the specified value is >= 0 . rcutree.do_rcu_barrier= [KNL] Request a call to rcu_barrier() . This is throttled so that userspace tests can safely hammer on the sysfs variable if they so choose. If triggered before the RCU grace-period machinery is fully active, this will error out with EAGAIN. rcuscale.minruntime= [KNL] Set the minimum test run time in seconds. This does not affect the data-collection interval, but instead allows better measurement of things such as CPU consumption. rcuscale.writer_holdoff_jiffies= [KNL] Additional write-side holdoff between grace periods, but in jiffies. The default of zero says no holdoff. rcupdate.rcu_cpu_stall_notifiers= [KNL] Provide RCU CPU stall notifiers, but see the warnings in the RCU_CPU_STALL_NOTIFIER Kconfig option's help text. TL;DR: You almost certainly do not want rcupdate.rcu_cpu_stall_notifiers. rcupdate.rcu_task_lazy_lim= [KNL] Number of callbacks on a given CPU that will cancel laziness on that CPU. Use -1 to disable cancellation of laziness, but be advised that doing so increases the danger of OOM due to callback flooding. rcupdate.rcu_tasks_lazy_ms= ++ [KNL] Set timeout in milliseconds RCU Tasks asynchronous callback batching for call_rcu_tasks() . A negative value will take the default. A value of zero will disable batching. Batching is always disabled for synchronize_rcu_tasks() . rcupdate.rcu_tasks_rude_lazy_ms= [KNL] Set timeout in milliseconds RCU Tasks Rude asynchronous callback batching for call_rcu_tasks_rude() . A negative value will take the default. A value of zero will disable batching. Batching is always disabled for synchronize_rcu_tasks_rude() . rcupdate.rcu_tasks_trace_lazy_ms= [KNL] Set timeout in milliseconds RCU Tasks Trace asynchronous callback batching for call_rcu_tasks_trace(). A negative value will take the default. A value of zero will disable batching. Batching is always disabled for synchronize_rcu_tasks_trace(). spectre_bhi= [X86] Control mitigation of Branch History Injection (BHI) vulnerability. This setting affects the deployment of the HW BHI control and the SW BHB clearing sequence. Values: on (default) Enable the HW or SW mitigation as needed. 
off Disable the mitigation. unwind_debug [X86-64] Enable unwinder debug output. This can be useful for debugging certain unwinder error conditions, including corrupt stacks and bad or missing unwinder metadata. workqueue.cpu_intensive_thresh_us= Per-cpu work items which run for longer than this threshold are automatically considered CPU intensive and excluded from concurrency management to prevent them from noticeably delaying other per-cpu work items. Default is 10000 (10ms). If CONFIG_WQ_CPU_INTENSIVE_REPORT is set, the kernel will report the work functions which violate this threshold repeatedly. They are likely good candidates for using WQ_UNBOUND work queues instead. workqueue.cpu_intensive_warning_thresh=<uint> If CONFIG_WQ_CPU_INTENSIVE_REPORT is set, the kernel will report the work functions which violate the intensive_threshold_us repeatedly. To prevent spurious warnings, start printing only after a work function has violated this threshold number of times. The default is 4 times. 0 disables the warning. workqueue.default_affinity_scope= Select the default affinity scope to use for unbound work queues. Can be one of "cpu", "smt", "cache", "numa" and "system". Default is "cache". For more information, see the Affinity Scopes section in Documentation/core-api/workqueue.rst. This can be changed after boot by writing to the matching /sys/module/workqueue/parameters file. All work queues with the "default" affinity scope will be updated accordingly. xen_msr_safe= [X86,XEN] Format: <bool> Select whether to always use non-faulting (safe) MSR access functions when running as Xen PV guest. The default value is controlled by CONFIG_XEN_PV_MSR_SAFE . Updated kernel parameters clearcpuid= X[,X...] [X86] Disable CPUID feature X for the kernel. See numbers X. Note the Linux-specific bits are not necessarily stable over kernel options, but the vendor-specific ones should be. X can also be a string as appearing in the flags: line in /proc/cpuinfo which does not have the above instability issue. However, not all features have names in /proc/cpuinfo . Note that using this option will taint your kernel. Also note that user programs calling CPUID directly or using the feature without checking anything will still see it. This just prevents it from being used by the kernel or shown in /proc/cpuinfo . Also note the kernel might malfunction if you disable some critical bits. cma_pernuma=nn[MG] [KNL,CMA] Sets the size of kernel per-numa memory area for contiguous memory allocations. A value of 0 disables per-numa CMA altogether. And If this option is not specified, the default value is 0 . With per-numa CMA enabled, DMA users on node nid will first try to allocate buffer from the pernuma area which is located in node nid, if the allocation fails, they will fallback to the global default memory area. csdlock_debug= [KNL] Enable debug add-ons of cross-CPU function call handling. When switched on, additional debug data is printed to the console in case a hanging CPU is detected, and that CPU is pinged again to try to resolve the hang situation. The default value of this option depends on the CSD_LOCK_WAIT_DEBUG_DEFAULT Kconfig option. <module>.async_probe[=<bool>] [KNL] If no <bool> value is specified or if the value specified is not a valid <bool> , enable asynchronous probe on this module. Otherwise, enable or disable asynchronous probe on this module as indicated by the <bool> value. See also: module.async_probe earlycon= [KNL] Output early console device and options. 
When used with no options, the early console is determined by stdout-path property in device tree's chosen node or the ACPI SPCR table if supported by the platform. cdns,<addr>[,options] Start an early, polled-mode console on a Cadence (xuartps) serial port at the specified address. Only supported option is baud rate. If baud rate is not specified, the serial port must already be setup and configured. uart[8250],io,<addr>[,options[,uartclk]] uart[8250],mmio,<addr>[,options[,uartclk]] uart[8250],mmio32,<addr>[,options[,uartclk]] uart[8250],mmio32be,<addr>[,options[,uartclk]] uart[8250],0x<addr>[,options] Start an early, polled-mode console on the 8250/16550 UART at the specified I/O port or MMIO address. MMIO inter-register address stride is either 8-bit (mmio) or 32-bit (mmio32 or mmio32be). If none of [io|mmio|mmio32|mmio32be], <addr> is assumed to be equivalent to 'mmio'. 'options' are specified in the same format described for "console=ttyS<n>"; if unspecified, the h/w is not initialized. 'uartclk' is the uart clock frequency; if unspecified, it is set to 'BASE_BAUD' * 16. earlyprintk= [X86,SH,ARM,M68k,S390] earlyprintk=vga earlyprintk=sclp earlyprintk=xen earlyprintk=serial[,ttySn[,baudrate]] earlyprintk=serial[,0x... [,baudrate]] earlyprintk=ttySn[,baudrate] earlyprintk=dbgp[debugController#] earlyprintk=pciserial[,force],bus:device.function[,baudrate] earlyprintk=xdbc[xhciController#] earlyprintk is useful when the kernel crashes before the normal console is initialized. It is not enabled by default because it has some cosmetic problems. Append ",keep" to not disable it when the real console takes over. Only one of vga, efi, serial, or USB debug port can be used at a time. Currently only ttyS0 and ttyS1 might be specified by name. Other I/O ports might be explicitly specified on some architectures (x86 and arm at least) by replacing ttySn with an I/O port address, such as: earlyprintk=serial,0x1008,115200 You can find the port for a given device in /proc/tty/driver/serial: 2: uart:ST16650V2 port:00001008 irq:18. Interaction with the standard serial driver is not very good. The VGA and EFI output is eventually overwritten by the real console. The xen option can only be used in Xen domains. The sclp output can only be used on s390. The optional "force" to "pciserial" enables use of a PCI device even when its classcode is not of the UART class. iommu.strict= [ARM64, X86, S390] Configure TLB invalidation behaviour. Format: { "0" | "1" } 0 - Lazy mode Request that DMA unmap operations use deferred invalidation of hardware TLBs, for increased throughput at the cost of reduced device isolation. Will fall back to strict mode if not supported by the relevant IOMMU driver. 1 - Strict mode DMA unmap operations invalidate IOMMU hardware TLBs synchronously. unset Use value of CONFIG_IOMMU_DEFAULT_DMA_{LAZY,STRICT}. Note On x86, strict mode specified via one of the legacy driver-specific options takes precedence. mem_encrypt= [x86_64] AMD Secure Memory Encryption (SME) control. Valid arguments: on, off Default: off mem_encrypt=on: Activate SME mem_encrypt=off: Do not activate SME mitigations= [X86,PPC,S390,ARM64] Control optional mitigations for CPU vulnerabilities. This is a set of curated, arch-independent options, each of which is an aggregation of existing arch-specific options. Value: off Disable all optional CPU mitigations. This improves system performance, but it might also expose users to several CPU vulnerabilities. 
Equivalent to: if nokaslr then kpti=0 [ARM64] gather_data_sampling=off [X86] kvm.nx_huge_pages=off [X86] l1tf=off [X86] mds=off [X86] mmio_stale_data=off [X86] no_entry_flush [PPC] no_uaccess_flush [PPC] nobp=0 [S390] nopti [X86,PPC] nospectre_bhb [ARM64] nospectre_v1 [X86,PPC] nospectre_v2 [X86,PPC,S390,ARM64] reg_file_data_sampling=off [X86] retbleed=off [X86] spec_rstack_overflow=off [X86] spec_store_bypass_disable=off [X86,PPC] spectre_bhi=off [X86] spectre_v2_user=off [X86] srbds=off [X86,INTEL] ssbd=force-off [ARM64] tsx_async_abort=off [X86] nosmap [PPC] Disable SMAP (Supervisor Mode Access Prevention) even if it is supported by processor. nosmep [PPC64s] Disable SMEP (Supervisor Mode Execution Prevention) even if it is supported by processor. nox2apic [x86_64,APIC] Do not enable x2APIC mode. Note This parameter will be ignored on systems with the LEGACY_XAPIC_DISABLED bit set in the IA32_XAPIC_DISABLE_STATUS MSR . panic_print= Bitmask for printing system info when panic happens. User can chose combination of the following bits: bit 0: print all tasks info bit 1: print system memory info bit 2: print timer info bit 3: print locks info if CONFIG_LOCKDEP is on bit 4: print ftrace buffer bit 5: print all printk messages in buffer bit 6: print all CPUs backtrace (if available in the arch) Important This option might print a lot of lines, so there are risks of losing older messages in the log. Use this option carefully, you might consider setting up a bigger log buffer with "log_buf_len" along with this. pcie_aspm= [PCIE] Forcibly enable or ignore PCIe Active State Power Management. Value: off Do not touch ASPM configuration at all. Leave any configuration done by firmware unchanged. force Enable ASPM even on devices that claim not to support it. Warning Forcing ASPM on might cause system lockups. s390_iommu= [HW,S390] Set s390 IOTLB flushing mode. Value: strict with strict flushing every unmap operation will result in an IOTLB flush. Default is lazy flushing before reuse, which is faster. Deprecated equivalent to iommu.strict=1. spectre_v2= [x86] Control mitigation of Spectre variant 2 (indirect branch speculation) vulnerability. The default operation protects the kernel from user space attacks. Value: on unconditionally enable, implies spectre_v2_user=on. off unconditionally disable, implies spectre_v2_user=off. auto kernel detects whether your CPU model is vulnerable. Selecting 'on' will, and 'auto' might, choose a mitigation method at run time according to the CPU, the available microcode, the setting of the CONFIG_MITIGATION_RETPOLINE configuration option, and the compiler with which the kernel was built. usbcore.quirks= [USB] A list of quirk entries to augment the built-in USB core quirk list. List entries are separated by commas. Each entry has the form VendorID:ProductID:Flags . The IDs are 4-digit hex numbers and Flags is a set of letters. Each letter will change the built-in quirk; setting it if it is clear and clearing it if it is set. 
The letters have the following meanings: a = USB_QUIRK_STRING_FETCH_255 (string descriptors must not be fetched by using a 255-byte read); b = USB_QUIRK_RESET_RESUME (device cannot resume correctly so reset it instead); c = USB_QUIRK_NO_SET_INTF (device cannot handle Set-Interface requests); d = USB_QUIRK_CONFIG_INTF_STRINGS (device cannot handle its Configuration or Interface strings); e = USB_QUIRK_RESET (device cannot be reset (e.g morph devices), do not use reset); f = USB_QUIRK_HONOR_BNUMINTERFACES (device has more interface descriptions than the bNumInterfaces count, and cannot handle talking to these interfaces); g = USB_QUIRK_DELAY_INIT (device needs a pause during initialization, after we read the device descriptor); h = USB_QUIRK_LINEAR_UFRAME_INTR_BINTERVAL (For high speed and super speed interrupt endpoints, the USB 2.0 and USB 3.0 spec require the interval in microframes (1 microframe = 125 microseconds) to be calculated as interval = 2 ^ (bInterval-1). Devices with this quirk report their bInterval as the result of this calculation instead of the exponent variable used in the calculation); i = USB_QUIRK_DEVICE_QUALIFIER (device cannot handle device_qualifier descriptor requests); j = USB_QUIRK_IGNORE_REMOTE_WAKEUP (device generates spurious wakeup, ignore remote wakeup capability); k = USB_QUIRK_NO_LPM (device cannot handle Link Power Management); l = USB_QUIRK_LINEAR_FRAME_INTR_BINTERVAL (Device reports its bInterval as linear frames instead of the USB 2.0 calculation); m = USB_QUIRK_DISCONNECT_SUSPEND (Device needs to be disconnected before suspend to prevent spurious wakeup); n = USB_QUIRK_DELAY_CTRL_MSG (Device needs a pause after every control message); o = USB_QUIRK_HUB_SLOW_RESET (Hub needs extra delay after resetting its port); p = USB_QUIRK_SHORT_SET_ADDRESS_REQ_TIMEOUT (Reduce timeout of the SET_ADDRESS request from 5000 ms to 500 ms); Example: quirks=0781:5580:bk,0a5c:5834:gij Removed kernel parameters [BUGS=X86] noclflush :: Do not use the CLFLUSH instruction. Workqueue.disable_numa [X86] noexec [BUGS=X86-32] nosep :: Disables x86 SYSENTER/SYSEXIT support. [X86] nordrand :: Disable kernel use of the RDRAND. thermal.nocrt New sysctl parameters oops_limit Number of kernel oopses after which the kernel should panic when panic_on_oops is not set. Setting this to 0 disables checking the count. Setting this to 1 has the same effect as setting panic_on_oops=1 . The default value is 10000 . warn_limit Number of kernel warnings after which the kernel should panic when panic_on_warn is not set. Setting this to 0 disables checking the warning count. Setting this to 1 has the same effect as setting panic_on_warn=1 . The default value is 0 . kexec_load_limit_panic This parameter specifies a limit to the number of times the syscalls kexec_load and kexec_file_load can be called with a crash image. It can only be set with a more restrictive value than the current one. Value: -1 Unlimited calls to kexec. This is the default setting. N Number of calls left. kexec_load_limit_reboot Similar functionality as kexec_load_limit_panic , but for a normal image. numa_balancing_promote_rate_limit_MBps Too high promotion or demotion throughput between different memory types might hurt application latency. You can use this parameter to rate-limit the promotion throughput. The per-node maximum promotion throughput in MB/s is limited to be no more than the set value. Set this parameter to less than 1/10 of the PMEM node write bandwidth. 
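Illustrative sketch, not part of the original release notes: assuming these new parameters are exposed under the standard kernel.* sysctl namespace, they can be tuned at runtime with sysctl and made persistent with a drop-in file, for example:
# Panic after a single kernel oops or warning, equivalent to panic_on_oops=1 / panic_on_warn=1
sysctl -w kernel.oops_limit=1
sysctl -w kernel.warn_limit=1
# Allow at most two further crash-image loads through kexec_load or kexec_file_load
sysctl -w kernel.kexec_load_limit_panic=2
# Arbitrary example cap on per-node promotion throughput; size it below 1/10 of the PMEM node write bandwidth
sysctl -w kernel.numa_balancing_promote_rate_limit_MBps=30000
To keep these settings across reboots, place the same key=value pairs in a file such as /etc/sysctl.d/99-limits.conf and apply them with sysctl --system; the file name here is only an example.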
Updated sysctl parameters kexec_load_disabled A toggle indicating if the syscalls kexec_load and kexec_file_load have been disabled. This value defaults to 0 (false: kexec_*load enabled), but can be set to 1 (true: kexec_*load disabled). Once true, kexec can no longer be used, and the toggle cannot be set back to false . This allows a kexec image to be loaded before disabling the syscall, allowing a system to set up (and later use) an image without it being altered. Generally used together with the modules_disabled sysctl. panic_print Bitmask for printing system info when panic happens. Users can choose a combination of the following bits: bit 0 print all tasks info bit 1 print system memory info bit 2 print timer info bit 3 print locks info if CONFIG_LOCKDEP is on bit 4 print ftrace buffer bit 5 print all printk messages in buffer bit 6 print all CPUs backtrace (if available in the arch) sched_energy_aware Enables or disables Energy Aware Scheduling (EAS). EAS starts automatically on platforms where it can run (that is, platforms with asymmetric CPU topologies and having an Energy Model available). If your platform happens to meet the requirements for EAS but you do not want to use it, change this value to 0 . On Non-EAS platforms, write operation fails and read doesn't return anything. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/9.5_release_notes/kernel_parameters_changes |
Chapter 3. Node Feature Discovery Operator | Chapter 3. Node Feature Discovery Operator Learn about the Node Feature Discovery (NFD) Operator and how you can use it to expose node-level information by orchestrating Node Feature Discovery, a Kubernetes add-on for detecting hardware features and system configuration. The Node Feature Discovery Operator (NFD) manages the detection of hardware features and configuration in an OpenShift Container Platform cluster by labeling the nodes with hardware-specific information. NFD labels the host with node-specific attributes, such as PCI cards, kernel, operating system version, and so on. The NFD Operator can be found on the Operator Hub by searching for "Node Feature Discovery". 3.1. Installing the Node Feature Discovery Operator The Node Feature Discovery (NFD) Operator orchestrates all resources needed to run the NFD daemon set. As a cluster administrator, you can install the NFD Operator by using the OpenShift Container Platform CLI or the web console. 3.1.1. Installing the NFD Operator using the CLI As a cluster administrator, you can install the NFD Operator using the CLI. Prerequisites An OpenShift Container Platform cluster Install the OpenShift CLI ( oc ). Log in as a user with cluster-admin privileges. Procedure Create a namespace for the NFD Operator. Create the following Namespace custom resource (CR) that defines the openshift-nfd namespace, and then save the YAML in the nfd-namespace.yaml file. Set cluster-monitoring to "true" . apiVersion: v1 kind: Namespace metadata: name: openshift-nfd labels: name: openshift-nfd openshift.io/cluster-monitoring: "true" Create the namespace by running the following command: USD oc create -f nfd-namespace.yaml Install the NFD Operator in the namespace you created in the step by creating the following objects: Create the following OperatorGroup CR and save the YAML in the nfd-operatorgroup.yaml file: apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: generateName: openshift-nfd- name: openshift-nfd namespace: openshift-nfd spec: targetNamespaces: - openshift-nfd Create the OperatorGroup CR by running the following command: USD oc create -f nfd-operatorgroup.yaml Create the following Subscription CR and save the YAML in the nfd-sub.yaml file: Example Subscription apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: nfd namespace: openshift-nfd spec: channel: "stable" installPlanApproval: Automatic name: nfd source: redhat-operators sourceNamespace: openshift-marketplace Create the subscription object by running the following command: USD oc create -f nfd-sub.yaml Change to the openshift-nfd project: USD oc project openshift-nfd Verification To verify that the Operator deployment is successful, run: USD oc get pods Example output NAME READY STATUS RESTARTS AGE nfd-controller-manager-7f86ccfb58-vgr4x 2/2 Running 0 10m A successful deployment shows a Running status. 3.1.2. Installing the NFD Operator using the web console As a cluster administrator, you can install the NFD Operator using the web console. Procedure In the OpenShift Container Platform web console, click Operators OperatorHub . Choose Node Feature Discovery from the list of available Operators, and then click Install . On the Install Operator page, select A specific namespace on the cluster , and then click Install . You do not need to create a namespace because it is created for you. Verification To verify that the NFD Operator installed successfully: Navigate to the Operators Installed Operators page. 
Ensure that Node Feature Discovery is listed in the openshift-nfd project with a Status of InstallSucceeded . Note During installation an Operator might display a Failed status. If the installation later succeeds with an InstallSucceeded message, you can ignore the Failed message. Troubleshooting If the Operator does not appear as installed, troubleshoot further: Navigate to the Operators Installed Operators page and inspect the Operator Subscriptions and Install Plans tabs for any failure or errors under Status . Navigate to the Workloads Pods page and check the logs for pods in the openshift-nfd project. 3.2. Using the Node Feature Discovery Operator The Node Feature Discovery (NFD) Operator orchestrates all resources needed to run the Node-Feature-Discovery daemon set by watching for a NodeFeatureDiscovery custom resource (CR). Based on the NodeFeatureDiscovery CR, the Operator creates the operand (NFD) components in the selected namespace. You can edit the CR to use another namespace, image, image pull policy, and nfd-worker-conf config map, among other options. As a cluster administrator, you can create a NodeFeatureDiscovery CR by using the OpenShift CLI ( oc ) or the web console. 3.2.1. Creating a NodeFeatureDiscovery CR by using the CLI As a cluster administrator, you can create a NodeFeatureDiscovery CR instance by using the OpenShift CLI ( oc ). Prerequisites You have access to an OpenShift Container Platform cluster You installed the OpenShift CLI ( oc ). You logged in as a user with cluster-admin privileges. You installed the NFD Operator. Procedure Create a NodeFeatureDiscovery CR: Example NodeFeatureDiscovery CR apiVersion: nfd.openshift.io/v1 kind: NodeFeatureDiscovery metadata: name: nfd-instance namespace: openshift-nfd spec: instance: "" # instance is empty by default topologyupdater: false # False by default operand: image: registry.redhat.io/openshift4/ose-node-feature-discovery:v4.12 imagePullPolicy: Always workerConfig: configData: | core: # labelWhiteList: # noPublish: false sleepInterval: 60s # sources: [all] # klog: # addDirHeader: false # alsologtostderr: false # logBacktraceAt: # logtostderr: true # skipHeaders: false # stderrthreshold: 2 # v: 0 # vmodule: ## NOTE: the following options are not dynamically run-time configurable ## and require a nfd-worker restart to take effect after being changed # logDir: # logFile: # logFileMaxSize: 1800 # skipLogHeaders: false sources: cpu: cpuid: # NOTE: whitelist has priority over blacklist attributeBlacklist: - "BMI1" - "BMI2" - "CLMUL" - "CMOV" - "CX16" - "ERMS" - "F16C" - "HTT" - "LZCNT" - "MMX" - "MMXEXT" - "NX" - "POPCNT" - "RDRAND" - "RDSEED" - "RDTSCP" - "SGX" - "SSE" - "SSE2" - "SSE3" - "SSE4.1" - "SSE4.2" - "SSSE3" attributeWhitelist: kernel: kconfigFile: "/path/to/kconfig" configOpts: - "NO_HZ" - "X86" - "DMI" pci: deviceClassWhitelist: - "0200" - "03" - "12" deviceLabelFields: - "class" customConfig: configData: | - name: "more.kernel.features" matchOn: - loadedKMod: ["example_kmod3"] Create the NodeFeatureDiscovery CR by running the following command: USD oc apply -f <filename> Verification Check that the NodeFeatureDiscovery CR was created by running the following command: USD oc get pods Example output NAME READY STATUS RESTARTS AGE nfd-controller-manager-7f86ccfb58-vgr4x 2/2 Running 0 11m nfd-master-hcn64 1/1 Running 0 60s nfd-master-lnnxx 1/1 Running 0 60s nfd-master-mp6hr 1/1 Running 0 60s nfd-worker-vgcz9 1/1 Running 0 60s nfd-worker-xqbws 1/1 Running 0 60s A successful deployment shows a Running status. 
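Optional spot check, not part of the original procedure: a quick way to confirm that the operand is actually labeling nodes is to list the labels that NFD publishes, which are expected to use the feature.node.kubernetes.io/ prefix. The node name below is a placeholder:
USD oc describe node <node_name> | grep feature.node.kubernetes.io
A labeled node shows entries such as feature.node.kubernetes.io/cpu-cpuid.AVX2=true (the exact set depends on the hardware and the worker configuration); an empty result usually means the nfd-worker pods are still starting or the node is excluded.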
3.2.2. Creating a NodeFeatureDiscovery CR by using the CLI in a disconnected environment As a cluster administrator, you can create a NodeFeatureDiscovery CR instance by using the OpenShift CLI ( oc ). Prerequisites You have access to an OpenShift Container Platform cluster You installed the OpenShift CLI ( oc ). You logged in as a user with cluster-admin privileges. You installed the NFD Operator. You have access to a mirror registry with the required images. You installed the skopeo CLI tool. Procedure Determine the digest of the registry image: Run the following command: USD skopeo inspect docker://registry.redhat.io/openshift4/ose-node-feature-discovery:<openshift_version> Example command USD skopeo inspect docker://registry.redhat.io/openshift4/ose-node-feature-discovery:v4.12 Inspect the output to identify the image digest: Example output { ... "Digest": "sha256:1234567890abcdef1234567890abcdef1234567890abcdef1234567890abcdef", ... } Use the skopeo CLI tool to copy the image from registry.redhat.io to your mirror registry, by running the following command: skopeo copy docker://registry.redhat.io/openshift4/ose-node-feature-discovery@<image_digest> docker://<mirror_registry>/openshift4/ose-node-feature-discovery@<image_digest> Example command skopeo copy docker://registry.redhat.io/openshift4/ose-node-feature-discovery@sha256:1234567890abcdef1234567890abcdef1234567890abcdef1234567890abcdef docker://<your-mirror-registry>/openshift4/ose-node-feature-discovery@sha256:1234567890abcdef1234567890abcdef1234567890abcdef1234567890abcdef Create a NodeFeatureDiscovery CR: Example NodeFeatureDiscovery CR apiVersion: nfd.openshift.io/v1 kind: NodeFeatureDiscovery metadata: name: nfd-instance spec: operand: image: <mirror_registry>/openshift4/ose-node-feature-discovery@<image_digest> imagePullPolicy: Always workerConfig: configData: | core: # labelWhiteList: # noPublish: false sleepInterval: 60s # sources: [all] # klog: # addDirHeader: false # alsologtostderr: false # logBacktraceAt: # logtostderr: true # skipHeaders: false # stderrthreshold: 2 # v: 0 # vmodule: ## NOTE: the following options are not dynamically run-time configurable ## and require a nfd-worker restart to take effect after being changed # logDir: # logFile: # logFileMaxSize: 1800 # skipLogHeaders: false sources: cpu: cpuid: # NOTE: whitelist has priority over blacklist attributeBlacklist: - "BMI1" - "BMI2" - "CLMUL" - "CMOV" - "CX16" - "ERMS" - "F16C" - "HTT" - "LZCNT" - "MMX" - "MMXEXT" - "NX" - "POPCNT" - "RDRAND" - "RDSEED" - "RDTSCP" - "SGX" - "SSE" - "SSE2" - "SSE3" - "SSE4.1" - "SSE4.2" - "SSSE3" attributeWhitelist: kernel: kconfigFile: "/path/to/kconfig" configOpts: - "NO_HZ" - "X86" - "DMI" pci: deviceClassWhitelist: - "0200" - "03" - "12" deviceLabelFields: - "class" customConfig: configData: | - name: "more.kernel.features" matchOn: - loadedKMod: ["example_kmod3"] Create the NodeFeatureDiscovery CR by running the following command: USD oc apply -f <filename> Verification Check the status of the NodeFeatureDiscovery CR by running the following command: USD oc get nodefeaturediscovery nfd-instance -o yaml Check that the pods are running without ImagePullBackOff errors by running the following command: USD oc get pods -n <nfd_namespace> 3.2.3. Creating a NodeFeatureDiscovery CR by using the web console As a cluster administrator, you can create a NodeFeatureDiscovery CR by using the OpenShift Container Platform web console. 
Prerequisites You have access to an OpenShift Container Platform cluster You logged in as a user with cluster-admin privileges. You installed the NFD Operator. Procedure Navigate to the Operators Installed Operators page. In the Node Feature Discovery section, under Provided APIs , click Create instance . Edit the values of the NodeFeatureDiscovery CR. Click Create . 3.3. Configuring the Node Feature Discovery Operator 3.3.1. core The core section contains common configuration settings that are not specific to any particular feature source. core.sleepInterval core.sleepInterval specifies the interval between consecutive passes of feature detection or re-detection, and thus also the interval between node re-labeling. A non-positive value implies infinite sleep interval; no re-detection or re-labeling is done. This value is overridden by the deprecated --sleep-interval command line flag, if specified. Example usage core: sleepInterval: 60s 1 The default value is 60s . core.sources core.sources specifies the list of enabled feature sources. A special value all enables all feature sources. This value is overridden by the deprecated --sources command line flag, if specified. Default: [all] Example usage core: sources: - system - custom core.labelWhiteList core.labelWhiteList specifies a regular expression for filtering feature labels based on the label name. Non-matching labels are not published. The regular expression is only matched against the basename part of the label, the part of the name after '/'. The label prefix, or namespace, is omitted. This value is overridden by the deprecated --label-whitelist command line flag, if specified. Default: null Example usage core: labelWhiteList: '^cpu-cpuid' core.noPublish Setting core.noPublish to true disables all communication with the nfd-master . It is effectively a dry run flag; nfd-worker runs feature detection normally, but no labeling requests are sent to nfd-master . This value is overridden by the --no-publish command line flag, if specified. Example: Example usage core: noPublish: true 1 The default value is false . core.klog The following options specify the logger configuration, most of which can be dynamically adjusted at run-time. The logger options can also be specified using command line flags, which take precedence over any corresponding config file options. core.klog.addDirHeader If set to true , core.klog.addDirHeader adds the file directory to the header of the log messages. Default: false Run-time configurable: yes core.klog.alsologtostderr Log to standard error as well as files. Default: false Run-time configurable: yes core.klog.logBacktraceAt When logging hits line file:N, emit a stack trace. Default: empty Run-time configurable: yes core.klog.logDir If non-empty, write log files in this directory. Default: empty Run-time configurable: no core.klog.logFile If not empty, use this log file. Default: empty Run-time configurable: no core.klog.logFileMaxSize core.klog.logFileMaxSize defines the maximum size a log file can grow to. Unit is megabytes. If the value is 0 , the maximum file size is unlimited. Default: 1800 Run-time configurable: no core.klog.logtostderr Log to standard error instead of files Default: true Run-time configurable: yes core.klog.skipHeaders If core.klog.skipHeaders is set to true , avoid header prefixes in the log messages. Default: false Run-time configurable: yes core.klog.skipLogHeaders If core.klog.skipLogHeaders is set to true , avoid headers when opening log files. 
Default: false Run-time configurable: no core.klog.stderrthreshold Logs at or above this threshold go to stderr. Default: 2 Run-time configurable: yes core.klog.v core.klog.v is the number for the log level verbosity. Default: 0 Run-time configurable: yes core.klog.vmodule core.klog.vmodule is a comma-separated list of pattern=N settings for file-filtered logging. Default: empty Run-time configurable: yes 3.3.2. sources The sources section contains feature source specific configuration parameters. sources.cpu.cpuid.attributeBlacklist Prevent publishing cpuid features listed in this option. This value is overridden by sources.cpu.cpuid.attributeWhitelist , if specified. Default: [BMI1, BMI2, CLMUL, CMOV, CX16, ERMS, F16C, HTT, LZCNT, MMX, MMXEXT, NX, POPCNT, RDRAND, RDSEED, RDTSCP, SGX, SGXLC, SSE, SSE2, SSE3, SSE4.1, SSE4.2, SSSE3] Example usage sources: cpu: cpuid: attributeBlacklist: [MMX, MMXEXT] sources.cpu.cpuid.attributeWhitelist Only publish the cpuid features listed in this option. sources.cpu.cpuid.attributeWhitelist takes precedence over sources.cpu.cpuid.attributeBlacklist . Default: empty Example usage sources: cpu: cpuid: attributeWhitelist: [AVX512BW, AVX512CD, AVX512DQ, AVX512F, AVX512VL] sources.kernel.kconfigFile sources.kernel.kconfigFile is the path of the kernel config file. If empty, NFD runs a search in the well-known standard locations. Default: empty Example usage sources: kernel: kconfigFile: "/path/to/kconfig" sources.kernel.configOpts sources.kernel.configOpts represents kernel configuration options to publish as feature labels. Default: [NO_HZ, NO_HZ_IDLE, NO_HZ_FULL, PREEMPT] Example usage sources: kernel: configOpts: [NO_HZ, X86, DMI] sources.pci.deviceClassWhitelist sources.pci.deviceClassWhitelist is a list of PCI device class IDs for which to publish a label. It can be specified as a main class only (for example, 03 ) or full class-subclass combination (for example 0300 ). The former implies that all subclasses are accepted. The format of the labels can be further configured with deviceLabelFields . Default: ["03", "0b40", "12"] Example usage sources: pci: deviceClassWhitelist: ["0200", "03"] sources.pci.deviceLabelFields sources.pci.deviceLabelFields is the set of PCI ID fields to use when constructing the name of the feature label. Valid fields are class , vendor , device , subsystem_vendor and subsystem_device . Default: [class, vendor] Example usage sources: pci: deviceLabelFields: [class, vendor, device] With the example config above, NFD would publish labels such as feature.node.kubernetes.io/pci-<class-id>_<vendor-id>_<device-id>.present=true sources.usb.deviceClassWhitelist sources.usb.deviceClassWhitelist is a list of USB device class IDs for which to publish a feature label. The format of the labels can be further configured with deviceLabelFields . Default: ["0e", "ef", "fe", "ff"] Example usage sources: usb: deviceClassWhitelist: ["ef", "ff"] sources.usb.deviceLabelFields sources.usb.deviceLabelFields is the set of USB ID fields from which to compose the name of the feature label. Valid fields are class , vendor , and device . Default: [class, vendor, device] Example usage sources: pci: deviceLabelFields: [class, vendor] With the example config above, NFD would publish labels like: feature.node.kubernetes.io/usb-<class-id>_<vendor-id>.present=true . sources.custom sources.custom is the list of rules to process in the custom feature source to create user-specific labels. 
Default: empty Example usage source: custom: - name: "my.custom.feature" matchOn: - loadedKMod: ["e1000e"] - pciId: class: ["0200"] vendor: ["8086"] 3.4. About the NodeFeatureRule custom resource NodeFeatureRule objects are a NodeFeatureDiscovery custom resource designed for rule-based custom labeling of nodes. Some use cases include application-specific labeling or distribution by hardware vendors to create specific labels for their devices. NodeFeatureRule objects provide a method to create vendor- or application-specific labels and taints. It uses a flexible rule-based mechanism for creating labels and optionally taints based on node features. 3.5. Using the NodeFeatureRule custom resource Create a NodeFeatureRule object to label nodes if a set of rules match the conditions. Procedure Create a custom resource file named nodefeaturerule.yaml that contains the following text: apiVersion: nfd.openshift.io/v1 kind: NodeFeatureRule metadata: name: example-rule spec: rules: - name: "example rule" labels: "example-custom-feature": "true" # Label is created if all of the rules below match matchFeatures: # Match if "veth" kernel module is loaded - feature: kernel.loadedmodule matchExpressions: veth: {op: Exists} # Match if any PCI device with vendor 8086 exists in the system - feature: pci.device matchExpressions: vendor: {op: In, value: ["8086"]} This custom resource specifies that labelling occurs when the veth module is loaded and any PCI device with vendor code 8086 exists in the cluster. Apply the nodefeaturerule.yaml file to your cluster by running the following command: USD oc apply -f https://raw.githubusercontent.com/kubernetes-sigs/node-feature-discovery/v0.13.6/examples/nodefeaturerule.yaml The example applies the feature label on nodes with the veth module loaded and any PCI device with vendor code 8086 exists. Note A relabeling delay of up to 1 minute might occur. 3.6. Using the NFD Topology Updater The Node Feature Discovery (NFD) Topology Updater is a daemon responsible for examining allocated resources on a worker node. It accounts for resources that are available to be allocated to new pod on a per-zone basis, where a zone can be a Non-Uniform Memory Access (NUMA) node. The NFD Topology Updater communicates the information to nfd-master, which creates a NodeResourceTopology custom resource (CR) corresponding to all of the worker nodes in the cluster. One instance of the NFD Topology Updater runs on each node of the cluster. To enable the Topology Updater workers in NFD, set the topologyupdater variable to true in the NodeFeatureDiscovery CR, as described in the section Using the Node Feature Discovery Operator . 3.6.1. NodeResourceTopology CR When run with NFD Topology Updater, NFD creates custom resource instances corresponding to the node resource hardware topology, such as: apiVersion: topology.node.k8s.io/v1alpha1 kind: NodeResourceTopology metadata: name: node1 topologyPolicies: ["SingleNUMANodeContainerLevel"] zones: - name: node-0 type: Node resources: - name: cpu capacity: 20 allocatable: 16 available: 10 - name: vendor/nic1 capacity: 3 allocatable: 3 available: 3 - name: node-1 type: Node resources: - name: cpu capacity: 30 allocatable: 30 available: 15 - name: vendor/nic2 capacity: 6 allocatable: 6 available: 6 - name: node-2 type: Node resources: - name: cpu capacity: 30 allocatable: 30 available: 15 - name: vendor/nic1 capacity: 3 allocatable: 3 available: 3 3.6.2. 
NFD Topology Updater command line flags To view available command line flags, run the nfd-topology-updater -help command. For example, in a podman container, run the following command: USD podman run gcr.io/k8s-staging-nfd/node-feature-discovery:master nfd-topology-updater -help -ca-file The -ca-file flag is one of the three flags, together with the -cert-file and -key-file flags, that controls the mutual TLS authentication on the NFD Topology Updater. This flag specifies the TLS root certificate that is used for verifying the authenticity of nfd-master. Default: empty Important The -ca-file flag must be specified together with the -cert-file and -key-file flags. Example USD nfd-topology-updater -ca-file=/opt/nfd/ca.crt -cert-file=/opt/nfd/updater.crt -key-file=/opt/nfd/updater.key -cert-file The -cert-file flag is one of the three flags, together with the -ca-file and -key-file flags, that controls mutual TLS authentication on the NFD Topology Updater. This flag specifies the TLS certificate presented for authenticating outgoing requests. Default: empty Important The -cert-file flag must be specified together with the -ca-file and -key-file flags. Example USD nfd-topology-updater -cert-file=/opt/nfd/updater.crt -key-file=/opt/nfd/updater.key -ca-file=/opt/nfd/ca.crt -h, -help Print usage and exit. -key-file The -key-file flag is one of the three flags, together with the -ca-file and -cert-file flags, that controls the mutual TLS authentication on the NFD Topology Updater. This flag specifies the private key corresponding to the given certificate file, or -cert-file, that is used for authenticating outgoing requests. Default: empty Important The -key-file flag must be specified together with the -ca-file and -cert-file flags. Example USD nfd-topology-updater -key-file=/opt/nfd/updater.key -cert-file=/opt/nfd/updater.crt -ca-file=/opt/nfd/ca.crt -kubelet-config-file The -kubelet-config-file specifies the path to the Kubelet's configuration file. Default: /host-var/lib/kubelet/config.yaml Example USD nfd-topology-updater -kubelet-config-file=/var/lib/kubelet/config.yaml -no-publish The -no-publish flag disables all communication with the nfd-master, making it a dry run flag for nfd-topology-updater. NFD Topology Updater runs resource hardware topology detection normally, but no CR requests are sent to nfd-master. Default: false Example USD nfd-topology-updater -no-publish 3.6.2.1. -oneshot The -oneshot flag causes the NFD Topology Updater to exit after one pass of resource hardware topology detection. Default: false Example USD nfd-topology-updater -oneshot -no-publish -podresources-socket The -podresources-socket flag specifies the path to the Unix socket where kubelet exports a gRPC service to enable discovery of in-use CPUs and devices, and to provide metadata for them. Default: /host-var/lib/kubelet/pod-resources/kubelet.sock Example USD nfd-topology-updater -podresources-socket=/var/lib/kubelet/pod-resources/kubelet.sock -server The -server flag specifies the address of the nfd-master endpoint to connect to. Default: localhost:8080 Example USD nfd-topology-updater -server=nfd-master.nfd.svc.cluster.local:443 -server-name-override The -server-name-override flag specifies the common name (CN) to expect from the nfd-master TLS certificate. This flag is mostly intended for development and debugging purposes.
Default: empty Example USD nfd-topology-updater -server-name-override=localhost -sleep-interval The -sleep-interval flag specifies the interval between resource hardware topology re-examination and custom resource updates. A non-positive value implies infinite sleep interval and no re-detection is done. Default: 60s Example USD nfd-topology-updater -sleep-interval=1h -version Print version and exit. -watch-namespace The -watch-namespace flag specifies the namespace to ensure that resource hardware topology examination only happens for the pods running in the specified namespace. Pods that are not running in the specified namespace are not considered during resource accounting. This is particularly useful for testing and debugging purposes. A * value means that all of the pods across all namespaces are considered during the accounting process. Default: * Example USD nfd-topology-updater -watch-namespace=rte | [
"apiVersion: v1 kind: Namespace metadata: name: openshift-nfd labels: name: openshift-nfd openshift.io/cluster-monitoring: \"true\"",
"oc create -f nfd-namespace.yaml",
"apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: generateName: openshift-nfd- name: openshift-nfd namespace: openshift-nfd spec: targetNamespaces: - openshift-nfd",
"oc create -f nfd-operatorgroup.yaml",
"apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: nfd namespace: openshift-nfd spec: channel: \"stable\" installPlanApproval: Automatic name: nfd source: redhat-operators sourceNamespace: openshift-marketplace",
"oc create -f nfd-sub.yaml",
"oc project openshift-nfd",
"oc get pods",
"NAME READY STATUS RESTARTS AGE nfd-controller-manager-7f86ccfb58-vgr4x 2/2 Running 0 10m",
"apiVersion: nfd.openshift.io/v1 kind: NodeFeatureDiscovery metadata: name: nfd-instance namespace: openshift-nfd spec: instance: \"\" # instance is empty by default topologyupdater: false # False by default operand: image: registry.redhat.io/openshift4/ose-node-feature-discovery:v4.12 imagePullPolicy: Always workerConfig: configData: | core: # labelWhiteList: # noPublish: false sleepInterval: 60s # sources: [all] # klog: # addDirHeader: false # alsologtostderr: false # logBacktraceAt: # logtostderr: true # skipHeaders: false # stderrthreshold: 2 # v: 0 # vmodule: ## NOTE: the following options are not dynamically run-time configurable ## and require a nfd-worker restart to take effect after being changed # logDir: # logFile: # logFileMaxSize: 1800 # skipLogHeaders: false sources: cpu: cpuid: # NOTE: whitelist has priority over blacklist attributeBlacklist: - \"BMI1\" - \"BMI2\" - \"CLMUL\" - \"CMOV\" - \"CX16\" - \"ERMS\" - \"F16C\" - \"HTT\" - \"LZCNT\" - \"MMX\" - \"MMXEXT\" - \"NX\" - \"POPCNT\" - \"RDRAND\" - \"RDSEED\" - \"RDTSCP\" - \"SGX\" - \"SSE\" - \"SSE2\" - \"SSE3\" - \"SSE4.1\" - \"SSE4.2\" - \"SSSE3\" attributeWhitelist: kernel: kconfigFile: \"/path/to/kconfig\" configOpts: - \"NO_HZ\" - \"X86\" - \"DMI\" pci: deviceClassWhitelist: - \"0200\" - \"03\" - \"12\" deviceLabelFields: - \"class\" customConfig: configData: | - name: \"more.kernel.features\" matchOn: - loadedKMod: [\"example_kmod3\"]",
"oc apply -f <filename>",
"oc get pods",
"NAME READY STATUS RESTARTS AGE nfd-controller-manager-7f86ccfb58-vgr4x 2/2 Running 0 11m nfd-master-hcn64 1/1 Running 0 60s nfd-master-lnnxx 1/1 Running 0 60s nfd-master-mp6hr 1/1 Running 0 60s nfd-worker-vgcz9 1/1 Running 0 60s nfd-worker-xqbws 1/1 Running 0 60s",
"skopeo inspect docker://registry.redhat.io/openshift4/ose-node-feature-discovery:<openshift_version>",
"skopeo inspect docker://registry.redhat.io/openshift4/ose-node-feature-discovery:v4.12",
"{ \"Digest\": \"sha256:1234567890abcdef1234567890abcdef1234567890abcdef1234567890abcdef\", }",
"skopeo copy docker://registry.redhat.io/openshift4/ose-node-feature-discovery@<image_digest> docker://<mirror_registry>/openshift4/ose-node-feature-discovery@<image_digest>",
"skopeo copy docker://registry.redhat.io/openshift4/ose-node-feature-discovery@sha256:1234567890abcdef1234567890abcdef1234567890abcdef1234567890abcdef docker://<your-mirror-registry>/openshift4/ose-node-feature-discovery@sha256:1234567890abcdef1234567890abcdef1234567890abcdef1234567890abcdef",
"apiVersion: nfd.openshift.io/v1 kind: NodeFeatureDiscovery metadata: name: nfd-instance spec: operand: image: <mirror_registry>/openshift4/ose-node-feature-discovery@<image_digest> imagePullPolicy: Always workerConfig: configData: | core: # labelWhiteList: # noPublish: false sleepInterval: 60s # sources: [all] # klog: # addDirHeader: false # alsologtostderr: false # logBacktraceAt: # logtostderr: true # skipHeaders: false # stderrthreshold: 2 # v: 0 # vmodule: ## NOTE: the following options are not dynamically run-time configurable ## and require a nfd-worker restart to take effect after being changed # logDir: # logFile: # logFileMaxSize: 1800 # skipLogHeaders: false sources: cpu: cpuid: # NOTE: whitelist has priority over blacklist attributeBlacklist: - \"BMI1\" - \"BMI2\" - \"CLMUL\" - \"CMOV\" - \"CX16\" - \"ERMS\" - \"F16C\" - \"HTT\" - \"LZCNT\" - \"MMX\" - \"MMXEXT\" - \"NX\" - \"POPCNT\" - \"RDRAND\" - \"RDSEED\" - \"RDTSCP\" - \"SGX\" - \"SSE\" - \"SSE2\" - \"SSE3\" - \"SSE4.1\" - \"SSE4.2\" - \"SSSE3\" attributeWhitelist: kernel: kconfigFile: \"/path/to/kconfig\" configOpts: - \"NO_HZ\" - \"X86\" - \"DMI\" pci: deviceClassWhitelist: - \"0200\" - \"03\" - \"12\" deviceLabelFields: - \"class\" customConfig: configData: | - name: \"more.kernel.features\" matchOn: - loadedKMod: [\"example_kmod3\"]",
"oc apply -f <filename>",
"oc get nodefeaturediscovery nfd-instance -o yaml",
"oc get pods -n <nfd_namespace>",
"core: sleepInterval: 60s 1",
"core: sources: - system - custom",
"core: labelWhiteList: '^cpu-cpuid'",
"core: noPublish: true 1",
"sources: cpu: cpuid: attributeBlacklist: [MMX, MMXEXT]",
"sources: cpu: cpuid: attributeWhitelist: [AVX512BW, AVX512CD, AVX512DQ, AVX512F, AVX512VL]",
"sources: kernel: kconfigFile: \"/path/to/kconfig\"",
"sources: kernel: configOpts: [NO_HZ, X86, DMI]",
"sources: pci: deviceClassWhitelist: [\"0200\", \"03\"]",
"sources: pci: deviceLabelFields: [class, vendor, device]",
"sources: usb: deviceClassWhitelist: [\"ef\", \"ff\"]",
"sources: pci: deviceLabelFields: [class, vendor]",
"source: custom: - name: \"my.custom.feature\" matchOn: - loadedKMod: [\"e1000e\"] - pciId: class: [\"0200\"] vendor: [\"8086\"]",
"apiVersion: nfd.openshift.io/v1 kind: NodeFeatureRule metadata: name: example-rule spec: rules: - name: \"example rule\" labels: \"example-custom-feature\": \"true\" # Label is created if all of the rules below match matchFeatures: # Match if \"veth\" kernel module is loaded - feature: kernel.loadedmodule matchExpressions: veth: {op: Exists} # Match if any PCI device with vendor 8086 exists in the system - feature: pci.device matchExpressions: vendor: {op: In, value: [\"8086\"]}",
"oc apply -f https://raw.githubusercontent.com/kubernetes-sigs/node-feature-discovery/v0.13.6/examples/nodefeaturerule.yaml",
"apiVersion: topology.node.k8s.io/v1alpha1 kind: NodeResourceTopology metadata: name: node1 topologyPolicies: [\"SingleNUMANodeContainerLevel\"] zones: - name: node-0 type: Node resources: - name: cpu capacity: 20 allocatable: 16 available: 10 - name: vendor/nic1 capacity: 3 allocatable: 3 available: 3 - name: node-1 type: Node resources: - name: cpu capacity: 30 allocatable: 30 available: 15 - name: vendor/nic2 capacity: 6 allocatable: 6 available: 6 - name: node-2 type: Node resources: - name: cpu capacity: 30 allocatable: 30 available: 15 - name: vendor/nic1 capacity: 3 allocatable: 3 available: 3",
"podman run gcr.io/k8s-staging-nfd/node-feature-discovery:master nfd-topology-updater -help",
"nfd-topology-updater -ca-file=/opt/nfd/ca.crt -cert-file=/opt/nfd/updater.crt -key-file=/opt/nfd/updater.key",
"nfd-topology-updater -cert-file=/opt/nfd/updater.crt -key-file=/opt/nfd/updater.key -ca-file=/opt/nfd/ca.crt",
"nfd-topology-updater -key-file=/opt/nfd/updater.key -cert-file=/opt/nfd/updater.crt -ca-file=/opt/nfd/ca.crt",
"nfd-topology-updater -kubelet-config-file=/var/lib/kubelet/config.yaml",
"nfd-topology-updater -no-publish",
"nfd-topology-updater -oneshot -no-publish",
"nfd-topology-updater -podresources-socket=/var/lib/kubelet/pod-resources/kubelet.sock",
"nfd-topology-updater -server=nfd-master.nfd.svc.cluster.local:443",
"nfd-topology-updater -server-name-override=localhost",
"nfd-topology-updater -sleep-interval=1h",
"nfd-topology-updater -watch-namespace=rte"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.12/html/specialized_hardware_and_driver_enablement/psap-node-feature-discovery-operator |
Chapter 1. Validating an installation | Chapter 1. Validating an installation You can check the status of an OpenShift Container Platform cluster after an installation by following the procedures in this document. 1.1. Reviewing the installation log You can review a summary of an installation in the OpenShift Container Platform installation log. If an installation succeeds, the information required to access the cluster is included in the log. Prerequisites You have access to the installation host. Procedure Review the .openshift_install.log log file in the installation directory on your installation host: USD cat <install_dir>/.openshift_install.log Example output Cluster credentials are included at the end of the log if the installation is successful, as outlined in the following example: ... time="2020-12-03T09:50:47Z" level=info msg="Install complete!" time="2020-12-03T09:50:47Z" level=info msg="To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig'" time="2020-12-03T09:50:47Z" level=info msg="Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com" time="2020-12-03T09:50:47Z" level=info msg="Login to the console with user: \"kubeadmin\", and password: \"password\"" time="2020-12-03T09:50:47Z" level=debug msg="Time elapsed per stage:" time="2020-12-03T09:50:47Z" level=debug msg=" Infrastructure: 6m45s" time="2020-12-03T09:50:47Z" level=debug msg="Bootstrap Complete: 11m30s" time="2020-12-03T09:50:47Z" level=debug msg=" Bootstrap Destroy: 1m5s" time="2020-12-03T09:50:47Z" level=debug msg=" Cluster Operators: 17m31s" time="2020-12-03T09:50:47Z" level=info msg="Time elapsed: 37m26s" 1.2. Viewing the image pull source For clusters with unrestricted network connectivity, you can view the source of your pulled images by using a command on a node, such as crictl images . However, for disconnected installations, to view the source of pulled images, you must review the CRI-O logs to locate the Trying to access log entry, as shown in the following procedure. Other methods to view the image pull source, such as the crictl images command, show the non-mirrored image name, even though the image is pulled from the mirrored location. Prerequisites You have access to the cluster as a user with the cluster-admin role. Procedure Review the CRI-O logs for a master or worker node: USD oc adm node-logs <node_name> -u crio Example output The Trying to access log entry indicates where the image is being pulled from. ... Mar 17 02:52:50 ip-10-0-138-140.ec2.internal crio[1366]: time="2021-08-05 10:33:21.594930907Z" level=info msg="Pulling image: quay.io/openshift-release-dev/ocp-release:4.10.0-ppc64le" id=abcd713b-d0e1-4844-ac1c-474c5b60c07c name=/runtime.v1alpha2.ImageService/PullImage Mar 17 02:52:50 ip-10-0-138-140.ec2.internal crio[1484]: time="2021-03-17 02:52:50.194341109Z" level=info msg="Trying to access \"li0317gcp1.mirror-registry.qe.gcp.devcluster.openshift.com:5000/ocp/release@sha256:1926eae7cacb9c00f142ec98b00628970e974284b6ddaf9a6a086cb9af7a6c31\"" Mar 17 02:52:50 ip-10-0-138-140.ec2.internal crio[1484]: time="2021-03-17 02:52:50.226788351Z" level=info msg="Trying to access \"li0317gcp1.mirror-registry.qe.gcp.devcluster.openshift.com:5000/ocp/release@sha256:1926eae7cacb9c00f142ec98b00628970e974284b6ddaf9a6a086cb9af7a6c31\"" ... The log might show the image pull source twice, as shown in the preceding example. 
If your ImageContentSourcePolicy object lists multiple mirrors, OpenShift Container Platform attempts to pull the images in the order listed in the configuration, for example: 1.3. Getting cluster version, status, and update details You can view the cluster version and status by running the oc get clusterversion command. If the status shows that the installation is still progressing, you can review the status of the Operators for more information. You can also list the current update channel and review the available cluster updates. Prerequisites You have access to the cluster as a user with the cluster-admin role. You have installed the OpenShift CLI ( oc ). Procedure Obtain the cluster version and overall status: USD oc get clusterversion Example output NAME VERSION AVAILABLE PROGRESSING SINCE STATUS version 4.6.4 True False 6m25s Cluster version is 4.6.4 The example output indicates that the cluster has been installed successfully. If the cluster status indicates that the installation is still progressing, you can obtain more detailed progress information by checking the status of the Operators: USD oc get clusteroperators.config.openshift.io View a detailed summary of cluster specifications, update availability, and update history: USD oc describe clusterversion List the current update channel: USD oc get clusterversion -o jsonpath='{.items[0].spec}{"\n"}' Example output {"channel":"stable-4.6","clusterID":"245539c1-72a3-41aa-9cec-72ed8cf25c5c"} Review the available cluster updates: USD oc adm upgrade Example output Cluster version is 4.6.4 Updates: VERSION IMAGE 4.6.6 quay.io/openshift-release-dev/ocp-release@sha256:c7e8f18e8116356701bd23ae3a23fb9892dd5ea66c8300662ef30563d7104f39 Additional resources See Querying Operator status after installation for more information about querying Operator status if your installation is still progressing. See Troubleshooting Operator issues for information about investigating issues with Operators. See Updating a cluster using the web console for more information on updating your cluster. See Understanding update channels and releases for an overview about update release channels. 1.4. Verifying that a cluster uses short-term credentials You can verify that a cluster uses short-term security credentials for individual components by checking the Cloud Credential Operator (CCO) configuration and other values in the cluster. Prerequisites You deployed an OpenShift Container Platform cluster using the Cloud Credential Operator utility ( ccoctl ) to implement short-term credentials. You installed the OpenShift CLI ( oc ). You are logged in as a user with cluster-admin privileges. Procedure Verify that the CCO is configured to operate in manual mode by running the following command: USD oc get cloudcredentials cluster \ -o=jsonpath={.spec.credentialsMode} The following output confirms that the CCO is operating in manual mode: Example output Manual Verify that the cluster does not have root credentials by running the following command: USD oc get secrets \ -n kube-system <secret_name> where <secret_name> is the name of the root secret for your cloud provider. Platform Secret name Amazon Web Services (AWS) aws-creds Microsoft Azure azure-credentials Google Cloud Platform (GCP) gcp-credentials An error confirms that the root secret is not present on the cluster. 
Example output for an AWS cluster Error from server (NotFound): secrets "aws-creds" not found Verify that the components are using short-term security credentials for individual components by running the following command: USD oc get authentication cluster \ -o jsonpath \ --template='{ .spec.serviceAccountIssuer }' This command displays the value of the .spec.serviceAccountIssuer parameter in the cluster Authentication object. An output of a URL that is associated with your cloud provider indicates that the cluster is using manual mode with short-term credentials that are created and managed from outside of the cluster. Azure clusters: Verify that the components are assuming the Azure client ID that is specified in the secret manifests by running the following command: USD oc get secrets \ -n openshift-image-registry installer-cloud-credentials \ -o jsonpath='{.data}' An output that contains the azure_client_id and azure_federated_token_file felids confirms that the components are assuming the Azure client ID. Azure clusters: Verify that the pod identity webhook is running by running the following command: USD oc get pods \ -n openshift-cloud-credential-operator Example output NAME READY STATUS RESTARTS AGE cloud-credential-operator-59cf744f78-r8pbq 2/2 Running 2 71m pod-identity-webhook-548f977b4c-859lz 1/1 Running 1 70m 1.5. Querying the status of the cluster nodes by using the CLI You can verify the status of the cluster nodes after an installation. Prerequisites You have access to the cluster as a user with the cluster-admin role. You have installed the OpenShift CLI ( oc ). Procedure List the status of the cluster nodes. Verify that the output lists all of the expected control plane and compute nodes and that each node has a Ready status: USD oc get nodes Example output NAME STATUS ROLES AGE VERSION compute-1.example.com Ready worker 33m v1.30.3 control-plane-1.example.com Ready master 41m v1.30.3 control-plane-2.example.com Ready master 45m v1.30.3 compute-2.example.com Ready worker 38m v1.30.3 compute-3.example.com Ready worker 33m v1.30.3 control-plane-3.example.com Ready master 41m v1.30.3 Review CPU and memory resource availability for each cluster node: USD oc adm top nodes Example output NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% compute-1.example.com 128m 8% 1132Mi 16% control-plane-1.example.com 801m 22% 3471Mi 23% control-plane-2.example.com 1718m 49% 6085Mi 40% compute-2.example.com 935m 62% 5178Mi 75% compute-3.example.com 111m 7% 1131Mi 16% control-plane-3.example.com 942m 26% 4100Mi 27% Additional resources See Verifying node health for more details about reviewing node health and investigating node issues. 1.6. Reviewing the cluster status from the OpenShift Container Platform web console You can review the following information in the Overview page in the OpenShift Container Platform web console: The general status of your cluster The status of the control plane, cluster Operators, and storage CPU, memory, file system, network transfer, and pod availability The API address of the cluster, the cluster ID, and the name of the provider Cluster version information Cluster update status, including details of the current update channel and available updates A cluster inventory detailing node, pod, storage class, and persistent volume claim (PVC) information A list of ongoing cluster activities and recent events Prerequisites You have access to the cluster as a user with the cluster-admin role. Procedure In the Administrator perspective, navigate to Home Overview . 1.7. 
Reviewing the cluster status from Red Hat OpenShift Cluster Manager From the OpenShift Container Platform web console, you can review detailed information about the status of your cluster on OpenShift Cluster Manager. Prerequisites You are logged in to OpenShift Cluster Manager . You have access to the cluster as a user with the cluster-admin role. Procedure Go to the Cluster List list in OpenShift Cluster Manager and locate your OpenShift Container Platform cluster. Click the Overview tab for your cluster. Review the following information about your cluster: vCPU and memory availability and resource usage The cluster ID, status, type, region, and the provider name Node counts by node type Cluster version details, the creation date of the cluster, and the name of the cluster owner The life cycle support status of the cluster Subscription information, including the service level agreement (SLA) status, the subscription unit type, the production status of the cluster, the subscription obligation, and the service level Tip To view the history for your cluster, click the Cluster history tab. Navigate to the Monitoring page to review the following information: A list of any issues that have been detected A list of alerts that are firing The cluster Operator status and version The cluster's resource usage Optional: You can view information about your cluster that Red Hat Insights collects by navigating to the Overview menu. From this menu you can view the following information: Potential issues that your cluster might be exposed to, categorized by risk level Health-check status by category Additional resources See Using Insights to identify issues with your cluster for more information about reviewing potential issues with your cluster. 1.8. Checking cluster resource availability and utilization OpenShift Container Platform provides a comprehensive set of monitoring dashboards that help you understand the state of cluster components. In the Administrator perspective, you can access dashboards for core OpenShift Container Platform components, including: etcd Kubernetes compute resources Kubernetes network resources Prometheus Dashboards relating to cluster and node performance Figure 1.1. Example compute resources dashboard Prerequisites You have access to the cluster as a user with the cluster-admin role. Procedure In the Administrator perspective in the OpenShift Container Platform web console, navigate to Observe Dashboards . Choose a dashboard in the Dashboard list. Some dashboards, such as the etcd dashboard, produce additional sub-menus when selected. Optional: Select a time range for the graphs in the Time Range list. Select a pre-defined time period. Set a custom time range by selecting Custom time range in the Time Range list. Input or select the From and To dates and times. Click Save to save the custom time range. Optional: Select a Refresh Interval . Hover over each of the graphs within a dashboard to display detailed information about specific items. Additional resources See About OpenShift Container Platform monitoring for more information about the OpenShift Container Platform monitoring stack. 1.9. Listing alerts that are firing Alerts provide notifications when a set of defined conditions are true in an OpenShift Container Platform cluster. You can review the alerts that are firing in your cluster by using the Alerting UI in the OpenShift Container Platform web console. Prerequisites You have access to the cluster as a user with the cluster-admin role. 
Procedure In the Administrator perspective, navigate to the Observe Alerting Alerts page. Review the alerts that are firing, including their Severity , State , and Source . Select an alert to view more detailed information in the Alert Details page. Additional resources See Managing alerts as an Administrator for further details about alerting in OpenShift Container Platform. 1.10. Next steps See Troubleshooting installations if you experience issues when installing your cluster. After installing OpenShift Container Platform, you can further expand and customize your cluster . | [
"cat <install_dir>/.openshift_install.log",
"time=\"2020-12-03T09:50:47Z\" level=info msg=\"Install complete!\" time=\"2020-12-03T09:50:47Z\" level=info msg=\"To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig'\" time=\"2020-12-03T09:50:47Z\" level=info msg=\"Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com\" time=\"2020-12-03T09:50:47Z\" level=info msg=\"Login to the console with user: \\\"kubeadmin\\\", and password: \\\"password\\\"\" time=\"2020-12-03T09:50:47Z\" level=debug msg=\"Time elapsed per stage:\" time=\"2020-12-03T09:50:47Z\" level=debug msg=\" Infrastructure: 6m45s\" time=\"2020-12-03T09:50:47Z\" level=debug msg=\"Bootstrap Complete: 11m30s\" time=\"2020-12-03T09:50:47Z\" level=debug msg=\" Bootstrap Destroy: 1m5s\" time=\"2020-12-03T09:50:47Z\" level=debug msg=\" Cluster Operators: 17m31s\" time=\"2020-12-03T09:50:47Z\" level=info msg=\"Time elapsed: 37m26s\"",
"oc adm node-logs <node_name> -u crio",
"Mar 17 02:52:50 ip-10-0-138-140.ec2.internal crio[1366]: time=\"2021-08-05 10:33:21.594930907Z\" level=info msg=\"Pulling image: quay.io/openshift-release-dev/ocp-release:4.10.0-ppc64le\" id=abcd713b-d0e1-4844-ac1c-474c5b60c07c name=/runtime.v1alpha2.ImageService/PullImage Mar 17 02:52:50 ip-10-0-138-140.ec2.internal crio[1484]: time=\"2021-03-17 02:52:50.194341109Z\" level=info msg=\"Trying to access \\\"li0317gcp1.mirror-registry.qe.gcp.devcluster.openshift.com:5000/ocp/release@sha256:1926eae7cacb9c00f142ec98b00628970e974284b6ddaf9a6a086cb9af7a6c31\\\"\" Mar 17 02:52:50 ip-10-0-138-140.ec2.internal crio[1484]: time=\"2021-03-17 02:52:50.226788351Z\" level=info msg=\"Trying to access \\\"li0317gcp1.mirror-registry.qe.gcp.devcluster.openshift.com:5000/ocp/release@sha256:1926eae7cacb9c00f142ec98b00628970e974284b6ddaf9a6a086cb9af7a6c31\\\"\"",
"Trying to access \\\"li0317gcp1.mirror-registry.qe.gcp.devcluster.openshift.com:5000/ocp/release@sha256:1926eae7cacb9c00f142ec98b00628970e974284b6ddaf9a6a086cb9af7a6c31\\\" Trying to access \\\"li0317gcp2.mirror-registry.qe.gcp.devcluster.openshift.com:5000/ocp/release@sha256:1926eae7cacb9c00f142ec98b00628970e974284b6ddaf9a6a086cb9af7a6c31\\\"",
"oc get clusterversion",
"NAME VERSION AVAILABLE PROGRESSING SINCE STATUS version 4.6.4 True False 6m25s Cluster version is 4.6.4",
"oc get clusteroperators.config.openshift.io",
"oc describe clusterversion",
"oc get clusterversion -o jsonpath='{.items[0].spec}{\"\\n\"}'",
"{\"channel\":\"stable-4.6\",\"clusterID\":\"245539c1-72a3-41aa-9cec-72ed8cf25c5c\"}",
"oc adm upgrade",
"Cluster version is 4.6.4 Updates: VERSION IMAGE 4.6.6 quay.io/openshift-release-dev/ocp-release@sha256:c7e8f18e8116356701bd23ae3a23fb9892dd5ea66c8300662ef30563d7104f39",
"oc get cloudcredentials cluster -o=jsonpath={.spec.credentialsMode}",
"Manual",
"oc get secrets -n kube-system <secret_name>",
"Error from server (NotFound): secrets \"aws-creds\" not found",
"oc get authentication cluster -o jsonpath --template='{ .spec.serviceAccountIssuer }'",
"oc get secrets -n openshift-image-registry installer-cloud-credentials -o jsonpath='{.data}'",
"oc get pods -n openshift-cloud-credential-operator",
"NAME READY STATUS RESTARTS AGE cloud-credential-operator-59cf744f78-r8pbq 2/2 Running 2 71m pod-identity-webhook-548f977b4c-859lz 1/1 Running 1 70m",
"oc get nodes",
"NAME STATUS ROLES AGE VERSION compute-1.example.com Ready worker 33m v1.30.3 control-plane-1.example.com Ready master 41m v1.30.3 control-plane-2.example.com Ready master 45m v1.30.3 compute-2.example.com Ready worker 38m v1.30.3 compute-3.example.com Ready worker 33m v1.30.3 control-plane-3.example.com Ready master 41m v1.30.3",
"oc adm top nodes",
"NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% compute-1.example.com 128m 8% 1132Mi 16% control-plane-1.example.com 801m 22% 3471Mi 23% control-plane-2.example.com 1718m 49% 6085Mi 40% compute-2.example.com 935m 62% 5178Mi 75% compute-3.example.com 111m 7% 1131Mi 16% control-plane-3.example.com 942m 26% 4100Mi 27%"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.17/html/validation_and_troubleshooting/validating-an-installation |
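As a command-line complement to reviewing firing alerts in the web console (section 1.9), the same information can usually be retrieved from the in-cluster Alertmanager. This is a minimal sketch, assuming the default alertmanager-main pods created by the platform monitoring stack in the openshift-monitoring namespace; verify the pod and container names in your cluster before relying on it.
Example
oc -n openshift-monitoring exec -c alertmanager alertmanager-main-0 -- amtool alert query --alertmanager.url http://localhost:9093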
Chapter 4. Connecting VM instances to physical networks | Chapter 4. Connecting VM instances to physical networks You can directly connect your VM instances to an external network using flat and VLAN provider networks. 4.1. Overview of the OpenStack Networking topology OpenStack Networking (neutron) has two categories of services distributed across a number of node types. Neutron server - This service runs the OpenStack Networking API server, which provides the API for end-users and services to interact with OpenStack Networking. This server also integrates with the underlying database to store and retrieve project network, router, and loadbalancer details, among others. Neutron agents - These are the services that perform the network functions for OpenStack Networking: neutron-dhcp-agent - manages DHCP IP addressing for project private networks. neutron-l3-agent - performs layer 3 routing between project private networks, the external network, and others. Compute node - This node hosts the hypervisor that runs the virtual machines, also known as instances. A Compute node must be wired directly to the network in order to provide external connectivity for instances. This node is typically where the l2 agents run, such as neutron-openvswitch-agent . Additional resources Section 4.2, "Placement of OpenStack Networking services" 4.2. Placement of OpenStack Networking services The OpenStack Networking services can either run together on the same physical server, or on separate dedicated servers, which are named according to their roles: Controller node - The server that runs API service. Network node - The server that runs the OpenStack Networking agents. Compute node - The hypervisor server that hosts the instances. The steps in this chapter apply to an environment that contains these three node types. If your deployment has both the Controller and Network node roles on the same physical node, then you must perform the steps from both sections on that server. This also applies for a High Availability (HA) environment, where all three nodes might be running the Controller node and Network node services with HA. As a result, you must complete the steps in sections applicable to Controller and Network nodes on all three nodes. Additional resources Section 4.1, "Overview of the OpenStack Networking topology" 4.3. Configuring flat provider networks You can use flat provider networks to connect instances directly to the external network. This is useful if you have multiple physical networks and separate physical interfaces, and intend to connect each Compute and Network node to those external networks. Prerequisites You have multiple physical networks. This example uses physical networks called physnet1 , and physnet2 , respectively. You have separate physical interfaces. This example uses separate physical interfaces, eth0 and eth1 , respectively. Procedure On the undercloud host, logged in as the stack user, create a custom YAML environment file. Example Tip The Red Hat OpenStack Platform Orchestration service (heat) uses a set of plans called templates to install and configure your environment. You can customize aspects of the overcloud with a custom environment file , which is a special type of template that provides customization for your orchestration templates. In the YAML environment file under parameter_defaults , use the NeutronBridgeMappings to specify which OVS bridges are used for accessing external networks. 
Example In the custom NIC configuration template for the Controller and Compute nodes, configure the bridges with interfaces attached. Example Run the openstack overcloud deploy command and include the templates and the environment files, including this modified custom NIC template and the new environment file. Important The order of the environment files is important because the parameters and resources defined in subsequent environment files take precedence. Example Verification Create an external network ( public1 ) as a flat network and associate it with the configured physical network ( physnet1 ). Configure it as a shared network (using --share ) to let other users create VM instances that connect to the external network directly. Example Create a subnet ( public_subnet ) using the openstack subnet create command. Example Create a VM instance and connect it directly to the newly-created external network. Example Additional resources Environment files in the Installing and managing Red Hat OpenStack Platform with director guide Including environment files in overcloud creation in the Installing and managing Red Hat OpenStack Platform with director guide network create in the Command line interface reference subnet create in the Command line interface reference server create in the Command line interface reference 4.4. How does the flat provider network packet flow work? This section describes in detail how traffic flows to and from an instance with flat provider network configuration. The flow of outgoing traffic in a flat provider network The following diagram describes the packet flow for traffic leaving an instance and arriving directly at an external network. After you configure the br-ex external bridge, add the physical interface to the bridge, and spawn an instance to a Compute node, the resulting configuration of interfaces and bridges resembles the configuration in the following diagram (if using the iptables_hybrid firewall driver): Packets leave the eth0 interface of the instance and arrive at the linux bridge qbr-xx . Bridge qbr-xx is connected to br-int using veth pair qvb-xx <-> qvo-xxx . This is because the bridge is used to apply the inbound/outbound firewall rules defined by the security group. Interface qvb-xx is connected to the qbr-xx linux bridge, and qvoxx is connected to the br-int Open vSwitch (OVS) bridge. An example configuration of `qbr-xx`Linux bridge: The configuration of qvo-xx on br-int : Note Port qvo-xx is tagged with the internal VLAN tag associated with the flat provider network. In this example, the VLAN tag is 5 . When the packet reaches qvo-xx , the VLAN tag is appended to the packet header. The packet is then moved to the br-ex OVS bridge using the patch-peer int-br-ex <-> phy-br-ex . Example configuration of the patch-peer on br-int : Example configuration of the patch-peer on br-ex : When this packet reaches phy-br-ex on br-ex , an OVS flow inside br-ex strips the VLAN tag (5) and forwards it to the physical interface. In the following example, the output shows the port number of phy-br-ex as 2 . The following output shows any packet that arrives on phy-br-ex ( in_port=2 ) with a VLAN tag of 5 ( dl_vlan=5 ). In addition, an OVS flow in br-ex strips the VLAN tag and forwards the packet to the physical interface. If the physical interface is another VLAN-tagged interface, then the physical interface adds the tag to the packet. 
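To relate this flow to a specific instance in your own environment, you can look up the instance's qvo port and its internal VLAN tag on the Compute node that hosts it. This is a minimal sketch, assuming the my_instance server from the earlier example and the common convention that the qvo interface name is built from the first 11 characters of the Networking service port ID; confirm the naming in your environment.
Example
PORT_ID=$(openstack port list --server my_instance -f value -c ID | head -n 1)
sudo ovs-vsctl show | grep -A 2 "qvo${PORT_ID:0:11}"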
The flow of incoming traffic in a flat provider network This section contains information about the flow of incoming traffic from the external network until it arrives at the interface of the instance. Incoming traffic arrives at eth1 on the physical node. The packet passes to the br-ex bridge. The packet moves to br-int via the patch-peer phy-br-ex <--> int-br-ex . In the following example, int-br-ex uses port number 15 . See the entry containing 15(int-br-ex) : Observing the traffic flow on br-int When the packet arrives at int-br-ex , an OVS flow rule within the br-int bridge amends the packet to add the internal VLAN tag 5 . See the entry for actions=mod_vlan_vid:5 : The second rule manages packets that arrive on int-br-ex (in_port=15) with no VLAN tag (vlan_tci=0x0000): This rule adds VLAN tag 5 to the packet ( actions=mod_vlan_vid:5,NORMAL ) and forwards it to qvoxxx . qvoxxx accepts the packet and forwards it to qvbxx , after stripping away the VLAN tag. The packet then reaches the instance. Note VLAN tag 5 is an example VLAN that was used on a test Compute node with a flat provider network; this value was assigned automatically by neutron-openvswitch-agent . This value may be different for your own flat provider network, and can differ for the same network on two separate Compute nodes. Additional resources Section 4.5, "Troubleshooting instance-physical network connections on flat provider networks" 4.5. Troubleshooting instance-physical network connections on flat provider networks The output provided in "How does the flat provider network packet flow work?" provides sufficient debugging information for troubleshooting a flat provider network, should anything go wrong. The following steps contain further information about the troubleshooting process. Procedure Review bridge_mappings . Verify that the physical network name you use is consistent with the contents of the bridge_mapping configuration. Example In this example, the physical network name is, physnet1 . Sample output Example In this example, the contents of the bridge_mapping configuration is also, physnet1 : Sample output Review the network configuration. Confirm that the network is created as external , and uses the flat type: Example In this example, details about the network, provider-flat , is queried: Sample output Review the patch-peer. Verify that br-int and br-ex are connected using a patch-peer int-br-ex <--> phy-br-ex . Sample output Sample output Configuration of the patch-peer on br-ex : This connection is created when you restart the neutron-openvswitch-agent service, if bridge_mapping is correctly configured in /etc/neutron/plugins/ml2/openvswitch_agent.ini . Re-check the bridge_mapping setting if the connection is not created after you restart the service. Review the network flows. Run ovs-ofctl dump-flows br-ex and ovs-ofctl dump-flows br-int , and review whether the flows strip the internal VLAN IDs for outgoing packets, and add VLAN IDs for incoming packets. This flow is first added when you spawn an instance to this network on a specific Compute node. If this flow is not created after spawning the instance, verify that the network is created as flat , is external , and that the physical_network name is correct. In addition, review the bridge_mapping settings. Finally, review the ifcfg-br-ex and ifcfg-ethx configuration. Ensure that ethX is added as a port within br-ex , and that ifcfg-br-ex and ifcfg-ethx have an UP flag in the output of ip a . 
Sample output The following output shows eth1 is a port in br-ex : Example The following example demonstrates that eth1 is configured as an OVS port, and that the kernel knows to transfer all packets from the interface, and send them to the OVS bridge br-ex . This can be observed in the entry, master ovs-system . Additional resources Section 4.4, "How does the flat provider network packet flow work?" Configuring bridge mappings 4.6. Configuring VLAN provider networks When you connect multiple VLAN-tagged interfaces on a single NIC to multiple provider networks, these new VLAN provider networks can connect VM instances directly to external networks. Prerequisites You have a physical network, with a range of VLANs. This example uses a physical network called physnet1 , with a range of VLANs, 171-172 . Your Network nodes and Compute nodes are connected to a physical network using a physical interface. This example uses Network nodes and Compute nodes that are connected to a physical network, physnet1 , using a physical interface, eth1 . The switch ports that these interfaces connect to must be configured to trunk the required VLAN ranges. Procedure On the undercloud host, logged in as the stack user, create a custom YAML environment file. Example Tip The Red Hat OpenStack Platform Orchestration service (heat) uses a set of plans called templates to install and configure your environment. You can customize aspects of the overcloud with a custom environment file , which is a special type of template that provides customization for your orchestration templates. In the YAML environment file under parameter_defaults , use NeutronTypeDrivers to specify your network type drivers. Example Configure the NeutronNetworkVLANRanges setting to reflect the physical network and VLAN ranges in use: Example Create an external network bridge ( br-ex ), and associate a port ( eth1 ) with it. This example configures eth1 to use br-ex : Example Run the openstack overcloud deploy command and include the core templates and the environment files, including this new environment file. Important The order of the environment files is important because the parameters and resources defined in subsequent environment files take precedence. Example Verification Create the external networks as type vlan , and associate them with the configured physical_network . Run the following example command to create two networks: one for VLAN 171, and another for VLAN 172: Example Create a number of subnets and configure them to use the external network. You can use either openstack subnet create or the dashboard to create these subnets. Ensure that the external subnet details you have received from your network administrator are correctly associated with each VLAN. In this example, VLAN 171 uses subnet 10.65.217.0/24 and VLAN 172 uses 10.65.218.0/24 : Example Additional resources Custom network interface templates in the Installing and managing Red Hat OpenStack Platform with director guide Environment files in the Customizing your Red Hat OpenStack Platform deployment guide Including environment files in overcloud creation in the Customizing your Red Hat OpenStack Platform deployment guide network create in the Command line interface reference subnet create in the Command line interface reference 4.7. How does the VLAN provider network packet flow work? This section describes in detail how traffic flows to and from an instance with VLAN provider network configuration. 
The flow of outgoing traffic in a VLAN provider network The following diagram describes the packet flow for traffic leaving an instance and arriving directly to a VLAN provider external network. This example uses two instances attached to the two VLAN networks (171 and 172). After you configure br-ex , add a physical interface to it, and spawn an instance to a Compute node, the resulting configuration of interfaces and bridges resembles the configuration in the following diagram: Packets leaving the eth0 interface of the instance arrive at the linux bridge qbr-xx connected to the instance. qbr-xx is connected to br-int using veth pair qvbxx <-> qvoxxx . qvbxx is connected to the linux bridge qbr-xx and qvoxx is connected to the Open vSwitch bridge br-int . Example configuration of qbr-xx on the Linux bridge. This example features two instances and two corresponding linux bridges: The configuration of qvoxx on br-int : qvoxx is tagged with the internal VLAN tag associated with the VLAN provider network. In this example, the internal VLAN tag 2 is associated with the VLAN provider network provider-171 and VLAN tag 3 is associated with VLAN provider network provider-172 . When the packet reaches qvoxx , the this VLAN tag is added to the packet header. The packet is then moved to the br-ex OVS bridge using patch-peer int-br-ex <-> phy-br-ex . Example patch-peer on br-int : Example configuration of the patch peer on br-ex : When this packet reaches phy-br-ex on br-ex , an OVS flow inside br-ex replaces the internal VLAN tag with the actual VLAN tag associated with the VLAN provider network. The output of the following command shows that the port number of phy-br-ex is 4 : The following command shows any packet that arrives on phy-br-ex ( in_port=4 ) which has VLAN tag 2 ( dl_vlan=2 ). Open vSwitch replaces the VLAN tag with 171 ( actions=mod_vlan_vid:171,NORMAL ) and forwards the packet to the physical interface. The command also shows any packet that arrives on phy-br-ex ( in_port=4 ) which has VLAN tag 3 ( dl_vlan=3 ). Open vSwitch replaces the VLAN tag with 172 ( actions=mod_vlan_vid:172,NORMAL ) and forwards the packet to the physical interface. The neutron-openvswitch-agent adds these rules. This packet is then forwarded to physical interface eth1 . The flow of incoming traffic in a VLAN provider network The following example flow was tested on a Compute node using VLAN tag 2 for provider network provider-171 and VLAN tag 3 for provider network provider-172. The flow uses port 18 on the integration bridge br-int. Your VLAN provider network may require a different configuration. Also, the configuration requirement for a network may differ between two different Compute nodes. The output of the following command shows int-br-ex with port number 18: The output of the following command shows the flow rules on br-int. Incoming flow example This example demonstrates the following br-int OVS flow: A packet with VLAN tag 172 from the external network reaches the br-ex bridge via eth1 on the physical node. The packet moves to br-int via the patch-peer phy-br-ex <-> int-br-ex . The packet matches the flow's criteria ( in_port=18,dl_vlan=172 ). The flow actions ( actions=mod_vlan_vid:3,NORMAL ) replace the VLAN tag 172 with internal VLAN tag 3 and forwards the packet to the instance with normal Layer 2 processing. Additional resources Section 4.4, "How does the flat provider network packet flow work?" 4.8. 
Troubleshooting instance-physical network connections on VLAN provider networks Refer to the packet flow described in "How does the VLAN provider network packet flow work?" when troubleshooting connectivity in a VLAN provider network. In addition, review the following configuration options: Procedure Verify that physical network name used in the bridge_mapping configuration matches the physical network name. Example Sample output Example Sample output In this sample output, the physical network name, physnet1 , matches the name used in the bridge_mapping configuration: Confirm that the network was created as external , is type vlan , and uses the correct segmentation_id value: Example Sample output Review the patch-peer. Verify that br-int and br-ex are connected using a patch-peer int-br-ex <--> phy-br-ex . This connection is created while restarting neutron-openvswitch-agent , provided that the bridge_mapping is correctly configured in /etc/neutron/plugins/ml2/openvswitch_agent.ini . Recheck the bridge_mapping setting if this is not created even after restarting the service. Review the network flows. To review the flow of outgoing packets, run ovs-ofctl dump-flows br-ex and ovs-ofctl dump-flows br-int , and verify that the flows map the internal VLAN IDs to the external VLAN ID ( segmentation_id ). For incoming packets, map the external VLAN ID to the internal VLAN ID. This flow is added by the neutron OVS agent when you spawn an instance to this network for the first time. If this flow is not created after spawning the instance, ensure that the network is created as vlan , is external , and that the physical_network name is correct. In addition, re-check the bridge_mapping settings. Finally, re-check the ifcfg-br-ex and ifcfg-ethx configuration. Ensure that br-ex includes port ethX , and that both ifcfg-br-ex and ifcfg-ethx have an UP flag in the output of the ip a command. Example In this sample output, eth1 is a port in br-ex : Example Sample output In this sample output, eth1 has been added as a port, and that the kernel is configured to move all packets from the interface to the OVS bridge br-ex . This is demonstrated by the entry, master ovs-system . Additional resources Section 4.7, "How does the VLAN provider network packet flow work?" 4.9. Enabling multicast snooping for provider networks in an ML2/OVS deployment To prevent flooding multicast packets to every port in a Red Hat OpenStack Platform (RHOSP) provider network, you must enable multicast snooping. In RHOSP deployments that use the Modular Layer 2 plug-in with the Open vSwitch mechanism driver (ML2/OVS), you do this by declaring the RHOSP Orchestration (heat) NeutronEnableIgmpSnooping parameter in a YAML-formatted environment file. Important You should thoroughly test and understand any multicast snooping configuration before applying it to a production environment. Misconfiguration can break multicasting or cause erratic network behavior. Prerequisites Your configuration must only use ML2/OVS provider networks. Your physical routers must also have IGMP snooping enabled. That is, the physical router must send IGMP query packets on the provider network to solicit regular IGMP reports from multicast group members to maintain the snooping cache in OVS (and for physical networking). An RHOSP Networking service security group rule must be in place to allow inbound IGMP to the VM instances (or port security disabled). 
In this example, a rule is created for the ping_ssh security group: Example Procedure On the undercloud host, logged in as the stack user, create a custom YAML environment file. Example Tip The Orchestration service (heat) uses a set of plans called templates to install and configure your environment. You can customize aspects of the overcloud with a custom environment file, which is a special type of template that provides customization for your heat templates. In the YAML environment file under parameter_defaults , set NeutronEnableIgmpSnooping to true. Important Ensure that you add a whitespace character between the colon (:) and true . Run the openstack overcloud deploy command and include the core heat templates, environment files, and this new custom environment file. Important The order of the environment files is important as the parameters and resources defined in subsequent environment files take precedence. Example Verification Verify that the multicast snooping is enabled. Example Sample output Additional resources Neutron in Component, Plug-In, and Driver Support in Red Hat OpenStack Platform Environment files in the Customizing your Red Hat OpenStack Platform deployment guide Including environment files in overcloud creation in the Customizing your Red Hat OpenStack Platform deployment guide Networking (neutron) Parameters in the Overcloud parameters guide Creating a security group in the Creating and managing instances guide 4.10. Enabling multicast in an ML2/OVN deployment To support multicast traffic, modify the deployment's security configuration to allow multicast traffic to reach the virtual machine (VM) instances in the multicast group. To prevent multicast traffic flooding, enable IGMP snooping. Important Test and understand any multicast snooping configuration before applying it to a production environment. Misconfiguration can break multicasting or cause erratic network behavior. Prerequisites An OpenStack deployment with the ML2/OVN mechanism driver. Procedure Configure security to allow multicast traffic to the appropriate VM instances. For instance, create a pair of security group rules to allow IGMP traffic from the IGMP querier to enter and exit the VM instances, and a third rule to allow multicast traffic. Example A security group mySG allows IGMP traffic to enter and exit the VM instances. Another rule allows multicast traffic to reach VM instances. As an alternative to setting security group rules, some operators choose to selectively disable port security on the network. If you choose to disable port security, consider and plan for any related security risks. Set the heat parameter NeutronEnableIgmpSnooping: True in an environment file on the undercloud node. For instance, add the following lines to ovn-extras.yaml. Example Include the environment file in the openstack overcloud deploy command with any other environment files that are relevant to your environment and deploy the overcloud. Replace <other_overcloud_environment_files> with the list of environment files that are part of your existing deployment. Verification Verify that the multicast snooping is enabled. List the northbound database Logical_Switch table. Sample output The Networking Service (neutron) igmp_snooping_enable configuration is translated into the mcast_snoop option set in the other_config column of the Logical_Switch table in the OVN Northbound Database. Note that mcast_flood_unregistered is always "false". Show the IGMP groups. 
Sample output Additional resources Neutron in Component, Plug-In, and Driver Support in Red Hat OpenStack Platform Environment files in the Customizing your Red Hat OpenStack Platform deployment guide Including environment files in overcloud creation in the Customizing your Red Hat OpenStack Platform deployment guide 4.11. Enabling Compute metadata access Instances connected as described in this chapter are directly attached to the provider external networks, and have external routers configured as their default gateway. No OpenStack Networking (neutron) routers are used. This means that neutron routers cannot be used to proxy metadata requests from instances to the nova-metadata server, which may result in failures while running cloud-init . However, this issue can be resolved by configuring the dhcp agent to proxy metadata requests. You can enable this functionality in /etc/neutron/dhcp_agent.ini . For example: 4.12. Floating IP addresses You can use the same network to allocate floating IP addresses to instances, even if the floating IPs are already associated with private networks. The addresses that you allocate as floating IPs from this network are bound to the qrouter-xxx namespace on the Network node, and perform DNAT-SNAT to the associated private IP address. In contrast, the IP addresses that you allocate for direct external network access are bound directly inside the instance, and allow the instance to communicate directly with external network. | [
"vi /home/stack/templates/my-modules-environment.yaml",
"parameter_defaults: NeutronBridgeMappings: 'physnet1:br-net1,physnet2:br-net2'",
"- type: ovs_bridge name: br-net1 mtu: 1500 use_dhcp: false members: - type: interface name: eth0 mtu: 1500 use_dhcp: false primary: true - type: ovs_bridge name: br-net2 mtu: 1500 use_dhcp: false members: - type: interface name: eth1 mtu: 1500 use_dhcp: false primary: true",
"openstack overcloud deploy --templates -e [your-environment-files] -e /usr/share/openstack-tripleo-heat-templates/environments/services/my-neutron-environment.yaml",
"openstack network create --share --provider-network-type flat --provider-physical-network physnet1 --external public01",
"openstack subnet create --no-dhcp --allocation-pool start=192.168.100.20,end=192.168.100.100 --gateway 192.168.100.1 --network public01 public_subnet",
"openstack server create --image rhel --flavor my_flavor --network public01 my_instance",
"brctl show qbr269d4d73-e7 8000.061943266ebb no qvb269d4d73-e7 tap269d4d73-e7",
"ovs-vsctl show Bridge br-int fail_mode: secure Interface \"qvof63599ba-8f\" Port \"qvo269d4d73-e7\" tag: 5 Interface \"qvo269d4d73-e7\"",
"ovs-vsctl show Bridge br-int fail_mode: secure Port int-br-ex Interface int-br-ex type: patch options: {peer=phy-br-ex}",
"Bridge br-ex Port phy-br-ex Interface phy-br-ex type: patch options: {peer=int-br-ex} Port br-ex Interface br-ex type: internal",
"ovs-ofctl show br-ex OFPT_FEATURES_REPLY (xid=0x2): dpid:00003440b5c90dc6 n_tables:254, n_buffers:256 capabilities: FLOW_STATS TABLE_STATS PORT_STATS QUEUE_STATS ARP_MATCH_IP actions: OUTPUT SET_VLAN_VID SET_VLAN_PCP STRIP_VLAN SET_DL_SRC SET_DL_DST SET_NW_SRC SET_NW_DST SET_NW_TOS SET_TP_SRC SET_TP_DST ENQUEUE 2(phy-br-ex): addr:ba:b5:7b:ae:5c:a2 config: 0 state: 0 speed: 0 Mbps now, 0 Mbps max",
"ovs-ofctl dump-flows br-ex NXST_FLOW reply (xid=0x4): cookie=0x0, duration=4703.491s, table=0, n_packets=3620, n_bytes=333744, idle_age=0, priority=1 actions=NORMAL cookie=0x0, duration=3890.038s, table=0, n_packets=13, n_bytes=1714, idle_age=3764, priority=4,in_port=2,dl_vlan=5 actions=strip_vlan,NORMAL cookie=0x0, duration=4702.644s, table=0, n_packets=10650, n_bytes=447632, idle_age=0, priority=2,in_port=2 actions=drop",
"ovs-ofctl show br-int OFPT_FEATURES_REPLY (xid=0x2): dpid:00004e67212f644d n_tables:254, n_buffers:256 capabilities: FLOW_STATS TABLE_STATS PORT_STATS QUEUE_STATS ARP_MATCH_IP actions: OUTPUT SET_VLAN_VID SET_VLAN_PCP STRIP_VLAN SET_DL_SRC SET_DL_DST SET_NW_SRC SET_NW_DST SET_NW_TOS SET_TP_SRC SET_TP_DST ENQUEUE 15(int-br-ex): addr:12:4e:44:a9:50:f4 config: 0 state: 0 speed: 0 Mbps now, 0 Mbps max",
"ovs-ofctl dump-flows br-int NXST_FLOW reply (xid=0x4): cookie=0x0, duration=5351.536s, table=0, n_packets=12118, n_bytes=510456, idle_age=0, priority=1 actions=NORMAL cookie=0x0, duration=4537.553s, table=0, n_packets=3489, n_bytes=321696, idle_age=0, priority=3,in_port=15,vlan_tci=0x0000 actions=mod_vlan_vid:5,NORMAL cookie=0x0, duration=5350.365s, table=0, n_packets=628, n_bytes=57892, idle_age=4538, priority=2,in_port=15 actions=drop cookie=0x0, duration=5351.432s, table=23, n_packets=0, n_bytes=0, idle_age=5351, priority=0 actions=drop",
"openstack network show provider-flat",
"| provider:physical_network | physnet1",
"grep bridge_mapping /etc/neutron/plugins/ml2/openvswitch_agent.ini",
"bridge_mappings = physnet1:br-ex",
"openstack network show provider-flat",
"| provider:network_type | flat | | router:external | True |",
"ovs-vsctl show",
"Bridge br-int fail_mode: secure Port int-br-ex Interface int-br-ex type: patch options: {peer=phy-br-ex}",
"Bridge br-ex Port phy-br-ex Interface phy-br-ex type: patch options: {peer=int-br-ex} Port br-ex Interface br-ex type: internal",
"Bridge br-ex Port phy-br-ex Interface phy-br-ex type: patch options: {peer=int-br-ex} Port \"eth1\" Interface \"eth1\"",
"ip a 5: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq master ovs-system state UP qlen 1000",
"vi /home/stack/templates/my-modules-environment.yaml",
"parameter_defaults: NeutronTypeDrivers: vxlan,flat,vlan",
"parameter_defaults: NeutronTypeDrivers: 'vxlan,flat,vlan' NeutronNetworkVLANRanges: 'physnet1:171:172'",
"parameter_defaults: NeutronTypeDrivers: 'vxlan,flat,vlan' NeutronNetworkVLANRanges: 'physnet1:171:172' NeutronBridgeMappings: 'datacentre:br-ex,tenant:br-int'",
"openstack overcloud deploy --templates -e [your-environment-files] -e /usr/share/openstack-tripleo-heat-templates/environments/services/my-neutron-environment.yaml",
"openstack network create --provider-network-type vlan --provider-physical-network physnet1 --provider-segment 171 provider-vlan171 openstack network create --provider-network-type vlan --provider-physical-network physnet1 --provider-segment 172 provider-vlan172",
"openstack subnet create --network provider-vlan171 --subnet-range 10.65.217.0/24 --dhcp --gateway 10.65.217.254 subnet-provider-171 openstack subnet create --network provider-vlan172 --subnet-range 10.65.218.0/24 --dhcp --gateway 10.65.218.254 subnet-provider-172",
"brctl show bridge name bridge id STP enabled interfaces qbr84878b78-63 8000.e6b3df9451e0 no qvb84878b78-63 tap84878b78-63 qbr86257b61-5d 8000.3a3c888eeae6 no qvb86257b61-5d tap86257b61-5d",
"options: {peer=phy-br-ex} Port \"qvo86257b61-5d\" tag: 3 Interface \"qvo86257b61-5d\" Port \"qvo84878b78-63\" tag: 2 Interface \"qvo84878b78-63\"",
"Bridge br-int fail_mode: secure Port int-br-ex Interface int-br-ex type: patch options: {peer=phy-br-ex}",
"Bridge br-ex Port phy-br-ex Interface phy-br-ex type: patch options: {peer=int-br-ex} Port br-ex Interface br-ex type: internal",
"ovs-ofctl show br-ex 4(phy-br-ex): addr:32:e7:a1:6b:90:3e config: 0 state: 0 speed: 0 Mbps now, 0 Mbps max",
"ovs-ofctl dump-flows br-ex NXST_FLOW reply (xid=0x4): NXST_FLOW reply (xid=0x4): cookie=0x0, duration=6527.527s, table=0, n_packets=29211, n_bytes=2725576, idle_age=0, priority=1 actions=NORMAL cookie=0x0, duration=2939.172s, table=0, n_packets=117, n_bytes=8296, idle_age=58, priority=4,in_port=4,dl_vlan=3 actions=mod_vlan_vid:172,NORMAL cookie=0x0, duration=6111.389s, table=0, n_packets=145, n_bytes=9368, idle_age=98, priority=4,in_port=4,dl_vlan=2 actions=mod_vlan_vid:171,NORMAL cookie=0x0, duration=6526.675s, table=0, n_packets=82, n_bytes=6700, idle_age=2462, priority=2,in_port=4 actions=drop",
"ovs-ofctl show br-int 18(int-br-ex): addr:fe:b7:cb:03:c5:c1 config: 0 state: 0 speed: 0 Mbps now, 0 Mbps max",
"ovs-ofctl dump-flows br-int NXST_FLOW reply (xid=0x4): cookie=0x0, duration=6770.572s, table=0, n_packets=1239, n_bytes=127795, idle_age=106, priority=1 actions=NORMAL cookie=0x0, duration=3181.679s, table=0, n_packets=2605, n_bytes=246456, idle_age=0, priority=3,in_port=18,dl_vlan=172 actions=mod_vlan_vid:3,NORMAL cookie=0x0, duration=6353.898s, table=0, n_packets=5077, n_bytes=482582, idle_age=0, priority=3,in_port=18,dl_vlan=171 actions=mod_vlan_vid:2,NORMAL cookie=0x0, duration=6769.391s, table=0, n_packets=22301, n_bytes=2013101, idle_age=0, priority=2,in_port=18 actions=drop cookie=0x0, duration=6770.463s, table=23, n_packets=0, n_bytes=0, idle_age=6770, priority=0 actions=drop",
"cookie=0x0, duration=3181.679s, table=0, n_packets=2605, n_bytes=246456, idle_age=0, priority=3,in_port=18,dl_vlan=172 actions=mod_vlan_vid:3,NORMAL",
"openstack network show provider-vlan171",
"| provider:physical_network | physnet1",
"grep bridge_mapping /etc/neutron/plugins/ml2/openvswitch_agent.ini",
"bridge_mappings = physnet1:br-ex",
"openstack network show provider-vlan171",
"| provider:network_type | vlan | | provider:physical_network | physnet1 | | provider:segmentation_id | 171 |",
"ovs-vsctl show",
"ovs-vsctl show",
"Bridge br-ex Port phy-br-ex Interface phy-br-ex type: patch options: {peer=int-br-ex} Port \"eth1\" Interface \"eth1\"",
"ip a",
"5: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq master ovs-system state UP qlen 1000",
"openstack security group rule create --protocol igmp --ingress ping_ssh",
"vi /home/stack/templates/my-ovs-environment.yaml",
"parameter_defaults: NeutronEnableIgmpSnooping: true",
"openstack overcloud deploy --templates -e [your-environment-files] -e /usr/share/openstack-tripleo-heat-templates/environments/services/my-ovs-environment.yaml",
"sudo ovs-vsctl list bridge br-int",
"mcast_snooping_enable: true other_config: {mac-table-size=\"50000\", mcast-snooping-disable-flood-unregistered=True}",
"openstack security group rule create --protocol igmp --ingress mySG openstack security group rule create --protocol igmp --egress mySG",
"openstack security group rule create --protocol udp mySG",
"parameter_defaults: NeutronEnableIgmpSnooping: True",
"openstack overcloud deploy --templates ... -e <other_overcloud_environment_files> -e ovn-extras.yaml ...",
"ovn-nbctl list Logical_Switch",
"_uuid : d6a2fbcd-aaa4-4b9e-8274-184238d66a15 other_config : {mcast_flood_unregistered=\"false\", mcast_snoop=\"true\"}",
"ovn-sbctl list IGMP_group",
"_uuid : 2d6cae4c-bd82-4b31-9c63-2d17cbeadc4e address : \"225.0.0.120\" chassis : 34e25681-f73f-43ac-a3a4-7da2a710ecd3 datapath : eaf0f5cc-a2c8-4c30-8def-2bc1ec9dcabc ports : [5eaf9dd5-eae5-4749-ac60-4c1451901c56, 8a69efc5-38c5-48fb-bbab-30f2bf9b8d45]",
"enable_isolated_metadata = True"
] | https://docs.redhat.com/en/documentation/red_hat_openstack_platform/17.1/html/configuring_red_hat_openstack_platform_networking/connect-instance_rhosp-network |
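If you enable the isolated metadata option described in section 4.11, you can confirm from inside an instance on the provider network that the metadata service is reachable once the DHCP agent has been restarted. This quick check assumes the instance has curl installed; in this configuration the 169.254.169.254 metadata address is served from the DHCP namespace.
Example
curl http://169.254.169.254/openstack/latest/meta_data.json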
Chapter 4. Your privacy and data in cost management | Chapter 4. Your privacy and data in cost management To run cost management, we gather your cost and usage data, but do not collect identifying information such as user names, passwords, or certificates. For more information about your privacy and data, log in to the Customer Portal and see Red Hat's Privacy Policy and our FAQ page . | null | https://docs.redhat.com/en/documentation/cost_management_service/1-latest/html/getting_started_with_cost_management/privacy-and-data-cost-management |
8.126. libvisual | 8.126. libvisual 8.126.1. RHBA-2014:0840 - libvisual bug fix update Updated libvisual packages that fix one bug are now available for Red Hat Enterprise Linux 6. Libvisual is an abstraction library that comes between applications and audio visualization plug-ins and provides a convenient API to enable the user to draw visualizations easily. Bug Fix BZ# 658064 Previously, due to multilib file conflicts between certain packages in the Optional repository on Red Hat Network, those packages could not have copies for both the primary and secondary architecture installed on the same machine. As a consequence, installation of the packages failed. A patch has been applied to resolve the file conflicts, and the packages can now be installed as expected in the described scenario. Users of libvisual are advised to upgrade to these updated packages, which fix this bug. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.6_technical_notes/libvisual |
Release notes for Eclipse Temurin 11.0.24 | Release notes for Eclipse Temurin 11.0.24 Red Hat build of OpenJDK 11 Red Hat Customer Content Services | null | https://docs.redhat.com/en/documentation/red_hat_build_of_openjdk/11/html/release_notes_for_eclipse_temurin_11.0.24/index |
Chapter 13. Configuring Applications for Single Sign-On | Chapter 13. Configuring Applications for Single Sign-On Some common applications, such as browsers and email clients, can be configured to use Kerberos tickets, SSL certificates, or tokens as a means of authenticating users. The precise procedures to configure any application depend on that application itself. The examples in this chapter (Mozilla Thunderbird and Mozilla Firefox) are intended to give you an idea of how to configure a user application to use Kerberos or other credentials. 13.1. Configuring Firefox to Use Kerberos for Single Sign-On Firefox can use Kerberos for single sign-on (SSO) to intranet sites and other protected websites. For Firefox to use Kerberos, it first has to be configured to send Kerberos credentials to the appropriate KDC. Even after Firefox is configured to pass Kerberos credentials, it still requires a valid Kerberos ticket in order to use SSO. To generate a Kerberos ticket, use the kinit command and supply the user password for the user on the KDC. To configure Firefox to use Kerberos for SSO: In the address bar of Firefox, type about:config to display the list of current configuration options. In the Filter field, type negotiate to restrict the list of options. Double-click the network.negotiate-auth.trusted-uris entry. Enter the name of the domain against which to authenticate, including the preceding period (.). If you want to add multiple domains, enter them in a comma-separated list. Figure 13.1. Manual Firefox Configuration Important It is not recommended to configure delegation using the network.negotiate-auth.delegation-uris entry in the Firefox configuration options because this enables every Kerberos-aware server to act as the user. Note For more information, see Configuring the Browser for Kerberos Authentication in the Linux Domain Identity, Authentication, and Policy Guide . | [
"[jsmith@host ~]$ kinit Password for [email protected]:"
] | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/system-level_authentication_guide/configuring_applications_for_sso |
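If you manage many workstations, editing about:config by hand does not scale. The same preference can be pre-seeded in a profile's user.js file. The line below is a sketch for an example domain, and the profile directory name is a placeholder that you must replace with your own profile under ~/.mozilla/firefox/.
Example
echo 'user_pref("network.negotiate-auth.trusted-uris", ".example.com");' >> ~/.mozilla/firefox/<profile_directory>/user.js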
31.2. Displaying Information About a Module | 31.2. Displaying Information About a Module You can display detailed information about a kernel module by running the modinfo <module_name> command. Note When entering the name of a kernel module as an argument to one of the module-init-tools utilities, do not append a .ko extension to the end of the name. Kernel module names do not have extensions: their corresponding files do. For example, to display information about the e1000e module, which is the Intel PRO/1000 network driver, run: Example 31.1. Listing information about a kernel module with modinfo Here are descriptions of a few of the fields in modinfo output: filename The absolute path to the .ko kernel object file. You can use modinfo -n as a shortcut command for printing only the filename field. description A short description of the module. You can use modinfo -d as a shortcut command for printing only the description field. alias The alias field appears as many times as there are aliases for a module, or is omitted entirely if there are none. depends This field contains a comma-separated list of all the modules this module depends on. Note If a module has no dependencies, the depends field may be omitted from the output. parm Each parm field presents one module parameter in the form parameter_name : description , where: parameter_name is the exact syntax you should use when using it as a module parameter on the command line, or in an option line in a .conf file in the /etc/modprobe.d/ directory; and, description is a brief explanation of what the parameter does, along with an expectation for the type of value the parameter accepts (such as int , uint or array of int ) in parentheses. You can list all parameters that the module supports by using the -p option. However, because useful value type information is omitted from modinfo -p output, it is more useful to run: Example 31.2. Listing module parameters | [
"~]# modinfo e1000e filename: /lib/modules/2.6.32-71.el6.x86_64/kernel/drivers/net/e1000e/e1000e.ko version: 1.2.7-k2 license: GPL description: Intel(R) PRO/1000 Network Driver author: Intel Corporation, <[email protected]> srcversion: 93CB73D3995B501872B2982 alias: pci:v00008086d00001503sv*sd*bc*sc*i* alias: pci:v00008086d00001502sv*sd*bc*sc*i* [some alias lines omitted] alias: pci:v00008086d0000105Esv*sd*bc*sc*i* depends: vermagic: 2.6.32-71.el6.x86_64 SMP mod_unload modversions parm: copybreak:Maximum size of packet that is copied to a new buffer on receive (uint) parm: TxIntDelay:Transmit Interrupt Delay (array of int) parm: TxAbsIntDelay:Transmit Absolute Interrupt Delay (array of int) parm: RxIntDelay:Receive Interrupt Delay (array of int) parm: RxAbsIntDelay:Receive Absolute Interrupt Delay (array of int) parm: InterruptThrottleRate:Interrupt Throttling Rate (array of int) parm: IntMode:Interrupt Mode (array of int) parm: SmartPowerDownEnable:Enable PHY smart power down (array of int) parm: KumeranLockLoss:Enable Kumeran lock loss workaround (array of int) parm: WriteProtectNVM:Write-protect NVM [WARNING: disabling this can lead to corrupted NVM] (array of int) parm: CrcStripping:Enable CRC Stripping, disable if your BMC needs the CRC (array of int) parm: EEE:Enable/disable on parts that support the feature (array of int)",
"~]# modinfo e1000e | grep \"^parm\" | sort parm: copybreak:Maximum size of packet that is copied to a new buffer on receive (uint) parm: CrcStripping:Enable CRC Stripping, disable if your BMC needs the CRC (array of int) parm: EEE:Enable/disable on parts that support the feature (array of int) parm: InterruptThrottleRate:Interrupt Throttling Rate (array of int) parm: IntMode:Interrupt Mode (array of int) parm: KumeranLockLoss:Enable Kumeran lock loss workaround (array of int) parm: RxAbsIntDelay:Receive Absolute Interrupt Delay (array of int) parm: RxIntDelay:Receive Interrupt Delay (array of int) parm: SmartPowerDownEnable:Enable PHY smart power down (array of int) parm: TxAbsIntDelay:Transmit Absolute Interrupt Delay (array of int) parm: TxIntDelay:Transmit Interrupt Delay (array of int) parm: WriteProtectNVM:Write-protect NVM [WARNING: disabling this can lead to corrupted NVM] (array of int)"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/deployment_guide/sec-displaying_information_about_a_module |
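While modinfo lists the parameters a module supports, the current values for a module that is already loaded are exposed under sysfs. The loop below is a small sketch that prints them for the e1000e module used in the examples above; some parameters are not world-readable, hence the fallback text.
Example
MOD=e1000e
for p in /sys/module/"$MOD"/parameters/*; do printf '%s = %s\n' "$(basename "$p")" "$(cat "$p" 2>/dev/null || echo '<not readable>')"; done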
8.9. Scanning Containers and Container Images for Vulnerabilities | 8.9. Scanning Containers and Container Images for Vulnerabilities Use these procedures to find security vulnerabilities in a container or a container image. You can use either the oscap-docker command-line utility or the atomic scan command-line utility to find security vulnerabilities in a container or a container image. With oscap-docker , you can use the oscap program to scan container images and containers. With atomic scan , you can use OpenSCAP scanning capabilities to scan container images and containers on the system. You can scan for known CVE vulnerabilities and for configuration compliance. Additionally, you can remediate container images to the specified policy. 8.9.1. Scanning Container Images and Containers for Vulnerabilities Using oscap-docker You can scan containers and container images using the oscap-docker utility. Note The oscap-docker command requires root privileges and the ID of a container is the second argument. Prerequisites The openscap-containers package is installed. Procedure Find the ID of a container or a container image, for example: Scan the container or the container image for vulnerabilities and save results to the vulnerability.html file: Important To scan a container, replace the image-cve argument with container-cve . Verification Inspect the results in a browser of your choice, for example: Additional Resources For more information, see the oscap-docker(8) and oscap(8) man pages. 8.9.2. Scanning Container Images and Containers for Vulnerabilities Using atomic scan With the atomic scan utility, you can scan containers and container images for known security vulnerabilities as defined in the CVE OVAL definitions released by Red Hat . The atomic scan command has the following form: where ID is the ID of the container image or container you want to scan. Warning The atomic scan functionality is deprecated, and the OpenSCAP container image is no longer updated for new vulnerabilities. Therefore, prefer the oscap-docker utility for vulnerability scanning purposes. Use cases To scan all container images, use the --images directive. To scan all containers, use the --containers directive. To scan both types, use the --all directive. To list all available command-line options, use the atomic scan --help command. The default scan type of the atomic scan command is CVE scan . Use it for checking a target for known security vulnerabilities as defined in the CVE OVAL definitions released by Red Hat . Prerequisites You have downloaded and installed the OpenSCAP container image from Red Hat Container Catalog (RHCC) using the atomic install rhel7/openscap command. Procedure Verify you have the latest OpenSCAP container image to ensure the definitions are up to date: Scan a RHEL 7.2 container image with several known security vulnerabilities: Additional Resources Product Documentation for Red Hat Enterprise Linux Atomic Host contains a detailed description of the atomic command usage and containers. The Red Hat Customer Portal provides a guide to the Atomic command-line interface (CLI) . | [
"~]# docker images REPOSITORY TAG IMAGE ID CREATED SIZE registry.access.redhat.com/ubi7/ubi latest 096cae65a207 7 weeks ago 239 MB",
"~]# oscap-docker image-cve 096cae65a207 --report vulnerability.html",
"~]USD firefox vulnerability.html &",
"~]# atomic scan [OPTIONS] [ID]",
"~]# atomic help registry.access.redhat.com/rhel7/openscap | grep version",
"~]# atomic scan registry.access.redhat.com/rhel7:7.2 docker run -t --rm -v /etc/localtime:/etc/localtime -v /run/atomic/2017-11-01-14-49-36-614281:/scanin -v /var/lib/atomic/openscap/2017-11-01-14-49-36-614281:/scanout:rw,Z -v /etc/oscapd:/etc/oscapd:ro registry.access.redhat.com/rhel7/openscap oscapd-evaluate scan --no-standard-compliance --targets chroots-in-dir:///scanin --output /scanout registry.access.redhat.com/rhel7:7.2 (98a88a8b722a718) The following issues were found: RHSA-2017:2832: nss security update (Important) Severity: Important RHSA URL: https://access.redhat.com/errata/RHSA-2017:2832 RHSA ID: RHSA-2017:2832-01 Associated CVEs: CVE ID: CVE-2017-7805 CVE URL: https://access.redhat.com/security/cve/CVE-2017-7805"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/security_guide/scanning-container-and-container-images-for-vulnerabilities_scanning-the-system-for-configuration-compliance-and-vulnerabilities |
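To scan every local image with oscap-docker rather than naming each one individually, a simple shell loop is enough. This is a sketch only: it must run as root, and the report file names are arbitrary. The short image IDs printed by docker images -q are accepted, as in the single-image example above.
Example
for id in $(docker images -q | sort -u); do oscap-docker image-cve "$id" --report "vulnerability-$id.html"; done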
Chapter 4. Updating Capsule Server | Chapter 4. Updating Capsule Server Update Capsule Servers to the patch version. Procedure Synchronize the satellite-capsule-6.16-for-rhel-8-x86_64-rpms repository in the Satellite Server. Publish and promote a new version of the content view with which the Capsule is registered. Ensure that the Satellite Maintenance repository is enabled: On Red Hat Enterprise Linux 9: On Red Hat Enterprise Linux 8: Use the health check option to determine if the system is ready for update: Review the results and address any highlighted error conditions before performing the update. Because of the lengthy update time, use a utility such as tmux to suspend and reattach a communication session. You can then check the update progress without staying connected to the command shell continuously. If you lose connection to the command shell where the update command is running, you can see the logged messages in the /var/log/foreman-installer/capsule.log file to check if the process completed successfully. Perform the update: Determine if the system needs a reboot: If the command told you to reboot, then reboot the system: | [
"subscription-manager repos --enable satellite-maintenance-6.16-for-rhel-9-x86_64-rpms",
"subscription-manager repos --enable satellite-maintenance-6.16-for-rhel-8-x86_64-rpms",
"satellite-maintain update check",
"satellite-maintain update run",
"dnf needs-restarting --reboothint",
"reboot"
] | https://docs.redhat.com/en/documentation/red_hat_satellite/6.16/html/updating_red_hat_satellite/updating-smart-proxy_updating |
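The update procedure above recommends a utility such as tmux so that a dropped SSH session does not interrupt the run. A minimal pattern, assuming tmux is installed on the Capsule Server: start a named session, run the update inside it, detach with Ctrl+b d if needed, and reattach later.
Example
tmux new-session -s capsule-update
satellite-maintain update run
tmux attach-session -t capsule-update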
Chapter 56. BrokerCapacityOverride schema reference | Chapter 56. BrokerCapacityOverride schema reference Used in: BrokerCapacity Property Description brokers (integer array) List of Kafka brokers (broker identifiers). cpu (string) Broker capacity for CPU resource in cores or millicores. For example, 1, 1.500, 1500m. For more information on valid CPU resource units see https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/#meaning-of-cpu . inboundNetwork (string) Broker capacity for inbound network throughput in bytes per second. Use an integer value with standard OpenShift byte units (K, M, G) or their bibyte (power of two) equivalents (Ki, Mi, Gi) per second. For example, 10000KiB/s. outboundNetwork (string) Broker capacity for outbound network throughput in bytes per second. Use an integer value with standard OpenShift byte units (K, M, G) or their bibyte (power of two) equivalents (Ki, Mi, Gi) per second. For example, 10000KiB/s. | null | https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.5/html/amq_streams_api_reference/type-BrokerCapacityOverride-reference
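For orientation, these override fields sit under the Cruise Control broker capacity configuration of a Kafka custom resource. The snippet below is an illustrative sketch only: the field placement follows the schema above, while the cluster name and capacity values are examples rather than recommendations, and required sections such as kafka and zookeeper are omitted.
Example
apiVersion: kafka.strimzi.io/v1beta2
kind: Kafka
metadata:
  name: my-cluster
spec:
  # kafka, zookeeper, and other required sections omitted for brevity
  cruiseControl:
    brokerCapacity:
      cpu: "1"
      inboundNetwork: 10000KiB/s
      outboundNetwork: 10000KiB/s
      overrides:
        - brokers: [0, 1]
          cpu: "2"
          inboundNetwork: 20000KiB/s
          outboundNetwork: 20000KiB/s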
Chapter 4. Deploy standalone Multicloud Object Gateway | Chapter 4. Deploy standalone Multicloud Object Gateway Deploying only the Multicloud Object Gateway component with OpenShift Data Foundation provides the flexibility in deployment and helps to reduce the resource consumption. Use this section to deploy only the standalone Multicloud Object Gateway component, which involves the following steps: Installing Red Hat OpenShift Data Foundation Operator Creating standalone Multicloud Object Gateway Important The Multicloud Object Gateway only has a single copy of the database (NooBaa DB). This means if NooBaa DB PVC gets corrupted and we are unable to recover it, can result in total data loss of applicative data residing on the Multicloud Object Gateway. Because of this, Red Hat recommends taking a backup of NooBaa DB PVC regularly. If NooBaa DB fails and cannot be recovered, then you can revert to the latest backed-up version. For instructions on backing up your NooBaa DB, follow the steps in this knowledgabase article . 4.1. Installing Red Hat OpenShift Data Foundation Operator You can install Red Hat OpenShift Data Foundation Operator using the Red Hat OpenShift Container Platform Operator Hub. Prerequisites Access to an OpenShift Container Platform cluster using an account with cluster-admin and operator installation permissions. You must have at least three worker or infrastructure nodes in the Red Hat OpenShift Container Platform cluster. For additional resource requirements, see the Planning your deployment guide. Important When you need to override the cluster-wide default node selector for OpenShift Data Foundation, you can use the following command to specify a blank node selector for the openshift-storage namespace (create openshift-storage namespace in this case): Taint a node as infra to ensure only Red Hat OpenShift Data Foundation resources are scheduled on that node. This helps you save on subscription costs. For more information, see the How to use dedicated worker nodes for Red Hat OpenShift Data Foundation section in the Managing and Allocating Storage Resources guide. Procedure Log in to the OpenShift Web Console. Click Operators OperatorHub . Scroll or type OpenShift Data Foundation into the Filter by keyword box to find the OpenShift Data Foundation Operator. Click Install . Set the following options on the Install Operator page: Update Channel as stable-4.16 . Installation Mode as A specific namespace on the cluster . Installed Namespace as Operator recommended namespace openshift-storage . If Namespace openshift-storage does not exist, it is created during the operator installation. Select Approval Strategy as Automatic or Manual . If you select Automatic updates, then the Operator Lifecycle Manager (OLM) automatically upgrades the running instance of your Operator without any intervention. If you select Manual updates, then the OLM creates an update request. As a cluster administrator, you must then manually approve that update request to update the Operator to a newer version. Ensure that the Enable option is selected for the Console plugin . Click Install . Verification steps After the operator is successfully installed, a pop-up with a message, Web console update is available appears on the user interface. Click Refresh web console from this pop-up for the console changes to reflect. In the Web Console: Navigate to Installed Operators and verify that the OpenShift Data Foundation Operator shows a green tick indicating successful installation. 
Navigate to Storage and verify if the Data Foundation dashboard is available. 4.2. Creating a standalone Multicloud Object Gateway You can create only the standalone Multicloud Object Gateway component while deploying OpenShift Data Foundation. Prerequisites Ensure that the OpenShift Data Foundation Operator is installed. Procedure In the OpenShift Web Console, click Operators Installed Operators to view all the installed operators. Ensure that the Project selected is openshift-storage . Click OpenShift Data Foundation operator and then click Create StorageSystem . In the Backing storage page, select the following: Select Multicloud Object Gateway for Deployment type . Select the Use an existing StorageClass option. Click . Optional: Select the Connect to an external key management service checkbox. This is optional for cluster-wide encryption. From the Key Management Service Provider drop-down list, either select Vault or Thales CipherTrust Manager (using KMIP) . If you selected Vault , go to the step. If you selected Thales CipherTrust Manager (using KMIP) , go to step iii. Select an Authentication Method . Using Token authentication method Enter a unique Connection Name , host Address of the Vault server ('https://<hostname or ip>'), Port number and Token . Expand Advanced Settings to enter additional settings and certificate details based on your Vault configuration: Enter the Key Value secret path in Backend Path that is dedicated and unique to OpenShift Data Foundation. Optional: Enter TLS Server Name and Vault Enterprise Namespace . Upload the respective PEM encoded certificate file to provide the CA Certificate , Client Certificate and Client Private Key . Click Save and skip to step iv. Using Kubernetes authentication method Enter a unique Vault Connection Name , host Address of the Vault server ('https://<hostname or ip>'), Port number and Role name. Expand Advanced Settings to enter additional settings and certificate details based on your Vault configuration: Enter the Key Value secret path in Backend Path that is dedicated and unique to OpenShift Data Foundation. Optional: Enter TLS Server Name and Authentication Path if applicable. Upload the respective PEM encoded certificate file to provide the CA Certificate , Client Certificate and Client Private Key . Click Save and skip to step iv. To use Thales CipherTrust Manager (using KMIP) as the KMS provider, follow the steps below: Enter a unique Connection Name for the Key Management service within the project. In the Address and Port sections, enter the IP of Thales CipherTrust Manager and the port where the KMIP interface is enabled. For example: Address : 123.34.3.2 Port : 5696 Upload the Client Certificate , CA certificate , and Client Private Key . If StorageClass encryption is enabled, enter the Unique Identifier to be used for encryption and decryption generated above. The TLS Server field is optional and used when there is no DNS entry for the KMIP endpoint. For example, kmip_all_<port>.ciphertrustmanager.local . Select a Network . Click . In the Review and create page, review the configuration details: To modify any configuration settings, click Back . Click Create StorageSystem . Verification steps Verifying that the OpenShift Data Foundation cluster is healthy In the OpenShift Web Console, click Storage Data Foundation . In the Status card of the Overview tab, click Storage System and then click the storage system link from the pop up that appears. 
In the Status card of the Object tab, verify that both Object Service and Data Resiliency have a green tick. In the Details card, verify that the MCG information is displayed. Verifying the state of the pods Click Workloads Pods from the OpenShift Web Console. Select openshift-storage from the Project drop-down list and verify that the following pods are in Running state. Note If the Show default projects option is disabled, use the toggle button to list all the default projects. Component Corresponding pods OpenShift Data Foundation Operator ocs-operator-* (1 pod on any storage node) ocs-metrics-exporter-* (1 pod on any storage node) odf-operator-controller-manager-* (1 pod on any storage node) odf-console-* (1 pod on any storage node) csi-addons-controller-manager-* (1 pod on any storage node) Rook-ceph Operator rook-ceph-operator-* (1 pod on any storage node) Multicloud Object Gateway noobaa-operator-* (1 pod on any storage node) noobaa-core-* (1 pod on any storage node) noobaa-db-pg-* (1 pod on any storage node) noobaa-endpoint-* (1 pod on any storage node) | [
"oc annotate namespace openshift-storage openshift.io/node-selector="
] | https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.16/html/deploying_openshift_data_foundation_using_microsoft_azure/deploy-standalone-multicloud-object-gateway |
Chapter 3. Reviewing the reports | Chapter 3. Reviewing the reports Use a browser to open the index.html file located in the report output directory. This opens a landing page that lists the applications that were processed. Each row contains a high-level overview of the story points, number of incidents, and technologies encountered in that application. Figure 3.1. Application list Note The incidents and estimated story points change as new rules are added to MTA. The values here may not match what you see when you test this application. The following table lists all of the reports and pages that can be accessed from this main MTA landing page. Click the name of the application, jee-example-app-1.0.0.ear , to view the application report. Page How to Access Application Click the name of the application. Technologies report Click the Technologies link at the top of the page. Archives shared by multiple applications Click the Archives shared by multiple applications link. Note that this link is only available when there are shared archives across multiple applications. Rule providers execution overview Click the Rule providers execution overview link at the bottom of the page. Note that if an application shares archives with other analyzed applications, you will see a breakdown of how many story points are from shared archives and how many are unique to this application. Figure 3.2. Shared archives Information about the archives that are shared among applications can be found in the Archives Shared by Multiple Applications reports. 3.1. Application report 3.1.1. Dashboard Access this report from the report landing page by clicking on the application name in the Application List . The dashboard gives an overview of the entire application migration effort. It summarizes: The incidents and story points by category The incidents and story points by level of effort of the suggested changes The incidents by package Figure 3.3. Dashboard The top navigation bar lists the various reports that contain additional details about the migration of this application. Note that only those reports that are applicable to the current application will be available. Report Description Issues Provides a concise summary of all issues that require attention. Insights Provides information about the technologies used in the application and their usage in the code. However, these Insights do not impact the migration. Application details Provides a detailed overview of all resources found within the application that may need attention during the migration. Technologies Displays all embedded libraries grouped by functionality, allowing you to quickly view the technologies used in each application. Dependencies Displays all Java-packaged dependencies found within the application. Unparsable Shows all files that MTA could not parse in the expected format. For instance, a file with a .xml or .wsdl suffix is assumed to be an XML file. If the XML parser fails, the issue is reported here and also where the individual file is listed. Remote services Displays all remote services references that were found within the application. EJBs Contains a list of EJBs found within the application. JBPM Contains all of the JBPM-related resources that were discovered during analysis. JPA Contains details on all JPA-related resources that were found in the application. Hibernate Contains details on all Hibernate-related resources that were found in the application. 
Server resources Displays all server resources (for example, JNDI resources) in the input application. Spring Beans Contains a list of Spring Beans found during the analysis. Hard-coded IP addresses Provides a list of all hard-coded IP addresses that were found in the application. Ignored files Lists the files found in the application that, based on certain rules and MTA configuration, were not processed. See the --userIgnorePath option for more information. About Describes the current version of MTA and provides helpful links for further assistance. 3.1.2. Issues report Access this report from the dashboard by clicking the Issues link. This report includes details about every issue that was raised by the selected migration paths. The following information is provided for each issue encountered: A title to summarize the issue. The total number of incidents, or times the issue was encountered. The rule story points to resolve a single instance of the issue. The estimated level of effort to resolve the issue. The total story points to resolve every instance encountered. This is calculated by multiplying the number of incidents found by the story points per incident. Figure 3.4. Issues report Each reported issue may be expanded, by clicking on the title, to obtain additional details. The following information is provided. A list of files where the incidents occurred, along with the number of incidents within each file. If the file is a Java source file, then clicking the filename will direct you to the corresponding Source report. A detailed description of the issue. This description outlines the problem, provides any known solutions, and references supporting documentation regarding either the issue or resolution. A direct link, entitled Show Rule , to the rule that generated the issue. Figure 3.5. Expanded issue Issues are sorted into four categories by default. Information on these categories is available at ask Category. 3.1.3. Insights Important Insights is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . Previously, a violation generated by a rule with zero effort was listed as an issue in the static report. This is now listed as an insight instead. Issues are generated by general rules, whereas string tags are generated by tagging rules. String tags indicate the presence of a technology but do not show the code location. With the introduction of Insights, you can see the technology used in the application along with its usage in the code. For example, a rule searching for deprecated API usage in the code that does not impact the current migration but can be tracked and fixed when needed in the future. Unlike issues, insights do not need to be fixed for a successful migration. They are generated by any rule that doesn't have a positive effort value and category assigned. They might have a message and tag. Note Insights are generated automatically if applicable or present. Currently, MTA supports generating Insights when application anaylsis is done using CLI. 
You can view Insights under the Insights tab in the static report. Example: Insights generated by a tagging rule with undefined effort - customVariables: [] description: Embedded library - Apache Wicket labels: - konveyor.io/include=always links: [] ruleID: mvc-01000 tag: - Apache Wicket - Embedded library - Apache Wicket when: builtin.file: pattern: .*wicket.*\.jar Example: Insights generated by a non-tagging rule with zero effort - category: potential customVariables: [] description: RESTful Web Services @Context annotation has been deprecated effort: 0 message: Future versions of this API will no longer support `@Context` and related types such as `ContextResolver`. ruleID: jakarta-ws-rs-00001 when: java.referenced: location: ANNOTATION pattern: jakarta.ws.rs.core.Context 3.1.4. Application details report Access this report from the dashboard by clicking the Application Details link. The report lists the story points, the Java incidents by package, and a count of the occurrences of the technologies found in the application. is a display of application messages generated during the migration process. Finally, there is a breakdown of this information for each archive analyzed during the process. Figure 3.6. Application Details report Expand the jee-example-app-1.0.0.ear/jee-example-services.jar to review the story points, Java incidents by package, and a count of the occurrences of the technologies found in this archive. This summary begins with a total of the story points assigned to its migration, followed by a table detailing the changes required for each file in the archive. The report contains the following columns. Column Name Description Name The name of the file being analyzed. Technology The type of file being analyzed, for example, Decompiled Java File or Properties . Issues Warnings about areas of code that need review or changes. Story Points Level of effort required to migrate the file. Note that if an archive is duplicated several times in an application, it will be listed just once in the report and will be tagged with [Included multiple times] . Figure 3.7. Duplicate archive in an application The story points for archives that are duplicated within an application will be counted only once in the total story point count for that application. 3.1.5. Technologies report Access this report from the dashboard by clicking the Technologies link. The report lists the occurrences of technologies, grouped by function, in the analyzed application. It is an overview of the technologies found in the application, and is designed to assist users in quickly understanding each application's purpose. The image below shows the technologies used in the jee-example-app . Figure 3.8. Technologies in an application 3.1.6. Source report The Source report displays the migration issues in the context of the source file in which they were discovered. Figure 3.9. Source report 3.2. Technologies report Access this report from the report landing page by clicking the Technologies link. This report provides an aggregate listing of the technologies used, grouped by function, for the analyzed applications. It shows how the technologies are distributed, and is typically reviewed after analyzing a large number of applications to group the applications and identify patterns. It also shows the size, number of libraries, and story point totals of each application. Clicking any of the headers, such as Markup , sorts the results in descending order. 
Selecting the same header again will resort the results in ascending order. The currently selected header is identified in bold, to a directional arrow, indicating the direction of the sort. Figure 3.10. Technologies used across multiple applications | [
"- customVariables: [] description: Embedded library - Apache Wicket labels: - konveyor.io/include=always links: [] ruleID: mvc-01000 tag: - Apache Wicket - Embedded library - Apache Wicket when: builtin.file: pattern: .*wicket.*\\.jar",
"- category: potential customVariables: [] description: RESTful Web Services @Context annotation has been deprecated effort: 0 message: Future versions of this API will no longer support `@Context` and related types such as `ContextResolver`. ruleID: jakarta-ws-rs-00001 when: java.referenced: location: ANNOTATION pattern: jakarta.ws.rs.core.Context"
] | https://docs.redhat.com/en/documentation/migration_toolkit_for_applications/7.2/html/cli_guide/review-reports_cli-guide |
Providing feedback on Red Hat build of OpenJDK documentation | Providing feedback on Red Hat build of OpenJDK documentation To report an error or to improve our documentation, log in to your Red Hat Jira account and submit an issue. If you do not have a Red Hat Jira account, then you will be prompted to create an account. Procedure Click the following link to create a ticket . Enter a brief description of the issue in the Summary . Provide a detailed description of the issue or enhancement in the Description . Include a URL to where the issue occurs in the documentation. Clicking Submit creates and routes the issue to the appropriate documentation team. | null | https://docs.redhat.com/en/documentation/red_hat_build_of_openjdk/8/html/using_source-to-image_for_openshift_with_red_hat_build_of_openjdk_8/providing-direct-documentation-feedback_openjdk |
Preface | Preface Making open source more inclusive Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright's message . | null | https://docs.redhat.com/en/documentation/red_hat_build_of_apache_camel_k/1.10.5/html/kamelets_reference/pr01 |
Chapter 149. XJ | Chapter 149. XJ Since Camel 3.0 Only producer is supported The XJ component allows you to convert XML and JSON documents directly forth and back without the need of intermediate java objects. You can even specify an XSLT stylesheet to convert directly to the target JSON / XML (domain) model. 149.1. Dependencies When using xj with Red Hat build of Camel Spring Boot make sure to use the following Maven dependency to have support for auto configuration: <dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-xj-starter</artifactId> </dependency> 149.2. URI format Note The XJ component extends the XSLT component and therefore it supports all options provided by the XSLT component as well. At least look at the XSLT component documentation how to configure the xsl template. The transformDirection option is mandatory and must be either XML2JSON or JSON2XML. The templateName parameter allows to use identify transforma by specifying the name identity . 149.3. Configuring Options Camel components are configured on two separate levels: component level endpoint level 149.3.1. Configuring component options The component level is the highest level which holds general and common configurations that are inherited by the endpoints. For example a component may have security settings, credentials for authentication, urls for network connection and so forth. Some components only have a few options, and others may have many. Because components typically have pre configured defaults that are commonly used, then you may often only need to configure a few options on a component; or none at all. Configuring components can be done with the Component DSL , in a configuration file (application.properties|yaml), or directly with Java code. 149.3.2. Configuring endpoint options Where you find yourself configuring the most is on endpoints, as endpoints often have many options, which allows you to configure what you need the endpoint to do. The options are also categorized into whether the endpoint is used as consumer (from) or as a producer (to), or used for both. Configuring endpoints is most often done directly in the endpoint URI as path and query parameters. You can also use the Endpoint DSL and DataFormat DSL as a type safe way of configuring endpoints and data formats in Java. A good practice when configuring options is to use Property Placeholders , which allows to not hardcode urls, port numbers, sensitive information, and other settings. In other words placeholders allows to externalize the configuration from your code, and gives more flexibility and reuse. The following two sections lists all the options, firstly for the component followed by the endpoint. 149.4. Component Options The XJ component supports 11 options, which are listed below. Name Description Default Type contentCache (producer) Cache for the resource content (the stylesheet file) when it is loaded. If set to false Camel will reload the stylesheet file on each message processing. This is good for development. A cached stylesheet can be forced to reload at runtime via JMX using the clearCachedStylesheet operation. true boolean lazyStartProducer (producer) Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. 
By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false boolean autowiredEnabled (advanced) Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true boolean saxonConfiguration (advanced) To use a custom Saxon configuration. Configuration saxonConfigurationProperties (advanced) To set custom Saxon configuration properties. Map saxonExtensionFunctions (advanced) Allows you to use a custom net.sf.saxon.lib.ExtensionFunctionDefinition. You would need to add camel-saxon to the classpath. The function is looked up in the registry, where you can use commas to separate multiple values to lookup. String secureProcessing (advanced) Feature for XML secure processing (see javax.xml.XMLConstants). This is enabled by default. However, when using Saxon Professional you may need to turn this off to allow Saxon to be able to use Java extension functions. true boolean transformerFactoryClass (advanced) To use a custom XSLT transformer factory, specified as a FQN class name. String transformerFactoryConfigurationStrategy (advanced) A configuration strategy to apply on freshly created instances of TransformerFactory. TransformerFactoryConfigurationStrategy uriResolver (advanced) To use a custom UriResolver. Should not be used together with the option 'uriResolverFactory'. URIResolver uriResolverFactory (advanced) To use a custom UriResolver which depends on a dynamic endpoint resource URI. Should not be used together with the option 'uriResolver'. XsltUriResolverFactory 149.5. Endpoint Options The XJ endpoint is configured using URI syntax: with the following path and query parameters: 149.5.1. Path Parameters (1 parameters) Name Description Default Type resourceUri (producer) Required Path to the template. The following is supported by the default URIResolver. You can prefix with: classpath, file, http, ref, or bean. classpath, file and http loads the resource using these protocols (classpath is default). ref will lookup the resource in the registry. bean will call a method on a bean to be used as the resource. For bean you can specify the method name after dot, eg bean:myBean.myMethod. String 149.5.2. Query Parameters (19 parameters) Name Description Default Type allowStAX (producer) Whether to allow using StAX as the javax.xml.transform.Source. You can enable this if the XSLT library supports StAX such as the Saxon library (camel-saxon). The Xalan library (default in JVM) does not support StAXSource. true boolean contentCache (producer) Cache for the resource content (the stylesheet file) when it is loaded. If set to false Camel will reload the stylesheet file on each message processing. This is good for development. A cached stylesheet can be forced to reload at runtime via JMX using the clearCachedStylesheet operation. true boolean deleteOutputFile (producer) If you have output=file then this option dictates whether or not the output file should be deleted when the Exchange is done processing. 
For example suppose the output file is a temporary file, then it can be a good idea to delete it after use. false boolean failOnNullBody (producer) Whether or not to throw an exception if the input body is null. true boolean output (producer) Option to specify which output type to use. Possible values are: string, bytes, DOM, file. The first three options are all in memory based, where as file is streamed directly to a java.io.File. For file you must specify the filename in the IN header with the key XsltConstants.XSLT_FILE_NAME which is also CamelXsltFileName. Also any paths leading to the filename must be created beforehand, otherwise an exception is thrown at runtime. Enum values: string bytes DOM file string XsltOutput transformDirection (producer) Required Transform direction. Either XML2JSON or JSON2XML. Enum values: XML2JSON JSON2XML TransformDirection transformerCacheSize (producer) The number of javax.xml.transform.Transformer object that are cached for reuse to avoid calls to Template.newTransformer(). 0 int lazyStartProducer (producer (advanced)) Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false boolean entityResolver (advanced) To use a custom org.xml.sax.EntityResolver with javax.xml.transform.sax.SAXSource. EntityResolver errorListener (advanced) Allows to configure to use a custom javax.xml.transform.ErrorListener. Beware when doing this then the default error listener which captures any errors or fatal errors and store information on the Exchange as properties is not in use. So only use this option for special use-cases. ErrorListener resultHandlerFactory (advanced) Allows you to use a custom org.apache.camel.builder.xml.ResultHandlerFactory which is capable of using custom org.apache.camel.builder.xml.ResultHandler types. ResultHandlerFactory saxonConfiguration (advanced) To use a custom Saxon configuration. Configuration saxonExtensionFunctions (advanced) Allows you to use a custom net.sf.saxon.lib.ExtensionFunctionDefinition. You would need to add camel-saxon to the classpath. The function is looked up in the registry, where you can comma to separate multiple values to lookup. String secureProcessing (advanced) Feature for XML secure processing (see javax.xml.XMLConstants). This is enabled by default. However, when using Saxon Professional you may need to turn this off to allow Saxon to be able to use Java extension functions. true boolean transformerFactory (advanced) To use a custom XSLT transformer factory. TransformerFactory transformerFactoryClass (advanced) To use a custom XSLT transformer factory, specified as a FQN class name. String transformerFactoryConfigurationStrategy (advanced) A configuration strategy to apply on freshly created instances of TransformerFactory. TransformerFactoryConfigurationStrategy uriResolver (advanced) To use a custom javax.xml.transform.URIResolver. URIResolver xsltMessageLogger (advanced) A consumer to messages generated during XSLT transformations. XsltMessageLogger 149.6. 
Message Headers The XJ component supports 1 message header(s), which is/are listed below: Name Description Default Type CamelXsltFileName (producer) Constant: XSLT_FILE_NAME The XSLT file name. String 149.7. Using XJ endpoints 149.7.1. Converting JSON to XML The following route does an "identity" transform of the message because no xslt stylesheet is given. In the context of xml to xml transformations, "Identity" transform means that the output document is just a copy of the input document. In case of XJ it means it transforms the json document to an equivalent xml representation. from("direct:start"). to("xj:identity?transformDirection=JSON2XML"); Sample: The input: { "firstname": "camel", "lastname": "apache", "personalnumber": 42, "active": true, "ranking": 3.1415926, "roles": [ "a", { "x": null } ], "state": { "needsWater": true } } will output <?xml version="1.0" encoding="UTF-8"?> <object xmlns:xj="http://camel.apache.org/component/xj" xj:type="object"> <object xj:name="firstname" xj:type="string">camel</object> <object xj:name="lastname" xj:type="string">apache</object> <object xj:name="personalnumber" xj:type="int">42</object> <object xj:name="active" xj:type="boolean">true</object> <object xj:name="ranking" xj:type="float">3.1415926</object> <object xj:name="roles" xj:type="array"> <object xj:type="string">a</object> <object xj:type="object"> <object xj:name="x" xj:type="null">null</object> </object> </object> <object xj:name="state" xj:type="object"> <object xj:name="needsWater" xj:type="boolean">true</object> </object> </object> As can be seen in the output above, XJ writes some metadata in the resulting xml that can be used in further processing: XJ metadata nodes are always in the http://camel.apache.org/component/xj namespace. JSON key names are placed in the xj:name attribute. The parsed JSON type can be found in the xj:type attribute. The above example already contains all possible types. Generated XML elements are always named "object". Now we can apply a stylesheet, for example: <?xml version="1.0" encoding="UTF-8" ?> <xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform" xmlns:xj="http://camel.apache.org/component/xj" exclude-result-prefixes="xj"> <xsl:output omit-xml-declaration="no" encoding="UTF-8" method="xml" indent="yes"/> <xsl:template match="/"> <person> <xsl:apply-templates select="//object"/> </person> </xsl:template> <xsl:template match="object[@xj:type != 'object' and @xj:type != 'array' and string-length(@xj:name) > 0]"> <xsl:variable name="name" select="@xj:name"/> <xsl:element name="{USDname}"> <xsl:value-of select="text()"/> </xsl:element> </xsl:template> <xsl:template match="@*|node()"/> </xsl:stylesheet> to the above sample by specifying the template on the endpoint: from("direct:start"). to("xj:com/example/json2xml.xsl?transformDirection=JSON2XML"); and get the following output: <?xml version="1.0" encoding="UTF-8"?> <person> <firstname>camel</firstname> <lastname>apache</lastname> <personalnumber>42</personalnumber> <active>true</active> <ranking>3.1415926</ranking> <x>null</x> <needsWater>true</needsWater> </person> 149.7.2. Converting XML to JSON Based on the explanations above an "identity" transform will be performed when no stylesheet is given: from("direct:start"). 
to("xj:identity?transformDirection=XML2JSON"); Given the sample input <?xml version="1.0" encoding="UTF-8"?> <person> <firstname>camel</firstname> <lastname>apache</lastname> <personalnumber>42</personalnumber> <active>true</active> <ranking>3.1415926</ranking> <roles> <entry>a</entry> <entry> <x>null</x> </entry> </roles> <state> <needsWater>true</needsWater> </state> </person> will result in { "firstname": "camel", "lastname": "apache", "personalnumber": "42", "active": "true", "ranking": "3.1415926", "roles": [ "a", { "x": "null" } ], "state": { "needsWater": "true" } } You may have noted that the input xml and output json is very similar to the examples above when converting from json to xml altough nothing special is done here. We only transformed an arbitrary XML document to json. XJ uses the following rules by default: The XML root element can be named somehow, it will always end in a json root object declaration '\{}' The json key name is the name of the xml element If there is an name clash as in "<roles>" above where two "<entry>" elements exists a json array will be generated. XML elements with text-only-child-nodes will result in the usual key/string-value pair. Mixed content elements results in key/child-object pair as seen in "<state>" above. Now we can apply again a stylesheet, for example: <?xml version="1.0" encoding="UTF-8" ?> <xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform" xmlns:xj="http://camel.apache.org/component/xj" exclude-result-prefixes="xj"> <xsl:output omit-xml-declaration="no" encoding="UTF-8" method="xml" indent="yes"/> <xsl:template match="/"> <xsl:apply-templates/> </xsl:template> <xsl:template match="personalnumber"> <xsl:element name="{local-name()}"> <xsl:attribute name="xj:type"> <xsl:value-of select="'int'"/> </xsl:attribute> <xsl:apply-templates/> </xsl:element> </xsl:template> <xsl:template match="active|needsWater"> <xsl:element name="{local-name()}"> <xsl:attribute name="xj:type"> <xsl:value-of select="'boolean'"/> </xsl:attribute> <xsl:apply-templates/> </xsl:element> </xsl:template> <xsl:template match="ranking"> <xsl:element name="{local-name()}"> <xsl:attribute name="xj:type"> <xsl:value-of select="'float'"/> </xsl:attribute> <xsl:apply-templates/> </xsl:element> </xsl:template> <xsl:template match="roles"> <xsl:element name="{local-name()}"> <xsl:attribute name="xj:type"> <xsl:value-of select="'array'"/> </xsl:attribute> <xsl:apply-templates/> </xsl:element> </xsl:template> <xsl:template match="*[normalize-space(text()) = 'null']"> <xsl:element name="{local-name()}"> <xsl:attribute name="xj:type"> <xsl:value-of select="'null'"/> </xsl:attribute> <xsl:apply-templates/> </xsl:element> </xsl:template> <xsl:template match="@*|node()"> <xsl:copy> <xsl:apply-templates select="@*|node()"/> </xsl:copy> </xsl:template> </xsl:stylesheet> to the sample above by specifying the template on the endpoint: from("direct:start"). to("xj:com/example/xml2json.xsl?transformDirection=XML2JSON"); and get the following output: { "firstname": "camel", "lastname": "apache", "personalnumber": 42, "active": true, "ranking": 3.1415926, "roles": [ "a", { "x": null } ], "state": { "needsWater": true } } Note, this transformation resulted in exactly the same json document as we used as input to the json2xml convertion. 
The following XML document is that what is passed to XJ after xsl transformation: <?xml version="1.0" encoding="UTF-8"?> <person> <firstname>camel</firstname> <lastname>apache</lastname> <personalnumber xmlns:xj="http://camel.apache.org/component/xj" xj:type="int">42</personalnumber> <active xmlns:xj="http://camel.apache.org/component/xj" xj:type="boolean">true</active> <ranking xmlns:xj="http://camel.apache.org/component/xj" xj:type="float">3.1415926</ranking> <roles xmlns:xj="http://camel.apache.org/component/xj" xj:type="array"> <entry>a</entry> <entry> <x xj:type="null">null</x> </entry> </roles> <state> <needsWater xmlns:xj="http://camel.apache.org/component/xj" xj:type="boolean">true</needsWater> </state> </person> In the stylesheet we just provided the minimal required type hints to get the same result. The supported type hints are exactly the same as XJ writes to a XML document when converting from json to xml. In the end that means that we can feed back in the result document from the json to xml transformation sample above: <?xml version="1.0" encoding="UTF-8"?> <object xmlns:xj="http://camel.apache.org/component/xj" xj:type="object"> <object xj:name="firstname" xj:type="string">camel</object> <object xj:name="lastname" xj:type="string">apache</object> <object xj:name="personalnumber" xj:type="int">42</object> <object xj:name="active" xj:type="boolean">true</object> <object xj:name="ranking" xj:type="float">3.1415926</object> <object xj:name="roles" xj:type="array"> <object xj:type="string">a</object> <object xj:type="object"> <object xj:name="x" xj:type="null">null</object> </object> </object> <object xj:name="state" xj:type="object"> <object xj:name="needsWater" xj:type="boolean">true</object> </object> </object> and get the same output again: { "firstname": "camel", "lastname": "apache", "personalnumber": 42, "active": true, "ranking": 3.1415926, "roles": [ "a", { "x": null } ], "state": { "needsWater": true } } As seen in the example above: * xj:type lets you specify exactly the desired output type * xj:name lets you overrule the json key name. This is required when you want to generate key names which contains chars that aren't allowed in XML element names. 149.7.2.1. Available type hints @xj:type= Description object Generate a json object array Generate a json array string Generate a json string int Generate a json number without fractional part float Generate a json number with fractional part boolean Generate a json boolean null Generate an empty value, using the word null 149.8. Spring Boot Auto-Configuration The component supports 12 options, which are listed below. Name Description Default Type camel.component.xj.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.xj.content-cache Cache for the resource content (the stylesheet file) when it is loaded. If set to false Camel will reload the stylesheet file on each message processing. This is good for development. A cached stylesheet can be forced to reload at runtime via JMX using the clearCachedStylesheet operation. true Boolean camel.component.xj.enabled Whether to enable auto configuration of the xj component. This is enabled by default. 
Boolean camel.component.xj.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.xj.saxon-configuration To use a custom Saxon configuration. The option is a net.sf.saxon.Configuration type. Configuration camel.component.xj.saxon-configuration-properties To set custom Saxon configuration properties. Map camel.component.xj.saxon-extension-functions Allows you to use a custom net.sf.saxon.lib.ExtensionFunctionDefinition. You would need to add camel-saxon to the classpath. The function is looked up in the registry, where you can use commas to separate multiple values to lookup. String camel.component.xj.secure-processing Feature for XML secure processing (see javax.xml.XMLConstants). This is enabled by default. However, when using Saxon Professional you may need to turn this off to allow Saxon to be able to use Java extension functions. true Boolean camel.component.xj.transformer-factory-class To use a custom XSLT transformer factory, specified as a FQN class name. String camel.component.xj.transformer-factory-configuration-strategy A configuration strategy to apply on freshly created instances of TransformerFactory. The option is a org.apache.camel.component.xslt.TransformerFactoryConfigurationStrategy type. TransformerFactoryConfigurationStrategy camel.component.xj.uri-resolver To use a custom UriResolver. Should not be used together with the option 'uriResolverFactory'. The option is a javax.xml.transform.URIResolver type. URIResolver camel.component.xj.uri-resolver-factory To use a custom UriResolver which depends on a dynamic endpoint resource URI. Should not be used together with the option 'uriResolver'. The option is a org.apache.camel.component.xslt.XsltUriResolverFactory type. XsltUriResolverFactory | [
"<dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-xj-starter</artifactId> </dependency>",
"xj:templateName?transformDirection=XML2JSON|JSON2XML[&options]",
"xj:resourceUri",
"from(\"direct:start\"). to(\"xj:identity?transformDirection=JSON2XML\");",
"{ \"firstname\": \"camel\", \"lastname\": \"apache\", \"personalnumber\": 42, \"active\": true, \"ranking\": 3.1415926, \"roles\": [ \"a\", { \"x\": null } ], \"state\": { \"needsWater\": true } }",
"<?xml version=\"1.0\" encoding=\"UTF-8\"?> <object xmlns:xj=\"http://camel.apache.org/component/xj\" xj:type=\"object\"> <object xj:name=\"firstname\" xj:type=\"string\">camel</object> <object xj:name=\"lastname\" xj:type=\"string\">apache</object> <object xj:name=\"personalnumber\" xj:type=\"int\">42</object> <object xj:name=\"active\" xj:type=\"boolean\">true</object> <object xj:name=\"ranking\" xj:type=\"float\">3.1415926</object> <object xj:name=\"roles\" xj:type=\"array\"> <object xj:type=\"string\">a</object> <object xj:type=\"object\"> <object xj:name=\"x\" xj:type=\"null\">null</object> </object> </object> <object xj:name=\"state\" xj:type=\"object\"> <object xj:name=\"needsWater\" xj:type=\"boolean\">true</object> </object> </object>",
"<?xml version=\"1.0\" encoding=\"UTF-8\" ?> <xsl:stylesheet version=\"1.0\" xmlns:xsl=\"http://www.w3.org/1999/XSL/Transform\" xmlns:xj=\"http://camel.apache.org/component/xj\" exclude-result-prefixes=\"xj\"> <xsl:output omit-xml-declaration=\"no\" encoding=\"UTF-8\" method=\"xml\" indent=\"yes\"/> <xsl:template match=\"/\"> <person> <xsl:apply-templates select=\"//object\"/> </person> </xsl:template> <xsl:template match=\"object[@xj:type != 'object' and @xj:type != 'array' and string-length(@xj:name) > 0]\"> <xsl:variable name=\"name\" select=\"@xj:name\"/> <xsl:element name=\"{USDname}\"> <xsl:value-of select=\"text()\"/> </xsl:element> </xsl:template> <xsl:template match=\"@*|node()\"/> </xsl:stylesheet>",
"from(\"direct:start\"). to(\"xj:com/example/json2xml.xsl?transformDirection=JSON2XML\");",
"<?xml version=\"1.0\" encoding=\"UTF-8\"?> <person> <firstname>camel</firstname> <lastname>apache</lastname> <personalnumber>42</personalnumber> <active>true</active> <ranking>3.1415926</ranking> <x>null</x> <needsWater>true</needsWater> </person>",
"from(\"direct:start\"). to(\"xj:identity?transformDirection=XML2JSON\");",
"<?xml version=\"1.0\" encoding=\"UTF-8\"?> <person> <firstname>camel</firstname> <lastname>apache</lastname> <personalnumber>42</personalnumber> <active>true</active> <ranking>3.1415926</ranking> <roles> <entry>a</entry> <entry> <x>null</x> </entry> </roles> <state> <needsWater>true</needsWater> </state> </person>",
"{ \"firstname\": \"camel\", \"lastname\": \"apache\", \"personalnumber\": \"42\", \"active\": \"true\", \"ranking\": \"3.1415926\", \"roles\": [ \"a\", { \"x\": \"null\" } ], \"state\": { \"needsWater\": \"true\" } }",
"<?xml version=\"1.0\" encoding=\"UTF-8\" ?> <xsl:stylesheet version=\"1.0\" xmlns:xsl=\"http://www.w3.org/1999/XSL/Transform\" xmlns:xj=\"http://camel.apache.org/component/xj\" exclude-result-prefixes=\"xj\"> <xsl:output omit-xml-declaration=\"no\" encoding=\"UTF-8\" method=\"xml\" indent=\"yes\"/> <xsl:template match=\"/\"> <xsl:apply-templates/> </xsl:template> <xsl:template match=\"personalnumber\"> <xsl:element name=\"{local-name()}\"> <xsl:attribute name=\"xj:type\"> <xsl:value-of select=\"'int'\"/> </xsl:attribute> <xsl:apply-templates/> </xsl:element> </xsl:template> <xsl:template match=\"active|needsWater\"> <xsl:element name=\"{local-name()}\"> <xsl:attribute name=\"xj:type\"> <xsl:value-of select=\"'boolean'\"/> </xsl:attribute> <xsl:apply-templates/> </xsl:element> </xsl:template> <xsl:template match=\"ranking\"> <xsl:element name=\"{local-name()}\"> <xsl:attribute name=\"xj:type\"> <xsl:value-of select=\"'float'\"/> </xsl:attribute> <xsl:apply-templates/> </xsl:element> </xsl:template> <xsl:template match=\"roles\"> <xsl:element name=\"{local-name()}\"> <xsl:attribute name=\"xj:type\"> <xsl:value-of select=\"'array'\"/> </xsl:attribute> <xsl:apply-templates/> </xsl:element> </xsl:template> <xsl:template match=\"*[normalize-space(text()) = 'null']\"> <xsl:element name=\"{local-name()}\"> <xsl:attribute name=\"xj:type\"> <xsl:value-of select=\"'null'\"/> </xsl:attribute> <xsl:apply-templates/> </xsl:element> </xsl:template> <xsl:template match=\"@*|node()\"> <xsl:copy> <xsl:apply-templates select=\"@*|node()\"/> </xsl:copy> </xsl:template> </xsl:stylesheet>",
"from(\"direct:start\"). to(\"xj:com/example/xml2json.xsl?transformDirection=XML2JSON\");",
"{ \"firstname\": \"camel\", \"lastname\": \"apache\", \"personalnumber\": 42, \"active\": true, \"ranking\": 3.1415926, \"roles\": [ \"a\", { \"x\": null } ], \"state\": { \"needsWater\": true } }",
"<?xml version=\"1.0\" encoding=\"UTF-8\"?> <person> <firstname>camel</firstname> <lastname>apache</lastname> <personalnumber xmlns:xj=\"http://camel.apache.org/component/xj\" xj:type=\"int\">42</personalnumber> <active xmlns:xj=\"http://camel.apache.org/component/xj\" xj:type=\"boolean\">true</active> <ranking xmlns:xj=\"http://camel.apache.org/component/xj\" xj:type=\"float\">3.1415926</ranking> <roles xmlns:xj=\"http://camel.apache.org/component/xj\" xj:type=\"array\"> <entry>a</entry> <entry> <x xj:type=\"null\">null</x> </entry> </roles> <state> <needsWater xmlns:xj=\"http://camel.apache.org/component/xj\" xj:type=\"boolean\">true</needsWater> </state> </person>",
"<?xml version=\"1.0\" encoding=\"UTF-8\"?> <object xmlns:xj=\"http://camel.apache.org/component/xj\" xj:type=\"object\"> <object xj:name=\"firstname\" xj:type=\"string\">camel</object> <object xj:name=\"lastname\" xj:type=\"string\">apache</object> <object xj:name=\"personalnumber\" xj:type=\"int\">42</object> <object xj:name=\"active\" xj:type=\"boolean\">true</object> <object xj:name=\"ranking\" xj:type=\"float\">3.1415926</object> <object xj:name=\"roles\" xj:type=\"array\"> <object xj:type=\"string\">a</object> <object xj:type=\"object\"> <object xj:name=\"x\" xj:type=\"null\">null</object> </object> </object> <object xj:name=\"state\" xj:type=\"object\"> <object xj:name=\"needsWater\" xj:type=\"boolean\">true</object> </object> </object>",
"{ \"firstname\": \"camel\", \"lastname\": \"apache\", \"personalnumber\": 42, \"active\": true, \"ranking\": 3.1415926, \"roles\": [ \"a\", { \"x\": null } ], \"state\": { \"needsWater\": true } }"
] | https://docs.redhat.com/en/documentation/red_hat_build_of_apache_camel/4.8/html/red_hat_build_of_apache_camel_for_spring_boot_reference/csb-camel-xj-component-starter |
Chapter 35. Using cgroupfs to manually manage cgroups You can manage cgroup hierarchies on your system by creating directories on the cgroupfs virtual file system. The file system is mounted by default on the /sys/fs/cgroup/ directory and you can specify desired configurations in dedicated control files. Important In general, Red Hat recommends that you use systemd for controlling the usage of system resources. You should manually configure the cgroups virtual file system only in special cases. For example, when you need to use cgroup-v1 controllers that have no equivalents in the cgroup-v2 hierarchy. 35.1. Creating cgroups and enabling controllers in cgroups-v2 file system You can manage the control groups ( cgroups ) by creating or removing directories and by writing to files in the cgroups virtual file system. The file system is mounted by default on the /sys/fs/cgroup/ directory. To use settings from the cgroups controllers, you also need to enable the desired controllers for child cgroups . By default, the root cgroup has the memory and pids controllers enabled for its child cgroups . Therefore, Red Hat recommends creating at least two levels of child cgroups inside the /sys/fs/cgroup/ root cgroup . This way, you can optionally remove the memory and pids controllers from the child cgroups and maintain better organizational clarity of cgroup files. Prerequisites You have root permissions. Procedure Create the /sys/fs/cgroup/Example/ directory: The /sys/fs/cgroup/Example/ directory defines a child group. When you create the /sys/fs/cgroup/Example/ directory, some cgroups-v2 interface files are automatically created in the directory. The /sys/fs/cgroup/Example/ directory also contains controller-specific files for the memory and pids controllers. Optional: Inspect the newly created child control group: The example output shows general cgroup control interface files such as cgroup.procs or cgroup.controllers . These files are common to all control groups, regardless of enabled controllers. Files such as memory.high and pids.max relate to the memory and pids controllers, which are in the root control group ( /sys/fs/cgroup/ ), and are enabled by default by systemd . By default, the newly created child group inherits all settings from the parent cgroup . In this case, there are no limits from the root cgroup . Verify that the desired controllers are available in the /sys/fs/cgroup/cgroup.controllers file: Enable the desired controllers. In this example, these are the cpu and cpuset controllers: These commands enable the cpu and cpuset controllers for the immediate child groups of the /sys/fs/cgroup/ root control group, including the newly created Example control group. A child group is where you can specify processes and apply control checks to each of the processes based on your criteria. Users can read the contents of the cgroup.subtree_control file at any level to see which controllers are available for enablement in the immediate child group. Note By default, the /sys/fs/cgroup/cgroup.subtree_control file in the root control group contains the memory and pids controllers. Enable the desired controllers for child cgroups of the Example control group: This command ensures that the immediate child control group will only have controllers relevant to regulating CPU time distribution, not the memory or pids controllers.
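At this point, you can optionally confirm how controller delegation propagates down the hierarchy. The following check is a convenience sketch rather than part of the official procedure; it only reads the cgroup.subtree_control interface files described above and assumes the Example group and the controllers enabled in this procedure:
cat /sys/fs/cgroup/cgroup.subtree_control          # controllers the root control group delegates to its children
cat /sys/fs/cgroup/Example/cgroup.subtree_control  # controllers the Example group delegates to its children
With the settings from this procedure, the first file should list cpuset, cpu, memory, and pids (the order may vary), while the second should list only cpuset and cpu.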
Create the /sys/fs/cgroup/Example/tasks/ directory: The /sys/fs/cgroup/Example/tasks/ directory defines a child group with files that relate purely to cpu and cpuset controllers. You can now assign processes to this control group and utilize cpu and cpuset controller options for your processes. Optional: Inspect the child control group: Important The cpu controller is only activated if the relevant child control group has at least 2 processes which compete for time on a single CPU. Verification Optional: confirm that you have created a new cgroup with only the desired controllers active: Additional resources What are kernel resource controllers Mounting cgroups-v1 cgroups(7) , sysfs(5) manual pages 35.2. Controlling distribution of CPU time for applications by adjusting CPU weight You need to assign values to the relevant files of the cpu controller to regulate distribution of the CPU time to applications under the specific cgroup tree. Prerequisites You have root permissions. You have applications for which you want to control distribution of CPU time. You created a two level hierarchy of child control groups inside the /sys/fs/cgroup/ root control group as in the following example: You enabled the cpu controller in the parent control group and in child control groups similarly as described in Creating cgroups and enabling controllers in cgroups-v2 file system . Procedure Configure desired CPU weights to achieve resource restrictions within the control groups: Add the applications' PIDs to the g1 , g2 , and g3 child groups: The example commands ensure that desired applications become members of the Example/g*/ child cgroups and will get their CPU time distributed as per the configuration of those cgroups. The weights of the children cgroups ( g1 , g2 , g3 ) that have running processes are summed up at the level of the parent cgroup ( Example ). The CPU resource is then distributed proportionally based on the respective weights. As a result, when all processes run at the same time, the kernel allocates to each of them the proportionate CPU time based on their respective cgroup's cpu.weight file: Child cgroup cpu.weight file CPU time allocation g1 150 ~50% (150/300) g2 100 ~33% (100/300) g3 50 ~16% (50/300) The value of the cpu.weight controller file is not a percentage. If one process stopped running, leaving cgroup g2 with no running processes, the calculation would omit the cgroup g2 and only account weights of cgroups g1 and g3 : Child cgroup cpu.weight file CPU time allocation g1 150 ~75% (150/200) g3 50 ~25% (50/200) Important If a child cgroup has multiple running processes, the CPU time allocated to the cgroup is distributed equally among its member processes. Verification Verify that the applications run in the specified control groups: The command output shows the processes of the specified applications that run in the Example/g*/ child cgroups. Inspect the current CPU consumption of the throttled applications: Note All processes run on a single CPU for clear illustration. The CPU weight applies the same principles when used on multiple CPUs. Notice that the CPU resource for the PID 33373 , PID 33374 , and PID 33377 was allocated based on the 150, 100, and 50 weights you assigned to the respective child cgroups. The weights correspond to around 50%, 33%, and 16% allocation of CPU time for each application. 35.3. Mounting cgroups-v1 During the boot process, RHEL 9 mounts the cgroup-v2 virtual filesystem by default. 
To utilize cgroup-v1 functionality in limiting resources for your applications, manually configure the system. Note Both cgroup-v1 and cgroup-v2 are fully enabled in the kernel. There is no default control group version from the kernel's point of view; systemd decides which version to mount at startup. Prerequisites You have root permissions. Procedure Configure the system to mount cgroups-v1 by default during system boot by using the systemd system and service manager: This adds the necessary kernel command-line parameters to the current boot entry. To add the same parameters to all kernel boot entries: Reboot the system for the changes to take effect. Verification Verify that the cgroups-v1 filesystem was mounted: The cgroups-v1 filesystems that correspond to the various cgroup-v1 controllers were successfully mounted on the /sys/fs/cgroup/ directory. Inspect the contents of the /sys/fs/cgroup/ directory: The /sys/fs/cgroup/ directory, also called the root control group , by default, contains controller-specific directories such as cpuset . In addition, there are some directories related to systemd . Additional resources What are kernel resource controllers cgroups(7) , sysfs(5) manual pages cgroup-v2 enabled by default in RHEL 9 35.4. Setting CPU limits to applications using cgroups-v1 To configure CPU limits for an application by using control groups version 1 ( cgroups-v1 ), use the /sys/fs/ virtual file system. Prerequisites You have root permissions. You have an application installed on your system whose CPU consumption you want to restrict. You configured the system to mount cgroups-v1 by default during system boot by using the systemd system and service manager: This adds the necessary kernel command-line parameters to the current boot entry. Procedure Identify the process ID (PID) of the application whose CPU consumption you want to restrict: The sha1sum example application with PID 6955 consumes a large amount of CPU resources. Create a sub-directory in the cpu resource controller directory: This directory represents a control group, where you can place specific processes and apply certain CPU limits to the processes. At the same time, a number of cgroups-v1 interface files and cpu controller-specific files are created in the directory. Optional: Inspect the newly created control group: Files such as cpuacct.usage and cpu.cfs_period_us represent specific configurations or limits, which can be set for processes in the Example control group. Note that the file names are prefixed with the name of the control group controller they belong to. By default, the newly created control group inherits access to the system's entire CPU resources without a limit. Configure CPU limits for the control group: The cpu.cfs_period_us file represents how frequently a control group's access to CPU resources must be reallocated. The time period is expressed in microseconds ("us"). The upper limit is 1 000 000 microseconds and the lower limit is 1000 microseconds. The cpu.cfs_quota_us file represents the total amount of time in microseconds for which all processes in a control group can collectively run during one period, as defined by cpu.cfs_period_us . When processes in a control group use up all the time specified by the quota during a single period, they are throttled for the remainder of the period and not allowed to run until the next period. The lower limit is 1000 microseconds.
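For illustration, the following writes sketch this step with the values used in this procedure; the path assumes that the Example group was created under the cpu controller mount point at /sys/fs/cgroup/cpu/, which may differ on your system:
# Reallocate CPU access every 1 second (1 000 000 microseconds)
echo "1000000" > /sys/fs/cgroup/cpu/Example/cpu.cfs_period_us
# Allow all processes in the group to run collectively for at most 0.2 seconds per period
echo "200000" > /sys/fs/cgroup/cpu/Example/cpu.cfs_quota_us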
The example commands above set the CPU time limits so that all processes in the Example control group can collectively run for only 0.2 seconds (defined by cpu.cfs_quota_us ) out of every 1 second (defined by cpu.cfs_period_us ). Optional: Verify the limits: Add the application's PID to the Example control group: This command ensures that a specific application becomes a member of the Example control group and does not exceed the CPU limits configured for the Example control group. The PID must represent an existing process in the system. The PID 6955 here was assigned to the sha1sum /dev/zero & process, used to illustrate the use case of the cpu controller. Verification Verify that the application runs in the specified control group: The application's process runs in the Example control group, which applies the configured CPU limits to it. Identify the current CPU consumption of your throttled application: Note that the CPU consumption of the PID 6955 has decreased from 99% to 20%. Note The cgroups-v2 counterpart for cpu.cfs_period_us and cpu.cfs_quota_us is the cpu.max file. The cpu.max file is available through the cpu controller. Additional resources Introducing kernel resource controllers cgroups(7) , sysfs(5) manual pages | [
"mkdir /sys/fs/cgroup/Example/",
"ll /sys/fs/cgroup/Example/ -r- r- r--. 1 root root 0 Jun 1 10:33 cgroup.controllers -r- r- r--. 1 root root 0 Jun 1 10:33 cgroup.events -rw-r- r--. 1 root root 0 Jun 1 10:33 cgroup.freeze -rw-r- r--. 1 root root 0 Jun 1 10:33 cgroup.procs ... -rw-r- r--. 1 root root 0 Jun 1 10:33 cgroup.subtree_control -r- r- r--. 1 root root 0 Jun 1 10:33 memory.events.local -rw-r- r--. 1 root root 0 Jun 1 10:33 memory.high -rw-r- r--. 1 root root 0 Jun 1 10:33 memory.low ... -r- r- r--. 1 root root 0 Jun 1 10:33 pids.current -r- r- r--. 1 root root 0 Jun 1 10:33 pids.events -rw-r- r--. 1 root root 0 Jun 1 10:33 pids.max",
"cat /sys/fs/cgroup/cgroup.controllers cpuset cpu io memory hugetlb pids rdma",
"echo \"+cpu\" >> /sys/fs/cgroup/cgroup.subtree_control echo \"+cpuset\" >> /sys/fs/cgroup/cgroup.subtree_control",
"echo \"+cpu +cpuset\" >> /sys/fs/cgroup/Example/cgroup.subtree_control",
"mkdir /sys/fs/cgroup/Example/tasks/",
"ll /sys/fs/cgroup/Example/tasks -r- r- r--. 1 root root 0 Jun 1 11:45 cgroup.controllers -r- r- r--. 1 root root 0 Jun 1 11:45 cgroup.events -rw-r- r--. 1 root root 0 Jun 1 11:45 cgroup.freeze -rw-r- r--. 1 root root 0 Jun 1 11:45 cgroup.max.depth -rw-r- r--. 1 root root 0 Jun 1 11:45 cgroup.max.descendants -rw-r- r--. 1 root root 0 Jun 1 11:45 cgroup.procs -r- r- r--. 1 root root 0 Jun 1 11:45 cgroup.stat -rw-r- r--. 1 root root 0 Jun 1 11:45 cgroup.subtree_control -rw-r- r--. 1 root root 0 Jun 1 11:45 cgroup.threads -rw-r- r--. 1 root root 0 Jun 1 11:45 cgroup.type -rw-r- r--. 1 root root 0 Jun 1 11:45 cpu.max -rw-r- r--. 1 root root 0 Jun 1 11:45 cpu.pressure -rw-r- r--. 1 root root 0 Jun 1 11:45 cpuset.cpus -r- r- r--. 1 root root 0 Jun 1 11:45 cpuset.cpus.effective -rw-r- r--. 1 root root 0 Jun 1 11:45 cpuset.cpus.partition -rw-r- r--. 1 root root 0 Jun 1 11:45 cpuset.mems -r- r- r--. 1 root root 0 Jun 1 11:45 cpuset.mems.effective -r- r- r--. 1 root root 0 Jun 1 11:45 cpu.stat -rw-r- r--. 1 root root 0 Jun 1 11:45 cpu.weight -rw-r- r--. 1 root root 0 Jun 1 11:45 cpu.weight.nice -rw-r- r--. 1 root root 0 Jun 1 11:45 io.pressure -rw-r- r--. 1 root root 0 Jun 1 11:45 memory.pressure",
"cat /sys/fs/cgroup/Example/tasks/cgroup.controllers cpuset cpu",
"... βββ Example β βββ g1 β βββ g2 β βββ g3 ...",
"echo \"150\" > /sys/fs/cgroup/Example/g1/cpu.weight echo \"100\" > /sys/fs/cgroup/Example/g2/cpu.weight echo \"50\" > /sys/fs/cgroup/Example/g3/cpu.weight",
"echo \"33373\" > /sys/fs/cgroup/Example/g1/cgroup.procs echo \"33374\" > /sys/fs/cgroup/Example/g2/cgroup.procs echo \"33377\" > /sys/fs/cgroup/Example/g3/cgroup.procs",
"cat /proc/33373/cgroup /proc/33374/cgroup /proc/33377/cgroup 0::/Example/g1 0::/Example/g2 0::/Example/g3",
"top top - 05:17:18 up 1 day, 18:25, 1 user, load average: 3.03, 3.03, 3.00 Tasks: 95 total, 4 running, 91 sleeping, 0 stopped, 0 zombie %Cpu(s): 18.1 us, 81.6 sy, 0.0 ni, 0.0 id, 0.0 wa, 0.3 hi, 0.0 si, 0.0 st MiB Mem : 3737.0 total, 3233.7 free, 132.8 used, 370.5 buff/cache MiB Swap: 4060.0 total, 4060.0 free, 0.0 used. 3373.1 avail Mem PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND 33373 root 20 0 18720 1748 1460 R 49.5 0.0 415:05.87 sha1sum 33374 root 20 0 18720 1756 1464 R 32.9 0.0 412:58.33 sha1sum 33377 root 20 0 18720 1860 1568 R 16.3 0.0 411:03.12 sha1sum 760 root 20 0 416620 28540 15296 S 0.3 0.7 0:10.23 tuned 1 root 20 0 186328 14108 9484 S 0.0 0.4 0:02.00 systemd 2 root 20 0 0 0 0 S 0.0 0.0 0:00.01 kthread",
"grubby --update-kernel=/boot/vmlinuz-USD(uname -r) --args=\"systemd.unified_cgroup_hierarchy=0 systemd.legacy_systemd_cgroup_controller\"",
"grubby --update-kernel=ALL --args=\"systemd.unified_cgroup_hierarchy=0 systemd.legacy_systemd_cgroup_controller\"",
"mount -l | grep cgroup tmpfs on /sys/fs/cgroup type tmpfs (ro,nosuid,nodev,noexec,seclabel,size=4096k,nr_inodes=1024,mode=755,inode64) cgroup on /sys/fs/cgroup/systemd type cgroup (rw,nosuid,nodev,noexec,relatime,seclabel,xattr,release_agent=/usr/lib/systemd/systemd-cgroups-agent,name=systemd) cgroup on /sys/fs/cgroup/perf_event type cgroup (rw,nosuid,nodev,noexec,relatime,seclabel,perf_event) cgroup on /sys/fs/cgroup/cpu,cpuacct type cgroup (rw,nosuid,nodev,noexec,relatime,seclabel,cpu,cpuacct) cgroup on /sys/fs/cgroup/pids type cgroup (rw,nosuid,nodev,noexec,relatime,seclabel,pids) cgroup on /sys/fs/cgroup/cpuset type cgroup (rw,nosuid,nodev,noexec,relatime,seclabel,cpuset) cgroup on /sys/fs/cgroup/net_cls,net_prio type cgroup (rw,nosuid,nodev,noexec,relatime,seclabel,net_cls,net_prio) cgroup on /sys/fs/cgroup/hugetlb type cgroup (rw,nosuid,nodev,noexec,relatime,seclabel,hugetlb) cgroup on /sys/fs/cgroup/memory type cgroup (rw,nosuid,nodev,noexec,relatime,seclabel,memory) cgroup on /sys/fs/cgroup/blkio type cgroup (rw,nosuid,nodev,noexec,relatime,seclabel,blkio) cgroup on /sys/fs/cgroup/devices type cgroup (rw,nosuid,nodev,noexec,relatime,seclabel,devices) cgroup on /sys/fs/cgroup/misc type cgroup (rw,nosuid,nodev,noexec,relatime,seclabel,misc) cgroup on /sys/fs/cgroup/freezer type cgroup (rw,nosuid,nodev,noexec,relatime,seclabel,freezer) cgroup on /sys/fs/cgroup/rdma type cgroup (rw,nosuid,nodev,noexec,relatime,seclabel,rdma)",
"ll /sys/fs/cgroup/ dr-xr-xr-x. 10 root root 0 Mar 16 09:34 blkio lrwxrwxrwx. 1 root root 11 Mar 16 09:34 cpu cpu,cpuacct lrwxrwxrwx. 1 root root 11 Mar 16 09:34 cpuacct cpu,cpuacct dr-xr-xr-x. 10 root root 0 Mar 16 09:34 cpu,cpuacct dr-xr-xr-x. 2 root root 0 Mar 16 09:34 cpuset dr-xr-xr-x. 10 root root 0 Mar 16 09:34 devices dr-xr-xr-x. 2 root root 0 Mar 16 09:34 freezer dr-xr-xr-x. 2 root root 0 Mar 16 09:34 hugetlb dr-xr-xr-x. 10 root root 0 Mar 16 09:34 memory dr-xr-xr-x. 2 root root 0 Mar 16 09:34 misc lrwxrwxrwx. 1 root root 16 Mar 16 09:34 net_cls net_cls,net_prio dr-xr-xr-x. 2 root root 0 Mar 16 09:34 net_cls,net_prio lrwxrwxrwx. 1 root root 16 Mar 16 09:34 net_prio net_cls,net_prio dr-xr-xr-x. 2 root root 0 Mar 16 09:34 perf_event dr-xr-xr-x. 10 root root 0 Mar 16 09:34 pids dr-xr-xr-x. 2 root root 0 Mar 16 09:34 rdma dr-xr-xr-x. 11 root root 0 Mar 16 09:34 systemd",
"grubby --update-kernel=/boot/vmlinuz-USD(uname -r) --args=\"systemd.unified_cgroup_hierarchy=0 systemd.legacy_systemd_cgroup_controller\"",
"top top - 11:34:09 up 11 min, 1 user, load average: 0.51, 0.27, 0.22 Tasks: 267 total, 3 running, 264 sleeping, 0 stopped, 0 zombie %Cpu(s): 49.0 us, 3.3 sy, 0.0 ni, 47.5 id, 0.0 wa, 0.2 hi, 0.0 si, 0.0 st MiB Mem : 1826.8 total, 303.4 free, 1046.8 used, 476.5 buff/cache MiB Swap: 1536.0 total, 1396.0 free, 140.0 used. 616.4 avail Mem PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND 6955 root 20 0 228440 1752 1472 R 99.3 0.1 0:32.71 sha1sum 5760 jdoe 20 0 3603868 205188 64196 S 3.7 11.0 0:17.19 gnome-shell 6448 jdoe 20 0 743648 30640 19488 S 0.7 1.6 0:02.73 gnome-terminal- 1 root 20 0 245300 6568 4116 S 0.3 0.4 0:01.87 systemd 505 root 20 0 0 0 0 I 0.3 0.0 0:00.75 kworker/u4:4-events_unbound",
"mkdir /sys/fs/cgroup/cpu/Example/",
"ll /sys/fs/cgroup/cpu/Example/ -rw-r- r--. 1 root root 0 Mar 11 11:42 cgroup.clone_children -rw-r- r--. 1 root root 0 Mar 11 11:42 cgroup.procs -r- r- r--. 1 root root 0 Mar 11 11:42 cpuacct.stat -rw-r- r--. 1 root root 0 Mar 11 11:42 cpuacct.usage -r- r- r--. 1 root root 0 Mar 11 11:42 cpuacct.usage_all -r- r- r--. 1 root root 0 Mar 11 11:42 cpuacct.usage_percpu -r- r- r--. 1 root root 0 Mar 11 11:42 cpuacct.usage_percpu_sys -r- r- r--. 1 root root 0 Mar 11 11:42 cpuacct.usage_percpu_user -r- r- r--. 1 root root 0 Mar 11 11:42 cpuacct.usage_sys -r- r- r--. 1 root root 0 Mar 11 11:42 cpuacct.usage_user -rw-r- r--. 1 root root 0 Mar 11 11:42 cpu.cfs_period_us -rw-r- r--. 1 root root 0 Mar 11 11:42 cpu.cfs_quota_us -rw-r- r--. 1 root root 0 Mar 11 11:42 cpu.rt_period_us -rw-r- r--. 1 root root 0 Mar 11 11:42 cpu.rt_runtime_us -rw-r- r--. 1 root root 0 Mar 11 11:42 cpu.shares -r- r- r--. 1 root root 0 Mar 11 11:42 cpu.stat -rw-r- r--. 1 root root 0 Mar 11 11:42 notify_on_release -rw-r- r--. 1 root root 0 Mar 11 11:42 tasks",
"echo \"1000000\" > /sys/fs/cgroup/cpu/Example/cpu.cfs_period_us echo \"200000\" > /sys/fs/cgroup/cpu/Example/cpu.cfs_quota_us",
"cat /sys/fs/cgroup/cpu/Example/cpu.cfs_period_us /sys/fs/cgroup/cpu/Example/cpu.cfs_quota_us 1000000 200000",
"echo \"6955\" > /sys/fs/cgroup/cpu/Example/cgroup.procs",
"cat /proc/6955/cgroup 12:cpuset:/ 11:hugetlb:/ 10:net_cls,net_prio:/ 9:memory:/user.slice/user-1000.slice/[email protected] 8:devices:/user.slice 7:blkio:/ 6:freezer:/ 5:rdma:/ 4:pids:/user.slice/user-1000.slice/[email protected] 3:perf_event:/ 2:cpu,cpuacct:/Example 1:name=systemd:/user.slice/user-1000.slice/[email protected]/gnome-terminal-server.service",
"top top - 12:28:42 up 1:06, 1 user, load average: 1.02, 1.02, 1.00 Tasks: 266 total, 6 running, 260 sleeping, 0 stopped, 0 zombie %Cpu(s): 11.0 us, 1.2 sy, 0.0 ni, 87.5 id, 0.0 wa, 0.2 hi, 0.0 si, 0.2 st MiB Mem : 1826.8 total, 287.1 free, 1054.4 used, 485.3 buff/cache MiB Swap: 1536.0 total, 1396.7 free, 139.2 used. 608.3 avail Mem PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND 6955 root 20 0 228440 1752 1472 R 20.6 0.1 47:11.43 sha1sum 5760 jdoe 20 0 3604956 208832 65316 R 2.3 11.2 0:43.50 gnome-shell 6448 jdoe 20 0 743836 31736 19488 S 0.7 1.7 0:08.25 gnome-terminal- 505 root 20 0 0 0 0 I 0.3 0.0 0:03.39 kworker/u4:4-events_unbound 4217 root 20 0 74192 1612 1320 S 0.3 0.1 0:01.19 spice-vdagentd"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/monitoring_and_managing_system_status_and_performance/assembly_using-cgroupfs-to-manually-manage-cgroups_monitoring-and-managing-system-status-and-performance |
Role APIs | Role APIs OpenShift Container Platform 4.12 Reference guide for role APIs Red Hat OpenShift Documentation Team | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.12/html-single/role_apis/index |
Distributed tracing | Distributed tracing OpenShift Container Platform 4.17 Configuring and using distributed tracing in OpenShift Container Platform Red Hat OpenShift Documentation Team | [
"oc edit configmap tempo-operator-manager-config -n openshift-tempo-operator 1",
"data: controller_manager_config.yaml: | featureGates: httpEncryption: false grpcEncryption: false builtInCertManagement: enabled: false",
"oc rollout restart deployment.apps/tempo-operator-controller -n openshift-tempo-operator",
"kind: ImageSetConfiguration apiVersion: mirror.openshift.io/v1alpha2 archiveSize: 20 storageConfig: local: path: /home/user/images mirror: operators: - catalog: registry.redhat.io/redhat/redhat-operator-index:v4.13 packages: - name: tempo-product channels: - name: stable additionalImages: - name: registry.redhat.io/rhosdt/tempo-rhel8@sha256:e4295f837066efb05bcc5897f31eb2bdbd81684a8c59d6f9498dd3590c62c12a - name: registry.redhat.io/rhosdt/tempo-gateway-rhel8@sha256:b62f5cedfeb5907b638f14ca6aaeea50f41642980a8a6f87b7061e88d90fac23 - name: registry.redhat.io/rhosdt/tempo-gateway-opa-rhel8@sha256:8cd134deca47d6817b26566e272e6c3f75367653d589f5c90855c59b2fab01e9 - name: registry.redhat.io/rhosdt/tempo-query-rhel8@sha256:0da43034f440b8258a48a0697ba643b5643d48b615cdb882ac7f4f1f80aad08e",
"oc edit configmap tempo-operator-manager-config -n openshift-tempo-operator 1",
"data: controller_manager_config.yaml: | featureGates: httpEncryption: false grpcEncryption: false builtInCertManagement: enabled: false",
"oc rollout restart deployment.apps/tempo-operator-controller -n openshift-tempo-operator",
"kind: ImageSetConfiguration apiVersion: mirror.openshift.io/v1alpha2 archiveSize: 20 storageConfig: local: path: /home/user/images mirror: operators: - catalog: registry.redhat.io/redhat/redhat-operator-index:v4.13 packages: - name: tempo-product channels: - name: stable additionalImages: - name: registry.redhat.io/rhosdt/tempo-rhel8@sha256:e4295f837066efb05bcc5897f31eb2bdbd81684a8c59d6f9498dd3590c62c12a - name: registry.redhat.io/rhosdt/tempo-gateway-rhel8@sha256:b62f5cedfeb5907b638f14ca6aaeea50f41642980a8a6f87b7061e88d90fac23 - name: registry.redhat.io/rhosdt/tempo-gateway-opa-rhel8@sha256:8cd134deca47d6817b26566e272e6c3f75367653d589f5c90855c59b2fab01e9 - name: registry.redhat.io/rhosdt/tempo-query-rhel8@sha256:0da43034f440b8258a48a0697ba643b5643d48b615cdb882ac7f4f1f80aad08e",
"oc edit configmap tempo-operator-manager-config -n openshift-tempo-operator 1",
"data: controller_manager_config.yaml: | featureGates: httpEncryption: false grpcEncryption: false builtInCertManagement: enabled: false",
"oc rollout restart deployment.apps/tempo-operator-controller -n openshift-tempo-operator",
"kind: ImageSetConfiguration apiVersion: mirror.openshift.io/v1alpha2 archiveSize: 20 storageConfig: local: path: /home/user/images mirror: operators: - catalog: registry.redhat.io/redhat/redhat-operator-index:v4.13 packages: - name: tempo-product channels: - name: stable additionalImages: - name: registry.redhat.io/rhosdt/tempo-rhel8@sha256:e4295f837066efb05bcc5897f31eb2bdbd81684a8c59d6f9498dd3590c62c12a - name: registry.redhat.io/rhosdt/tempo-gateway-rhel8@sha256:b62f5cedfeb5907b638f14ca6aaeea50f41642980a8a6f87b7061e88d90fac23 - name: registry.redhat.io/rhosdt/tempo-gateway-opa-rhel8@sha256:8cd134deca47d6817b26566e272e6c3f75367653d589f5c90855c59b2fab01e9 - name: registry.redhat.io/rhosdt/tempo-query-rhel8@sha256:0da43034f440b8258a48a0697ba643b5643d48b615cdb882ac7f4f1f80aad08e",
"spec: mode: deployment config: | exporters: jaeger: endpoint: jaeger-production-collector-headless.tracing-system.svc:14250 ca_file: \"/var/run/secrets/kubernetes.io/serviceaccount/service-ca.crt\"",
"spec: mode: deployment config: | exporters: jaeger: endpoint: jaeger-production-collector-headless.tracing-system.svc:14250 tls: ca_file: \"/var/run/secrets/kubernetes.io/serviceaccount/service-ca.crt\"",
"oc login --username=<your_username>",
"oc apply -f - << EOF apiVersion: project.openshift.io/v1 kind: Project metadata: labels: kubernetes.io/metadata.name: openshift-tempo-operator openshift.io/cluster-monitoring: \"true\" name: openshift-tempo-operator EOF",
"oc apply -f - << EOF apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: openshift-tempo-operator namespace: openshift-tempo-operator spec: upgradeStrategy: Default EOF",
"oc apply -f - << EOF apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: tempo-product namespace: openshift-tempo-operator spec: channel: stable installPlanApproval: Automatic name: tempo-product source: redhat-operators sourceNamespace: openshift-marketplace EOF",
"oc get csv -n openshift-tempo-operator",
"apiVersion: v1 kind: Secret metadata: name: minio-test stringData: endpoint: http://minio.minio.svc:9000 bucket: tempo access_key_id: tempo access_key_secret: <secret> type: Opaque",
"apiVersion: tempo.grafana.com/v1alpha1 kind: TempoStack metadata: name: sample namespace: <project_of_tempostack_instance> spec: storageSize: <value>Gi 1 storage: secret: 2 name: <secret_name> 3 type: <secret_provider> 4 tls: 5 enabled: true caName: <ca_certificate_configmap_name> 6 template: queryFrontend: jaegerQuery: enabled: true ingress: route: termination: edge type: route resources: 7 total: limits: memory: <value>Gi cpu: <value>m",
"apiVersion: tempo.grafana.com/v1alpha1 kind: TempoStack metadata: name: simplest namespace: <project_of_tempostack_instance> spec: storageSize: 1Gi storage: 1 secret: name: minio-test type: s3 resources: total: limits: memory: 2Gi cpu: 2000m template: queryFrontend: jaegerQuery: 2 enabled: true ingress: route: termination: edge type: route",
"oc login --username=<your_username>",
"oc apply -f - << EOF apiVersion: project.openshift.io/v1 kind: Project metadata: name: <project_of_tempostack_instance> EOF",
"oc apply -f - << EOF <object_storage_secret> EOF",
"apiVersion: v1 kind: Secret metadata: name: minio-test stringData: endpoint: http://minio.minio.svc:9000 bucket: tempo access_key_id: tempo access_key_secret: <secret> type: Opaque",
"apiVersion: tempo.grafana.com/v1alpha1 kind: TempoStack metadata: name: sample namespace: <project_of_tempostack_instance> spec: storageSize: <value>Gi 1 storage: secret: 2 name: <secret_name> 3 type: <secret_provider> 4 tls: 5 enabled: true caName: <ca_certificate_configmap_name> 6 template: queryFrontend: jaegerQuery: enabled: true ingress: route: termination: edge type: route resources: 7 total: limits: memory: <value>Gi cpu: <value>m",
"apiVersion: tempo.grafana.com/v1alpha1 kind: TempoStack metadata: name: simplest namespace: <project_of_tempostack_instance> spec: storageSize: 1Gi storage: 1 secret: name: minio-test type: s3 resources: total: limits: memory: 2Gi cpu: 2000m template: queryFrontend: jaegerQuery: 2 enabled: true ingress: route: termination: edge type: route",
"oc apply -f - << EOF <tempostack_cr> EOF",
"oc get tempostacks.tempo.grafana.com simplest -o yaml",
"oc get pods",
"oc get route",
"apiVersion: v1 kind: Secret metadata: name: minio-test stringData: endpoint: http://minio.minio.svc:9000 bucket: tempo access_key_id: tempo access_key_secret: <secret> type: Opaque",
"apiVersion: tempo.grafana.com/v1alpha1 kind: TempoMonolithic metadata: name: <metadata_name> namespace: <project_of_tempomonolithic_instance> spec: storage: traces: backend: <supported_storage_type> 1 size: <value>Gi 2 s3: 3 secret: <secret_name> 4 tls: 5 enabled: true caName: <ca_certificate_configmap_name> 6 jaegerui: enabled: true 7 route: enabled: true 8 resources: 9 total: limits: memory: <value>Gi cpu: <value>m",
"oc login --username=<your_username>",
"oc apply -f - << EOF apiVersion: project.openshift.io/v1 kind: Project metadata: name: <project_of_tempomonolithic_instance> EOF",
"oc apply -f - << EOF <object_storage_secret> EOF",
"apiVersion: v1 kind: Secret metadata: name: minio-test stringData: endpoint: http://minio.minio.svc:9000 bucket: tempo access_key_id: tempo access_key_secret: <secret> type: Opaque",
"apiVersion: tempo.grafana.com/v1alpha1 kind: TempoMonolithic metadata: name: <metadata_name> namespace: <project_of_tempomonolithic_instance> spec: storage: traces: backend: <supported_storage_type> 1 size: <value>Gi 2 s3: 3 secret: <secret_name> 4 tls: 5 enabled: true caName: <ca_certificate_configmap_name> 6 jaegerui: enabled: true 7 route: enabled: true 8 resources: 9 total: limits: memory: <value>Gi cpu: <value>m",
"oc apply -f - << EOF <tempomonolithic_cr> EOF",
"oc get tempomonolithic.tempo.grafana.com <metadata_name_of_tempomonolithic_cr> -o yaml",
"oc get pods",
"oc get route",
"{ \"Version\": \"2012-10-17\", \"Statement\": [ { \"Effect\": \"Allow\", \"Principal\": { \"Federated\": \"arn:aws:iam::USD{<aws_account_id>}:oidc-provider/USD{<oidc_provider>}\" 1 }, \"Action\": \"sts:AssumeRoleWithWebIdentity\", \"Condition\": { \"StringEquals\": { \"USD{OIDC_PROVIDER}:sub\": [ \"system:serviceaccount:USD{<openshift_project_for_tempostack>}:tempo-USD{<tempostack_cr_name>}\" 2 \"system:serviceaccount:USD{<openshift_project_for_tempostack>}:tempo-USD{<tempostack_cr_name>}-query-frontend\" ] } } } ] }",
"aws iam create-role --role-name \"tempo-s3-access\" --assume-role-policy-document \"file:///tmp/trust.json\" --query Role.Arn --output text",
"aws iam attach-role-policy --role-name \"tempo-s3-access\" --policy-arn \"arn:aws:iam::aws:policy/AmazonS3FullAccess\"",
"apiVersion: v1 kind: Secret metadata: name: minio-test stringData: bucket: <s3_bucket_name> region: <s3_region> role_arn: <s3_role_arn> type: Opaque",
"ibmcloud resource service-key-create <tempo_bucket> Writer --instance-name <tempo_bucket> --parameters '{\"HMAC\":true}'",
"oc -n <namespace> create secret generic <ibm_cos_secret> --from-literal=bucket=\"<tempo_bucket>\" --from-literal=endpoint=\"<ibm_bucket_endpoint>\" --from-literal=access_key_id=\"<ibm_bucket_access_key>\" --from-literal=access_key_secret=\"<ibm_bucket_secret_key>\"",
"apiVersion: v1 kind: Secret metadata: name: <ibm_cos_secret> stringData: bucket: <tempo_bucket> endpoint: <ibm_bucket_endpoint> access_key_id: <ibm_bucket_access_key> access_key_secret: <ibm_bucket_secret_key> type: Opaque",
"apiVersion: tempo.grafana.com/v1alpha1 kind: TempoStack spec: storage: secret: name: <ibm_cos_secret> 1 type: s3",
"apiVersion: tempo.grafana.com/v1alpha1 1 kind: TempoStack 2 metadata: 3 name: <name> 4 spec: 5 storage: {} 6 resources: {} 7 replicationFactor: 1 8 retention: {} 9 template: distributor: {} 10 ingester: {} 11 compactor: {} 12 querier: {} 13 queryFrontend: {} 14 gateway: {} 15 limits: 16 global: ingestion: {} 17 query: {} 18 observability: 19 grafana: {} metrics: {} tracing: {} search: {} 20 managementState: managed 21",
"apiVersion: tempo.grafana.com/v1alpha1 kind: TempoStack metadata: name: simplest spec: storage: secret: name: minio type: s3 storageSize: 200M resources: total: limits: memory: 2Gi cpu: 2000m template: queryFrontend: jaegerQuery: enabled: true ingress: route: termination: edge type: route",
"kind: OpenTelemetryCollector apiVersion: opentelemetry.io/v1alpha1 metadata: name: otel spec: mode: deployment observability: metrics: enableMetrics: true 1 config: | connectors: spanmetrics: 2 metrics_flush_interval: 15s receivers: otlp: 3 protocols: grpc: http: exporters: prometheus: 4 endpoint: 0.0.0.0:8889 add_metric_suffixes: false resource_to_telemetry_conversion: enabled: true # by default resource attributes are dropped otlp: endpoint: \"tempo-simplest-distributor:4317\" tls: insecure: true service: pipelines: traces: receivers: [otlp] exporters: [otlp, spanmetrics] 5 metrics: receivers: [spanmetrics] 6 exporters: [prometheus]",
"apiVersion: tempo.grafana.com/v1alpha1 kind: TempoStack metadata: name: redmetrics spec: storage: secret: name: minio-test type: s3 storageSize: 1Gi template: gateway: enabled: false queryFrontend: jaegerQuery: enabled: true monitorTab: enabled: true 1 prometheusEndpoint: https://thanos-querier.openshift-monitoring.svc.cluster.local:9091 2 redMetricsNamespace: \"\" 3 ingress: type: route",
"apiVersion: monitoring.coreos.com/v1 kind: PrometheusRule metadata: name: span-red spec: groups: - name: server-side-latency rules: - alert: SpanREDFrontendAPIRequestLatency expr: histogram_quantile(0.95, sum(rate(duration_bucket{service_name=\"frontend\", span_kind=\"SPAN_KIND_SERVER\"}[5m])) by (le, service_name, span_name)) > 2000 1 labels: severity: Warning annotations: summary: \"High request latency on {{USDlabels.service_name}} and {{USDlabels.span_name}}\" description: \"{{USDlabels.instance}} has 95th request latency above 2s (current value: {{USDvalue}}s)\"",
"apiVersion: tempo.grafana.com/v1alpha1 kind: TempoStack spec: template: distributor: tls: enabled: true 1 certName: <tls_secret> 2 caName: <ca_name> 3",
"apiVersion: tempo.grafana.com/v1alpha1 kind: TempoStack spec: template: distributor: tls: enabled: true 1",
"apiVersion: tempo.grafana.com/v1alpha1 kind: TempoMonolithic spec: ingestion: otlp: grpc: tls: enabled: true 1 certName: <tls_secret> 2 caName: <ca_name> 3",
"apiVersion: tempo.grafana.com/v1alpha1 kind: TempoMonolithic spec: ingestion: otlp: grpc: tls: enabled: true http: tls: enabled: true 1",
"apiVersion: tempo.grafana.com/v1alpha1 kind: TempoStack metadata: name: simplest namespace: chainsaw-multitenancy spec: storage: secret: name: minio type: s3 storageSize: 1Gi resources: total: limits: memory: 2Gi cpu: 2000m tenants: mode: openshift 1 authentication: 2 - tenantName: dev 3 tenantId: \"1610b0c3-c509-4592-a256-a1871353dbfa\" 4 - tenantName: prod tenantId: \"1610b0c3-c509-4592-a256-a1871353dbfb\" template: gateway: enabled: true 5 queryFrontend: jaegerQuery: enabled: true",
"apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: tempostack-traces-reader rules: - apiGroups: - 'tempo.grafana.com' resources: 1 - dev - prod resourceNames: - traces verbs: - 'get' 2 --- apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: tempostack-traces-reader roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: tempostack-traces-reader subjects: - kind: Group apiGroup: rbac.authorization.k8s.io name: system:authenticated 3",
"apiVersion: v1 kind: ServiceAccount metadata: name: otel-collector 1 namespace: otel --- apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: tempostack-traces-write rules: - apiGroups: - 'tempo.grafana.com' resources: 2 - dev resourceNames: - traces verbs: - 'create' 3 --- apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: tempostack-traces roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: tempostack-traces-write subjects: - kind: ServiceAccount name: otel-collector namespace: otel",
"apiVersion: opentelemetry.io/v1alpha1 kind: OpenTelemetryCollector metadata: name: cluster-collector namespace: tracing-system spec: mode: deployment serviceAccount: otel-collector config: | extensions: bearertokenauth: filename: \"/var/run/secrets/kubernetes.io/serviceaccount/token\" exporters: otlp/dev: 1 endpoint: tempo-simplest-gateway.tempo.svc.cluster.local:8090 tls: insecure: false ca_file: \"/var/run/secrets/kubernetes.io/serviceaccount/service-ca.crt\" auth: authenticator: bearertokenauth headers: X-Scope-OrgID: \"dev\" otlphttp/dev: 2 endpoint: https://tempo-simplest-gateway.chainsaw-multitenancy.svc.cluster.local:8080/api/traces/v1/dev tls: insecure: false ca_file: \"/var/run/secrets/kubernetes.io/serviceaccount/service-ca.crt\" auth: authenticator: bearertokenauth headers: X-Scope-OrgID: \"dev\" service: extensions: [bearertokenauth] pipelines: traces: exporters: [otlp/dev] 3",
"apiVersion: tempo.grafana.com/v1alpha1 kind: TempoStack metadata: name: <name> spec: observability: metrics: createServiceMonitors: true",
"apiVersion: tempo.grafana.com/v1alpha1 kind: TempoStack metadata: name: <name> spec: observability: metrics: createPrometheusRules: true",
"oc adm must-gather --image=ghcr.io/grafana/tempo-operator/must-gather -- /usr/bin/must-gather --operator-namespace <operator_namespace> 1",
"oc login --username=<your_username>",
"oc get deployments -n <project_of_tempostack_instance>",
"oc delete tempo <tempostack_instance_name> -n <project_of_tempostack_instance>",
"oc get deployments -n <project_of_tempostack_instance>",
"apiVersion: jaegertracing.io/v1 kind: Jaeger metadata: name: MyConfigFile spec: strategy: production 1",
"oc login --username=<NAMEOFUSER> https://<HOSTNAME>:8443",
"oc new-project tracing-system",
"apiVersion: jaegertracing.io/v1 kind: Jaeger metadata: name: jaeger-all-in-one-inmemory",
"oc create -n tracing-system -f jaeger.yaml",
"oc get pods -n tracing-system -w",
"NAME READY STATUS RESTARTS AGE jaeger-all-in-one-inmemory-cdff7897b-qhfdx 2/2 Running 0 24s",
"apiVersion: jaegertracing.io/v1 kind: Jaeger metadata: name: jaeger-production namespace: spec: strategy: production ingress: security: oauth-proxy storage: type: elasticsearch elasticsearch: nodeCount: 3 redundancyPolicy: SingleRedundancy esIndexCleaner: enabled: true numberOfDays: 7 schedule: 55 23 * * * esRollover: schedule: '*/30 * * * *'",
"oc login --username=<NAMEOFUSER> https://<HOSTNAME>:8443",
"oc new-project tracing-system",
"oc create -n tracing-system -f jaeger-production.yaml",
"oc get pods -n tracing-system -w",
"NAME READY STATUS RESTARTS AGE elasticsearch-cdm-jaegersystemjaegerproduction-1-6676cf568gwhlw 2/2 Running 0 10m elasticsearch-cdm-jaegersystemjaegerproduction-2-bcd4c8bf5l6g6w 2/2 Running 0 10m elasticsearch-cdm-jaegersystemjaegerproduction-3-844d6d9694hhst 2/2 Running 0 10m jaeger-production-collector-94cd847d-jwjlj 1/1 Running 3 8m32s jaeger-production-query-5cbfbd499d-tv8zf 3/3 Running 3 8m32s",
"apiVersion: jaegertracing.io/v1 kind: Jaeger metadata: name: jaeger-streaming spec: strategy: streaming collector: options: kafka: producer: topic: jaeger-spans brokers: my-cluster-kafka-brokers.kafka:9092 1 storage: type: elasticsearch ingester: options: kafka: consumer: topic: jaeger-spans brokers: my-cluster-kafka-brokers.kafka:9092",
"oc login --username=<NAMEOFUSER> https://<HOSTNAME>:8443",
"oc new-project tracing-system",
"oc create -n tracing-system -f jaeger-streaming.yaml",
"oc get pods -n tracing-system -w",
"NAME READY STATUS RESTARTS AGE elasticsearch-cdm-jaegersystemjaegerstreaming-1-697b66d6fcztcnn 2/2 Running 0 5m40s elasticsearch-cdm-jaegersystemjaegerstreaming-2-5f4b95c78b9gckz 2/2 Running 0 5m37s elasticsearch-cdm-jaegersystemjaegerstreaming-3-7b6d964576nnz97 2/2 Running 0 5m5s jaeger-streaming-collector-6f6db7f99f-rtcfm 1/1 Running 0 80s jaeger-streaming-entity-operator-6b6d67cc99-4lm9q 3/3 Running 2 2m18s jaeger-streaming-ingester-7d479847f8-5h8kc 1/1 Running 0 80s jaeger-streaming-kafka-0 2/2 Running 0 3m1s jaeger-streaming-query-65bf5bb854-ncnc7 3/3 Running 0 80s jaeger-streaming-zookeeper-0 2/2 Running 0 3m39s",
"oc login --username=<NAMEOFUSER> https://<HOSTNAME>:6443",
"export JAEGER_URL=USD(oc get route -n tracing-system jaeger -o jsonpath='{.spec.host}')",
"apiVersion: jaegertracing.io/v1 kind: Jaeger metadata: name: name spec: strategy: <deployment_strategy> allInOne: options: {} resources: {} agent: options: {} resources: {} collector: options: {} resources: {} sampling: options: {} storage: type: options: {} query: options: {} resources: {} ingester: options: {} resources: {} options: {}",
"apiVersion: jaegertracing.io/v1 kind: Jaeger metadata: name: jaeger-all-in-one-inmemory",
"collector: replicas:",
"spec: collector: options: {}",
"options: collector: num-workers:",
"options: collector: queue-size:",
"options: kafka: producer: topic: jaeger-spans",
"options: kafka: producer: brokers: my-cluster-kafka-brokers.kafka:9092",
"options: log-level:",
"options: otlp: enabled: true grpc: host-port: 4317 max-connection-age: 0s max-connection-age-grace: 0s max-message-size: 4194304 tls: enabled: false cert: /path/to/cert.crt cipher-suites: \"TLS_AES_256_GCM_SHA384,TLS_CHACHA20_POLY1305_SHA256\" client-ca: /path/to/cert.ca reload-interval: 0s min-version: 1.2 max-version: 1.3",
"options: otlp: enabled: true http: cors: allowed-headers: [<header-name>[, <header-name>]*] allowed-origins: * host-port: 4318 max-connection-age: 0s max-connection-age-grace: 0s max-message-size: 4194304 read-timeout: 0s read-header-timeout: 2s idle-timeout: 0s tls: enabled: false cert: /path/to/cert.crt cipher-suites: \"TLS_AES_256_GCM_SHA384,TLS_CHACHA20_POLY1305_SHA256\" client-ca: /path/to/cert.ca reload-interval: 0s min-version: 1.2 max-version: 1.3",
"spec: sampling: options: {} default_strategy: service_strategy:",
"default_strategy: type: service_strategy: type:",
"default_strategy: param: service_strategy: param:",
"apiVersion: jaegertracing.io/v1 kind: Jaeger metadata: name: with-sampling spec: sampling: options: default_strategy: type: probabilistic param: 0.5 service_strategies: - service: alpha type: probabilistic param: 0.8 operation_strategies: - operation: op1 type: probabilistic param: 0.2 - operation: op2 type: probabilistic param: 0.4 - service: beta type: ratelimiting param: 5",
"spec: sampling: options: default_strategy: type: probabilistic param: 1",
"spec: storage: type:",
"storage: secretname:",
"storage: options: {}",
"storage: esIndexCleaner: enabled:",
"storage: esIndexCleaner: numberOfDays:",
"storage: esIndexCleaner: schedule:",
"elasticsearch: properties: doNotProvision:",
"elasticsearch: properties: name:",
"elasticsearch: nodeCount:",
"elasticsearch: resources: requests: cpu:",
"elasticsearch: resources: requests: memory:",
"elasticsearch: resources: limits: cpu:",
"elasticsearch: resources: limits: memory:",
"elasticsearch: redundancyPolicy:",
"elasticsearch: useCertManagement:",
"apiVersion: jaegertracing.io/v1 kind: Jaeger metadata: name: simple-prod spec: strategy: production storage: type: elasticsearch elasticsearch: nodeCount: 3 resources: requests: cpu: 1 memory: 16Gi limits: memory: 16Gi",
"apiVersion: jaegertracing.io/v1 kind: Jaeger metadata: name: simple-prod spec: strategy: production storage: type: elasticsearch elasticsearch: nodeCount: 1 storage: 1 storageClassName: gp2 size: 5Gi resources: requests: cpu: 200m memory: 4Gi limits: memory: 4Gi redundancyPolicy: ZeroRedundancy",
"es: server-urls:",
"es: max-doc-count:",
"es: max-num-spans:",
"es: max-span-age:",
"es: sniffer:",
"es: sniffer-tls-enabled:",
"es: timeout:",
"es: username:",
"es: password:",
"es: version:",
"es: num-replicas:",
"es: num-shards:",
"es: create-index-templates:",
"es: index-prefix:",
"es: bulk: actions:",
"es: bulk: flush-interval:",
"es: bulk: size:",
"es: bulk: workers:",
"es: tls: ca:",
"es: tls: cert:",
"es: tls: enabled:",
"es: tls: key:",
"es: tls: server-name:",
"es: token-file:",
"es-archive: bulk: actions:",
"es-archive: bulk: flush-interval:",
"es-archive: bulk: size:",
"es-archive: bulk: workers:",
"es-archive: create-index-templates:",
"es-archive: enabled:",
"es-archive: index-prefix:",
"es-archive: max-doc-count:",
"es-archive: max-num-spans:",
"es-archive: max-span-age:",
"es-archive: num-replicas:",
"es-archive: num-shards:",
"es-archive: password:",
"es-archive: server-urls:",
"es-archive: sniffer:",
"es-archive: sniffer-tls-enabled:",
"es-archive: timeout:",
"es-archive: tls: ca:",
"es-archive: tls: cert:",
"es-archive: tls: enabled:",
"es-archive: tls: key:",
"es-archive: tls: server-name:",
"es-archive: token-file:",
"es-archive: username:",
"es-archive: version:",
"apiVersion: jaegertracing.io/v1 kind: Jaeger metadata: name: simple-prod spec: strategy: production storage: type: elasticsearch options: es: server-urls: https://quickstart-es-http.default.svc:9200 index-prefix: my-prefix tls: ca: /es/certificates/ca.crt secretName: tracing-secret volumeMounts: - name: certificates mountPath: /es/certificates/ readOnly: true volumes: - name: certificates secret: secretName: quickstart-es-http-certs-public",
"apiVersion: jaegertracing.io/v1 kind: Jaeger metadata: name: simple-prod spec: strategy: production storage: type: elasticsearch options: es: server-urls: https://quickstart-es-http.default.svc:9200 1 index-prefix: my-prefix tls: 2 ca: /es/certificates/ca.crt secretName: tracing-secret 3 volumeMounts: 4 - name: certificates mountPath: /es/certificates/ readOnly: true volumes: - name: certificates secret: secretName: quickstart-es-http-certs-public",
"apiVersion: logging.openshift.io/v1 kind: Elasticsearch metadata: annotations: logging.openshift.io/elasticsearch-cert-management: \"true\" logging.openshift.io/elasticsearch-cert.jaeger-custom-es: \"user.jaeger\" logging.openshift.io/elasticsearch-cert.curator-custom-es: \"system.logging.curator\" name: custom-es spec: managementState: Managed nodeSpec: resources: limits: memory: 16Gi requests: cpu: 1 memory: 16Gi nodes: - nodeCount: 3 proxyResources: {} resources: {} roles: - master - client - data storage: {} redundancyPolicy: ZeroRedundancy",
"apiVersion: jaegertracing.io/v1 kind: Jaeger metadata: name: jaeger-prod spec: strategy: production storage: type: elasticsearch elasticsearch: name: custom-es doNotProvision: true useCertManagement: true",
"spec: query: replicas:",
"spec: query: options: {}",
"options: log-level:",
"options: query: base-path:",
"apiVersion: jaegertracing.io/v1 kind: \"Jaeger\" metadata: name: \"my-jaeger\" spec: strategy: allInOne allInOne: options: log-level: debug query: base-path: /jaeger",
"spec: ingester: options: {}",
"options: deadlockInterval:",
"options: kafka: consumer: topic:",
"options: kafka: consumer: brokers:",
"options: log-level:",
"apiVersion: jaegertracing.io/v1 kind: Jaeger metadata: name: simple-streaming spec: strategy: streaming collector: options: kafka: producer: topic: jaeger-spans brokers: my-cluster-kafka-brokers.kafka:9092 ingester: options: kafka: consumer: topic: jaeger-spans brokers: my-cluster-kafka-brokers.kafka:9092 ingester: deadlockInterval: 5 storage: type: elasticsearch options: es: server-urls: http://elasticsearch:9200",
"apiVersion: apps/v1 kind: Deployment metadata: name: myapp annotations: \"sidecar.jaegertracing.io/inject\": \"true\" 1 spec: selector: matchLabels: app: myapp template: metadata: labels: app: myapp spec: containers: - name: myapp image: acme/myapp:myversion",
"apiVersion: apps/v1 kind: StatefulSet metadata: name: example-statefulset namespace: example-ns labels: app: example-app spec: spec: containers: - name: example-app image: acme/myapp:myversion ports: - containerPort: 8080 protocol: TCP - name: jaeger-agent image: registry.redhat.io/distributed-tracing/jaeger-agent-rhel7:<version> # The agent version must match the Operator version imagePullPolicy: IfNotPresent ports: - containerPort: 5775 name: zk-compact-trft protocol: UDP - containerPort: 5778 name: config-rest protocol: TCP - containerPort: 6831 name: jg-compact-trft protocol: UDP - containerPort: 6832 name: jg-binary-trft protocol: UDP - containerPort: 14271 name: admin-http protocol: TCP args: - --reporter.grpc.host-port=dns:///jaeger-collector-headless.example-ns:14250 - --reporter.type=grpc",
"oc login --username=<your_username>",
"oc login --username=<NAMEOFUSER>",
"oc get deployments -n <jaeger-project>",
"oc get deployments -n openshift-operators",
"oc get deployments -n openshift-operators",
"NAME READY UP-TO-DATE AVAILABLE AGE elasticsearch-operator 1/1 1 1 93m jaeger-operator 1/1 1 1 49m jaeger-test 1/1 1 1 7m23s jaeger-test2 1/1 1 1 6m48s tracing1 1/1 1 1 7m8s tracing2 1/1 1 1 35m",
"oc delete jaeger <deployment-name> -n <jaeger-project>",
"oc delete jaeger tracing2 -n openshift-operators",
"oc get deployments -n <jaeger-project>",
"oc get deployments -n openshift-operators",
"NAME READY UP-TO-DATE AVAILABLE AGE elasticsearch-operator 1/1 1 1 94m jaeger-operator 1/1 1 1 50m jaeger-test 1/1 1 1 8m14s jaeger-test2 1/1 1 1 7m39s tracing1 1/1 1 1 7m59s"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.17/html-single/distributed_tracing/index |
Chapter 5. OpenShift Data Foundation upgrade overview | Chapter 5. OpenShift Data Foundation upgrade overview As an operator bundle managed by the Operator Lifecycle Manager (OLM), OpenShift Data Foundation leverages its operators to perform high-level tasks of installing and upgrading the product through ClusterServiceVersion (CSV) CRs. 5.1. Upgrade Workflows OpenShift Data Foundation recognizes two types of upgrades: Z-stream release upgrades and Minor Version release upgrades. While the user interface workflows for these two upgrade paths are not quite the same, the resulting behaviors are fairly similar. The distinctions are as follows: For Z-stream releases, OCS will publish a new bundle in the redhat-operators CatalogSource . The OLM will detect this and create an InstallPlan for the new CSV to replace the existing CSV. The Subscription approval strategy, whether Automatic or Manual, will determine whether the OLM proceeds with reconciliation or waits for administrator approval. For Minor Version releases, OpenShift Container Storage will also publish a new bundle in the redhat-operators CatalogSource . The difference is that this bundle will be part of a new channel, and channel upgrades are not automatic. The administrator must explicitly select the new release channel. Once this is done, the OLM will detect this and create an InstallPlan for the new CSV to replace the existing CSV. Since the channel switch is a manual operation, OLM will automatically start the reconciliation. From this point onwards, the upgrade processes are identical. 5.2. ClusterServiceVersion Reconciliation When the OLM detects an approved InstallPlan , it begins the process of reconciling the CSVs. Broadly, it does this by updating the operator resources based on the new spec, verifying the new CSV installs correctly, then deleting the old CSV. The upgrade process will push updates to the operator Deployments, which will trigger the restart of the operator Pods using the images specified in the new CSV. Note While it is possible to make changes to a given CSV and have those changes propagate to the relevant resource, when upgrading to a new CSV all custom changes will be lost, as the new CSV will be created based on its unaltered spec. 5.3. Operator Reconciliation At this point, the reconciliation of the OpenShift Data Foundation operands proceeds as defined in the OpenShift Data Foundation installation overview . The operators will ensure that all relevant resources exist in their expected configurations as specified in the user-facing resources (for example, StorageCluster ). | null | https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.17/html/red_hat_openshift_data_foundation_architecture/openshift_data_foundation_upgrade_overview |
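For the Minor Version path described above, the channel switch and any manual approval are ordinary OLM operations performed with the oc client. The following is a minimal sketch only; the Subscription name ocs-operator, the openshift-storage namespace, and the stable-4.y channel are assumptions for illustration and are not taken from this overview:

# switch the Subscription to the new release channel
oc patch subscription ocs-operator -n openshift-storage --type merge -p '{"spec": {"channel": "stable-4.y"}}'
# with a Manual approval strategy, find and approve the InstallPlan that OLM creates
oc get installplan -n openshift-storage
oc patch installplan <install_plan_name> -n openshift-storage --type merge -p '{"spec": {"approved": true}}'

With an Automatic approval strategy the last two commands are not needed, because OLM starts the CSV reconciliation as soon as the InstallPlan is created.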
Chapter 4. Certificate types and descriptions | Chapter 4. Certificate types and descriptions 4.1. User-provided certificates for the API server 4.1.1. Purpose The API server is accessible by clients external to the cluster at api.<cluster_name>.<base_domain> . You might want clients to access the API server at a different hostname or without the need to distribute the cluster-managed certificate authority (CA) certificates to the clients. The administrator must set a custom default certificate to be used by the API server when serving content. 4.1.2. Location The user-provided certificates must be provided in a kubernetes.io/tls type Secret in the openshift-config namespace. Update the API server cluster configuration, the apiserver/cluster resource, to enable the use of the user-provided certificate. 4.1.3. Management User-provided certificates are managed by the user. 4.1.4. Expiration API server client certificate expiration is less than five minutes. User-provided certificates are managed by the user. 4.1.5. Customization Update the secret containing the user-managed certificate as needed. Additional resources Adding API server certificates 4.2. Proxy certificates 4.2.1. Purpose Proxy certificates allow users to specify one or more custom certificate authority (CA) certificates used by platform components when making egress connections. The trustedCA field of the Proxy object is a reference to a config map that contains a user-provided trusted certificate authority (CA) bundle. This bundle is merged with the Red Hat Enterprise Linux CoreOS (RHCOS) trust bundle and injected into the trust store of platform components that make egress HTTPS calls. For example, image-registry-operator calls an external image registry to download images. If trustedCA is not specified, only the RHCOS trust bundle is used for proxied HTTPS connections. Provide custom CA certificates to the RHCOS trust bundle if you want to use your own certificate infrastructure. The trustedCA field should only be consumed by a proxy validator. The validator is responsible for reading the certificate bundle from required key ca-bundle.crt and copying it to a config map named trusted-ca-bundle in the openshift-config-managed namespace. The namespace for the config map referenced by trustedCA is openshift-config : apiVersion: v1 kind: ConfigMap metadata: name: user-ca-bundle namespace: openshift-config data: ca-bundle.crt: | -----BEGIN CERTIFICATE----- Custom CA certificate bundle. -----END CERTIFICATE----- Additional resources Configuring the cluster-wide proxy 4.2.2. Managing proxy certificates during installation The additionalTrustBundle value of the installer configuration is used to specify any proxy-trusted CA certificates during installation. For example: USD cat install-config.yaml Example output ... proxy: httpProxy: http://<https://username:[email protected]:123/> httpsProxy: https://<https://username:[email protected]:123/> noProxy: <123.example.com,10.88.0.0/16> additionalTrustBundle: | -----BEGIN CERTIFICATE----- <MY_HTTPS_PROXY_TRUSTED_CA_CERT> -----END CERTIFICATE----- ... 4.2.3. Location The user-provided trust bundle is represented as a config map. The config map is mounted into the file system of platform components that make egress HTTPS calls. Typically, Operators mount the config map to /etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem , but this is not required by the proxy. A proxy can modify or inspect the HTTPS connection. 
In either case, the proxy must generate and sign a new certificate for the connection. Complete proxy support means connecting to the specified proxy and trusting any signatures it has generated. Therefore, it is necessary to let the user specify a trusted root, such that any certificate chain connected to that trusted root is also trusted. If using the RHCOS trust bundle, place CA certificates in /etc/pki/ca-trust/source/anchors . See Using shared system certificates in the Red Hat Enterprise Linux documentation for more information. 4.2.4. Expiration The user sets the expiration term of the user-provided trust bundle. The default expiration term is defined by the CA certificate itself. It is up to the CA administrator to configure this for the certificate before it can be used by OpenShift Container Platform or RHCOS. Note Red Hat does not monitor for when CAs expire. However, due to the long life of CAs, this is generally not an issue. However, you might need to periodically update the trust bundle. 4.2.5. Services By default, all platform components that make egress HTTPS calls will use the RHCOS trust bundle. If trustedCA is defined, it will also be used. Any service that is running on the RHCOS node is able to use the trust bundle of the node. 4.2.6. Management These certificates are managed by the system and not the user. 4.2.7. Customization Updating the user-provided trust bundle consists of either: updating the PEM-encoded certificates in the config map referenced by trustedCA, or creating a config map in the namespace openshift-config that contains the new trust bundle and updating trustedCA to reference the name of the new config map. The mechanism for writing CA certificates to the RHCOS trust bundle is exactly the same as writing any other file to RHCOS, which is done through the use of machine configs. When the Machine Config Operator (MCO) applies the new machine config that contains the new CA certificates, the node is rebooted. During the boot, the service coreos-update-ca-trust.service runs on the RHCOS nodes, which automatically update the trust bundle with the new CA certificates. 
For example: apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: worker name: 50-examplecorp-ca-cert spec: config: ignition: version: 3.1.0 storage: files: - contents: source: data:text/plain;charset=utf-8;base64,LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUVORENDQXh5Z0F3SUJBZ0lKQU51bkkwRDY2MmNuTUEwR0NTcUdTSWIzRFFFQkN3VUFNSUdsTVFzd0NRWUQKV1FRR0V3SlZVekVYTUJVR0ExVUVDQXdPVG05eWRHZ2dRMkZ5YjJ4cGJtRXhFREFPQmdOVkJBY01CMUpoYkdWcApBMmd4RmpBVUJnTlZCQW9NRFZKbFpDQklZWFFzSUVsdVl5NHhFekFSQmdOVkJBc01DbEpsWkNCSVlYUWdTVlF4Ckh6QVpCZ05WQkFNTUVsSmxaQ0JJWVhRZ1NWUWdVbTl2ZENCRFFURWhNQjhHQ1NxR1NJYjNEUUVKQVJZU2FXNW0KWGpDQnBURUxNQWtHQTFVRUJoTUNWVk14RnpBVkJnTlZCQWdNRGs1dmNuUm9JRU5oY205c2FXNWhNUkF3RGdZRApXUVFIREFkU1lXeGxhV2RvTVJZd0ZBWURWUVFLREExU1pXUWdTR0YwTENCSmJtTXVNUk13RVFZRFZRUUxEQXBTCkFXUWdTR0YwSUVsVU1Sc3dHUVlEVlFRRERCSlNaV1FnU0dGMElFbFVJRkp2YjNRZ1EwRXhJVEFmQmdrcWhraUcKMHcwQkNRRVdFbWx1Wm05elpXTkFjbVZrYUdGMExtTnZiVENDQVNJd0RRWUpLb1pJaHZjTkFRRUJCUUFEZ2dFUApCRENDQVFvQ2dnRUJBTFF0OU9KUWg2R0M1TFQxZzgwcU5oMHU1MEJRNHNaL3laOGFFVHh0KzVsblBWWDZNSEt6CmQvaTdsRHFUZlRjZkxMMm55VUJkMmZRRGsxQjBmeHJza2hHSUlaM2lmUDFQczRsdFRrdjhoUlNvYjNWdE5xU28KSHhrS2Z2RDJQS2pUUHhEUFdZeXJ1eTlpckxaaW9NZmZpM2kvZ0N1dDBaV3RBeU8zTVZINXFXRi9lbkt3Z1BFUwpZOXBvK1RkQ3ZSQi9SVU9iQmFNNzYxRWNyTFNNMUdxSE51ZVNmcW5obzNBakxRNmRCblBXbG82MzhabTFWZWJLCkNFTHloa0xXTVNGa0t3RG1uZTBqUTAyWTRnMDc1dkNLdkNzQ0F3RUFBYU5qTUdFd0hRWURWUjBPQkJZRUZIN1IKNXlDK1VlaElJUGV1TDhacXczUHpiZ2NaTUI4R0ExVWRJd1FZTUJhQUZIN1I0eUMrVWVoSUlQZXVMOFpxdzNQegpjZ2NaTUE4R0ExVWRFd0VCL3dRRk1BTUJBZjh3RGdZRFZSMFBBUUgvQkFRREFnR0dNQTBHQ1NxR1NJYjNEUUVCCkR3VUFBNElCQVFCRE52RDJWbTlzQTVBOUFsT0pSOCtlbjVYejloWGN4SkI1cGh4Y1pROGpGb0cwNFZzaHZkMGUKTUVuVXJNY2ZGZ0laNG5qTUtUUUNNNFpGVVBBaWV5THg0ZjUySHVEb3BwM2U1SnlJTWZXK0tGY05JcEt3Q3NhawpwU29LdElVT3NVSks3cUJWWnhjckl5ZVFWMnFjWU9lWmh0UzV3QnFJd09BaEZ3bENFVDdaZTU4UUhtUzQ4c2xqCjVlVGtSaml2QWxFeHJGektjbGpDNGF4S1Fsbk92VkF6eitHbTMyVTB4UEJGNEJ5ZVBWeENKVUh3MVRzeVRtZWwKU3hORXA3eUhvWGN3bitmWG5hK3Q1SldoMWd4VVp0eTMKLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo= mode: 0644 overwrite: true path: /etc/pki/ca-trust/source/anchors/examplecorp-ca.crt The trust store of machines must also support updating the trust store of nodes. 4.2.8. Renewal There are no Operators that can auto-renew certificates on the RHCOS nodes. Note Red Hat does not monitor for when CAs expire. However, due to the long life of CAs, this is generally not an issue. However, you might need to periodically update the trust bundle. 4.3. Service CA certificates 4.3.1. Purpose service-ca is an Operator that creates a self-signed CA when an OpenShift Container Platform cluster is deployed. 4.3.2. Expiration A custom expiration term is not supported. The self-signed CA is stored in a secret with qualified name service-ca/signing-key in fields tls.crt (certificate(s)), tls.key (private key), and ca-bundle.crt (CA bundle). Other services can request a service serving certificate by annotating a service resource with service.beta.openshift.io/serving-cert-secret-name: <secret name> . In response, the Operator generates a new certificate, as tls.crt , and private key, as tls.key to the named secret. The certificate is valid for two years. Other services can request that the CA bundle for the service CA be injected into API service or config map resources by annotating with service.beta.openshift.io/inject-cabundle: true to support validating certificates generated from the service CA. 
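A minimal sketch of the two annotations just described; the Service and ConfigMap names, port, and selector here are illustrative placeholders, not values from this document:

apiVersion: v1
kind: Service
metadata:
  name: my-service
  annotations:
    service.beta.openshift.io/serving-cert-secret-name: my-service-tls  # the Operator generates tls.crt and tls.key into this secret
spec:
  selector:
    app: my-app
  ports:
  - port: 8443
    targetPort: 8443
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: my-service-ca-bundle
  annotations:
    service.beta.openshift.io/inject-cabundle: "true"  # requests injection of the service CA bundle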
In response, the Operator writes its current CA bundle to the CABundle field of an API service or as service-ca.crt to a config map. As of OpenShift Container Platform 4.3.5, automated rotation is supported and is backported to some 4.2.z and 4.3.z releases. For any release supporting automated rotation, the service CA is valid for 26 months and is automatically refreshed when there is less than 13 months validity left. If necessary, you can manually refresh the service CA. The service CA expiration of 26 months is longer than the expected upgrade interval for a supported OpenShift Container Platform cluster, such that non-control plane consumers of service CA certificates will be refreshed after CA rotation and prior to the expiration of the pre-rotation CA. Warning A manually-rotated service CA does not maintain trust with the service CA. You might experience a temporary service disruption until the pods in the cluster are restarted, which ensures that pods are using service serving certificates issued by the new service CA. 4.3.3. Management These certificates are managed by the system and not the user. 4.3.4. Services Services that use service CA certificates include: cluster-autoscaler-operator cluster-monitoring-operator cluster-authentication-operator cluster-image-registry-operator cluster-ingress-operator cluster-kube-apiserver-operator cluster-kube-controller-manager-operator cluster-kube-scheduler-operator cluster-networking-operator cluster-openshift-apiserver-operator cluster-openshift-controller-manager-operator cluster-samples-operator machine-config-operator console-operator insights-operator machine-api-operator operator-lifecycle-manager This is not a comprehensive list. Additional resources Manually rotate service serving certificates Securing service traffic using service serving certificate secrets 4.4. Node certificates 4.4.1. Purpose Node certificates are signed by the cluster; they come from a certificate authority (CA) that is generated by the bootstrap process. After the cluster is installed, the node certificates are auto-rotated. 4.4.2. Management These certificates are managed by the system and not the user. Additional resources Working with nodes 4.5. Bootstrap certificates 4.5.1. Purpose The kubelet, in OpenShift Container Platform 4 and later, uses the bootstrap certificate located in /etc/kubernetes/kubeconfig to initially bootstrap. This is followed by the bootstrap initialization process and authorization of the kubelet to create a CSR . In that process, the kubelet generates a CSR while communicating over the bootstrap channel. The controller manager signs the CSR, resulting in a certificate that the kubelet manages. 4.5.2. Management These certificates are managed by the system and not the user. 4.5.3. Expiration This bootstrap certificate is valid for 10 years. The kubelet-managed certificate is valid for one year and rotates automatically at around the 80 percent mark of that one year. Note OpenShift Lifecycle Manager (OLM) does not update the bootstrap certificate. 4.5.4. Customization You cannot customize the bootstrap certificates. 4.6. etcd certificates 4.6.1. Purpose etcd certificates are signed by the etcd-signer; they come from a certificate authority (CA) that is generated by the bootstrap process. 4.6.2. Expiration The CA certificates are valid for 10 years. The peer, client, and server certificates are valid for three years. 4.6.3. Management These certificates are only managed by the system and are automatically rotated. 4.6.4. 
Services etcd certificates are used for encrypted communication between etcd member peers, as well as encrypted client traffic. The following certificates are generated and used by etcd and other processes that communicate with etcd: Peer certificates: Used for communication between etcd members. Client certificates: Used for encrypted server-client communication. Client certificates are currently used by the API server only, and no other service should connect to etcd directly except for the proxy. Client secrets ( etcd-client , etcd-metric-client , etcd-metric-signer , and etcd-signer ) are added to the openshift-config , openshift-monitoring , and openshift-kube-apiserver namespaces. Server certificates: Used by the etcd server for authenticating client requests. Metric certificates: All metric consumers connect to proxy with metric-client certificates. Additional resources Restoring to a cluster state 4.7. OLM certificates 4.7.1. Management All certificates for OpenShift Lifecycle Manager (OLM) components ( olm-operator , catalog-operator , packageserver , and marketplace-operator ) are managed by the system. When installing Operators that include webhooks or API services in their ClusterServiceVersion (CSV) object, OLM creates and rotates the certificates for these resources. Certificates for resources in the openshift-operator-lifecycle-manager namespace are managed by OLM. OLM will not update the certificates of Operators that it manages in proxy environments. These certificates must be managed by the user using the subscription config. 4.8. Aggregated API client certificates 4.8.1. Purpose Aggregated API client certificates are used to authenticate the KubeAPIServer when connecting to the Aggregated API Servers. 4.8.2. Management These certificates are managed by the system and not the user. 4.8.3. Expiration This CA is valid for 30 days. The managed client certificates are valid for 30 days. CA and client certificates are rotated automatically through the use of controllers. 4.8.4. Customization You cannot customize the aggregated API server certificates. 4.9. Machine Config Operator certificates 4.9.1. Purpose Machine Config Operator certificates are used to secure connections between the Red Hat Enterprise Linux CoreOS (RHCOS) nodes and the Machine Config Server. Important Currently, there is no supported way to block or restrict the machine config server endpoint. The machine config server must be exposed to the network so that newly-provisioned machines, which have no existing configuration or state, are able to fetch their configuration. In this model, the root of trust is the certificate signing requests (CSR) endpoint, which is where the kubelet sends its certificate signing request for approval to join the cluster. Because of this, machine configs should not be used to distribute sensitive information, such as secrets and certificates. To ensure that the machine config server endpoints, ports 22623 and 22624, are secured in bare metal scenarios, customers must configure proper network policies. Additional resources About the OpenShift SDN network plugin . 4.9.2. Management These certificates are managed by the system and not the user. 4.9.3. Expiration This CA is valid for 10 years. The issued serving certificates are valid for 10 years. 4.9.4. Customization You cannot customize the Machine Config Operator certificates. 4.10. User-provided certificates for default ingress 4.10.1. Purpose Applications are usually exposed at <route_name>.apps.<cluster_name>.<base_domain> . 
The <cluster_name> and <base_domain> come from the installation config file. <route_name> is the host field of the route, if specified, or the route name. For example, hello-openshift-default.apps.username.devcluster.openshift.com . hello-openshift is the name of the route and the route is in the default namespace. You might want clients to access the applications without the need to distribute the cluster-managed CA certificates to the clients. The administrator must set a custom default certificate when serving application content. Warning The Ingress Operator generates a default certificate for an Ingress Controller to serve as a placeholder until you configure a custom default certificate. Do not use operator-generated default certificates in production clusters. 4.10.2. Location The user-provided certificates must be provided in a tls type Secret resource in the openshift-ingress namespace. Update the IngressController CR in the openshift-ingress-operator namespace to enable the use of the user-provided certificate. For more information on this process, see Setting a custom default certificate . 4.10.3. Management User-provided certificates are managed by the user. 4.10.4. Expiration User-provided certificates are managed by the user. 4.10.5. Services Applications deployed on the cluster use user-provided certificates for default ingress. 4.10.6. Customization Update the secret containing the user-managed certificate as needed. Additional resources Replacing the default ingress certificate 4.11. Ingress certificates 4.11.1. Purpose The Ingress Operator uses certificates for: Securing access to metrics for Prometheus. Securing access to routes. 4.11.2. Location To secure access to Ingress Operator and Ingress Controller metrics, the Ingress Operator uses service serving certificates. The Operator requests a certificate from the service-ca controller for its own metrics, and the service-ca controller puts the certificate in a secret named metrics-tls in the openshift-ingress-operator namespace. Additionally, the Ingress Operator requests a certificate for each Ingress Controller, and the service-ca controller puts the certificate in a secret named router-metrics-certs-<name> , where <name> is the name of the Ingress Controller, in the openshift-ingress namespace. Each Ingress Controller has a default certificate that it uses for secured routes that do not specify their own certificates. Unless you specify a custom certificate, the Operator uses a self-signed certificate by default. The Operator uses its own self-signed signing certificate to sign any default certificate that it generates. The Operator generates this signing certificate and puts it in a secret named router-ca in the openshift-ingress-operator namespace. When the Operator generates a default certificate, it puts the default certificate in a secret named router-certs-<name> (where <name> is the name of the Ingress Controller) in the openshift-ingress namespace. Warning The Ingress Operator generates a default certificate for an Ingress Controller to serve as a placeholder until you configure a custom default certificate. Do not use Operator-generated default certificates in production clusters. 4.11.3. Workflow Figure 4.1. Custom certificate workflow Figure 4.2. Default certificate workflow An empty defaultCertificate field causes the Ingress Operator to use its self-signed CA to generate a serving certificate for the specified domain. The default CA certificate and key generated by the Ingress Operator. 
Used to sign Operator-generated default serving certificates. In the default workflow, the wildcard default serving certificate, created by the Ingress Operator and signed using the generated default CA certificate. In the custom workflow, this is the user-provided certificate. The router deployment. Uses the certificate in secrets/router-certs-default as its default front-end server certificate. In the default workflow, the contents of the wildcard default serving certificate (public and private parts) are copied here to enable OAuth integration. In the custom workflow, this is the user-provided certificate. The public (certificate) part of the default serving certificate. Replaces the configmaps/router-ca resource. The user updates the cluster proxy configuration with the CA certificate that signed the ingresscontroller serving certificate. This enables components like auth , console , and the registry to trust the serving certificate. The cluster-wide trusted CA bundle containing the combined Red Hat Enterprise Linux CoreOS (RHCOS) and user-provided CA bundles or an RHCOS-only bundle if a user bundle is not provided. The custom CA certificate bundle, which instructs other components (for example, auth and console ) to trust an ingresscontroller configured with a custom certificate. The trustedCA field is used to reference the user-provided CA bundle. The Cluster Network Operator injects the trusted CA bundle into the proxy-ca config map. OpenShift Container Platform 4.10 and newer use default-ingress-cert . 4.11.4. Expiration The expiration terms for the Ingress Operator's certificates are as follows: The expiration date for metrics certificates that the service-ca controller creates is two years after the date of creation. The expiration date for the Operator's signing certificate is two years after the date of creation. The expiration date for default certificates that the Operator generates is two years after the date of creation. You cannot specify custom expiration terms on certificates that the Ingress Operator or service-ca controller creates. You cannot specify expiration terms when installing OpenShift Container Platform for certificates that the Ingress Operator or service-ca controller creates. 4.11.5. Services Prometheus uses the certificates that secure metrics. The Ingress Operator uses its signing certificate to sign default certificates that it generates for Ingress Controllers for which you do not set custom default certificates. Cluster components that use secured routes may use the default Ingress Controller's default certificate. Ingress to the cluster via a secured route uses the default certificate of the Ingress Controller by which the route is accessed unless the route specifies its own certificate. 4.11.6. Management Ingress certificates are managed by the user. See Replacing the default ingress certificate for more information. 4.11.7. Renewal The service-ca controller automatically rotates the certificates that it issues. However, it is possible to use oc delete secret <secret> to manually rotate service serving certificates. The Ingress Operator does not rotate its own signing certificate or the default certificates that it generates. Operator-generated default certificates are intended as placeholders for custom default certificates that you configure. 4.12. Monitoring and OpenShift Logging Operator component certificates 4.12.1. Expiration Monitoring components secure their traffic with service CA certificates. 
These certificates are valid for 2 years and are replaced automatically on rotation of the service CA, which is every 13 months. If the certificate lives in the openshift-monitoring or openshift-logging namespace, it is system managed and rotated automatically. 4.12.2. Management These certificates are managed by the system and not the user. 4.13. Control plane certificates 4.13.1. Location Control plane certificates are included in these namespaces: openshift-config-managed openshift-kube-apiserver openshift-kube-apiserver-operator openshift-kube-controller-manager openshift-kube-controller-manager-operator openshift-kube-scheduler 4.13.2. Management Control plane certificates are managed by the system and rotated automatically. In the rare case that your control plane certificates have expired, see Recovering from expired control plane certificates . | [
"apiVersion: v1 kind: ConfigMap metadata: name: user-ca-bundle namespace: openshift-config data: ca-bundle.crt: | -----BEGIN CERTIFICATE----- Custom CA certificate bundle. -----END CERTIFICATE-----",
"cat install-config.yaml",
"proxy: httpProxy: http://<https://username:[email protected]:123/> httpsProxy: https://<https://username:[email protected]:123/> noProxy: <123.example.com,10.88.0.0/16> additionalTrustBundle: | -----BEGIN CERTIFICATE----- <MY_HTTPS_PROXY_TRUSTED_CA_CERT> -----END CERTIFICATE-----",
"apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: worker name: 50-examplecorp-ca-cert spec: config: ignition: version: 3.1.0 storage: files: - contents: source: data:text/plain;charset=utf-8;base64,LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUVORENDQXh5Z0F3SUJBZ0lKQU51bkkwRDY2MmNuTUEwR0NTcUdTSWIzRFFFQkN3VUFNSUdsTVFzd0NRWUQKV1FRR0V3SlZVekVYTUJVR0ExVUVDQXdPVG05eWRHZ2dRMkZ5YjJ4cGJtRXhFREFPQmdOVkJBY01CMUpoYkdWcApBMmd4RmpBVUJnTlZCQW9NRFZKbFpDQklZWFFzSUVsdVl5NHhFekFSQmdOVkJBc01DbEpsWkNCSVlYUWdTVlF4Ckh6QVpCZ05WQkFNTUVsSmxaQ0JJWVhRZ1NWUWdVbTl2ZENCRFFURWhNQjhHQ1NxR1NJYjNEUUVKQVJZU2FXNW0KWGpDQnBURUxNQWtHQTFVRUJoTUNWVk14RnpBVkJnTlZCQWdNRGs1dmNuUm9JRU5oY205c2FXNWhNUkF3RGdZRApXUVFIREFkU1lXeGxhV2RvTVJZd0ZBWURWUVFLREExU1pXUWdTR0YwTENCSmJtTXVNUk13RVFZRFZRUUxEQXBTCkFXUWdTR0YwSUVsVU1Sc3dHUVlEVlFRRERCSlNaV1FnU0dGMElFbFVJRkp2YjNRZ1EwRXhJVEFmQmdrcWhraUcKMHcwQkNRRVdFbWx1Wm05elpXTkFjbVZrYUdGMExtTnZiVENDQVNJd0RRWUpLb1pJaHZjTkFRRUJCUUFEZ2dFUApCRENDQVFvQ2dnRUJBTFF0OU9KUWg2R0M1TFQxZzgwcU5oMHU1MEJRNHNaL3laOGFFVHh0KzVsblBWWDZNSEt6CmQvaTdsRHFUZlRjZkxMMm55VUJkMmZRRGsxQjBmeHJza2hHSUlaM2lmUDFQczRsdFRrdjhoUlNvYjNWdE5xU28KSHhrS2Z2RDJQS2pUUHhEUFdZeXJ1eTlpckxaaW9NZmZpM2kvZ0N1dDBaV3RBeU8zTVZINXFXRi9lbkt3Z1BFUwpZOXBvK1RkQ3ZSQi9SVU9iQmFNNzYxRWNyTFNNMUdxSE51ZVNmcW5obzNBakxRNmRCblBXbG82MzhabTFWZWJLCkNFTHloa0xXTVNGa0t3RG1uZTBqUTAyWTRnMDc1dkNLdkNzQ0F3RUFBYU5qTUdFd0hRWURWUjBPQkJZRUZIN1IKNXlDK1VlaElJUGV1TDhacXczUHpiZ2NaTUI4R0ExVWRJd1FZTUJhQUZIN1I0eUMrVWVoSUlQZXVMOFpxdzNQegpjZ2NaTUE4R0ExVWRFd0VCL3dRRk1BTUJBZjh3RGdZRFZSMFBBUUgvQkFRREFnR0dNQTBHQ1NxR1NJYjNEUUVCCkR3VUFBNElCQVFCRE52RDJWbTlzQTVBOUFsT0pSOCtlbjVYejloWGN4SkI1cGh4Y1pROGpGb0cwNFZzaHZkMGUKTUVuVXJNY2ZGZ0laNG5qTUtUUUNNNFpGVVBBaWV5THg0ZjUySHVEb3BwM2U1SnlJTWZXK0tGY05JcEt3Q3NhawpwU29LdElVT3NVSks3cUJWWnhjckl5ZVFWMnFjWU9lWmh0UzV3QnFJd09BaEZ3bENFVDdaZTU4UUhtUzQ4c2xqCjVlVGtSaml2QWxFeHJGektjbGpDNGF4S1Fsbk92VkF6eitHbTMyVTB4UEJGNEJ5ZVBWeENKVUh3MVRzeVRtZWwKU3hORXA3eUhvWGN3bitmWG5hK3Q1SldoMWd4VVp0eTMKLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo= mode: 0644 overwrite: true path: /etc/pki/ca-trust/source/anchors/examplecorp-ca.crt"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.10/html/security_and_compliance/certificate-types-and-descriptions |
Chapter 7. Supplemental certification | Chapter 7. Supplemental certification Open supplemental certifications in the following scenarios: First time certification The bare-metal supplemental certification can be automatically created during a different certification process, for example, when you applied for the Red Hat Enterprise Linux System certification. If it was not automatically created, or if you need to apply for the certification at a later date, open a new supplemental certification above the Red Hat OpenShift Container Platform certification. Recertification Open a supplemental certification to update an existing RHOCP bare-metal certification. You may want to recertify your product, for example, to certify the same system on different versions of a Red Hat platform or because your product has received a significant update. You are responsible for initiating these certifications and notifying Red Hat of any material changes to your product. | null | https://docs.redhat.com/en/documentation/red_hat_hardware_certification/2025/html/red_hat_openshift_container_platform_hardware_bare_metal_certification_policy_guide/assembly-supplemental-certification_rhosp-bm-pol-pass-through-cert |
Chapter 1. About automation services catalog | Chapter 1. About automation services catalog Automation services catalog is a service within the Red Hat Ansible Automation Platform. Automation services catalog enables customers to organize and govern product catalog sources on Ansible Tower across various environments. Using automation services catalog you can: Apply multi-level approval to individual platform inventories Organize content in the form of products from your platforms into portfolios Choose portfolios to share with specific groups of users Set boundaries around values driving execution of user requests | null | https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.3/html/getting_started_with_automation_services_catalog/about_automation_services_catalog |
Chapter 8. Installing a private cluster on Azure | Chapter 8. Installing a private cluster on Azure In OpenShift Container Platform version 4.14, you can install a private cluster into an existing Azure Virtual Network (VNet) on Microsoft Azure. The installation program provisions the rest of the required infrastructure, which you can further customize. To customize the installation, you modify parameters in the install-config.yaml file before you install the cluster. 8.1. Prerequisites You reviewed details about the OpenShift Container Platform installation and update processes. You read the documentation on selecting a cluster installation method and preparing it for users . You configured an Azure account to host the cluster and determined the tested and validated region to deploy the cluster to. If you use a firewall, you configured it to allow the sites that your cluster requires access to. If you use customer-managed encryption keys, you prepared your Azure environment for encryption . 8.2. Private clusters You can deploy a private OpenShift Container Platform cluster that does not expose external endpoints. Private clusters are accessible from only an internal network and are not visible to the internet. By default, OpenShift Container Platform is provisioned to use publicly-accessible DNS and endpoints. A private cluster sets the DNS, Ingress Controller, and API server to private when you deploy your cluster. This means that the cluster resources are only accessible from your internal network and are not visible to the internet. Important If the cluster has any public subnets, load balancer services created by administrators might be publicly accessible. To ensure cluster security, verify that these services are explicitly annotated as private. To deploy a private cluster, you must: Use existing networking that meets your requirements. Your cluster resources might be shared between other clusters on the network. Deploy from a machine that has access to: The API services for the cloud to which you provision. The hosts on the network that you provision. The internet to obtain installation media. You can use any machine that meets these access requirements and follows your company's guidelines. For example, this machine can be a bastion host on your cloud network or a machine that has access to the network through a VPN. 8.2.1. Private clusters in Azure To create a private cluster on Microsoft Azure, you must provide an existing private VNet and subnets to host the cluster. The installation program must also be able to resolve the DNS records that the cluster requires. The installation program configures the Ingress Operator and API server for only internal traffic. Depending how your network connects to the private VNET, you might need to use a DNS forwarder to resolve the cluster's private DNS records. The cluster's machines use 168.63.129.16 internally for DNS resolution. For more information, see What is Azure Private DNS? and What is IP address 168.63.129.16? in the Azure documentation. The cluster still requires access to internet to access the Azure APIs. The following items are not required or created when you install a private cluster: A BaseDomainResourceGroup , since the cluster does not create public records Public IP addresses Public DNS records Public endpoints 8.2.1.1. Limitations Private clusters on Azure are subject to only the limitations that are associated with the use of an existing VNet. 8.2.2. 
User-defined outbound routing In OpenShift Container Platform, you can choose your own outbound routing for a cluster to connect to the internet. This allows you to skip the creation of public IP addresses and the public load balancer. You can configure user-defined routing by modifying parameters in the install-config.yaml file before installing your cluster. A pre-existing VNet is required to use outbound routing when installing a cluster; the installation program is not responsible for configuring this. When configuring a cluster to use user-defined routing, the installation program does not create the following resources: Outbound rules for access to the internet. Public IPs for the public load balancer. Kubernetes Service object to add the cluster machines to the public load balancer for outbound requests. You must ensure the following items are available before setting user-defined routing: Egress to the internet is possible to pull container images, unless using an OpenShift image registry mirror. The cluster can access Azure APIs. Various allowlist endpoints are configured. You can reference these endpoints in the Configuring your firewall section. There are several pre-existing networking setups that are supported for internet access using user-defined routing. Private cluster with network address translation You can use Azure VNET network address translation (NAT) to provide outbound internet access for the subnets in your cluster. You can reference Create a NAT gateway using Azure CLI in the Azure documentation for configuration instructions. When using a VNet setup with Azure NAT and user-defined routing configured, you can create a private cluster with no public endpoints. Private cluster with Azure Firewall You can use Azure Firewall to provide outbound routing for the VNet used to install the cluster. You can learn more about providing user-defined routing with Azure Firewall in the Azure documentation. When using a VNet setup with Azure Firewall and user-defined routing configured, you can create a private cluster with no public endpoints. Private cluster with a proxy configuration You can use a proxy with user-defined routing to allow egress to the internet. You must ensure that cluster Operators do not access Azure APIs using a proxy; Operators must have access to Azure APIs outside of the proxy. When using the default route table for subnets, with 0.0.0.0/0 populated automatically by Azure, all Azure API requests are routed over Azure's internal network even though the IP addresses are public. As long as the Network Security Group rules allow egress to Azure API endpoints, proxies with user-defined routing configured allow you to create private clusters with no public endpoints. Private cluster with no internet access You can install a private network that restricts all access to the internet, except the Azure API. This is accomplished by mirroring the release image registry locally. Your cluster must have access to the following: An OpenShift image registry mirror that allows for pulling container images Access to Azure APIs With these requirements available, you can use user-defined routing to create private clusters with no public endpoints. 8.3. About reusing a VNet for your OpenShift Container Platform cluster In OpenShift Container Platform 4.14, you can deploy a cluster into an existing Azure Virtual Network (VNet) in Microsoft Azure. If you do, you must also use existing subnets within the VNet and routing rules. 
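Before walking through the detailed VNet requirements, the following minimal sketch shows how an existing VNet and user-defined routing are typically expressed in the install-config.yaml file. The resource group, VNet, and subnet names are placeholders, and the fully annotated sample file later in this chapter shows these fields in their complete context:

platform:
  azure:
    networkResourceGroupName: vnet_resource_group    # resource group that contains the existing VNet
    virtualNetwork: vnet                              # name of the existing VNet
    controlPlaneSubnet: control_plane_subnet          # existing subnet for the control plane machines
    computeSubnet: compute_subnet                     # existing subnet for the compute machines
    outboundType: UserDefinedRouting                  # only if you provide your own egress path
publish: Internal                                     # keeps the cluster endpoints private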
By deploying OpenShift Container Platform into an existing Azure VNet, you might be able to avoid service limit constraints in new accounts or more easily abide by the operational constraints that your company's guidelines set. This is a good option to use if you cannot obtain the infrastructure creation permissions that are required to create the VNet. 8.3.1. Requirements for using your VNet When you deploy a cluster by using an existing VNet, you must perform additional network configuration before you install the cluster. In installer-provisioned infrastructure clusters, the installer usually creates the following components, but it does not create them when you install into an existing VNet: Subnets Route tables VNets Network Security Groups Note The installation program requires that you use the cloud-provided DNS server. Using a custom DNS server is not supported and causes the installation to fail. If you use a custom VNet, you must correctly configure it and its subnets for the installation program and the cluster to use. The installation program cannot subdivide network ranges for the cluster to use, set route tables for the subnets, or set VNet options like DHCP, so you must do so before you install the cluster. The cluster must be able to access the resource group that contains the existing VNet and subnets. While all of the resources that the cluster creates are placed in a separate resource group that it creates, some network resources are used from a separate group. Some cluster Operators must be able to access resources in both resource groups. For example, the Machine API controller attaches NICS for the virtual machines that it creates to subnets from the networking resource group. Your VNet must meet the following characteristics: The VNet's CIDR block must contain the Networking.MachineCIDR range, which is the IP address pool for cluster machines. The VNet and its subnets must belong to the same resource group, and the subnets must be configured to use Azure-assigned DHCP IP addresses instead of static IP addresses. You must provide two subnets within your VNet, one for the control plane machines and one for the compute machines. Because Azure distributes machines in different availability zones within the region that you specify, your cluster will have high availability by default. Note By default, if you specify availability zones in the install-config.yaml file, the installation program distributes the control plane machines and the compute machines across these availability zones within a region . To ensure high availability for your cluster, select a region with at least three availability zones. If your region contains fewer than three availability zones, the installation program places more than one control plane machine in the available zones. To ensure that the subnets that you provide are suitable, the installation program confirms the following data: All the specified subnets exist. There are two private subnets, one for the control plane machines and one for the compute machines. The subnet CIDRs belong to the machine CIDR that you specified. Machines are not provisioned in availability zones that you do not provide private subnets for. Note If you destroy a cluster that uses an existing VNet, the VNet is not deleted. 8.3.1.1. Network security group requirements The network security groups for the subnets that host the compute and control plane machines require specific access to ensure that the cluster communication is correct. 
You must create rules to allow access to the required cluster communication ports. Important The network security group rules must be in place before you install the cluster. If you attempt to install a cluster without the required access, the installation program cannot reach the Azure APIs, and installation fails. Table 8.1. Required ports Port Description Control plane Compute 80 Allows HTTP traffic x 443 Allows HTTPS traffic x 6443 Allows communication to the control plane machines x 22623 Allows internal communication to the machine config server for provisioning machines x If you are using Azure Firewall to restrict the internet access, then you can configure Azure Firewall to allow the Azure APIs . A network security group rule is not needed. Important Currently, there is no supported way to block or restrict the machine config server endpoint. The machine config server must be exposed to the network so that newly-provisioned machines, which have no existing configuration or state, are able to fetch their configuration. In this model, the root of trust is the certificate signing requests (CSR) endpoint, which is where the kubelet sends its certificate signing request for approval to join the cluster. Because of this, machine configs should not be used to distribute sensitive information, such as secrets and certificates. To ensure that the machine config server endpoints, ports 22623 and 22624, are secured in bare metal scenarios, customers must configure proper network policies. Because cluster components do not modify the user-provided network security groups, which the Kubernetes controllers update, a pseudo-network security group is created for the Kubernetes controller to modify without impacting the rest of the environment. Table 8.2. Ports used for all-machine to all-machine communications Protocol Port Description ICMP N/A Network reachability tests TCP 1936 Metrics 9000 - 9999 Host level services, including the node exporter on ports 9100 - 9101 and the Cluster Version Operator on port 9099 . 10250 - 10259 The default ports that Kubernetes reserves UDP 4789 VXLAN 6081 Geneve 9000 - 9999 Host level services, including the node exporter on ports 9100 - 9101 . 500 IPsec IKE packets 4500 IPsec NAT-T packets 123 Network Time Protocol (NTP) on UDP port 123 If you configure an external NTP time server, you must open UDP port 123 . TCP/UDP 30000 - 32767 Kubernetes node port ESP N/A IPsec Encapsulating Security Payload (ESP) Table 8.3. Ports used for control plane machine to control plane machine communications Protocol Port Description TCP 2379 - 2380 etcd server and peer ports Additional resources About the OpenShift SDN network plugin Configuring your firewall 8.3.2. Division of permissions Starting with OpenShift Container Platform 4.3, you do not need all of the permissions that are required for an installation program-provisioned infrastructure cluster to deploy a cluster. This change mimics the division of permissions that you might have at your company: some individuals can create different resources in your clouds than others. For example, you might be able to create application-specific items, like instances, storage, and load balancers, but not networking-related components such as VNets, subnet, or ingress rules. The Azure credentials that you use when you create your cluster do not need the networking permissions that are required to make VNets and core networking components within the VNet, such as subnets, routing tables, internet gateways, NAT, and VPN. 
You still need permission to make the application resources that the machines within the cluster require, such as load balancers, security groups, storage accounts, and nodes. 8.3.3. Isolation between clusters Because the cluster is unable to modify network security groups in an existing subnet, there is no way to isolate clusters from each other on the VNet. 8.4. Internet access for OpenShift Container Platform In OpenShift Container Platform 4.14, you require access to the internet to install your cluster. You must have internet access to: Access OpenShift Cluster Manager to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster. Access Quay.io to obtain the packages that are required to install your cluster. Obtain the packages that are required to perform cluster updates. Important If your cluster cannot have direct internet access, you can perform a restricted network installation on some types of infrastructure that you provision. During that process, you download the required content and use it to populate a mirror registry with the installation packages. With some installation types, the environment that you install your cluster in will not require internet access. Before you update the cluster, you update the content of the mirror registry. 8.5. Generating a key pair for cluster node SSH access During an OpenShift Container Platform installation, you can provide an SSH public key to the installation program. The key is passed to the Red Hat Enterprise Linux CoreOS (RHCOS) nodes through their Ignition config files and is used to authenticate SSH access to the nodes. The key is added to the ~/.ssh/authorized_keys list for the core user on each node, which enables password-less authentication. After the key is passed to the nodes, you can use the key pair to SSH in to the RHCOS nodes as the user core . To access the nodes through SSH, the private key identity must be managed by SSH for your local user. If you want to SSH in to your cluster nodes to perform installation debugging or disaster recovery, you must provide the SSH public key during the installation process. The ./openshift-install gather command also requires the SSH public key to be in place on the cluster nodes. Important Do not skip this procedure in production environments, where disaster recovery and debugging is required. Note You must use a local key, not one that you configured with platform-specific approaches such as AWS key pairs . Procedure If you do not have an existing SSH key pair on your local machine to use for authentication onto your cluster nodes, create one. For example, on a computer that uses a Linux operating system, run the following command: USD ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1 1 Specify the path and file name, such as ~/.ssh/id_ed25519 , of the new SSH key. If you have an existing key pair, ensure your public key is in the your ~/.ssh directory. Note If you plan to install an OpenShift Container Platform cluster that uses the RHEL cryptographic libraries that have been submitted to NIST for FIPS 140-2/140-3 Validation on only the x86_64 , ppc64le , and s390x architectures, do not create a key that uses the ed25519 algorithm. Instead, create a key that uses the rsa or ecdsa algorithm. 
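For example, if that FIPS-related note applies to your environment, a key that uses the ecdsa algorithm can be generated with a command of the following form; the path and file name are placeholders, exactly as in the ed25519 example above, and -b 521 selects the nistp521 curve (256 and 384 are also accepted):

ssh-keygen -t ecdsa -b 521 -N '' -f <path>/<file_name>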
View the public SSH key: USD cat <path>/<file_name>.pub For example, run the following to view the ~/.ssh/id_ed25519.pub public key: USD cat ~/.ssh/id_ed25519.pub Add the SSH private key identity to the SSH agent for your local user, if it has not already been added. SSH agent management of the key is required for password-less SSH authentication onto your cluster nodes, or if you want to use the ./openshift-install gather command. Note On some distributions, default SSH private key identities such as ~/.ssh/id_rsa and ~/.ssh/id_dsa are managed automatically. If the ssh-agent process is not already running for your local user, start it as a background task: USD eval "USD(ssh-agent -s)" Example output Agent pid 31874 Note If your cluster is in FIPS mode, only use FIPS-compliant algorithms to generate the SSH key. The key must be either RSA or ECDSA. Add your SSH private key to the ssh-agent : USD ssh-add <path>/<file_name> 1 1 Specify the path and file name for your SSH private key, such as ~/.ssh/id_ed25519 Example output Identity added: /home/<you>/<path>/<file_name> (<computer_name>) steps When you install OpenShift Container Platform, provide the SSH public key to the installation program. 8.6. Obtaining the installation program Before you install OpenShift Container Platform, download the installation file on the host you are using for installation. Prerequisites You have a computer that runs Linux or macOS, with at least 1.2 GB of local disk space. Procedure Go to the Cluster Type page on the Red Hat Hybrid Cloud Console. If you have a Red Hat account, log in with your credentials. If you do not, create an account. Select your infrastructure provider from the Run it yourself section of the page. Select your host operating system and architecture from the dropdown menus under OpenShift Installer and click Download Installer . Place the downloaded file in the directory where you want to store the installation configuration files. Important The installation program creates several files on the computer that you use to install your cluster. You must keep the installation program and the files that the installation program creates after you finish installing the cluster. Both of the files are required to delete the cluster. Deleting the files created by the installation program does not remove your cluster, even if the cluster failed during installation. To remove your cluster, complete the OpenShift Container Platform uninstallation procedures for your specific cloud provider. Extract the installation program. For example, on a computer that uses a Linux operating system, run the following command: USD tar -xvf openshift-install-linux.tar.gz Download your installation pull secret from Red Hat OpenShift Cluster Manager . This pull secret allows you to authenticate with the services that are provided by the included authorities, including Quay.io, which serves the container images for OpenShift Container Platform components. Tip Alternatively, you can retrieve the installation program from the Red Hat Customer Portal , where you can specify a version of the installation program to download. However, you must have an active subscription to access this page. 8.7. Manually creating the installation configuration file When installing a private OpenShift Container Platform cluster, you must manually generate the installation configuration file. Prerequisites You have an SSH public key on your local machine to provide to the installation program. 
The key will be used for SSH authentication onto your cluster nodes for debugging and disaster recovery. You have obtained the OpenShift Container Platform installation program and the pull secret for your cluster. Procedure Create an installation directory to store your required installation assets in: USD mkdir <installation_directory> Important You must create a directory. Some installation assets, like bootstrap X.509 certificates have short expiration intervals, so you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OpenShift Container Platform version. Customize the sample install-config.yaml file template that is provided and save it in the <installation_directory> . Note You must name this configuration file install-config.yaml . Back up the install-config.yaml file so that you can use it to install multiple clusters. Important The install-config.yaml file is consumed during the step of the installation process. You must back it up now. Additional resources Installation configuration parameters for Azure 8.7.1. Minimum resource requirements for cluster installation Each cluster machine must meet the following minimum requirements: Table 8.4. Minimum resource requirements Machine Operating System vCPU [1] Virtual RAM Storage Input/Output Per Second (IOPS) [2] Bootstrap RHCOS 4 16 GB 100 GB 300 Control plane RHCOS 4 16 GB 100 GB 300 Compute RHCOS, RHEL 8.6 and later [3] 2 8 GB 100 GB 300 One vCPU is equivalent to one physical core when simultaneous multithreading (SMT), or Hyper-Threading, is not enabled. When enabled, use the following formula to calculate the corresponding ratio: (threads per core x cores) x sockets = vCPUs. OpenShift Container Platform and Kubernetes are sensitive to disk performance, and faster storage is recommended, particularly for etcd on the control plane nodes which require a 10 ms p99 fsync duration. Note that on many cloud platforms, storage size and IOPS scale together, so you might need to over-allocate storage volume to obtain sufficient performance. As with all user-provisioned installations, if you choose to use RHEL compute machines in your cluster, you take responsibility for all operating system life cycle management and maintenance, including performing system updates, applying patches, and completing all other required tasks. Use of RHEL 7 compute machines is deprecated and has been removed in OpenShift Container Platform 4.10 and later. Note As of OpenShift Container Platform version 4.13, RHCOS is based on RHEL version 9.2, which updates the micro-architecture requirements. The following list contains the minimum instruction set architectures (ISA) that each architecture requires: x86-64 architecture requires x86-64-v2 ISA ARM64 architecture requires ARMv8.0-A ISA IBM Power architecture requires Power 9 ISA s390x architecture requires z14 ISA For more information, see RHEL Architectures . Important You are required to use Azure virtual machines that have the premiumIO parameter set to true . If an instance type for your platform meets the minimum requirements for cluster machines, it is supported to use in OpenShift Container Platform. Additional resources Optimizing storage 8.7.2. 
Tested instance types for Azure The following Microsoft Azure instance types have been tested with OpenShift Container Platform. Example 8.1. Machine types based on 64-bit x86 architecture standardBSFamily standardDADSv5Family standardDASv4Family standardDASv5Family standardDCSv3Family standardDCSv2Family standardDDCSv3Family standardDDSv4Family standardDDSv5Family standardDLDSv5Family standardDLSv5Family standardDSFamily standardDSv2Family standardDSv2PromoFamily standardDSv3Family standardDSv4Family standardDSv5Family standardEADSv5Family standardEASv4Family standardEASv5Family standardEBDSv5Family standardEBSv5Family standardEDSv4Family standardEDSv5Family standardEIADSv5Family standardEIASv4Family standardEIASv5Family standardEIDSv5Family standardEISv3Family standardEISv5Family standardESv3Family standardESv4Family standardESv5Family standardFXMDVSFamily standardFSFamily standardFSv2Family standardGSFamily standardHBrsv2Family standardHBSFamily standardHCSFamily standardLASv3Family standardLSFamily standardLSv2Family standardLSv3Family standardMDSMediumMemoryv2Family standardMIDSMediumMemoryv2Family standardMISMediumMemoryv2Family standardMSFamily standardMSMediumMemoryv2Family StandardNCADSA100v4Family Standard NCASv3_T4 Family standardNCSv3Family standardNDSv2Family standardNPSFamily StandardNVADSA10v5Family standardNVSv3Family standardXEISv4Family 8.7.3. Tested instance types for Azure on 64-bit ARM infrastructures The following Microsoft Azure ARM64 instance types have been tested with OpenShift Container Platform. Example 8.2. Machine types based on 64-bit ARM architecture standardDPSv5Family standardDPDSv5Family standardDPLDSv5Family standardDPLSv5Family standardEPSv5Family standardEPDSv5Family 8.7.4. Enabling trusted launch for Azure VMs You can enable two trusted launch features when installing your cluster on Azure: secure boot and virtualized Trusted Platform Modules . See the Azure documentation about virtual machine sizes to learn what sizes of virtual machines support these features. Important Trusted launch is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . Prerequisites You have created an install-config.yaml file. Procedure Use a text editor to edit the install-config.yaml file prior to deploying your cluster and add the following stanza: controlPlane: 1 platform: azure: settings: securityType: TrustedLaunch 2 trustedLaunch: uefiSettings: secureBoot: Enabled 3 virtualizedTrustedPlatformModule: Enabled 4 1 Specify controlPlane.platform.azure or compute.platform.azure to enable trusted launch on only control plane or compute nodes respectively. Specify platform.azure.defaultMachinePlatform to enable trusted launch on all nodes. 2 Enable trusted launch features. 3 Enable secure boot. For more information, see the Azure documentation about secure boot . 4 Enable the virtualized Trusted Platform Module. For more information, see the Azure documentation about virtualized Trusted Platform Modules . 8.7.5. Enabling confidential VMs You can enable confidential VMs when installing your cluster. 
You can enable confidential VMs for compute nodes, control plane nodes, or all nodes. Important Using confidential VMs is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . You can use confidential VMs with the following VM sizes: DCasv5-series DCadsv5-series ECasv5-series ECadsv5-series Important Confidential VMs are currently not supported on 64-bit ARM architectures. Prerequisites You have created an install-config.yaml file. Procedure Use a text editor to edit the install-config.yaml file prior to deploying your cluster and add the following stanza: controlPlane: 1 platform: azure: settings: securityType: ConfidentialVM 2 confidentialVM: uefiSettings: secureBoot: Enabled 3 virtualizedTrustedPlatformModule: Enabled 4 osDisk: securityProfile: securityEncryptionType: VMGuestStateOnly 5 1 Specify controlPlane.platform.azure or compute.platform.azure to deploy confidential VMs on only control plane or compute nodes respectively. Specify platform.azure.defaultMachinePlatform to deploy confidential VMs on all nodes. 2 Enable confidential VMs. 3 Enable secure boot. For more information, see the Azure documentation about secure boot . 4 Enable the virtualized Trusted Platform Module. For more information, see the Azure documentation about virtualized Trusted Platform Modules . 5 Specify VMGuestStateOnly to encrypt the VM guest state. 8.7.6. Sample customized install-config.yaml file for Azure You can customize the install-config.yaml file to specify more details about your OpenShift Container Platform cluster's platform or modify the values of the required parameters. Important This sample YAML file is provided for reference only. You must obtain your install-config.yaml file by using the installation program and modify it. 
apiVersion: v1 baseDomain: example.com 1 controlPlane: 2 hyperthreading: Enabled 3 4 name: master platform: azure: encryptionAtHost: true ultraSSDCapability: Enabled osDisk: diskSizeGB: 1024 5 diskType: Premium_LRS diskEncryptionSet: resourceGroup: disk_encryption_set_resource_group name: disk_encryption_set_name subscriptionId: secondary_subscription_id osImage: publisher: example_publisher_name offer: example_image_offer sku: example_offer_sku version: example_image_version type: Standard_D8s_v3 replicas: 3 compute: 6 - hyperthreading: Enabled 7 name: worker platform: azure: ultraSSDCapability: Enabled type: Standard_D2s_v3 encryptionAtHost: true osDisk: diskSizeGB: 512 8 diskType: Standard_LRS diskEncryptionSet: resourceGroup: disk_encryption_set_resource_group name: disk_encryption_set_name subscriptionId: secondary_subscription_id osImage: publisher: example_publisher_name offer: example_image_offer sku: example_offer_sku version: example_image_version zones: 9 - "1" - "2" - "3" replicas: 5 metadata: name: test-cluster 10 networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 networkType: OVNKubernetes 11 serviceNetwork: - 172.30.0.0/16 platform: azure: defaultMachinePlatform: osImage: 12 publisher: example_publisher_name offer: example_image_offer sku: example_offer_sku version: example_image_version ultraSSDCapability: Enabled baseDomainResourceGroupName: resource_group 13 region: centralus 14 resourceGroupName: existing_resource_group 15 networkResourceGroupName: vnet_resource_group 16 virtualNetwork: vnet 17 controlPlaneSubnet: control_plane_subnet 18 computeSubnet: compute_subnet 19 outboundType: UserDefinedRouting 20 cloudName: AzurePublicCloud pullSecret: '{"auths": ...}' 21 fips: false 22 sshKey: ssh-ed25519 AAAA... 23 publish: Internal 24 1 10 14 21 Required. The installation program prompts you for this value. 2 6 If you do not provide these parameters and values, the installation program provides the default value. 3 7 The controlPlane section is a single mapping, but the compute section is a sequence of mappings. To meet the requirements of the different data structures, the first line of the compute section must begin with a hyphen, - , and the first line of the controlPlane section must not. Only one control plane pool is used. 4 Whether to enable or disable simultaneous multithreading, or hyperthreading . By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. You can disable it by setting the parameter value to Disabled . If you disable simultaneous multithreading in some cluster machines, you must disable it in all cluster machines. Important If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. Use larger virtual machine types, such as Standard_D8s_v3 , for your machines if you disable simultaneous multithreading. 5 8 You can specify the size of the disk to use in GB. Minimum recommendation for control plane nodes is 1024 GB. 9 Specify a list of zones to deploy your machines to. For high availability, specify at least two zones. 11 The cluster network plugin to install. The supported values are OVNKubernetes and OpenShiftSDN . The default value is OVNKubernetes . 12 Optional: A custom Red Hat Enterprise Linux CoreOS (RHCOS) image that should be used to boot control plane and compute machines. 
The publisher , offer , sku , and version parameters under platform.azure.defaultMachinePlatform.osImage apply to both control plane and compute machines. If the parameters under controlPlane.platform.azure.osImage or compute.platform.azure.osImage are set, they override the platform.azure.defaultMachinePlatform.osImage parameters. 13 Specify the name of the resource group that contains the DNS zone for your base domain. 15 Specify the name of an already existing resource group to install your cluster to. If undefined, a new resource group is created for the cluster. 16 If you use an existing VNet, specify the name of the resource group that contains it. 17 If you use an existing VNet, specify its name. 18 If you use an existing VNet, specify the name of the subnet to host the control plane machines. 19 If you use an existing VNet, specify the name of the subnet to host the compute machines. 20 You can customize your own outbound routing. Configuring user-defined routing prevents exposing external endpoints in your cluster. User-defined routing for egress requires deploying your cluster to an existing VNet. 22 Whether to enable or disable FIPS mode. By default, FIPS mode is not enabled. If FIPS mode is enabled, the Red Hat Enterprise Linux CoreOS (RHCOS) machines that OpenShift Container Platform runs on bypass the default Kubernetes cryptography suite and use the cryptography modules that are provided with RHCOS instead. Important To enable FIPS mode for your cluster, you must run the installation program from a Red Hat Enterprise Linux (RHEL) computer configured to operate in FIPS mode. For more information about configuring FIPS mode on RHEL, see Installing the system in FIPS mode . When running Red Hat Enterprise Linux (RHEL) or Red Hat Enterprise Linux CoreOS (RHCOS) booted in FIPS mode, OpenShift Container Platform core components use the RHEL cryptographic libraries that have been submitted to NIST for FIPS 140-2/140-3 Validation on only the x86_64, ppc64le, and s390x architectures. 23 You can optionally provide the sshKey value that you use to access the machines in your cluster. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. 24 How to publish the user-facing endpoints of your cluster. Set publish to Internal to deploy a private cluster, which cannot be accessed from the internet. The default value is External . 8.7.7. Configuring the cluster-wide proxy during installation Production environments can deny direct access to the internet and instead have an HTTP or HTTPS proxy available. You can configure a new OpenShift Container Platform cluster to use a proxy by configuring the proxy settings in the install-config.yaml file. Prerequisites You have an existing install-config.yaml file. You reviewed the sites that your cluster requires access to and determined whether any of them need to bypass the proxy. By default, all cluster egress traffic is proxied, including calls to hosting cloud provider APIs. You added sites to the Proxy object's spec.noProxy field to bypass the proxy if necessary. Note The Proxy object status.noProxy field is populated with the values of the networking.machineNetwork[].cidr , networking.clusterNetwork[].cidr , and networking.serviceNetwork[] fields from your installation configuration. 
For installations on Amazon Web Services (AWS), Google Cloud Platform (GCP), Microsoft Azure, and Red Hat OpenStack Platform (RHOSP), the Proxy object status.noProxy field is also populated with the instance metadata endpoint ( 169.254.169.254 ). Procedure Edit your install-config.yaml file and add the proxy settings. For example: apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5 1 A proxy URL to use for creating HTTP connections outside the cluster. The URL scheme must be http . 2 A proxy URL to use for creating HTTPS connections outside the cluster. 3 A comma-separated list of destination domain names, IP addresses, or other network CIDRs to exclude from proxying. Preface a domain with . to match subdomains only. For example, .y.com matches x.y.com , but not y.com . Use * to bypass the proxy for all destinations. 4 If provided, the installation program generates a config map that is named user-ca-bundle in the openshift-config namespace that contains one or more additional CA certificates that are required for proxying HTTPS connections. The Cluster Network Operator then creates a trusted-ca-bundle config map that merges these contents with the Red Hat Enterprise Linux CoreOS (RHCOS) trust bundle, and this config map is referenced in the trustedCA field of the Proxy object. The additionalTrustBundle field is required unless the proxy's identity certificate is signed by an authority from the RHCOS trust bundle. 5 Optional: The policy to determine the configuration of the Proxy object to reference the user-ca-bundle config map in the trustedCA field. The allowed values are Proxyonly and Always . Use Proxyonly to reference the user-ca-bundle config map only when http/https proxy is configured. Use Always to always reference the user-ca-bundle config map. The default value is Proxyonly . Note The installation program does not support the proxy readinessEndpoints field. Note If the installer times out, restart and then complete the deployment by using the wait-for command of the installer. For example: USD ./openshift-install wait-for install-complete --log-level debug Save the file and reference it when installing OpenShift Container Platform. The installation program creates a cluster-wide proxy that is named cluster that uses the proxy settings in the provided install-config.yaml file. If no proxy settings are provided, a cluster Proxy object is still created, but it will have a nil spec . Note Only the Proxy object named cluster is supported, and no additional proxies can be created. Additional resources For more details about Accelerated Networking, see Accelerated Networking for Microsoft Azure VMs . 8.8. Installing the OpenShift CLI by downloading the binary You can install the OpenShift CLI ( oc ) to interact with OpenShift Container Platform from a command-line interface. You can install oc on Linux, Windows, or macOS. Important If you installed an earlier version of oc , you cannot use it to complete all of the commands in OpenShift Container Platform 4.14. Download and install the new version of oc . Installing the OpenShift CLI on Linux You can install the OpenShift CLI ( oc ) binary on Linux by using the following procedure. 
Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the architecture from the Product Variant drop-down list. Select the appropriate version from the Version drop-down list. Click Download Now to the OpenShift v4.14 Linux Client entry and save the file. Unpack the archive: USD tar xvf <file> Place the oc binary in a directory that is on your PATH . To check your PATH , execute the following command: USD echo USDPATH Verification After you install the OpenShift CLI, it is available using the oc command: USD oc <command> Installing the OpenShift CLI on Windows You can install the OpenShift CLI ( oc ) binary on Windows by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version from the Version drop-down list. Click Download Now to the OpenShift v4.14 Windows Client entry and save the file. Unzip the archive with a ZIP program. Move the oc binary to a directory that is on your PATH . To check your PATH , open the command prompt and execute the following command: C:\> path Verification After you install the OpenShift CLI, it is available using the oc command: C:\> oc <command> Installing the OpenShift CLI on macOS You can install the OpenShift CLI ( oc ) binary on macOS by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version from the Version drop-down list. Click Download Now to the OpenShift v4.14 macOS Client entry and save the file. Note For macOS arm64, choose the OpenShift v4.14 macOS arm64 Client entry. Unpack and unzip the archive. Move the oc binary to a directory on your PATH. To check your PATH , open a terminal and execute the following command: USD echo USDPATH Verification Verify your installation by using an oc command: USD oc <command> 8.9. Alternatives to storing administrator-level secrets in the kube-system project By default, administrator secrets are stored in the kube-system project. If you configured the credentialsMode parameter in the install-config.yaml file to Manual , you must use one of the following alternatives: To manage long-term cloud credentials manually, follow the procedure in Manually creating long-term credentials . To implement short-term credentials that are managed outside the cluster for individual components, follow the procedures in Configuring an Azure cluster to use short-term credentials . 8.9.1. Manually creating long-term credentials The Cloud Credential Operator (CCO) can be put into manual mode prior to installation in environments where the cloud identity and access management (IAM) APIs are not reachable, or the administrator prefers not to store an administrator-level credential secret in the cluster kube-system namespace. Procedure If you did not set the credentialsMode parameter in the install-config.yaml configuration file to Manual , modify the value as shown: Sample configuration file snippet apiVersion: v1 baseDomain: example.com credentialsMode: Manual # ... If you have not previously created installation manifest files, do so by running the following command: USD openshift-install create manifests --dir <installation_directory> where <installation_directory> is the directory in which the installation program creates files. 
Set a USDRELEASE_IMAGE variable with the release image from your installation file by running the following command: USD RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}') Extract the list of CredentialsRequest custom resources (CRs) from the OpenShift Container Platform release image by running the following command: USD oc adm release extract \ --from=USDRELEASE_IMAGE \ --credentials-requests \ --included \ 1 --install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \ 2 --to=<path_to_directory_for_credentials_requests> 3 1 The --included parameter includes only the manifests that your specific cluster configuration requires. 2 Specify the location of the install-config.yaml file. 3 Specify the path to the directory where you want to store the CredentialsRequest objects. If the specified directory does not exist, this command creates it. This command creates a YAML file for each CredentialsRequest object. Sample CredentialsRequest object apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: name: <component_credentials_request> namespace: openshift-cloud-credential-operator ... spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: AzureProviderSpec roleBindings: - role: Contributor ... Create YAML files for secrets in the openshift-install manifests directory that you generated previously. The secrets must be stored using the namespace and secret name defined in the spec.secretRef for each CredentialsRequest object. Sample CredentialsRequest object with secrets apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: name: <component_credentials_request> namespace: openshift-cloud-credential-operator ... spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: AzureProviderSpec roleBindings: - role: Contributor ... secretRef: name: <component_secret> namespace: <component_namespace> ... Sample Secret object apiVersion: v1 kind: Secret metadata: name: <component_secret> namespace: <component_namespace> data: azure_subscription_id: <base64_encoded_azure_subscription_id> azure_client_id: <base64_encoded_azure_client_id> azure_client_secret: <base64_encoded_azure_client_secret> azure_tenant_id: <base64_encoded_azure_tenant_id> azure_resource_prefix: <base64_encoded_azure_resource_prefix> azure_resourcegroup: <base64_encoded_azure_resourcegroup> azure_region: <base64_encoded_azure_region> Important Before upgrading a cluster that uses manually maintained credentials, you must ensure that the CCO is in an upgradeable state. 8.9.2. Configuring an Azure cluster to use short-term credentials To install a cluster that uses Microsoft Entra Workload ID, you must configure the Cloud Credential Operator utility and create the required Azure resources for your cluster. 8.9.2.1. Configuring the Cloud Credential Operator utility To create and manage cloud credentials from outside of the cluster when the Cloud Credential Operator (CCO) is operating in manual mode, extract and prepare the CCO utility ( ccoctl ) binary. Note The ccoctl utility is a Linux binary that must run in a Linux environment. Prerequisites You have access to an OpenShift Container Platform account with cluster administrator access. You have installed the OpenShift CLI ( oc ). You have created a global Microsoft Azure account for the ccoctl utility to use with the following permissions: Example 8.3. 
Required Azure permissions Microsoft.Resources/subscriptions/resourceGroups/read Microsoft.Resources/subscriptions/resourceGroups/write Microsoft.Resources/subscriptions/resourceGroups/delete Microsoft.Authorization/roleAssignments/read Microsoft.Authorization/roleAssignments/delete Microsoft.Authorization/roleAssignments/write Microsoft.Authorization/roleDefinitions/read Microsoft.Authorization/roleDefinitions/write Microsoft.Authorization/roleDefinitions/delete Microsoft.Storage/storageAccounts/listkeys/action Microsoft.Storage/storageAccounts/delete Microsoft.Storage/storageAccounts/read Microsoft.Storage/storageAccounts/write Microsoft.Storage/storageAccounts/blobServices/containers/write Microsoft.Storage/storageAccounts/blobServices/containers/delete Microsoft.Storage/storageAccounts/blobServices/containers/read Microsoft.ManagedIdentity/userAssignedIdentities/delete Microsoft.ManagedIdentity/userAssignedIdentities/read Microsoft.ManagedIdentity/userAssignedIdentities/write Microsoft.ManagedIdentity/userAssignedIdentities/federatedIdentityCredentials/read Microsoft.ManagedIdentity/userAssignedIdentities/federatedIdentityCredentials/write Microsoft.ManagedIdentity/userAssignedIdentities/federatedIdentityCredentials/delete Microsoft.Storage/register/action Microsoft.ManagedIdentity/register/action Procedure Set a variable for the OpenShift Container Platform release image by running the following command: USD RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}') Obtain the CCO container image from the OpenShift Container Platform release image by running the following command: USD CCO_IMAGE=USD(oc adm release info --image-for='cloud-credential-operator' USDRELEASE_IMAGE -a ~/.pull-secret) Note Ensure that the architecture of the USDRELEASE_IMAGE matches the architecture of the environment in which you will use the ccoctl tool. Extract the ccoctl binary from the CCO container image within the OpenShift Container Platform release image by running the following command: USD oc image extract USDCCO_IMAGE --file="/usr/bin/ccoctl" -a ~/.pull-secret Change the permissions to make ccoctl executable by running the following command: USD chmod 775 ccoctl Verification To verify that ccoctl is ready to use, display the help file. Use a relative file name when you run the command, for example: USD ./ccoctl.rhel9 Example output OpenShift credentials provisioning tool Usage: ccoctl [command] Available Commands: alibabacloud Manage credentials objects for alibaba cloud aws Manage credentials objects for AWS cloud azure Manage credentials objects for Azure gcp Manage credentials objects for Google cloud help Help about any command ibmcloud Manage credentials objects for IBM Cloud nutanix Manage credentials objects for Nutanix Flags: -h, --help help for ccoctl Use "ccoctl [command] --help" for more information about a command. 8.9.2.2. Creating Azure resources with the Cloud Credential Operator utility You can use the ccoctl azure create-all command to automate the creation of Azure resources. Note By default, ccoctl creates objects in the directory in which the commands are run. To create the objects in a different directory, use the --output-dir flag. This procedure uses <path_to_ccoctl_output_dir> to refer to this directory. Prerequisites You must have: Extracted and prepared the ccoctl binary. Access to your Microsoft Azure account by using the Azure CLI. 
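If you want to confirm the Azure CLI prerequisite before starting the procedure that follows, one optional quick check (assuming you have already run az login) is to display the active subscription and tenant:

az account show --query '{subscription:id, tenant:tenantId}' --output table

The subscription and tenant IDs shown by this command are the same values that you pass to the ccoctl azure create-all command later in the procedure.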
Procedure Set a USDRELEASE_IMAGE variable with the release image from your installation file by running the following command: USD RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}') Extract the list of CredentialsRequest objects from the OpenShift Container Platform release image by running the following command: USD oc adm release extract \ --from=USDRELEASE_IMAGE \ --credentials-requests \ --included \ 1 --install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \ 2 --to=<path_to_directory_for_credentials_requests> 3 1 The --included parameter includes only the manifests that your specific cluster configuration requires. 2 Specify the location of the install-config.yaml file. 3 Specify the path to the directory where you want to store the CredentialsRequest objects. If the specified directory does not exist, this command creates it. Note This command might take a few moments to run. To enable the ccoctl utility to detect your Azure credentials automatically, log in to the Azure CLI by running the following command: USD az login Use the ccoctl tool to process all CredentialsRequest objects by running the following command: USD ccoctl azure create-all \ --name=<azure_infra_name> \ 1 --output-dir=<ccoctl_output_dir> \ 2 --region=<azure_region> \ 3 --subscription-id=<azure_subscription_id> \ 4 --credentials-requests-dir=<path_to_credentials_requests_directory> \ 5 --dnszone-resource-group-name=<azure_dns_zone_resource_group_name> \ 6 --tenant-id=<azure_tenant_id> 7 1 Specify the user-defined name for all created Azure resources used for tracking. 2 Optional: Specify the directory in which you want the ccoctl utility to create objects. By default, the utility creates objects in the directory in which the commands are run. 3 Specify the Azure region in which cloud resources will be created. 4 Specify the Azure subscription ID to use. 5 Specify the directory containing the files for the component CredentialsRequest objects. 6 Specify the name of the resource group containing the cluster's base domain Azure DNS zone. 7 Specify the Azure tenant ID to use. Note If your cluster uses Technology Preview features that are enabled by the TechPreviewNoUpgrade feature set, you must include the --enable-tech-preview parameter. To see additional optional parameters and explanations of how to use them, run the azure create-all --help command. Verification To verify that the OpenShift Container Platform secrets are created, list the files in the <path_to_ccoctl_output_dir>/manifests directory: USD ls <path_to_ccoctl_output_dir>/manifests Example output azure-ad-pod-identity-webhook-config.yaml cluster-authentication-02-config.yaml openshift-cloud-controller-manager-azure-cloud-credentials-credentials.yaml openshift-cloud-network-config-controller-cloud-credentials-credentials.yaml openshift-cluster-api-capz-manager-bootstrap-credentials-credentials.yaml openshift-cluster-csi-drivers-azure-disk-credentials-credentials.yaml openshift-cluster-csi-drivers-azure-file-credentials-credentials.yaml openshift-image-registry-installer-cloud-credentials-credentials.yaml openshift-ingress-operator-cloud-credentials-credentials.yaml openshift-machine-api-azure-cloud-credentials-credentials.yaml You can verify that the Microsoft Entra ID service accounts are created by querying Azure. For more information, refer to Azure documentation on listing Entra ID service accounts. 8.9.2.3. 
Incorporating the Cloud Credential Operator utility manifests To implement short-term security credentials managed outside the cluster for individual components, you must move the manifest files that the Cloud Credential Operator utility ( ccoctl ) created to the correct directories for the installation program. Prerequisites You have configured an account with the cloud platform that hosts your cluster. You have configured the Cloud Credential Operator utility ( ccoctl ). You have created the cloud provider resources that are required for your cluster with the ccoctl utility. Procedure If you did not set the credentialsMode parameter in the install-config.yaml configuration file to Manual , modify the value as shown: Sample configuration file snippet apiVersion: v1 baseDomain: example.com credentialsMode: Manual # ... If you used the ccoctl utility to create a new Azure resource group instead of using an existing resource group, modify the resourceGroupName parameter in the install-config.yaml as shown: Sample configuration file snippet apiVersion: v1 baseDomain: example.com # ... platform: azure: resourceGroupName: <azure_infra_name> 1 # ... 1 This value must match the user-defined name for Azure resources that was specified with the --name argument of the ccoctl azure create-all command. If you have not previously created installation manifest files, do so by running the following command: USD openshift-install create manifests --dir <installation_directory> where <installation_directory> is the directory in which the installation program creates files. Copy the manifests that the ccoctl utility generated to the manifests directory that the installation program created by running the following command: USD cp /<path_to_ccoctl_output_dir>/manifests/* ./manifests/ Copy the tls directory that contains the private key to the installation directory: USD cp -a /<path_to_ccoctl_output_dir>/tls . 8.10. Deploying the cluster You can install OpenShift Container Platform on a compatible cloud platform. Important You can run the create cluster command of the installation program only once, during initial installation. Prerequisites You have configured an account with the cloud platform that hosts your cluster. You have the OpenShift Container Platform installation program and the pull secret for your cluster. You have an Azure subscription ID and tenant ID. If you are installing the cluster using a service principal, you have its application ID and password. If you are installing the cluster using a system-assigned managed identity, you have enabled it on the virtual machine that you will run the installation program from. If you are installing the cluster using a user-assigned managed identity, you have met these prerequisites: You have its client ID. You have assigned it to the virtual machine that you will run the installation program from. Procedure Optional: If you have run the installation program on this computer before, and want to use an alternative service principal or managed identity, go to the ~/.azure/ directory and delete the osServicePrincipal.json configuration file. Deleting this file prevents the installation program from automatically reusing subscription and authentication values from a previous installation.
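For example, the file can be removed as follows; this command is shown for illustration and assumes the default Azure CLI configuration directory described above: USD rm ~/.azure/osServicePrincipal.json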
Change to the directory that contains the installation program and initialize the cluster deployment: USD ./openshift-install create cluster --dir <installation_directory> \ 1 --log-level=info 2 1 For <installation_directory> , specify the location of your customized ./install-config.yaml file. 2 To view different installation details, specify warn , debug , or error instead of info . If the installation program cannot locate the osServicePrincipal.json configuration file from a previous installation, you are prompted for Azure subscription and authentication values. Enter the following Azure parameter values for your subscription: azure subscription id : Enter the subscription ID to use for the cluster. azure tenant id : Enter the tenant ID. Depending on the Azure identity you are using to deploy the cluster, do one of the following when prompted for the azure service principal client id : If you are using a service principal, enter its application ID. If you are using a system-assigned managed identity, leave this value blank. If you are using a user-assigned managed identity, specify its client ID. Depending on the Azure identity you are using to deploy the cluster, do one of the following when prompted for the azure service principal client secret : If you are using a service principal, enter its password. If you are using a system-assigned managed identity, leave this value blank. If you are using a user-assigned managed identity, leave this value blank. If previously not detected, the installation program creates an osServicePrincipal.json configuration file and stores this file in the ~/.azure/ directory on your computer. This ensures that the installation program can load the profile when it is creating an OpenShift Container Platform cluster on the target platform. Verification When the cluster deployment completes successfully: The terminal displays directions for accessing your cluster, including a link to the web console and credentials for the kubeadmin user. Credential information also outputs to <installation_directory>/.openshift_install.log . Important Do not delete the installation program or the files that the installation program creates. Both are required to delete the cluster. Example output ... INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: "kubeadmin", and password: "password" INFO Time elapsed: 36m22s Important The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information. It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation. 8.11.
Logging in to the cluster by using the CLI You can log in to your cluster as a default system user by exporting the cluster kubeconfig file. The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. The file is specific to a cluster and is created during OpenShift Container Platform installation. Prerequisites You deployed an OpenShift Container Platform cluster. You installed the oc CLI. Procedure Export the kubeadmin credentials: USD export KUBECONFIG=<installation_directory>/auth/kubeconfig 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. Verify you can run oc commands successfully using the exported configuration: USD oc whoami Example output system:admin Additional resources See Accessing the web console for more details about accessing and understanding the OpenShift Container Platform web console. 8.12. Telemetry access for OpenShift Container Platform In OpenShift Container Platform 4.14, the Telemetry service, which runs by default to provide metrics about cluster health and the success of updates, requires internet access. If your cluster is connected to the internet, Telemetry runs automatically, and your cluster is registered to OpenShift Cluster Manager . After you confirm that your OpenShift Cluster Manager inventory is correct, either maintained automatically by Telemetry or manually by using OpenShift Cluster Manager, use subscription watch to track your OpenShift Container Platform subscriptions at the account or multi-cluster level. Additional resources See About remote health monitoring for more information about the Telemetry service 8.13. Next steps Customize your cluster . If necessary, you can opt out of remote health reporting . | [
"The cluster is configured so that the Operators do not create public records for the cluster and all cluster machines are placed in the private subnets that you specify.",
"ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1",
"cat <path>/<file_name>.pub",
"cat ~/.ssh/id_ed25519.pub",
"eval \"USD(ssh-agent -s)\"",
"Agent pid 31874",
"ssh-add <path>/<file_name> 1",
"Identity added: /home/<you>/<path>/<file_name> (<computer_name>)",
"tar -xvf openshift-install-linux.tar.gz",
"mkdir <installation_directory>",
"controlPlane: 1 platform: azure: settings: securityType: TrustedLaunch 2 trustedLaunch: uefiSettings: secureBoot: Enabled 3 virtualizedTrustedPlatformModule: Enabled 4",
"controlPlane: 1 platform: azure: settings: securityType: ConfidentialVM 2 confidentialVM: uefiSettings: secureBoot: Enabled 3 virtualizedTrustedPlatformModule: Enabled 4 osDisk: securityProfile: securityEncryptionType: VMGuestStateOnly 5",
"apiVersion: v1 baseDomain: example.com 1 controlPlane: 2 hyperthreading: Enabled 3 4 name: master platform: azure: encryptionAtHost: true ultraSSDCapability: Enabled osDisk: diskSizeGB: 1024 5 diskType: Premium_LRS diskEncryptionSet: resourceGroup: disk_encryption_set_resource_group name: disk_encryption_set_name subscriptionId: secondary_subscription_id osImage: publisher: example_publisher_name offer: example_image_offer sku: example_offer_sku version: example_image_version type: Standard_D8s_v3 replicas: 3 compute: 6 - hyperthreading: Enabled 7 name: worker platform: azure: ultraSSDCapability: Enabled type: Standard_D2s_v3 encryptionAtHost: true osDisk: diskSizeGB: 512 8 diskType: Standard_LRS diskEncryptionSet: resourceGroup: disk_encryption_set_resource_group name: disk_encryption_set_name subscriptionId: secondary_subscription_id osImage: publisher: example_publisher_name offer: example_image_offer sku: example_offer_sku version: example_image_version zones: 9 - \"1\" - \"2\" - \"3\" replicas: 5 metadata: name: test-cluster 10 networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 networkType: OVNKubernetes 11 serviceNetwork: - 172.30.0.0/16 platform: azure: defaultMachinePlatform: osImage: 12 publisher: example_publisher_name offer: example_image_offer sku: example_offer_sku version: example_image_version ultraSSDCapability: Enabled baseDomainResourceGroupName: resource_group 13 region: centralus 14 resourceGroupName: existing_resource_group 15 networkResourceGroupName: vnet_resource_group 16 virtualNetwork: vnet 17 controlPlaneSubnet: control_plane_subnet 18 computeSubnet: compute_subnet 19 outboundType: UserDefinedRouting 20 cloudName: AzurePublicCloud pullSecret: '{\"auths\": ...}' 21 fips: false 22 sshKey: ssh-ed25519 AAAA... 23 publish: Internal 24",
"apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5",
"./openshift-install wait-for install-complete --log-level debug",
"tar xvf <file>",
"echo USDPATH",
"oc <command>",
"C:\\> path",
"C:\\> oc <command>",
"echo USDPATH",
"oc <command>",
"apiVersion: v1 baseDomain: example.com credentialsMode: Manual",
"openshift-install create manifests --dir <installation_directory>",
"RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}')",
"oc adm release extract --from=USDRELEASE_IMAGE --credentials-requests --included \\ 1 --install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \\ 2 --to=<path_to_directory_for_credentials_requests> 3",
"apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: name: <component_credentials_request> namespace: openshift-cloud-credential-operator spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: AzureProviderSpec roleBindings: - role: Contributor",
"apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: name: <component_credentials_request> namespace: openshift-cloud-credential-operator spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: AzureProviderSpec roleBindings: - role: Contributor secretRef: name: <component_secret> namespace: <component_namespace>",
"apiVersion: v1 kind: Secret metadata: name: <component_secret> namespace: <component_namespace> data: azure_subscription_id: <base64_encoded_azure_subscription_id> azure_client_id: <base64_encoded_azure_client_id> azure_client_secret: <base64_encoded_azure_client_secret> azure_tenant_id: <base64_encoded_azure_tenant_id> azure_resource_prefix: <base64_encoded_azure_resource_prefix> azure_resourcegroup: <base64_encoded_azure_resourcegroup> azure_region: <base64_encoded_azure_region>",
"RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}')",
"CCO_IMAGE=USD(oc adm release info --image-for='cloud-credential-operator' USDRELEASE_IMAGE -a ~/.pull-secret)",
"oc image extract USDCCO_IMAGE --file=\"/usr/bin/ccoctl\" -a ~/.pull-secret",
"chmod 775 ccoctl",
"./ccoctl.rhel9",
"OpenShift credentials provisioning tool Usage: ccoctl [command] Available Commands: alibabacloud Manage credentials objects for alibaba cloud aws Manage credentials objects for AWS cloud azure Manage credentials objects for Azure gcp Manage credentials objects for Google cloud help Help about any command ibmcloud Manage credentials objects for IBM Cloud nutanix Manage credentials objects for Nutanix Flags: -h, --help help for ccoctl Use \"ccoctl [command] --help\" for more information about a command.",
"RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}')",
"oc adm release extract --from=USDRELEASE_IMAGE --credentials-requests --included \\ 1 --install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \\ 2 --to=<path_to_directory_for_credentials_requests> 3",
"az login",
"ccoctl azure create-all --name=<azure_infra_name> \\ 1 --output-dir=<ccoctl_output_dir> \\ 2 --region=<azure_region> \\ 3 --subscription-id=<azure_subscription_id> \\ 4 --credentials-requests-dir=<path_to_credentials_requests_directory> \\ 5 --dnszone-resource-group-name=<azure_dns_zone_resource_group_name> \\ 6 --tenant-id=<azure_tenant_id> 7",
"ls <path_to_ccoctl_output_dir>/manifests",
"azure-ad-pod-identity-webhook-config.yaml cluster-authentication-02-config.yaml openshift-cloud-controller-manager-azure-cloud-credentials-credentials.yaml openshift-cloud-network-config-controller-cloud-credentials-credentials.yaml openshift-cluster-api-capz-manager-bootstrap-credentials-credentials.yaml openshift-cluster-csi-drivers-azure-disk-credentials-credentials.yaml openshift-cluster-csi-drivers-azure-file-credentials-credentials.yaml openshift-image-registry-installer-cloud-credentials-credentials.yaml openshift-ingress-operator-cloud-credentials-credentials.yaml openshift-machine-api-azure-cloud-credentials-credentials.yaml",
"apiVersion: v1 baseDomain: example.com credentialsMode: Manual",
"apiVersion: v1 baseDomain: example.com platform: azure: resourceGroupName: <azure_infra_name> 1",
"openshift-install create manifests --dir <installation_directory>",
"cp /<path_to_ccoctl_output_dir>/manifests/* ./manifests/",
"cp -a /<path_to_ccoctl_output_dir>/tls .",
"./openshift-install create cluster --dir <installation_directory> \\ 1 --log-level=info 2",
"INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: \"kubeadmin\", and password: \"password\" INFO Time elapsed: 36m22s",
"export KUBECONFIG=<installation_directory>/auth/kubeconfig 1",
"oc whoami",
"system:admin"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.14/html/installing_on_azure/installing-azure-private |
Appendix A. GFS2 Performance Analysis with Performance Co-Pilot | Appendix A. GFS2 Performance Analysis with Performance Co-Pilot Red Hat Enterprise Linux 7 supports Performance Co-Pilot (PCP) with GFS2 performance metrics. This allows you to monitor the performance of a GFS2 file system. This appendix describes the GFS2 performance metrics and how to use them. A.1. Overview of Performance Co-Pilot Performance Co-Pilot (PCP) is an open source toolkit for monitoring, visualizing, recording, and controlling the status, activities and performance of computers, applications and servers. PCP allows the monitoring and management of both real-time data and the logging and retrieval of historical data. Historical data can be used to analyze any patterns with issues by comparing live results over the archived data. PCP is designed with a client-server architecture. The PCP collector service is the Performance Metric Collector Daemon (PMCD), which can be installed and run on a server. Once it is started, PMCD begins collecting performance data from the installed Performance Metric Domain Agents (PMDAs). PMDAs can be individually loaded or unloaded on the system and are controlled by the PMCD on the same host. The GFS2 PMDA, which is part of the default PCP installation, is used to gather performance metric data of GFS2 file systems in PCP. Table A.1, "PCP Tools" provides a brief list of some PCP tools in the PCP Toolkit that this chapter describes. For information about additional PCP tools, see the PCPIntro (1) man page and the additional PCP man pages. Table A.1. PCP Tools Tool Use pmcd Performance Metric Collector Service: collects the metric data from the PMDA and makes the metric data available for the other components in PCP pmlogger Allows the creation of archive logs of performance metric values which may be played back by other PCP tools pmproxy A protocol proxy for pmcd which allows PCP monitoring clients to connect to one or more instances of pmcd by means of pmproxy pminfo Displays information about performance metrics on the command line pmstore Allows the modification of performance metric values (re-initialize counters or assign new values) pmdumptext Exports performance metric data either live or from performance archives to an ASCII table pmchart Graphical utility that plots performance metric values into charts ( pcp-gui package) | null | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/global_file_system_2/gfs2_pcp
Chapter 59. Security | Chapter 59. Security scap-security-guide example kickstart files for Red Hat Enterprise Linux 6 are not recommended for use The Red Hat Enterprise Linux 6 example kickstart files, which are included in the scap-security-guide package for Red Hat Enterprise Linux 7, install the latest version of the scap-security-guide package directly from the upstream repository, which means that this version has not been checked by the Red Hat Quality Engineering team. To work around this problem, use the corrected Red Hat Enterprise Linux 6 example kickstart files from the scap-security-guide package that is included in the current Red Hat Enterprise Linux 6 release, or alternatively, manually change the %post section in the kickstart file. Note that the Red Hat Enterprise Linux 7 example kickstart files are not affected by this problem. (BZ# 1378489 ) The openscap packages do not install atomic as a dependency The OpenSCAP suite enables integration of the Security Content Automation Protocol (SCAP) line of standards. The current version adds the ability to scan containers using the atomic scan and oscap-docker commands. However, when you install only the openscap , openscap-utils , and openscap-scanner packages, the atomic package is not installed by default. As a consequence, any container scan command fails with an error message. To work around this problem, install the atomic package by running the yum install atomic command as root. (BZ#1356547) CIL does not have a separate module statement The new SELinux userspace uses SELinux Common Intermediate Language (CIL) in the module store. CIL treats files as modules and does not have a separate module statement; the module is named after the file name. As a consequence, this can cause confusion when a policy module has a name that is not the same as its base filename, and the semodule -l command does not show the module version. Additionally, semodule -l does not show disabled modules. To work around this problem, list all modules using the semodule --l=full command. (BZ#1345825) | null | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/7.3_release_notes/known_issues_security
Logging | Logging OpenShift Container Platform 4.10 OpenShift Logging installation, usage, and release notes Red Hat OpenShift Documentation Team | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.10/html/logging/index |
Chapter 309. Slack Component | Chapter 309. Slack Component Available as of Camel version 2.16 The slack component allows you to connect to an instance of Slack and delivers a message contained in the message body via a pre established Slack incoming webhook . Maven users will need to add the following dependency to their pom.xml for this component: <dependency> <groupId>org.apache.camel</groupId> <artifactId>camel-slack</artifactId> <version>x.x.x</version> <!-- use the same version as your Camel core version --> </dependency> 309.1. URI format To send a message to a channel. slack:#channel[?options] To send a direct message to a slackuser. slack:@username[?options] 309.2. Options The Slack component supports 2 options, which are listed below. Name Description Default Type webhookUrl (common) The incoming webhook URL String resolveProperty Placeholders (advanced) Whether the component should resolve property placeholders on itself when starting. Only properties which are of String type can use property placeholders. true boolean The Slack endpoint is configured using URI syntax: with the following path and query parameters: 309.2.1. Path Parameters (1 parameters): Name Description Default Type channel Required The channel name (syntax #name) or slackuser (syntax userName) to send a message directly to an user. String 309.2.2. Query Parameters (26 parameters): Name Description Default Type bridgeErrorHandler (consumer) Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false boolean maxResults (consumer) The Max Result for the poll 10 String sendEmptyMessageWhenIdle (consumer) If the polling consumer did not poll any files, you can enable this option to send an empty message (no body) instead. false boolean serverUrl (consumer) The Server URL of the Slack instance https://slack.com String token (consumer) The token to use String exceptionHandler (consumer) To let the consumer use a custom ExceptionHandler. Notice if the option bridgeErrorHandler is enabled then this option is not in use. By default the consumer will deal with exceptions, that will be logged at WARN or ERROR level and ignored. ExceptionHandler exchangePattern (consumer) Sets the exchange pattern when the consumer creates an exchange. ExchangePattern pollStrategy (consumer) A pluggable org.apache.camel.PollingConsumerPollingStrategy allowing you to provide your custom implementation to control error handling usually occurred during the poll operation before an Exchange have been created and being routed in Camel. PollingConsumerPoll Strategy iconEmoji (producer) Use a Slack emoji as an avatar String iconUrl (producer) The avatar that the component will use when sending message to a channel or user. String username (producer) This is the username that the bot will have when sending messages to a channel or user. String webhookUrl (producer) The incoming webhook URL String synchronous (advanced) Sets whether synchronous processing should be strictly used, or Camel is allowed to use asynchronous processing (if supported). 
false boolean backoffErrorThreshold (scheduler) The number of subsequent error polls (failed due some error) that should happen before the backoffMultipler should kick-in. int backoffIdleThreshold (scheduler) The number of subsequent idle polls that should happen before the backoffMultipler should kick-in. int backoffMultiplier (scheduler) To let the scheduled polling consumer backoff if there has been a number of subsequent idles/errors in a row. The multiplier is then the number of polls that will be skipped before the actual attempt is happening again. When this option is in use then backoffIdleThreshold and/or backoffErrorThreshold must also be configured. int delay (scheduler) Milliseconds before the poll. You can also specify time values using units, such as 60s (60 seconds), 5m30s (5 minutes and 30 seconds), and 1h (1 hour). 500 long greedy (scheduler) If greedy is enabled, then the ScheduledPollConsumer will run immediately again, if the run polled 1 or more messages. false boolean initialDelay (scheduler) Milliseconds before the first poll starts. You can also specify time values using units, such as 60s (60 seconds), 5m30s (5 minutes and 30 seconds), and 1h (1 hour). 1000 long runLoggingLevel (scheduler) The consumer logs a start/complete log line when it polls. This option allows you to configure the logging level for that. TRACE LoggingLevel scheduledExecutorService (scheduler) Allows for configuring a custom/shared thread pool to use for the consumer. By default each consumer has its own single threaded thread pool. ScheduledExecutor Service scheduler (scheduler) To use a cron scheduler from either camel-spring or camel-quartz2 component none ScheduledPollConsumer Scheduler schedulerProperties (scheduler) To configure additional properties when using a custom scheduler or any of the Quartz2, Spring based scheduler. Map startScheduler (scheduler) Whether the scheduler should be auto started. true boolean timeUnit (scheduler) Time unit for initialDelay and delay options. MILLISECONDS TimeUnit useFixedDelay (scheduler) Controls if fixed delay or fixed rate is used. See ScheduledExecutorService in JDK for details. true boolean 309.3. Spring Boot Auto-Configuration The component supports 3 options, which are listed below. Name Description Default Type camel.component.slack.enabled Enable slack component true Boolean camel.component.slack.resolve-property-placeholders Whether the component should resolve property placeholders on itself when starting. Only properties which are of String type can use property placeholders. true Boolean camel.component.slack.webhook-url The incoming webhook URL String 309.4. SlackComponent The SlackComponent with XML must be configured as a Spring or Blueprint bean that contains the incoming webhook url for the integration as a parameter. <bean id="slack" class="org.apache.camel.component.slack.SlackComponent"> <property name="webhookUrl" value="https://hooks.slack.com/services/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"/> </bean> For Java you can configure this using Java code. 309.5. 
Example A CamelContext with Blueprint could be defined as follows: <?xml version="1.0" encoding="UTF-8"?> <blueprint xmlns="http://www.osgi.org/xmlns/blueprint/v1.0.0" default-activation="lazy"> <bean id="slack" class="org.apache.camel.component.slack.SlackComponent"> <property name="webhookUrl" value="https://hooks.slack.com/services/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"/> </bean> <camelContext xmlns="http://camel.apache.org/schema/blueprint"> <route> <from uri="direct:test"/> <to uri="slack:#channel?iconEmoji=:camel:&amp;username=CamelTest"/> </route> </camelContext> </blueprint> 309.6. Consumer You can also use a consumer for messages in a channel from("slack://general?token=RAW(<YOUR_TOKEN>)&maxResults=1") .to("mock:result"); In this way you'll get the last message from the general channel. The consumer will keep track of the timestamp of the last message consumed and in the poll it will check from that timestamp. 309.7. See Also Configuring Camel Component Endpoint Getting Started | [
"<dependency> <groupId>org.apache.camel</groupId> <artifactId>camel-slack</artifactId> <version>x.x.x</version> <!-- use the same version as your Camel core version --> </dependency>",
"slack:#channel[?options]",
"slack:@username[?options]",
"slack:channel",
"<bean id=\"slack\" class=\"org.apache.camel.component.slack.SlackComponent\"> <property name=\"webhookUrl\" value=\"https://hooks.slack.com/services/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\"/> </bean>",
"<?xml version=\"1.0\" encoding=\"UTF-8\"?> <blueprint xmlns=\"http://www.osgi.org/xmlns/blueprint/v1.0.0\" default-activation=\"lazy\"> <bean id=\"slack\" class=\"org.apache.camel.component.slack.SlackComponent\"> <property name=\"webhookUrl\" value=\"https://hooks.slack.com/services/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\"/> </bean> <camelContext xmlns=\"http://camel.apache.org/schema/blueprint\"> <route> <from uri=\"direct:test\"/> <to uri=\"slack:#channel?iconEmoji=:camel:&username=CamelTest\"/> </route> </camelContext> </blueprint>",
"from(\"slack://general?token=RAW(<YOUR_TOKEN>)&maxResults=1\") .to(\"mock:result\");"
] | https://docs.redhat.com/en/documentation/red_hat_fuse/7.13/html/apache_camel_component_reference/slack-component |
Making open source more inclusive | Making open source more inclusive Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright's message . | null | https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.16/html/deploying_openshift_data_foundation_using_google_cloud/making-open-source-more-inclusive |
3.5.2. Disabling ACPI Soft-Off with chkconfig Management | 3.5.2. Disabling ACPI Soft-Off with chkconfig Management You can use chkconfig management to disable ACPI Soft-Off either by removing the ACPI daemon ( acpid ) from chkconfig management or by turning off acpid . Note This is the preferred method of disabling ACPI Soft-Off. Disable ACPI Soft-Off with chkconfig management at each cluster node as follows: Run either of the following commands: chkconfig --del acpid - This command removes acpid from chkconfig management. - OR - chkconfig --level 345 acpid off - This command turns off acpid . Reboot the node. When the cluster is configured and running, verify that the node turns off immediately when fenced. Note You can fence the node with the fence_node command or Conga . | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/cluster_administration/s2-acpi-disable-chkconfig-CA |
Chapter 8. Configuring image streams and image registries | Chapter 8. Configuring image streams and image registries You can update the global pull secret for your cluster by either replacing the current pull secret or appending a new pull secret. The procedure is required when users use a separate registry to store images rather than the registry used during installation. For more information, see Using image pull secrets . For information about images and configuring image streams or image registries, see the following documentation: Overview of images Image Registry Operator in OpenShift Container Platform Configuring image registry settings 8.1. Configuring image streams for a disconnected cluster After installing OpenShift Container Platform in a disconnected environment, configure the image streams for the Cluster Samples Operator and the must-gather image stream. 8.1.1. Cluster Samples Operator assistance for mirroring During installation, OpenShift Container Platform creates a config map named imagestreamtag-to-image in the openshift-cluster-samples-operator namespace. The imagestreamtag-to-image config map contains an entry, the populating image, for each image stream tag. The format of the key for each entry in the data field in the config map is <image_stream_name>_<image_stream_tag_name> . During a disconnected installation of OpenShift Container Platform, the status of the Cluster Samples Operator is set to Removed . If you choose to change it to Managed , it installs samples. Note The use of samples in a network-restricted or discontinued environment may require access to services external to your network. Some example services include: Github, Maven Central, npm, RubyGems, PyPi and others. There might be additional steps to take that allow the cluster samples operator's objects to reach the services they require. You can use this config map as a reference for which images need to be mirrored for your image streams to import. While the Cluster Samples Operator is set to Removed , you can create your mirrored registry, or determine which existing mirrored registry you want to use. Mirror the samples you want to the mirrored registry using the new config map as your guide. Add any of the image streams you did not mirror to the skippedImagestreams list of the Cluster Samples Operator configuration object. Set samplesRegistry of the Cluster Samples Operator configuration object to the mirrored registry. Then set the Cluster Samples Operator to Managed to install the image streams you have mirrored. 8.1.2. Using Cluster Samples Operator image streams with alternate or mirrored registries Most image streams in the openshift namespace managed by the Cluster Samples Operator point to images located in the Red Hat registry at registry.redhat.io . Note The cli , installer , must-gather , and tests image streams, while part of the install payload, are not managed by the Cluster Samples Operator. These are not addressed in this procedure. Important The Cluster Samples Operator must be set to Managed in a disconnected environment. To install the image streams, you have a mirrored registry. Prerequisites Access to the cluster as a user with the cluster-admin role. Create a pull secret for your mirror registry.
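Note As an illustrative aid only, not an additional required step, you can review the imagestreamtag-to-image config map described earlier to see which populating image each image stream tag expects, for example: USD oc get configmap imagestreamtag-to-image -n openshift-cluster-samples-operator -o yaml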
Procedure Access the images of a specific image stream to mirror, for example: USD oc get is <imagestream> -n openshift -o json | jq .spec.tags[].from.name | grep registry.redhat.io Mirror images from registry.redhat.io associated with any image streams you need USD oc image mirror registry.redhat.io/rhscl/ruby-25-rhel7:latest USD{MIRROR_ADDR}/rhscl/ruby-25-rhel7:latest Create the cluster's image configuration object: USD oc create configmap registry-config --from-file=USD{MIRROR_ADDR_HOSTNAME}..5000=USDpath/ca.crt -n openshift-config Add the required trusted CAs for the mirror in the cluster's image configuration object: USD oc patch image.config.openshift.io/cluster --patch '{"spec":{"additionalTrustedCA":{"name":"registry-config"}}}' --type=merge Update the samplesRegistry field in the Cluster Samples Operator configuration object to contain the hostname portion of the mirror location defined in the mirror configuration: USD oc edit configs.samples.operator.openshift.io -n openshift-cluster-samples-operator Note This is required because the image stream import process does not use the mirror or search mechanism at this time. Add any image streams that are not mirrored into the skippedImagestreams field of the Cluster Samples Operator configuration object. Or if you do not want to support any of the sample image streams, set the Cluster Samples Operator to Removed in the Cluster Samples Operator configuration object. Note The Cluster Samples Operator issues alerts if image stream imports are failing but the Cluster Samples Operator is either periodically retrying or does not appear to be retrying them. Many of the templates in the openshift namespace reference the image streams. So using Removed to purge both the image streams and templates will eliminate the possibility of attempts to use them if they are not functional because of any missing image streams. 8.1.3. Preparing your cluster to gather support data Clusters using a restricted network must import the default must-gather image to gather debugging data for Red Hat support. The must-gather image is not imported by default, and clusters on a restricted network do not have access to the internet to pull the latest image from a remote repository. Procedure If you have not added your mirror registry's trusted CA to your cluster's image configuration object as part of the Cluster Samples Operator configuration, perform the following steps: Create the cluster's image configuration object: USD oc create configmap registry-config --from-file=USD{MIRROR_ADDR_HOSTNAME}..5000=USDpath/ca.crt -n openshift-config Add the required trusted CAs for the mirror in the cluster's image configuration object: USD oc patch image.config.openshift.io/cluster --patch '{"spec":{"additionalTrustedCA":{"name":"registry-config"}}}' --type=merge Import the default must-gather image from your installation payload: USD oc import-image is/must-gather -n openshift When running the oc adm must-gather command, use the --image flag and point to the payload image, as in the following example: USD oc adm must-gather --image=USD(oc adm release info --image-for must-gather) 8.2. Configuring periodic importing of Cluster Sample Operator image stream tags You can ensure that you always have access to the latest versions of the Cluster Sample Operator images by periodically importing the image stream tags when new versions become available. 
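Before scheduling imports, you can optionally review the current tags of a single image stream. This example is illustrative and uses the ubi8-openjdk-17 image stream that appears in the procedure below: USD oc describe is ubi8-openjdk-17 -n openshift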
Procedure Fetch all the imagestreams in the openshift namespace by running the following command: oc get imagestreams -nopenshift Fetch the tags for every imagestream in the openshift namespace by running the following command: USD oc get is <image-stream-name> -o jsonpath="{range .spec.tags[*]}{.name}{'\t'}{.from.name}{'\n'}{end}" -nopenshift For example: USD oc get is ubi8-openjdk-17 -o jsonpath="{range .spec.tags[*]}{.name}{'\t'}{.from.name}{'\n'}{end}" -nopenshift Example output 1.11 registry.access.redhat.com/ubi8/openjdk-17:1.11 1.12 registry.access.redhat.com/ubi8/openjdk-17:1.12 Schedule periodic importing of images for each tag present in the image stream by running the following command: USD oc tag <repository/image> <image-stream-name:tag> --scheduled -nopenshift For example: USD oc tag registry.access.redhat.com/ubi8/openjdk-17:1.11 ubi8-openjdk-17:1.11 --scheduled -nopenshift USD oc tag registry.access.redhat.com/ubi8/openjdk-17:1.12 ubi8-openjdk-17:1.12 --scheduled -nopenshift This command causes OpenShift Container Platform to periodically update this particular image stream tag. This period is a cluster-wide setting set to 15 minutes by default. Verify the scheduling status of the periodic import by running the following command: oc get imagestream <image-stream-name> -o jsonpath="{range .spec.tags[*]}Tag: {.name}{'\t'}Scheduled: {.importPolicy.scheduled}{'\n'}{end}" -nopenshift For example: oc get imagestream ubi8-openjdk-17 -o jsonpath="{range .spec.tags[*]}Tag: {.name}{'\t'}Scheduled: {.importPolicy.scheduled}{'\n'}{end}" -nopenshift Example output Tag: 1.11 Scheduled: true Tag: 1.12 Scheduled: true | [
"oc get is <imagestream> -n openshift -o json | jq .spec.tags[].from.name | grep registry.redhat.io",
"oc image mirror registry.redhat.io/rhscl/ruby-25-rhel7:latest USD{MIRROR_ADDR}/rhscl/ruby-25-rhel7:latest",
"oc create configmap registry-config --from-file=USD{MIRROR_ADDR_HOSTNAME}..5000=USDpath/ca.crt -n openshift-config",
"oc patch image.config.openshift.io/cluster --patch '{\"spec\":{\"additionalTrustedCA\":{\"name\":\"registry-config\"}}}' --type=merge",
"oc edit configs.samples.operator.openshift.io -n openshift-cluster-samples-operator",
"oc create configmap registry-config --from-file=USD{MIRROR_ADDR_HOSTNAME}..5000=USDpath/ca.crt -n openshift-config",
"oc patch image.config.openshift.io/cluster --patch '{\"spec\":{\"additionalTrustedCA\":{\"name\":\"registry-config\"}}}' --type=merge",
"oc import-image is/must-gather -n openshift",
"oc adm must-gather --image=USD(oc adm release info --image-for must-gather)",
"get imagestreams -nopenshift",
"oc get is <image-stream-name> -o jsonpath=\"{range .spec.tags[*]}{.name}{'\\t'}{.from.name}{'\\n'}{end}\" -nopenshift",
"oc get is ubi8-openjdk-17 -o jsonpath=\"{range .spec.tags[*]}{.name}{'\\t'}{.from.name}{'\\n'}{end}\" -nopenshift",
"1.11 registry.access.redhat.com/ubi8/openjdk-17:1.11 1.12 registry.access.redhat.com/ubi8/openjdk-17:1.12",
"oc tag <repository/image> <image-stream-name:tag> --scheduled -nopenshift",
"oc tag registry.access.redhat.com/ubi8/openjdk-17:1.11 ubi8-openjdk-17:1.11 --scheduled -nopenshift oc tag registry.access.redhat.com/ubi8/openjdk-17:1.12 ubi8-openjdk-17:1.12 --scheduled -nopenshift",
"get imagestream <image-stream-name> -o jsonpath=\"{range .spec.tags[*]}Tag: {.name}{'\\t'}Scheduled: {.importPolicy.scheduled}{'\\n'}{end}\" -nopenshift",
"get imagestream ubi8-openjdk-17 -o jsonpath=\"{range .spec.tags[*]}Tag: {.name}{'\\t'}Scheduled: {.importPolicy.scheduled}{'\\n'}{end}\" -nopenshift",
"Tag: 1.11 Scheduled: true Tag: 1.12 Scheduled: true"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.16/html/postinstallation_configuration/post-install-image-config |
Chapter 5. Optional: Enabling disk encryption | Chapter 5. Optional: Enabling disk encryption You can enable encryption of installation disks using either the TPM v2 or Tang encryption modes. Note In some situations, when you enable TPM disk encryption in the firmware for a bare-metal host and then boot it from an ISO that you generate with the Assisted Installer, the cluster deployment can get stuck. This can happen if there are left-over TPM encryption keys from a previous installation on the host. For more information, see BZ#2011634 . If you experience this problem, contact Red Hat support. 5.1. Enabling TPM v2 encryption Prerequisites Check to see if TPM v2 encryption is enabled in the BIOS on each host. Most Dell systems require this. Check the manual for your computer. The Assisted Installer will also validate that TPM is enabled in the firmware. See the disk-encryption model in the Assisted Installer API for additional details. Important Verify that a TPM v2 encryption chip is installed on each node and enabled in the firmware. Procedure Optional: Using the UI, in the Cluster details step of the user interface wizard, choose to enable TPM v2 encryption on either the control plane nodes, workers, or both. Optional: Using the API, follow the "Modifying hosts" procedure. Set the disk_encryption.enable_on setting to all , masters , or workers . Set the disk_encryption.mode setting to tpmv2 . Refresh the API token: USD source refresh-token Enable TPM v2 encryption: USD curl https://api.openshift.com/api/assisted-install/v2/clusters/USD{CLUSTER_ID} \ -X PATCH \ -H "Authorization: Bearer USD{API_TOKEN}" \ -H "Content-Type: application/json" \ -d ' { "disk_encryption": { "enable_on": "none", "mode": "tpmv2" } } ' | jq Valid settings for enable_on are all , master , worker , or none . 5.2. Enabling Tang encryption Prerequisites You have access to a Red Hat Enterprise Linux (RHEL) 8 machine that can be used to generate a thumbprint of the Tang exchange key. Procedure Set up a Tang server or access an existing one. See Network-bound disk encryption for instructions. You can set multiple Tang servers, but the Assisted Installer must be able to connect to all of them during installation. On the Tang server, retrieve the thumbprint for the Tang server using tang-show-keys : USD tang-show-keys <port> Optional: Replace <port> with the port number. The default port number is 80 . Example thumbprint 1gYTN_LpU9ZMB35yn5IbADY5OQ0 Optional: Retrieve the thumbprint for the Tang server using jose . Ensure jose is installed on the Tang server: USD sudo dnf install jose On the Tang server, retrieve the thumbprint using jose : USD sudo jose jwk thp -i /var/db/tang/<public_key>.jwk Replace <public_key> with the public exchange key for the Tang server. Example thumbprint 1gYTN_LpU9ZMB35yn5IbADY5OQ0 Optional: In the Cluster details step of the user interface wizard, choose to enable Tang encryption on either the control plane nodes, workers, or both. You will be required to enter URLs and thumbprints for the Tang servers. Optional: Using the API, follow the "Modifying hosts" procedure. Refresh the API token: USD source refresh-token Set the disk_encryption.enable_on setting to all , masters , or workers . Set the disk_encryption.mode setting to tang .
Set disk_encryption.tang_servers to provide the URL and thumbprint details about one or more Tang servers: USD curl https://api.openshift.com/api/assisted-install/v2/clusters/USD{CLUSTER_ID} \ -X PATCH \ -H "Authorization: Bearer USD{API_TOKEN}" \ -H "Content-Type: application/json" \ -d ' { "disk_encryption": { "enable_on": "all", "mode": "tang", "tang_servers": "[{\"url\":\"http://tang.example.com:7500\",\"thumbprint\":\"PLjNyRdGw03zlRoGjQYMahSZGu9\"},{\"url\":\"http://tang2.example.com:7500\",\"thumbprint\":\"XYjNyRdGw03zlRoGjQYMahSZGu3\"}]" } } ' | jq Valid settings for enable_on are all , master , worker , or none . Within the tang_servers value, comment out the quotes within the object(s). 5.3. Additional resources Modifying hosts | [
"source refresh-token",
"curl https://api.openshift.com/api/assisted-install/v2/clusters/USD{CLUSTER_ID} -X PATCH -H \"Authorization: Bearer USD{API_TOKEN}\" -H \"Content-Type: application/json\" -d ' { \"disk_encryption\": { \"enable_on\": \"none\", \"mode\": \"tpmv2\" } } ' | jq",
"tang-show-keys <port>",
"1gYTN_LpU9ZMB35yn5IbADY5OQ0",
"sudo dnf install jose",
"sudo jose jwk thp -i /var/db/tang/<public_key>.jwk",
"1gYTN_LpU9ZMB35yn5IbADY5OQ0",
"source refresh-token",
"curl https://api.openshift.com/api/assisted-install/v2/clusters/USD{CLUSTER_ID} -X PATCH -H \"Authorization: Bearer USD{API_TOKEN}\" -H \"Content-Type: application/json\" -d ' { \"disk_encryption\": { \"enable_on\": \"all\", \"mode\": \"tang\", \"tang_servers\": \"[{\\\"url\\\":\\\"http://tang.example.com:7500\\\",\\\"thumbprint\\\":\\\"PLjNyRdGw03zlRoGjQYMahSZGu9\\\"},{\\\"url\\\":\\\"http://tang2.example.com:7500\\\",\\\"thumbprint\\\":\\\"XYjNyRdGw03zlRoGjQYMahSZGu3\\\"}]\" } } ' | jq"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.12/html/assisted_installer_for_openshift_container_platform/assembly_enabling-disk-encryption |
5.16. Configuring Firewall Lockdown | 5.16. Configuring Firewall Lockdown Local applications or services are able to change the firewall configuration if they are running as root (for example, libvirt ). With this feature, the administrator can lock the firewall configuration so that either no applications or only applications that are added to the lockdown whitelist are able to request firewall changes. The lockdown settings default to disabled. If enabled, the user can be sure that there are no unwanted configuration changes made to the firewall by local applications or services. 5.16.1. Configuring Lockdown with the Command-Line Client To query whether lockdown is enabled, use the following command as root : The command prints yes with exit status 0 if lockdown is enabled. It prints no with exit status 1 otherwise. To enable lockdown, enter the following command as root : To disable lockdown, use the following command as root : 5.16.2. Configuring Lockdown Whitelist Options with the Command-Line Client The lockdown whitelist can contain commands, security contexts, users and user IDs. If a command entry on the whitelist ends with an asterisk " * " , then all command lines starting with that command will match. If the " * " is not there then the absolute command including arguments must match. The context is the security (SELinux) context of a running application or service. To get the context of a running application use the following command: That command returns all running applications. Pipe the output through the grep tool to get the application of interest. For example: To list all command lines that are on the whitelist, enter the following command as root : To add a command command to the whitelist, enter the following command as root : To remove a command command from the whitelist, enter the following command as root : To query whether the command command is on the whitelist, enter the following command as root : The command prints yes with exit status 0 if true. It prints no with exit status 1 otherwise. To list all security contexts that are on the whitelist, enter the following command as root : To add a context context to the whitelist, enter the following command as root : To remove a context context from the whitelist, enter the following command as root : To query whether the context context is on the whitelist, enter the following command as root : Prints yes with exit status 0 , if true, prints no with exit status 1 otherwise. To list all user IDs that are on the whitelist, enter the following command as root : To add a user ID uid to the whitelist, enter the following command as root : To remove a user ID uid from the whitelist, enter the following command as root : To query whether the user ID uid is on the whitelist, enter the following command: Prints yes with exit status 0 , if true, prints no with exit status 1 otherwise. To list all user names that are on the whitelist, enter the following command as root : To add a user name user to the whitelist, enter the following command as root : To remove a user name user from the whitelist, enter the following command as root : To query whether the user name user is on the whitelist, enter the following command: Prints yes with exit status 0 , if true, prints no with exit status 1 otherwise. 5.16.3. Configuring Lockdown Whitelist Options with Configuration Files The default whitelist configuration file contains the NetworkManager context and the default context of libvirt . The user ID 0 is also on the list. 
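Note The whitelist is typically stored in the /etc/firewalld/lockdown-whitelist.xml file; this path is given for orientation only, so confirm it on your system. If you edit that file directly instead of using firewall-cmd , reload the firewall afterwards as root , for example: ~]# firewall-cmd --reload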
Following is an example whitelist configuration file enabling all commands for the firewall-cmd utility, for a user called user whose user ID is 815 : This example shows both user id and user name , but only one option is required. Python is the interpreter and is prepended to the command line. You can also use a specific command, for example: /usr/bin/python /bin/firewall-cmd --lockdown-on In that example, only the --lockdown-on command is allowed. Note In Red Hat Enterprise Linux 7, all utilities are placed in the /usr/bin/ directory and the /bin/ directory is sym-linked to the /usr/bin/ directory. In other words, although the path for firewall-cmd when run as root might resolve to /bin/firewall-cmd , /usr/bin/firewall-cmd can now be used. All new scripts should use the new location. But be aware that if scripts that run as root have been written to use the /bin/firewall-cmd path, then that command path must be whitelisted in addition to the /usr/bin/firewall-cmd path traditionally used only for non- root users. The " * " at the end of the name attribute of a command means that all commands that start with this string will match. If the " * " is not there then the absolute command including arguments must match. | [
"~]# firewall-cmd --query-lockdown",
"~]# firewall-cmd --lockdown-on",
"~]# firewall-cmd --lockdown-off",
"~]USD ps -e --context",
"~]USD ps -e --context | grep example_program",
"~]# firewall-cmd --list-lockdown-whitelist-commands",
"~]# firewall-cmd --add-lockdown-whitelist-command='/usr/bin/python -Es /usr/bin/ command '",
"~]# firewall-cmd --remove-lockdown-whitelist-command='/usr/bin/python -Es /usr/bin/ command '",
"~]# firewall-cmd --query-lockdown-whitelist-command='/usr/bin/python -Es /usr/bin/ command '",
"~]# firewall-cmd --list-lockdown-whitelist-contexts",
"~]# firewall-cmd --add-lockdown-whitelist-context= context",
"~]# firewall-cmd --remove-lockdown-whitelist-context= context",
"~]# firewall-cmd --query-lockdown-whitelist-context= context",
"~]# firewall-cmd --list-lockdown-whitelist-uids",
"~]# firewall-cmd --add-lockdown-whitelist-uid= uid",
"~]# firewall-cmd --remove-lockdown-whitelist-uid= uid",
"~]USD firewall-cmd --query-lockdown-whitelist-uid= uid",
"~]# firewall-cmd --list-lockdown-whitelist-users",
"~]# firewall-cmd --add-lockdown-whitelist-user= user",
"~]# firewall-cmd --remove-lockdown-whitelist-user= user",
"~]USD firewall-cmd --query-lockdown-whitelist-user= user",
"<?xml version=\"1.0\" encoding=\"utf-8\"?> <whitelist> <selinux context=\"system_u:system_r:NetworkManager_t:s0\"/> <selinux context=\"system_u:system_r:virtd_t:s0-s0:c0.c1023\"/> <user id=\"0\"/> </whitelist>",
"<?xml version=\"1.0\" encoding=\"utf-8\"?> <whitelist> <command name=\"/usr/bin/python -Es /bin/firewall-cmd*\"/> <selinux context=\"system_u:system_r:NetworkManager_t:s0\"/> <user id=\"815\"/> <user name=\"user\"/> </whitelist>"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/security_guide/configuring_firewall_lockdown |
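To tie the lockdown commands above together, the following sketch (run as root) whitelists a single administrative script and then turns lockdown on. The script path /usr/local/sbin/update-fw.sh is only a placeholder for whatever command you actually need to allow, and the --permanent variant is included on the assumption that you want the entry to survive a firewalld reload.

    firewall-cmd --add-lockdown-whitelist-command='/usr/local/sbin/update-fw.sh'
    firewall-cmd --permanent --add-lockdown-whitelist-command='/usr/local/sbin/update-fw.sh'
    firewall-cmd --lockdown-on
    firewall-cmd --query-lockdown-whitelist-command='/usr/local/sbin/update-fw.sh'   # expect "yes"
    firewall-cmd --query-lockdown                                                    # expect "yes"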
3.8. Obtaining Information about Control Groups | 3.8. Obtaining Information about Control Groups The libcgroup-tools package contains several utilities for obtaining information about controllers, control groups, and their parameters. Listing Controllers To find the controllers that are available in your kernel and information on how they are mounted together into hierarchies, execute: Alternatively, to find the mount points of particular subsystems, execute the following command: Here controllers stands for a list of the subsystems in which you are interested. Note that the lssubsys -m command returns only the top-level mount point for each hierarchy. Finding Control Groups To list the cgroups on a system, execute as root : To restrict the output to a specific hierarchy, specify a controller and a path in the format controller : path . For example: The above command lists only subgroups of the adminusers cgroup in the hierarchy to which the cpuset controller is attached. Displaying Parameters of Control Groups To display the parameters of specific cgroups, run: where parameter is a pseudo-file that contains values for a controller, and list_of_cgroups is a list of cgroups separated by spaces. If you do not know the names of the actual parameters, use a command similar to: | [
"~]USD cat /proc/cgroups",
"~]USD lssubsys -m controllers",
"~]# lscgroup",
"~]USD lscgroup cpuset:adminusers",
"~]USD cgget -r parameter list_of_cgroups",
"~]USD cgget -g cpuset /"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/resource_management_guide/sec-Obtaining_Information_About_Control_Groups-libcgroup |
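As a concrete illustration of the cgget usage described above, the following sketch inspects two parameters of a hypothetical cgroup named group1. The group and controller names are assumptions, and each parameter can only be read if the group exists in the hierarchy of the corresponding controller.

    # Where are the cpuset and memory controllers mounted?
    lssubsys -m cpuset memory
    # Show only the selected parameters for the group1 cgroup
    cgget -r cpuset.cpus -r memory.limit_in_bytes group1
    # Or dump every parameter of the root cpuset hierarchy
    cgget -g cpuset /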
Chapter 3. Configuring debug verbosity of Ceph components | Chapter 3. Configuring debug verbosity of Ceph components You can configure the verbosity of Ceph components by enabling or increasing debug logging for a specific Ceph subsystem from OpenShift Data Foundation. For information about the Ceph subsystems and the log levels that can be updated, see Ceph subsystems default logging level values. Prerequisites Ensure that the OpenShift Data Foundation cluster is installed. Ensure that the OpenShift Data Foundation command-line interface tool is installed. Clone the following repository: Change into the cloned directory and build the binary. Use the binary present in the ./bin/ directory to run the commands: Procedure Set the log level for the Ceph daemons: where ceph-subsystem can be osd , mds , or mon . For example, | [
"git clone https://github.com/red-hat-storage/odf-cli.git",
"cd odf-cli/ && make build",
"./bin/odf -h",
"odf set ceph log-level <ceph-subsystem1> <ceph-subsystem2> <log-level>",
"odf set ceph log-level osd crush 20",
"odf set ceph log-level mds crush 20",
"odf set ceph log-level mon crush 20"
] | https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.16/html/troubleshooting_openshift_data_foundation/configuring-debug-verbosity-of-ceph-components_rhodf |
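Putting the procedure together, a session could look like the sketch below. The subsystem names (bluestore, rocksdb) and levels are illustrative only, and the final verification step is an assumption that you have access to the Ceph toolbox pod, since this chapter only shows the odf tool setting values.

    git clone https://github.com/red-hat-storage/odf-cli.git
    cd odf-cli/ && make build
    ./bin/odf set ceph log-level osd bluestore 15   # raise OSD bluestore debugging
    ./bin/odf set ceph log-level mon rocksdb 10     # raise monitor rocksdb debugging
    # Hypothetical check from the Ceph toolbox pod:
    # ceph config show osd.0 debug_bluestore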
Extension APIs | Extension APIs OpenShift Container Platform 4.15 Reference guide for extension APIs Red Hat OpenShift Documentation Team | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.15/html-single/extension_apis/index |
Chapter 224. MIME Multipart DataFormat | Chapter 224. MIME Multipart DataFormat Available as of Camel version 2.17 This data format can convert a Camel message with attachments into a Camel message having a MIME-Multipart message as its message body (and no attachments). The use case for this is to enable the user to send attachments over endpoints that do not directly support attachments, either as a special protocol implementation (e.g. sending a MIME multipart over an HTTP endpoint) or as a kind of tunneling solution (e.g. because camel-jms does not support attachments, you can marshal the message with attachments into a MIME-Multipart, send that to a JMS queue, receive the message from the JMS queue, and unmarshal it again into a message body with attachments). The marshal option of the mime-multipart data format will convert a message with attachments into a MIME-Multipart message. If the parameter "multipartWithoutAttachment" is set to true it will also marshal messages without attachments into a multipart message with a single part; if the parameter is set to false it will leave the message alone. MIME headers of the multipart such as "MIME-Version" and "Content-Type" are set as Camel headers on the message. If the parameter "headersInline" is set to true it will also create a MIME multipart message in any case. Furthermore the MIME headers of the multipart are written as part of the message body, not as Camel headers. The unmarshal option of the mime-multipart data format will convert a MIME-Multipart message into a Camel message with attachments and leaves other messages alone. MIME headers of the MIME-Multipart message have to be set as Camel headers. The unmarshalling will only take place if the "Content-Type" header is set to a "multipart" type. If the option "headersInline" is set to true, the body is always parsed as a MIME message. As a consequence, if the message body is a stream and stream caching is not enabled, a message body that is actually not a MIME message with MIME headers in the message body will be replaced by an empty message. Up to Camel version 2.17.1 this happens for all message bodies that do not contain a MIME multipart message, regardless of body type and stream cache setting. 224.1. Options The MIME Multipart dataformat supports 6 options, which are listed below. Name Default Java Type Description multipartSubType mixed String Specify the subtype of the MIME Multipart. Default is mixed. multipartWithoutAttachment false Boolean Defines whether a message without attachment is also marshaled into a MIME Multipart (with only one body part). Default is false. headersInline false Boolean Defines whether the MIME-Multipart headers are part of the message body (true) or are set as Camel headers (false). Default is false. includeHeaders String A regex that defines which Camel headers are also included as MIME headers into the MIME multipart. This will only work if headersInline is set to true. Default is to include no headers. binaryContent false Boolean Defines whether the content of binary parts in the MIME multipart is binary (true) or Base-64 encoded (false). Default is false. contentTypeHeader false Boolean Whether the data format should set the Content-Type header with the type from the data format if the data format is capable of doing so. For example application/xml for data formats marshalling to XML, or application/json for data formats marshalling to JSON etc. 224.2.
Message Headers (marshal) Name Type Description Message-Id String The marshal operation will set this parameter to the generated MIME message id if the "headersInline" parameter is set to false. MIME-Version String The marshal operation will set this parameter to the applied MIME version (1.0) if the "headersInline" parameter is set to false. Content-Type String The content of this header will be used as a content type for the message body part. If no content type is set, "application/octet-stream" is assumed. After the marshal operation the content type is set to "multipart/related" or empty if the "headersInline" parameter is set to true. Content-Encoding String If the incoming content type is "text/*" the content encoding will be set to the encoding parameter of the Content-Type MIME header of the body part. Furthermore the given charset is applied for text to binary conversions. 224.3. Message Headers (unmarshal) Name Type Description Content-Type String If this header is not set to "multipart/*" the unmarshal operation will not do anything. In other cases the multipart will be parsed into a camel message with attachments and the header is set to the Content-Type header of the body part, except if this is application/octet-stream. In the latter case the header is removed. Content-Encoding String If the content-type of the body part contains an encoding parameter this header will be set to the value of this encoding parameter (converted from MIME endoding descriptor to Java encoding descriptor) MIME-Version String The unmarshal operation will read this header and use it for parsing the MIME multipart. The header is removed afterwards 224.4. Examples from(...).marshal().mimeMultipart() With a message where no Content-Type header is set, will create a Message with the following message Camel headers: Camel Message Headers Camel Message Body A message with the header Content-Type set to "text/plain" sent to the route from("...").marshal().mimeMultipart("related", true, true, "(included|x-.*)", true); will create a message without any specific MIME headers set as Camel headers (the Content-Type header is removed from the Camel message) and the following message body that includes also all headers of the original message starting with "x-" and the header with name "included": Camel Message Body 224.5. Dependencies To use MIME-Multipart in your Camel routes you need to add a dependency on camel-mail which implements this data format. If you use Maven you can just add the following to your pom.xml: <dependency> <groupId>org.apache.camel</groupId> <artifactId>camel-mail</artifactId> <version>x.x.x</version> <!-- use the same version as your Camel core version --> </dependency> | [
"from(...).marshal().mimeMultipart()",
"Content-Type=multipart/mixed; \\n boundary=\"----=_Part_0_14180567.1447658227051\" Message-Id=<...> MIME-Version=1.0",
"The message body will be:",
"------=_Part_0_14180567.1447658227051 Content-Type: application/octet-stream Content-Transfer-Encoding: base64 Qm9keSB0ZXh0 ------=_Part_0_14180567.1447658227051 Content-Type: application/binary Content-Transfer-Encoding: base64 Content-Disposition: attachment; filename=\"Attachment File Name\" AAECAwQFBgc= ------=_Part_0_14180567.1447658227051--",
"from(\"...\").marshal().mimeMultipart(\"related\", true, true, \"(included|x-.*)\", true);",
"Message-ID: <...> MIME-Version: 1.0 Content-Type: multipart/related; boundary=\"----=_Part_0_1134128170.1447659361365\" x-bar: also there included: must be included x-foo: any value ------=_Part_0_1134128170.1447659361365 Content-Type: text/plain Content-Transfer-Encoding: 8bit Body text ------=_Part_0_1134128170.1447659361365 Content-Type: application/binary Content-Transfer-Encoding: binary Content-Disposition: attachment; filename=\"Attachment File Name\" [binary content] ------=_Part_0_1134128170.1447659361365",
"<dependency> <groupId>org.apache.camel</groupId> <artifactId>camel-mail</artifactId> <version>x.x.x</version> <!-- use the same version as your Camel core version --> </dependency>"
] | https://docs.redhat.com/en/documentation/red_hat_fuse/7.13/html/apache_camel_component_reference/mime-multipart-dataformat |
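To make the JMS tunneling use case concrete, a route along these lines could marshal attachments into a multipart before the JMS hop and restore them afterwards. The endpoint URIs and queue name are placeholders rather than anything mandated by the data format.

    import org.apache.camel.builder.RouteBuilder;

    public class MimeMultipartTunnelRoute extends RouteBuilder {
        @Override
        public void configure() {
            // Producer side: fold the attachments into a MIME-Multipart body.
            // With the default options the multipart MIME headers travel as Camel
            // headers, so the transport must preserve them end to end.
            from("direct:send")
                .marshal().mimeMultipart()
                .to("jms:queue:attachments");

            // Consumer side: rebuild the original message with attachments.
            from("jms:queue:attachments")
                .unmarshal().mimeMultipart()
                .to("direct:process");
        }
    }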
Chapter 11. Uninstalling a cluster on IBM Cloud | Chapter 11. Uninstalling a cluster on IBM Cloud You can remove a cluster that you deployed to IBM Cloud(R). 11.1. Removing a cluster that uses installer-provisioned infrastructure You can remove a cluster that uses installer-provisioned infrastructure from your cloud. Note After uninstallation, check your cloud provider for any resources not removed properly, especially with User Provisioned Infrastructure (UPI) clusters. There might be resources that the installer did not create or that the installer is unable to access. Prerequisites You have a copy of the installation program that you used to deploy the cluster. You have the files that the installation program generated when you created your cluster. You have configured the ccoctl binary. You have installed the IBM Cloud(R) CLI and installed or updated the VPC infrastructure service plugin. For more information see "Prerequisites" in the IBM Cloud(R) CLI documentation . Procedure If the following conditions are met, this step is required: The installer created a resource group as part of the installation process. You or one of your applications created persistent volume claims (PVCs) after the cluster was deployed. In which case, the PVCs are not removed when uninstalling the cluster, which might prevent the resource group from being successfully removed. To prevent a failure: Log in to the IBM Cloud(R) using the CLI. To list the PVCs, run the following command: USD ibmcloud is volumes --resource-group-name <infrastructure_id> For more information about listing volumes, see the IBM Cloud(R) CLI documentation . To delete the PVCs, run the following command: USD ibmcloud is volume-delete --force <volume_id> For more information about deleting volumes, see the IBM Cloud(R) CLI documentation . Export the API key that was created as part of the installation process. USD export IC_API_KEY=<api_key> Note You must set the variable name exactly as specified. The installation program expects the variable name to be present to remove the service IDs that were created when the cluster was installed. From the directory that contains the installation program on the computer that you used to install the cluster, run the following command: USD ./openshift-install destroy cluster \ --dir <installation_directory> --log-level info 1 2 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. 2 To view different details, specify warn , debug , or error instead of info . Note You must specify the directory that contains the cluster definition files for your cluster. The installation program requires the metadata.json file in this directory to delete the cluster. Remove the manual CCO credentials that were created for the cluster: USD ccoctl ibmcloud delete-service-id \ --credentials-requests-dir <path_to_credential_requests_directory> \ --name <cluster_name> Note If your cluster uses Technology Preview features that are enabled by the TechPreviewNoUpgrade feature set, you must include the --enable-tech-preview parameter. Optional: Delete the <installation_directory> directory and the OpenShift Container Platform installation program. | [
"ibmcloud is volumes --resource-group-name <infrastructure_id>",
"ibmcloud is volume-delete --force <volume_id>",
"export IC_API_KEY=<api_key>",
"./openshift-install destroy cluster --dir <installation_directory> --log-level info 1 2",
"ccoctl ibmcloud delete-service-id --credentials-requests-dir <path_to_credential_requests_directory> --name <cluster_name>"
] | https://docs.redhat.com/en/documentation/openshift_container_platform_installation/4.15/html/installing_on_ibm_cloud/uninstalling-cluster-ibm-cloud |
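If you run this teardown often, the steps can be wrapped in a small script such as the sketch below. Every angle-bracketed value is a placeholder you must replace, and the volume listing is left as a manual review step because deleting volumes is destructive.

    #!/usr/bin/env bash
    set -euo pipefail

    export IC_API_KEY="<api_key>"            # key created at install time
    INSTALL_DIR="<installation_directory>"

    # Review (and, if appropriate, delete) leftover PVC-backed volumes first
    ibmcloud is volumes --resource-group-name "<infrastructure_id>"

    ./openshift-install destroy cluster --dir "$INSTALL_DIR" --log-level info

    ccoctl ibmcloud delete-service-id \
        --credentials-requests-dir "<path_to_credential_requests_directory>" \
        --name "<cluster_name>"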
Chapter 3. Preparing for Red Hat Quay (high availability) | Chapter 3. Preparing for Red Hat Quay (high availability) Note This procedure presents guidance on how to set up a highly available, production-quality deployment of Red Hat Quay. 3.1. Prerequisites Here are a few things you need to know before you begin the Red Hat Quay high availability deployment: Either Postgres or MySQL can be used to provide the database service. Postgres was chosen here as the database because it includes the features needed to support Clair security scanning. Other options include: Crunchy Data PostgreSQL Operator: Although not supported directly by Red Hat, the CrunchDB Operator is available from Crunchy Data for use with Red Hat Quay. If you take this route, you should have a support contract with Crunchy Data and work directly with them for usage guidance or issues relating to the operator and their database. If your organization already has a high-availability (HA) database, you can use that database with Red Hat Quay. See the Red Hat Quay Support Policy for details on support for third-party databases and other components. Ceph Object Gateway (also called RADOS Gateway) is one example of a product that can provide the object storage needed by Red Hat Quay. If you want your Red Hat Quay setup to do geo-replication, Ceph Object Gateway or other supported object storage is required. For cloud installations, you can use any of the following cloud object storage: Amazon S3 (see S3 IAM Bucket Policy for details on configuring an S3 bucket policy for Quay) Azure Blob Storage Google Cloud Storage Ceph Object Gateway OpenStack Swift CloudFront + S3 NooBaa S3 Storage The haproxy server is used in this example, although you can use any proxy service that works for your environment. Number of systems: This procedure uses seven systems (physical or virtual) that are assigned with the following tasks: A: db01: Load balancer and database : Runs the haproxy load balancer and a Postgres database. Note that these components are not themselves highly available, but are used to indicate how you might set up your own load balancer or production database. B: quay01, quay02, quay03: Quay and Redis : Three (or more) systems are assigned to run the Quay and Redis services. C: ceph01, ceph02, ceph03, ceph04, ceph05: Ceph : Three (or more) systems provide the Ceph service, for storage. If you are deploying to a cloud, you can use the cloud storage features described earlier. This procedure employs an additional system for Ansible (ceph05) and one for a Ceph Object Gateway (ceph04). Each system should have the following attributes: Red Hat Enterprise Linux (RHEL) 8: Obtain the latest Red Hat Enterprise Linux 8 server media from the Downloads page and follow the installation instructions available in the Product Documentation for Red Hat Enterprise Linux 9 . Valid Red Hat Subscription : Configure a valid Red Hat Enterprise Linux 8 server subscription. CPUs : Two or more virtual CPUs RAM : 4GB for each A and B system; 8GB for each C system Disk space : About 20GB of disk space for each A and B system (10GB for the operating system and 10GB for docker storage). At least 30GB of disk space for C systems (or more depending on required container storage). 3.2. Using podman This document uses podman for creating and deploying containers. If you do not have podman available on your system, you should be able to use the equivalent docker commands. 
For more information on podman and related technologies, see Building, running, and managing Linux containers on Red Hat Enterprise Linux 8 . Note Podman is strongly recommended for highly available, production quality deployments of Red Hat Quay. Docker has not been tested with Red Hat Quay 3.10, and will be deprecated in a future release. 3.3. Setting up the HAProxy load balancer and the PostgreSQL database Use the following procedure to set up the HAProxy load balancer and the PostgreSQL database. Prerequisites You have installed the Podman or Docker CLI. Procedure On the first two systems, q01 and q02 , install the HAProxy load balancer and the PostgreSQL database. This configures HAProxy as the access point and load balancer for the following services running on other systems: Red Hat Quay (ports 80 and 443 on B systems) Redis (port 6379 on B systems) RADOS (port 7480 on C systems) Open all HAProxy ports in SELinux and selected HAProxy ports in the firewall: # setsebool -P haproxy_connect_any=on # firewall-cmd --permanent --zone=public --add-port=6379/tcp --add-port=7480/tcp success # firewall-cmd --reload success Configure the /etc/haproxy/haproxy.cfg to point to the systems and ports providing the Red Hat Quay, Redis and Ceph RADOS services. The following are examples of defaults and added frontend and backend settings: After the new haproxy.cfg file is in place, restart the HAProxy service by entering the following command: # systemctl restart haproxy Create a folder for the PostgreSQL database by entering the following command: USD mkdir -p /var/lib/pgsql/data Set the following permissions for the /var/lib/pgsql/data folder: USD chmod 777 /var/lib/pgsql/data Enter the following command to start the PostgreSQL database: USD sudo podman run -d --name postgresql_database \ -v /var/lib/pgsql/data:/var/lib/pgsql/data:Z \ -e POSTGRESQL_USER=quayuser -e POSTGRESQL_PASSWORD=quaypass \ -e POSTGRESQL_DATABASE=quaydb -p 5432:5432 \ registry.redhat.io/rhel8/postgresql-13:1-109 Note Data from the container will be stored on the host system in the /var/lib/pgsql/data directory. List the available extensions by entering the following command: USD sudo podman exec -it postgresql_database /bin/bash -c 'echo "SELECT * FROM pg_available_extensions" | /opt/rh/rh-postgresql96/root/usr/bin/psql' Example output name | default_version | installed_version | comment -----------+-----------------+-------------------+---------------------------------------- adminpack | 1.0 | | administrative functions for PostgreSQL ... 
Create the pg_trgm extension by entering the following command: USD sudo podman exec -it postgresql_database /bin/bash -c 'echo "CREATE EXTENSION IF NOT EXISTS pg_trgm;" | /opt/rh/rh-postgresql96/root/usr/bin/psql -d quaydb' Confirm that the pg_trgm has been created by entering the following command: USD sudo podman exec -it postgresql_database /bin/bash -c 'echo "SELECT * FROM pg_extension" | /opt/rh/rh-postgresql96/root/usr/bin/psql' Example output extname | extowner | extnamespace | extrelocatable | extversion | extconfig | extcondition ---------+----------+--------------+----------------+------------+-----------+-------------- plpgsql | 10 | 11 | f | 1.0 | | pg_trgm | 10 | 2200 | t | 1.3 | | (2 rows) Alter the privileges of the Postgres user quayuser and grant them the superuser role to give the user unrestricted access to the database: USD sudo podman exec -it postgresql_database /bin/bash -c 'echo "ALTER USER quayuser WITH SUPERUSER;" | /opt/rh/rh-postgresql96/root/usr/bin/psql' Example output ALTER ROLE If you have a firewalld service active on your system, run the following commands to make the PostgreSQL port available through the firewall: # firewall-cmd --permanent --zone=trusted --add-port=5432/tcp # firewall-cmd --reload Optional. If you do not have the postgres CLI package installed, install it by entering the following command: # yum install postgresql -y Use the psql command to test connectivity to the PostgreSQL database. Note To verify that you can access the service remotely, run the following command on a remote system. Example output Password for user test: psql (9.2.23, server 9.6.5) WARNING: psql version 9.2, server version 9.6. Some psql features might not work. Type "help" for help. test=> \q 3.4. Set Up Ceph For this Red Hat Quay configuration, we create a three-node Ceph cluster, with several other supporting nodes, as follows: ceph01, ceph02, and ceph03 - Ceph Monitor, Ceph Manager and Ceph OSD nodes ceph04 - Ceph RGW node ceph05 - Ceph Ansible administration node For details on installing Ceph nodes, see Installing Red Hat Ceph Storage on Red Hat Enterprise Linux . Once you have set up the Ceph storage cluster, create a Ceph Object Gateway (also referred to as a RADOS gateway). See Installing the Ceph Object Gateway for details. 3.4.1. Install each Ceph node On ceph01, ceph02, ceph03, ceph04, and ceph05, do the following: Review prerequisites for setting up Ceph nodes in Requirements for Installing Red Hat Ceph Storage . In particular: Decide if you want to use RAID controllers on OSD nodes . Decide if you want a separate cluster network for your Ceph Network Configuration . Prepare OSD storage (ceph01, ceph02, and ceph03 only). Set up the OSD storage on the three OSD nodes (ceph01, ceph02, and ceph03). See OSD Ansible Settings in Table 3.2 for details on supported storage types that you will enter into your Ansible configuration later. For this example, a single, unformatted block device ( /dev/sdb ), that is separate from the operating system, is configured on each of the OSD nodes. If you are installing on metal, you might want to add an extra hard drive to the machine for this purpose. Install Red Hat Enterprise Linux Server edition, as described in the RHEL 7 Installation Guide . Register and subscribe each Ceph node as described in the Registering Red Hat Ceph Storage Nodes . Here is how to subscribe to the necessary repos: Create an ansible user with root privilege on each node. Choose any name you like. For example: 3.4.2. 
Configure the Ceph Ansible node (ceph05) Log into the Ceph Ansible node (ceph05) and configure it as follows. You will need the ceph01, ceph02, and ceph03 nodes to be running to complete these steps. In the Ansible user's home directory, create a directory to store temporary values created from the ceph-ansible playbook: Enable password-less ssh for the ansible user. Run ssh-keygen on ceph05 (leave the passphrase empty), then run ssh-copy-id to copy the public key to the Ansible user on the ceph01, ceph02, and ceph03 systems: Install the ceph-ansible package: Create a symbolic link between these two directories: Create copies of the Ceph sample yml files to modify: Edit the copied group_vars/all.yml file. See General Ansible Settings in Table 3.1 for details. For example: Note that your network device and address range may differ. Edit the copied group_vars/osds.yml file. See the OSD Ansible Settings in Table 3.2 for details. In this example, the second disk device ( /dev/sdb ) on each OSD node is used for both data and journal storage: Edit the /etc/ansible/hosts inventory file to identify the Ceph nodes as Ceph monitor, OSD and manager nodes. In this example, the storage devices are identified on each node as well: Add this line to the /etc/ansible/ansible.cfg file to save the output from each Ansible playbook run into your Ansible user's home directory: Check that Ansible can reach all the Ceph nodes you configured as your Ansible user: Run the ceph-ansible playbook (as your Ansible user): At this point, the Ansible playbook will check your Ceph nodes and configure them for the services you requested. If anything fails, make the needed corrections and rerun the command. Log into one of the three Ceph nodes (ceph01, ceph02, or ceph03) and check the health of the Ceph cluster: On the same node, verify that monitoring is working using rados: 3.4.3. Install the Ceph Object Gateway On the Ansible system (ceph05), configure a Ceph Object Gateway for your Ceph Storage cluster (which will ultimately run on ceph04). See Installing the Ceph Object Gateway for details. 3.5. Set up Redis With Red Hat Enterprise Linux 8 server installed on each of the three Red Hat Quay systems (quay01, quay02, and quay03), install and start the Redis service as follows: Install / Deploy Redis : Run Redis as a container on each of the three quay0* systems: Check redis connectivity : You can use the telnet command to test connectivity to the redis service. Type MONITOR (to begin monitoring the service) and QUIT to exit: Note For more information on using podman and restarting containers, see the section "Using podman" earlier in this document. | [
"setsebool -P haproxy_connect_any=on firewall-cmd --permanent --zone=public --add-port=6379/tcp --add-port=7480/tcp success firewall-cmd --reload success",
"#--------------------------------------------------------------------- common defaults that all the 'listen' and 'backend' sections will use if not designated in their block #--------------------------------------------------------------------- defaults mode tcp log global option httplog option dontlognull option http-server-close option forwardfor except 127.0.0.0/8 option redispatch retries 3 timeout http-request 10s timeout queue 1m timeout connect 10s timeout client 1m timeout server 1m timeout http-keep-alive 10s timeout check 10s maxconn 3000 #--------------------------------------------------------------------- main frontend which proxys to the backends #--------------------------------------------------------------------- frontend fe_http *:80 default_backend be_http frontend fe_https *:443 default_backend be_https frontend fe_redis *:6379 default_backend be_redis frontend fe_rdgw *:7480 default_backend be_rdgw backend be_http balance roundrobin server quay01 quay01:80 check server quay02 quay02:80 check server quay03 quay03:80 check backend be_https balance roundrobin server quay01 quay01:443 check server quay02 quay02:443 check server quay03 quay03:443 check backend be_rdgw balance roundrobin server ceph01 ceph01:7480 check server ceph02 ceph02:7480 check server ceph03 ceph03:7480 check backend be_redis server quay01 quay01:6379 check inter 1s server quay02 quay02:6379 check inter 1s server quay03 quay03:6379 check inter 1s",
"systemctl restart haproxy",
"mkdir -p /var/lib/pgsql/data",
"chmod 777 /var/lib/pgsql/data",
"sudo podman run -d --name postgresql_database -v /var/lib/pgsql/data:/var/lib/pgsql/data:Z -e POSTGRESQL_USER=quayuser -e POSTGRESQL_PASSWORD=quaypass -e POSTGRESQL_DATABASE=quaydb -p 5432:5432 registry.redhat.io/rhel8/postgresql-13:1-109",
"sudo podman exec -it postgresql_database /bin/bash -c 'echo \"SELECT * FROM pg_available_extensions\" | /opt/rh/rh-postgresql96/root/usr/bin/psql'",
"name | default_version | installed_version | comment -----------+-----------------+-------------------+---------------------------------------- adminpack | 1.0 | | administrative functions for PostgreSQL",
"sudo podman exec -it postgresql_database /bin/bash -c 'echo \"CREATE EXTENSION IF NOT EXISTS pg_trgm;\" | /opt/rh/rh-postgresql96/root/usr/bin/psql -d quaydb'",
"sudo podman exec -it postgresql_database /bin/bash -c 'echo \"SELECT * FROM pg_extension\" | /opt/rh/rh-postgresql96/root/usr/bin/psql'",
"extname | extowner | extnamespace | extrelocatable | extversion | extconfig | extcondition ---------+----------+--------------+----------------+------------+-----------+-------------- plpgsql | 10 | 11 | f | 1.0 | | pg_trgm | 10 | 2200 | t | 1.3 | | (2 rows)",
"sudo podman exec -it postgresql_database /bin/bash -c 'echo \"ALTER USER quayuser WITH SUPERUSER;\" | /opt/rh/rh-postgresql96/root/usr/bin/psql'",
"ALTER ROLE",
"firewall-cmd --permanent --zone=trusted --add-port=5432/tcp",
"firewall-cmd --reload",
"yum install postgresql -y",
"psql -h localhost quaydb quayuser",
"Password for user test: psql (9.2.23, server 9.6.5) WARNING: psql version 9.2, server version 9.6. Some psql features might not work. Type \"help\" for help. test=> \\q",
"subscription-manager repos --disable=* subscription-manager repos --enable=rhel-7-server-rpms subscription-manager repos --enable=rhel-7-server-extras-rpms subscription-manager repos --enable=rhel-7-server-rhceph-3-mon-rpms subscription-manager repos --enable=rhel-7-server-rhceph-3-osd-rpms subscription-manager repos --enable=rhel-7-server-rhceph-3-tools-rpms",
"USER_NAME=ansibleadmin useradd USDUSER_NAME -c \"Ansible administrator\" passwd USDUSER_NAME New password: ********* Retype new password: ********* cat << EOF >/etc/sudoers.d/admin admin ALL = (root) NOPASSWD:ALL EOF chmod 0440 /etc/sudoers.d/USDUSER_NAME",
"USER_NAME=ansibleadmin sudo su - USDUSER_NAME [ansibleadmin@ceph05 ~]USD mkdir ~/ceph-ansible-keys",
"USER_NAME=ansibleadmin sudo su - USDUSER_NAME [ansibleadmin@ceph05 ~]USD ssh-keygen [ansibleadmin@ceph05 ~]USD ssh-copy-id USDUSER_NAME@ceph01 [ansibleadmin@ceph05 ~]USD ssh-copy-id USDUSER_NAME@ceph02 [ansibleadmin@ceph05 ~]USD ssh-copy-id USDUSER_NAME@ceph03 [ansibleadmin@ceph05 ~]USD exit #",
"yum install ceph-ansible",
"ln -s /usr/share/ceph-ansible/group_vars /etc/ansible/group_vars",
"cd /usr/share/ceph-ansible cp group_vars/all.yml.sample group_vars/all.yml cp group_vars/osds.yml.sample group_vars/osds.yml cp site.yml.sample site.yml",
"ceph_origin: repository ceph_repository: rhcs ceph_repository_type: cdn ceph_rhcs_version: 3 monitor_interface: eth0 public_network: 192.168.122.0/24",
"osd_scenario: collocated devices: - /dev/sdb dmcrypt: true osd_auto_discovery: false",
"[mons] ceph01 ceph02 ceph03 [osds] ceph01 devices=\"[ '/dev/sdb' ]\" ceph02 devices=\"[ '/dev/sdb' ]\" ceph03 devices=\"[ '/dev/sdb' ]\" [mgrs] ceph01 devices=\"[ '/dev/sdb' ]\" ceph02 devices=\"[ '/dev/sdb' ]\" ceph03 devices=\"[ '/dev/sdb' ]\"",
"retry_files_save_path = ~/",
"USER_NAME=ansibleadmin sudo su - USDUSER_NAME [ansibleadmin@ceph05 ~]USD ansible all -m ping ceph01 | SUCCESS => { \"changed\": false, \"ping\": \"pong\" } ceph02 | SUCCESS => { \"changed\": false, \"ping\": \"pong\" } ceph03 | SUCCESS => { \"changed\": false, \"ping\": \"pong\" } [ansibleadmin@ceph05 ~]USD",
"[ansibleadmin@ceph05 ~]USD cd /usr/share/ceph-ansible/ [ansibleadmin@ceph05 ~]USD ansible-playbook site.yml",
"ceph health HEALTH_OK",
"ceph osd pool create test 8 echo 'Hello World!' > hello-world.txt rados --pool test put hello-world hello-world.txt rados --pool test get hello-world fetch.txt cat fetch.txt Hello World!",
"mkdir -p /var/lib/redis chmod 777 /var/lib/redis sudo podman run -d -p 6379:6379 -v /var/lib/redis:/var/lib/redis/data:Z registry.redhat.io/rhel8/redis-5",
"yum install telnet -y telnet 192.168.122.99 6379 Trying 192.168.122.99 Connected to 192.168.122.99. Escape character is '^]'. MONITOR +OK +1525703165.754099 [0 172.17.0.1:43848] \"PING\" QUIT +OK Connection closed by foreign host."
] | https://docs.redhat.com/en/documentation/red_hat_quay/3.10/html/deploy_red_hat_quay_-_high_availability/preparing_for_red_hat_quay_high_availability |
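To show how the pieces above come together, the fragment below sketches the database, Redis, and storage sections of a Quay config.yaml pointed at this topology (PostgreSQL on db01, Redis and the RADOS gateway reached through the haproxy front end). Field names follow the general Quay configuration schema, but the exact keys, credentials, and bucket name are illustrative assumptions; generate and validate the real file with the Quay configuration tool for your release.

    DB_URI: postgresql://quayuser:quaypass@db01:5432/quaydb   # PostgreSQL on the db01 host
    BUILDLOGS_REDIS:
      host: db01          # haproxy forwards port 6379 to quay01-quay03
      port: 6379
    USER_EVENTS_REDIS:
      host: db01
      port: 6379
    DISTRIBUTED_STORAGE_CONFIG:
      default:
        - RadosGWStorage
        - access_key: <rgw_access_key>
          secret_key: <rgw_secret_key>
          bucket_name: quay
          hostname: db01  # haproxy forwards port 7480 to the RGW nodes
          port: 7480
          is_secure: false
          storage_path: /datastorage/registry
    DISTRIBUTED_STORAGE_PREFERENCE:
      - default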
Chapter 7. Displaying and Raising the Domain Level | Chapter 7. Displaying and Raising the Domain Level The domain level indicates what operations and capabilities are available in the IdM topology. Domain level 1 Examples of available functionality: simplified ipa-replica-install (see Section 4.5, "Creating the Replica: Introduction" ) enhanced topology management (see Chapter 6, Managing Replication Topology ) Important Domain level 1 was introduced in Red Hat Enterprise Linux 7.3 with IdM version 4.4. To use the domain level 1 features, all your replicas must be running Red Hat Enterprise Linux 7.3 or later. If your first server was installed with Red Hat Enterprise Linux 7.3, the domain level for your domain is automatically set to 1. If you upgrade all servers to IdM version 4.4 from earlier versions, the domain level is not raised automatically. If you want to use domain level 1 features, raise the domain level manually, as described in Section 7.2, "Raising the Domain Level" . Domain level 0 Examples of available functionality: ipa-replica-install requires a more complicated process of creating a replica information file on the initial server and copying it to the replica (see Section D.2, "Creating Replicas" ) more complicated and error-prone topology management using ipa-replica-manage and ipa-csreplica-manage (see Section D.3, "Managing Replicas and Replication Agreements" ) 7.1. Displaying the Current Domain Level Command Line: Displaying the Current Domain Level Log in as the administrator: Run the ipa domainlevel-get command: Web UI: Displaying the Current Domain Level Select IPA Server Topology Domain Level . | [
"kinit admin",
"ipa domainlevel-get ----------------------- Current domain level: 0 -----------------------"
] | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/linux_domain_identity_authentication_and_policy_guide/domain-level |
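Raising the level itself, once every replica runs IdM 4.4 or later, is a single additional command. The sketch below assumes an admin Kerberos ticket and that you have confirmed the change is intended, because a raised domain level cannot be lowered again.

    kinit admin
    ipa domainlevel-set 1
    # Expected to report the new value, e.g. "Current domain level: 1"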
Appendix B. Ceph network configuration options | Appendix B. Ceph network configuration options These are the common network configuration options for Ceph. public_network Description The IP address and netmask of the public (front-side) network (for example, 192.168.0.0/24 ). Set in [global] . You can specify comma-delimited subnets. Type <ip-address>/<netmask> [, <ip-address>/<netmask>] Required No Default N/A public_addr Description The IP address for the public (front-side) network. Set for each daemon. Type IP Address Required No Default N/A cluster_network Description The IP address and netmask of the cluster network (for example, 10.0.0.0/24 ). Set in [global] . You can specify comma-delimited subnets. Type <ip-address>/<netmask> [, <ip-address>/<netmask>] Required No Default N/A cluster_addr Description The IP address for the cluster network. Set for each daemon. Type Address Required No Default N/A ms_type Description The messenger type for the network transport layer. Red Hat supports the simple and the async messenger type using posix semantics. Type String. Required No. Default async+posix ms_public_type Description The messenger type for the network transport layer of the public network. It operates identically to ms_type , but is applicable only to the public or front-side network. This setting enables Ceph to use a different messenger type for the public or front-side and cluster or back-side networks. Type String. Required No. Default None. ms_cluster_type Description The messenger type for the network transport layer of the cluster network. It operates identically to ms_type , but is applicable only to the cluster or back-side network. This setting enables Ceph to use a different messenger type for the public or front-side and cluster or back-side networks. Type String. Required No. Default None. Host options You must declare at least one Ceph Monitor in the Ceph configuration file, with a mon addr setting under each declared monitor. Ceph expects a host setting under each declared monitor, metadata server and OSD in the Ceph configuration file. Important Do not use localhost . Use the short name of the node, not the fully-qualified domain name (FQDN). Do not specify any value for host when using a third party deployment system that retrieves the node name for you. mon_addr Description A list of <hostname>:<port> entries that clients can use to connect to a Ceph monitor. If not set, Ceph searches [mon.*] sections. Type String Required No Default N/A host Description The host name. Use this setting for specific daemon instances (for example, [osd.0] ). Type String Required Yes, for daemon instances. Default localhost TCP options Ceph disables TCP buffering by default. ms_tcp_nodelay Description Ceph enables ms_tcp_nodelay so that each request is sent immediately (no buffering). Disabling Nagle's algorithm increases network traffic, which can introduce congestion. If you experience large numbers of small packets, you may try disabling ms_tcp_nodelay , but be aware that disabling it will generally increase latency. Type Boolean Required No Default true ms_tcp_rcvbuf Description The size of the socket buffer on the receiving end of a network connection. Disabled by default. Type 32-bit Integer Required No Default 0 ms_tcp_read_timeout Description If a client or daemon makes a request to another Ceph daemon and does not drop an unused connection, the tcp read timeout defines the connection as idle after the specified number of seconds. 
Type Unsigned 64-bit Integer Required No Default 900 (15 minutes) Bind options The bind options configure the default port ranges for the Ceph OSD daemons. The default range is 6800:7100. You can also enable Ceph daemons to bind to IPv6 addresses. Important Verify that the firewall configuration allows you to use the configured port range. ms_bind_port_min Description The minimum port number to which an OSD daemon will bind. Type 32-bit Integer Default 6800 Required No ms_bind_port_max Description The maximum port number to which an OSD daemon will bind. Type 32-bit Integer Default 7300 Required No ms_bind_ipv6 Description Enables Ceph daemons to bind to IPv6 addresses. Type Boolean Default false Required No Asynchronous messenger options These Ceph messenger options configure the behavior of AsyncMessenger . ms_async_transport_type Description Transport type used by the AsyncMessenger . Red Hat supports the posix setting, but does not support the dpdk or rdma settings at this time. POSIX uses standard TCP/IP networking and is the default value. Other transport types are experimental and are NOT supported. Type String Required No Default posix ms_async_op_threads Description Initial number of worker threads used by each AsyncMessenger instance. This configuration setting SHOULD equal the number of replicas or erasure code chunks, but it may be set lower if the CPU core count is low or the number of OSDs on a single server is high. Type 64-bit Unsigned Integer Required No Default 3 ms_async_max_op_threads Description The maximum number of worker threads used by each AsyncMessenger instance. Set it to lower values if the OSD host has a limited CPU count, and increase it if the CPUs are underutilized. Type 64-bit Unsigned Integer Required No Default 5 ms_async_set_affinity Description Set to true to bind AsyncMessenger workers to particular CPU cores. Type Boolean Required No Default true ms_async_affinity_cores Description When ms_async_set_affinity is true , this string specifies how AsyncMessenger workers are bound to CPU cores. For example, 0,2 will bind workers #1 and #2 to CPU cores #0 and #2, respectively. NOTE: When manually setting affinity, make sure not to assign workers to virtual CPUs created as an effect of hyper-threading or similar technology, because they are slower than physical CPU cores. Type String Required No Default (empty) ms_async_send_inline Description Send messages directly from the thread that generated them instead of queuing and sending from the AsyncMessenger thread. This option is known to decrease performance on systems with a lot of CPU cores, so it is disabled by default. Type Boolean Required No Default false | null | https://docs.redhat.com/en/documentation/red_hat_ceph_storage/4/html/configuration_guide/ceph-network-configuration-options_conf
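To see how several of these options combine in practice, a minimal configuration fragment might look like the sketch below. The subnets, addresses, and host name are example values only, and any option you omit keeps the default listed above.

    [global]
    public_network = 192.168.0.0/24       # front-side (client) traffic
    cluster_network = 10.0.0.0/24         # back-side (replication) traffic
    ms_type = async+posix                 # default messenger type
    ms_bind_port_min = 6800
    ms_bind_port_max = 7300
    ms_tcp_nodelay = true

    [osd.0]
    host = osd-node-1                     # short host name, not the FQDN
    public_addr = 192.168.0.11
    cluster_addr = 10.0.0.11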
Chapter 10. HelmChartRepository [helm.openshift.io/v1beta1] | Chapter 10. HelmChartRepository [helm.openshift.io/v1beta1] Description HelmChartRepository holds cluster-wide configuration for proxied Helm chart repository Compatibility level 2: Stable within a major release for a minimum of 9 months or 3 minor releases (whichever is longer). Type object Required spec 10.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec object spec holds user settable values for configuration status object Observed status of the repository within the cluster.. 10.1.1. .spec Description spec holds user settable values for configuration Type object Property Type Description connectionConfig object Required configuration for connecting to the chart repo description string Optional human readable repository description, it can be used by UI for displaying purposes disabled boolean If set to true, disable the repo usage in the cluster/namespace name string Optional associated human readable repository name, it can be used by UI for displaying purposes 10.1.2. .spec.connectionConfig Description Required configuration for connecting to the chart repo Type object Property Type Description ca object ca is an optional reference to a config map by name containing the PEM-encoded CA bundle. It is used as a trust anchor to validate the TLS certificate presented by the remote server. The key "ca-bundle.crt" is used to locate the data. If empty, the default system roots are used. The namespace for this config map is openshift-config. tlsClientConfig object tlsClientConfig is an optional reference to a secret by name that contains the PEM-encoded TLS client certificate and private key to present when connecting to the server. The key "tls.crt" is used to locate the client certificate. The key "tls.key" is used to locate the private key. The namespace for this secret is openshift-config. url string Chart repository URL 10.1.3. .spec.connectionConfig.ca Description ca is an optional reference to a config map by name containing the PEM-encoded CA bundle. It is used as a trust anchor to validate the TLS certificate presented by the remote server. The key "ca-bundle.crt" is used to locate the data. If empty, the default system roots are used. The namespace for this config map is openshift-config. Type object Required name Property Type Description name string name is the metadata.name of the referenced config map 10.1.4. .spec.connectionConfig.tlsClientConfig Description tlsClientConfig is an optional reference to a secret by name that contains the PEM-encoded TLS client certificate and private key to present when connecting to the server. The key "tls.crt" is used to locate the client certificate. The key "tls.key" is used to locate the private key. 
The namespace for this secret is openshift-config. Type object Required name Property Type Description name string name is the metadata.name of the referenced secret 10.1.5. .status Description Observed status of the repository within the cluster.. Type object Property Type Description conditions array conditions is a list of conditions and their statuses conditions[] object Condition contains details for one aspect of the current state of this API Resource. --- This struct is intended for direct use as an array at the field path .status.conditions. For example, type FooStatus struct{ // Represents the observations of a foo's current state. // Known .status.conditions.type are: "Available", "Progressing", and "Degraded" // +patchMergeKey=type // +patchStrategy=merge // +listType=map // +listMapKey=type Conditions []metav1.Condition json:"conditions,omitempty" patchStrategy:"merge" patchMergeKey:"type" protobuf:"bytes,1,rep,name=conditions" // other fields } 10.1.6. .status.conditions Description conditions is a list of conditions and their statuses Type array 10.1.7. .status.conditions[] Description Condition contains details for one aspect of the current state of this API Resource. --- This struct is intended for direct use as an array at the field path .status.conditions. For example, type FooStatus struct{ // Represents the observations of a foo's current state. // Known .status.conditions.type are: "Available", "Progressing", and "Degraded" // +patchMergeKey=type // +patchStrategy=merge // +listType=map // +listMapKey=type Conditions []metav1.Condition json:"conditions,omitempty" patchStrategy:"merge" patchMergeKey:"type" protobuf:"bytes,1,rep,name=conditions" // other fields } Type object Required lastTransitionTime message reason status type Property Type Description lastTransitionTime string lastTransitionTime is the last time the condition transitioned from one status to another. This should be when the underlying condition changed. If that is not known, then using the time when the API field changed is acceptable. message string message is a human readable message indicating details about the transition. This may be an empty string. observedGeneration integer observedGeneration represents the .metadata.generation that the condition was set based upon. For instance, if .metadata.generation is currently 12, but the .status.conditions[x].observedGeneration is 9, the condition is out of date with respect to the current state of the instance. reason string reason contains a programmatic identifier indicating the reason for the condition's last transition. Producers of specific condition types may define expected values and meanings for this field, and whether the values are considered a guaranteed API. The value should be a CamelCase string. This field may not be empty. status string status of the condition, one of True, False, Unknown. type string type of condition in CamelCase or in foo.example.com/CamelCase. --- Many .condition.type values are consistent across resources like Available, but because arbitrary conditions can be useful (see .node.status.conditions), the ability to deconflict is important. The regex it matches is (dns1123SubdomainFmt/)?(qualifiedNameFmt) 10.2. 
API endpoints The following API endpoints are available: /apis/helm.openshift.io/v1beta1/helmchartrepositories DELETE : delete collection of HelmChartRepository GET : list objects of kind HelmChartRepository POST : create a HelmChartRepository /apis/helm.openshift.io/v1beta1/helmchartrepositories/{name} DELETE : delete a HelmChartRepository GET : read the specified HelmChartRepository PATCH : partially update the specified HelmChartRepository PUT : replace the specified HelmChartRepository /apis/helm.openshift.io/v1beta1/helmchartrepositories/{name}/status GET : read status of the specified HelmChartRepository PATCH : partially update status of the specified HelmChartRepository PUT : replace status of the specified HelmChartRepository 10.2.1. /apis/helm.openshift.io/v1beta1/helmchartrepositories Table 10.1. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method DELETE Description delete collection of HelmChartRepository Table 10.2. Query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. 
If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset sendInitialEvents boolean sendInitialEvents=true may be set together with watch=true . In that case, the watch stream will begin with synthetic events to produce the current state of objects in the collection. Once all such events have been sent, a synthetic "Bookmark" event will be sent. The bookmark will report the ResourceVersion (RV) corresponding to the set of objects, and be marked with "k8s.io/initial-events-end": "true" annotation. Afterwards, the watch stream will proceed as usual, sending watch events corresponding to changes (subsequent to the RV) to objects watched. When sendInitialEvents option is set, we require resourceVersionMatch option to also be set. The semantic of the watch request is as following: - resourceVersionMatch = NotOlderThan is interpreted as "data at least as new as the provided resourceVersion`" and the bookmark event is send when the state is synced to a `resourceVersion at least as fresh as the one provided by the ListOptions. If resourceVersion is unset, this is interpreted as "consistent read" and the bookmark event is send when the state is synced at least to the moment when request started being processed. - resourceVersionMatch set to any other value or unset Invalid error is returned. Defaults to true if resourceVersion="" or resourceVersion="0" (for backward compatibility reasons) and to false otherwise. timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. Table 10.3. HTTP responses HTTP code Reponse body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list objects of kind HelmChartRepository Table 10.4. Query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. 
Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. 
Defaults to unset sendInitialEvents boolean sendInitialEvents=true may be set together with watch=true . In that case, the watch stream will begin with synthetic events to produce the current state of objects in the collection. Once all such events have been sent, a synthetic "Bookmark" event will be sent. The bookmark will report the ResourceVersion (RV) corresponding to the set of objects, and be marked with "k8s.io/initial-events-end": "true" annotation. Afterwards, the watch stream will proceed as usual, sending watch events corresponding to changes (subsequent to the RV) to objects watched. When sendInitialEvents option is set, we require resourceVersionMatch option to also be set. The semantic of the watch request is as following: - resourceVersionMatch = NotOlderThan is interpreted as "data at least as new as the provided resourceVersion`" and the bookmark event is send when the state is synced to a `resourceVersion at least as fresh as the one provided by the ListOptions. If resourceVersion is unset, this is interpreted as "consistent read" and the bookmark event is send when the state is synced at least to the moment when request started being processed. - resourceVersionMatch set to any other value or unset Invalid error is returned. Defaults to true if resourceVersion="" or resourceVersion="0" (for backward compatibility reasons) and to false otherwise. timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. Table 10.5. HTTP responses HTTP code Reponse body 200 - OK HelmChartRepositoryList schema 401 - Unauthorized Empty HTTP method POST Description create a HelmChartRepository Table 10.6. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 10.7. Body parameters Parameter Type Description body HelmChartRepository schema Table 10.8. 
HTTP responses HTTP code Reponse body 200 - OK HelmChartRepository schema 201 - Created HelmChartRepository schema 202 - Accepted HelmChartRepository schema 401 - Unauthorized Empty 10.2.2. /apis/helm.openshift.io/v1beta1/helmchartrepositories/{name} Table 10.9. Global path parameters Parameter Type Description name string name of the HelmChartRepository Table 10.10. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method DELETE Description delete a HelmChartRepository Table 10.11. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed gracePeriodSeconds integer The duration in seconds before the object should be deleted. Value must be non-negative integer. The value zero indicates delete immediately. If this value is nil, the default grace period for the specified type will be used. Defaults to a per object value if not specified. zero means delete immediately. orphanDependents boolean Deprecated: please use the PropagationPolicy, this field will be deprecated in 1.7. Should the dependent objects be orphaned. If true/false, the "orphan" finalizer will be added to/removed from the object's finalizers list. Either this field or PropagationPolicy may be set, but not both. propagationPolicy string Whether and how garbage collection will be performed. Either this field or OrphanDependents may be set, but not both. The default policy is decided by the existing finalizer set in the metadata.finalizers and the resource-specific default policy. Acceptable values are: 'Orphan' - orphan the dependents; 'Background' - allow the garbage collector to delete the dependents in the background; 'Foreground' - a cascading policy that deletes all dependents in the foreground. Table 10.12. Body parameters Parameter Type Description body DeleteOptions schema Table 10.13. HTTP responses HTTP code Reponse body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified HelmChartRepository Table 10.14. Query parameters Parameter Type Description resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset Table 10.15. HTTP responses HTTP code Reponse body 200 - OK HelmChartRepository schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified HelmChartRepository Table 10.16. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . This field is required for apply requests (application/apply-patch) but optional for non-apply patch types (JsonPatch, MergePatch, StrategicMergePatch). 
fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. force boolean Force is going to "force" Apply requests. It means user will re-acquire conflicting fields owned by other people. Force flag must be unset for non-apply patch requests. Table 10.17. Body parameters Parameter Type Description body Patch schema Table 10.18. HTTP responses HTTP code Reponse body 200 - OK HelmChartRepository schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified HelmChartRepository Table 10.19. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 10.20. Body parameters Parameter Type Description body HelmChartRepository schema Table 10.21. HTTP responses HTTP code Reponse body 200 - OK HelmChartRepository schema 201 - Created HelmChartRepository schema 401 - Unauthorized Empty 10.2.3. /apis/helm.openshift.io/v1beta1/helmchartrepositories/{name}/status Table 10.22. Global path parameters Parameter Type Description name string name of the HelmChartRepository Table 10.23. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. 
HTTP method GET Description read status of the specified HelmChartRepository Table 10.24. Query parameters Parameter Type Description resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset Table 10.25. HTTP responses HTTP code Reponse body 200 - OK HelmChartRepository schema 401 - Unauthorized Empty HTTP method PATCH Description partially update status of the specified HelmChartRepository Table 10.26. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . This field is required for apply requests (application/apply-patch) but optional for non-apply patch types (JsonPatch, MergePatch, StrategicMergePatch). fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. force boolean Force is going to "force" Apply requests. It means user will re-acquire conflicting fields owned by other people. Force flag must be unset for non-apply patch requests. Table 10.27. Body parameters Parameter Type Description body Patch schema Table 10.28. HTTP responses HTTP code Reponse body 200 - OK HelmChartRepository schema 401 - Unauthorized Empty HTTP method PUT Description replace status of the specified HelmChartRepository Table 10.29. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. 
Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 10.30. Body parameters Parameter Type Description body HelmChartRepository schema Table 10.31. HTTP responses HTTP code Reponse body 200 - OK HelmChartRepository schema 201 - Created HelmChartRepository schema 401 - Unauthorized Empty | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.14/html/config_apis/helmchartrepository-helm-openshift-io-v1beta1 |
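The endpoint tables above are easiest to exercise through the oc client. The following is a minimal sketch of creating and reading back a HelmChartRepository; the repository name and chart URL are placeholders, not values taken from this reference.

# Create a cluster-scoped HelmChartRepository (name and URL are placeholders)
oc apply -f - <<'EOF'
apiVersion: helm.openshift.io/v1beta1
kind: HelmChartRepository
metadata:
  name: example-charts
spec:
  connectionConfig:
    url: https://charts.example.com
EOF

# Read it back through the same API group and version documented above
oc get helmchartrepositories
oc get --raw /apis/helm.openshift.io/v1beta1/helmchartrepositories/example-charts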
Chapter 3. Additional automated rule functions | Chapter 3. Additional automated rule functions From the Cryostat web console, you access additional automated rule capabilities, such as deleting an automated rule or copying JFR. If you created Cryostat 2.3, and then upgraded from Cryostat 2.3 to Cryostat 2.4, Cryostat 2.4 automatically detects these automated rules. 3.1. Automated rule migration Cryostat 2.4 automatically scans for any automated rules that show in the Automated Rules table. If you created an automated rule with Cryostat 2.3, and then upgraded to Cryostat 2.4, Cryostat 2.4 can apply this rule to any selected applicable JVM applications. No differences exist on how Cryostat tests the JVM application against older automated rules. If Cryostat detects a match between the rule definition and a selected JVM application, Cryostat automatically starts a JFR recording of the application. Cryostat 2.4 bases the JFR recording's settings only on what you defined in the automated rule. This means that you do not need to reconfigure your automated rule to facilitate its migration to Cryostat 2.4. 3.2. Copying JFR data You can copy information from a JVM application's memory to Cryostat's archive storage location on the OpenShift Container Platform (OCP). During the creation of an automated rule through the Cryostat web console, you can set a value in the Archival Period field. You can specify a numerical value in seconds, minutes, or hours. After you create the automated rule with a specified archival period, Cryostat re-connects with any targeted JVM applications that match the rule. Cryostat then copies any generated JFR recording data from the application's memory to Cryostat's archive storage location. Additionally, you can populate the Preserved Archives field with a value. This field sets a limit to the amount of copies of a JFR recording that Cryostat can move from an application's memory to Cryostat's archive storage location. For example, if you set a value of 10 in the Preserved Archives field, Cryostat will not store more than 10 copies of the file in the archive storage location. When Cryostat generates a new copy of the file that exceeds the limit, Cryostat replaces the oldest version with the newest version of the file. You can also set a size limit for a JFR recording file and specify a time limit for how long a file is stored in the target JVM application's memory by specifying values for the Maximum Size and Maximum Age parameters. Prerequisites Created a Cryostat instance in your Red Hat OpenShift project. Created a Java application. Logged in to your Cryostat web console. Procedure In the navigation menu on the Cryostat web console, click Automated Rules . The Automated Rules window opens. Click Create . The Create window opens. Enter values in any mandatory fields, such as the Match Expression field. In the Archival Period field, specify a value in seconds, minutes, or hours. In the Preserved Archives field, enter the number of archived recording copies to preserve. To create the automated rule, click Create . The Automated Rules window opens and displays your automated rule in a table. 3.3. Deleting an automated rule The Cryostat web console that runs on the OpenShift Container Platform (OCP) provides a simplified method for deleting a rule definition. You can also use the curl tool to delete an automated rule. The curl tool communicates with your Cryostat instance by using the DELETE endpoint. 
In the request, you can specify the clean=true parameter , which stops all active Java Flight Recordings (JFRs) started by the selected rule. Prerequisites Logged in to the Cryostat web console. Created an automated rule. Procedure In the navigation menu on the Cryostat web console, click Automated Rules . The Automated Rules window opens and displays all existing automated rules in a table. Note If you have not created an automated rule, only a Create button appears on the Automated Rules window. In the table, select the automated rule that you want to delete. Click the more options icon (...) , then click Delete . Figure 3.1. Delete option from the Automated Rules table The Permanently delete your Automated Rule window opens. To delete the selected automated rule, click Delete . If you want to also stop any active recordings that were created by the selected rule, select Clean then click Delete . Cryostat deletes your automated rule permanently. Revised on 2023-12-12 18:12:02 UTC | null | https://docs.redhat.com/en/documentation/red_hat_build_of_cryostat/2/html/using_automated_rules_on_cryostat/additional-automated-rule-functions_con_metadata-labels-auto-rules |
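As a companion to the web console procedure, the DELETE endpoint mentioned above can be called with curl. This is a hedged sketch only: the namespace, route name, rule name, and API path follow common Cryostat 2.x conventions and should be checked against your own deployment.

# Resolve the externally exposed Cryostat route (namespace and route name are assumptions)
CRYOSTAT_URL="https://$(oc get route cryostat -n cryostat -o jsonpath='{.spec.host}')"

# Delete the rule named example-rule and stop the recordings it started (clean=true)
curl -k -X DELETE \
  -H "Authorization: Bearer $(oc whoami -t)" \
  "${CRYOSTAT_URL}/api/v2/rules/example-rule?clean=true"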
1.10. Using Directory Server Plug-ins | 1.10. Using Directory Server Plug-ins Directory Server provides several core plug-ins, such as for replication, class of service, and attribute syntax validation. Core plug-ins are enabled by default. Additionally, the Directory Server packages contain further plug-ins to enhance the functionality, such as for attribute uniqueness and attribute linking. However, not all of these plug-ins are enabled by default. For further details, see the Plug-in Implemented Server Functionality Reference chapter in the Red Hat Directory Server Configuration, Command, and File Reference . 1.10.1. Listing Available Plug-ins 1.10.1.1. Listing Available Plug-ins Using the Command Line To list all available plug-ins using the command line: You require the exact name of the plug-in, for example, to enable or disable it using the command line. 1.10.1.2. Listing Available Plug-ins Using the Web Console To display all available plug-ins using the web console: Open the Directory Server user interface in the web console. See Section 1.4, "Logging Into Directory Server Using the Web Console" . Select the instance. Select the Plugins menu. Optionally, you can filter the plug-ins by entering a name into the Filter Plugins field. 1.10.2. Enabling and Disabling Plug-ins 1.10.2.1. Enabling and Disabling Plug-ins Using the Command Line To enable or disable a plug-in using the command line, use the dsconf utility. Note The dsconf command requires that you provide the name of the plug-in. For details about displaying the names of all plug-ins, see Section 1.10.1.1, "Listing Available Plug-ins Using the Command Line" . For example, to enable the Automember plug-in: Enable the plug-in: Restart the instance: 1.10.2.2. Enabling and Disabling Plug-ins Using the Web Console To enable or disable a plug-in using the web console: Open the Directory Server user interface in the web console. See Section 1.4, "Logging Into Directory Server Using the Web Console" . Select the instance. Select the Plugins menu. Select the All Plugins tab. Click the Edit Plugin button to the right of the plug-in you want to enable or disable. Change the status to ON to enable or to OFF to disable the plug-in. Restart the instance. See Section 1.5.2, "Starting and Stopping a Directory Server Instance Using the Web Console" . 1.10.3. Configuring Plug-ins 1.10.3.1. Configuring Plug-ins Using the Command Line To configure plug-in settings, use the dsconf plugin command: For a list of plug-ins you can configure, enter: 1.10.3.2. Configuring Plug-ins Using the Web Console To configure a plug-in using the web console: Open the Directory Server user interface in the web console. See Section 1.4, "Logging Into Directory Server Using the Web Console" . Select the instance. Select the Plugins menu. Select the All Plugins tab. Select the plug-in and click Show Advanced Settings . Open the plug-in-specific tab. Set the appropriate settings. Restart the instance. See Section 1.5.2, "Starting and Stopping a Directory Server Instance Using the Web Console" . 1.10.4. Setting the Plug-in Precedence The plug-in precedence is the priority it has in the execution order of plug-ins. For pre- and post-operation plug-ins, this enables a plug-in to be executed and complete before the plug-in is initiated, to let the plug-in take advantage of the plug-in's results. The precedence can be set to a value from 1 (highest priority) to 99 (lowest priority). If no precedence is set, the default is 50 . 
Warning Set a precedence value only in custom plug-ins. Updating the value of core plug-ins can cause Directory Server to not work as expected and is not supported by Red Hat. 1.10.4.1. Setting the Plug-in Precedence Using the Command Line To update the precedence value of a plug-in using the command line: Set the precedence of the plug-in. For example, to set the precedence for the example plug-in to 1 : Restart the instance: 1.10.4.2. Setting the Plug-in Precedence Using the Web Console To update the precedence value of a plug-in using the web console: Open the Directory Server user interface in the web console. See Section 1.4, "Logging Into Directory Server Using the Web Console" . Select the instance. Open the Plugins menu. Select All Plugins . Press the Edit Plugin button next to the plug-in for which you want to configure the precedence value. Update the value in the Plugin Precedence field. Click Save . Restart the instance. See Section 1.5.2, "Starting and Stopping a Directory Server Instance Using the Web Console" . | [
"dsconf -D \"cn=Directory Manager\" ldap://server.example.com plugin list 7-bit check Account Policy Plugin",
"dsconf -D \"cn=Directory Manager\" ldap://server.example.com plugin automember enable",
"dsctl instance_name restart",
"dsconf -D \"cn=Directory Manager\" ldap://server.example.com plugin plug-in-specific_subcommand",
"dsconf -D \"cn=Directory Manager\" ldap://server.example.com plugin --help",
"dsconf -D \"cn=Directory Manager\" ldap://server.example.com plugin edit example --precedence 1",
"dsctl instance_name restart"
] | https://docs.redhat.com/en/documentation/red_hat_directory_server/11/html/administration_guide/using_directory_server_plug-ins |
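Besides dsconf and the web console, a plug-in's state can be confirmed directly in the cn=config tree, because each plug-in stores its settings in an entry under cn=plugins,cn=config. The plug-in entry name and connection details below are examples; the attribute names are the standard ones used by Directory Server.

# Check whether the Auto Membership plug-in is enabled and read its precedence
ldapsearch -x -D "cn=Directory Manager" -W -H ldap://server.example.com \
    -b "cn=Auto Membership Plugin,cn=plugins,cn=config" \
    nsslapd-pluginEnabled nsslapd-pluginPrecedence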
4.20. IBM BladeCenter over SNMP | 4.20. IBM BladeCenter over SNMP Table 4.21, "IBM BladeCenter SNMP" lists the fence device parameters used by fence_ibmblade , the fence agent for IBM BladeCenter over SNMP. Table 4.21. IBM BladeCenter SNMP luci Field cluster.conf Attribute Description Name name A name for the IBM BladeCenter SNMP device connected to the cluster. IP Address or Hostname ipaddr The IP address or host name assigned to the device. UDP/TCP Port (optional) udpport UDP/TCP port to use for connections with the device; the default value is 161. Login login The login name used to access the device. Password passwd The password used to authenticate the connection to the device. Password Script (optional) passwd_script The script that supplies a password for access to the fence device. Using this supersedes the Password parameter. SNMP Version snmp_version The SNMP version to use (1, 2c, 3); the default value is 1. SNMP Community community The SNMP community string. SNMP Security Level snmp_sec_level The SNMP security level (noAuthNoPriv, authNoPriv, authPriv). SNMP Authentication Protocol snmp_auth_prot The SNMP authentication protocol (MD5, SHA). SNMP Privacy Protocol snmp_priv_prot The SNMP privacy protocol (DES, AES). SNMP privacy protocol password snmp_priv_passwd The SNMP Privacy Protocol Password. SNMP Privacy Protocol Script snmp_priv_passwd_script The script that supplies a password for SNMP privacy protocol. Using this supersedes the SNMP privacy protocol password parameter. Power Wait (seconds) power_wait Number of seconds to wait after issuing a power off or power on command. Power Timeout (seconds) power_timeout Number of seconds to continue testing for a status change after issuing a power off or power on command. The default value is 20. Shell Timeout (seconds) shell_timeout Number of seconds to wait for a command prompt after issuing a command. The default value is 3. Login Timeout (seconds) login_timeout Number of seconds to wait for a command prompt after login. The default value is 5. Times to Retry Power On Operation retry_on Number of attempts to retry a power on operation. The default value is 1. Port (Outlet) Number port Physical plug number or name of virtual machine. Delay (optional) delay The number of seconds to wait before fencing is started. The default value is 0. Figure 4.15, "IBM BladeCenter SNMP" shows the configuration screen for adding an IBM BladeCenter SNMP fence device. Figure 4.15. IBM BladeCenter SNMP The following command creates a fence device instance for an IBM BladeCenter SNMP device: The following is the cluster.conf entry for the fence_ibmblade device: | [
"ccs -f cluster.conf --addfencedev bladesnmp1 agent=fence_ibmblade community=private ipaddr=192.168.0.1 login=root passwd=password123 snmp_priv_passwd=snmpasswd123 power_wait=60",
"<fencedevices> <fencedevice agent=\"fence_ibmblade\" community=\"private\" ipaddr=\"192.168.0.1\" login=\"root\" name=\"bladesnmp1\" passwd=\"password123\" power_wait=\"60\" snmp_priv_passwd=\"snmpasswd123\" udpport=\"161\"/> </fencedevices>"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/fence_configuration_guide/s1-software-fence-bladectr-snmp-CA |
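Before relying on the cluster.conf entry, the fence agent can be tested directly from a cluster node. The following sketch reuses the example address, credentials, and blade number from above; the long option names follow the usual fence-agent conventions, so confirm them with fence_ibmblade --help on your release.

# Query the power status of blade 1 through the management module's SNMP interface
fence_ibmblade --ip=192.168.0.1 --username=root --password=password123 \
    --community=private --plug=1 --action=status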
Chapter 9. Adding file and object storage to an existing external OpenShift Data Foundation cluster | Chapter 9. Adding file and object storage to an existing external OpenShift Data Foundation cluster When OpenShift Data Foundation is configured in external mode, there are several ways to provide storage for persistent volume claims and object bucket claims. Persistent volume claims for block storage are provided directly from the external Red Hat Ceph Storage cluster. Persistent volume claims for file storage can be provided by adding a Metadata Server (MDS) to the external Red Hat Ceph Storage cluster. Object bucket claims for object storage can be provided either by using the Multicloud Object Gateway or by adding the Ceph Object Gateway to the external Red Hat Ceph Storage cluster. Use the following process to add file storage (using Metadata Servers) or object storage (using Ceph Object Gateway) or both to an external OpenShift Data Foundation cluster that was initially deployed to provide only block storage. Prerequisites OpenShift Data Foundation 4.17 is installed and running on the OpenShift Container Platform version 4.17 or above. Also, the OpenShift Data Foundation Cluster in external mode is in the Ready state. Your external Red Hat Ceph Storage cluster is configured with one or both of the following: a Ceph Object Gateway (RGW) endpoint that can be accessed by the OpenShift Container Platform cluster for object storage a Metadata Server (MDS) pool for file storage Ensure that you know the parameters used with the ceph-external-cluster-details-exporter.py script during external OpenShift Data Foundation cluster deployment. Procedure Download the OpenShift Data Foundation version of the ceph-external-cluster-details-exporter.py python script using one of the following methods, either CSV or ConfigMap. Important Downloading the ceph-external-cluster-details-exporter.py python script using CSV will no longer be supported from version OpenShift Data Foundation 4.19 and onward. Using the ConfigMap will be the only supported method. CSV ConfigMap Update permission caps on the external Red Hat Ceph Storage cluster by running ceph-external-cluster-details-exporter.py on any client node in the external Red Hat Ceph Storage cluster. You may need to ask your Red Hat Ceph Storage administrator to do this. --run-as-user The client name used during OpenShift Data Foundation cluster deployment. Use the default client name client.healthchecker if a different client name was not set. --rgw-pool-prefix The prefix used for the Ceph Object Gateway pool. This can be omitted if the default prefix is used. Generate and save configuration details from the external Red Hat Ceph Storage cluster. Generate configuration details by running ceph-external-cluster-details-exporter.py on any client node in the external Red Hat Ceph Storage cluster. --monitoring-endpoint Is optional. It accepts comma separated list of IP addresses of active and standby mgrs reachable from the OpenShift Container Platform cluster. If not provided, the value is automatically populated. --monitoring-endpoint-port Is optional. It is the port associated with the ceph-mgr Prometheus exporter specified by --monitoring-endpoint . If not provided, the value is automatically populated. --run-as-user The client name used during OpenShift Data Foundation cluster deployment. Use the default client name client.healthchecker if a different client name was not set. 
--rgw-endpoint Provide this parameter to provision object storage through Ceph Object Gateway for OpenShift Data Foundation. (optional parameter) --rgw-pool-prefix The prefix used for the Ceph Object Gateway pool. This can be omitted if the default prefix is used. User permissions are updated as shown: Note Ensure that all the parameters (including the optional arguments) except the Ceph Object Gateway details (if provided), are the same as what was used during the deployment of OpenShift Data Foundation in external mode. Save the output of the script in an external-cluster-config.json file. The following example output shows the generated configuration changes in bold text. Upload the generated JSON file. Log in to the OpenShift web console. Click Workloads Secrets . Set project to openshift-storage . Click on rook-ceph-external-cluster-details . Click Actions (...) Edit Secret Click Browse and upload the external-cluster-config.json file. Click Save . Verification steps To verify that the OpenShift Data Foundation cluster is healthy and data is resilient, navigate to Storage Data foundation Storage Systems tab and then click on the storage system name. On the Overview Block and File tab, check the Status card to confirm that the Storage Cluster has a green tick indicating it is healthy. If you added a Metadata Server for file storage: Click Workloads Pods and verify that csi-cephfsplugin-* pods are created new and are in the Running state. Click Storage Storage Classes and verify that the ocs-external-storagecluster-cephfs storage class is created. If you added the Ceph Object Gateway for object storage: Click Storage Storage Classes and verify that the ocs-external-storagecluster-ceph-rgw storage class is created. To verify that the OpenShift Data Foundation cluster is healthy and data is resilient, navigate to Storage Data foundation Storage Systems tab and then click on the storage system name. Click the Object tab and confirm Object Service and Data resiliency has a green tick indicating it is healthy. | [
"oc get csv USD(oc get csv -n openshift-storage | grep rook-ceph-operator | awk '{print USD1}') -n openshift-storage -o jsonpath='{.metadata.annotations.externalClusterScript}' | base64 --decode >ceph-external-cluster-details-exporter.py",
"oc get cm rook-ceph-external-cluster-script-config -n openshift-storage -o jsonpath='{.data.script}' | base64 --decode > ceph-external-cluster-details-exporter.py",
"python3 ceph-external-cluster-details-exporter.py --upgrade --run-as-user= ocs-client-name --rgw-pool-prefix rgw-pool-prefix",
"python3 ceph-external-cluster-details-exporter.py --rbd-data-pool-name rbd-block-pool-name --monitoring-endpoint ceph-mgr-prometheus-exporter-endpoint --monitoring-endpoint-port ceph-mgr-prometheus-exporter-port --run-as-user ocs-client-name --rgw-endpoint rgw-endpoint --rgw-pool-prefix rgw-pool-prefix",
"caps: [mgr] allow command config caps: [mon] allow r, allow command quorum_status, allow command version caps: [osd] allow rwx pool=default.rgw.meta, allow r pool=.rgw.root, allow rw pool=default.rgw.control, allow rx pool=default.rgw.log, allow x pool=default.rgw.buckets.index",
"[{\"name\": \"rook-ceph-mon-endpoints\", \"kind\": \"ConfigMap\", \"data\": {\"data\": \"xxx.xxx.xxx.xxx:xxxx\", \"maxMonId\": \"0\", \"mapping\": \"{}\"}}, {\"name\": \"rook-ceph-mon\", \"kind\": \"Secret\", \"data\": {\"admin-secret\": \"admin-secret\", \"fsid\": \"<fs-id>\", \"mon-secret\": \"mon-secret\"}}, {\"name\": \"rook-ceph-operator-creds\", \"kind\": \"Secret\", \"data\": {\"userID\": \"<user-id>\", \"userKey\": \"<user-key>\"}}, {\"name\": \"rook-csi-rbd-node\", \"kind\": \"Secret\", \"data\": {\"userID\": \"csi-rbd-node\", \"userKey\": \"<user-key>\"}}, {\"name\": \"ceph-rbd\", \"kind\": \"StorageClass\", \"data\": {\"pool\": \"<pool>\"}}, {\"name\": \"monitoring-endpoint\", \"kind\": \"CephCluster\", \"data\": {\"MonitoringEndpoint\": \"xxx.xxx.xxx.xxx\", \"MonitoringPort\": \"xxxx\"}}, {\"name\": \"rook-ceph-dashboard-link\", \"kind\": \"Secret\", \"data\": {\"userID\": \"ceph-dashboard-link\", \"userKey\": \"<user-key>\"}}, {\"name\": \"rook-csi-rbd-provisioner\", \"kind\": \"Secret\", \"data\": {\"userID\": \"csi-rbd-provisioner\", \"userKey\": \"<user-key>\"}}, {\"name\": \"rook-csi-cephfs-provisioner\", \"kind\": \"Secret\", \"data\": {\"adminID\": \"csi-cephfs-provisioner\", \"adminKey\": \"<admin-key>\"}}, {\"name\": \"rook-csi-cephfs-node\", \"kind\": \"Secret\", \"data\": {\"adminID\": \"csi-cephfs-node\", \"adminKey\": \"<admin-key>\"}}, {\"name\": \"cephfs\", \"kind\": \"StorageClass\", \"data\": {\"fsName\": \"cephfs\", \"pool\": \"cephfs_data\"}}, {\"name\": \"ceph-rgw\", \"kind\": \"StorageClass\", \"data\": {\"endpoint\": \"xxx.xxx.xxx.xxx:xxxx\", \"poolPrefix\": \"default\"}}, {\"name\": \"rgw-admin-ops-user\", \"kind\": \"Secret\", \"data\": {\"accessKey\": \"<access-key>\", \"secretKey\": \"<secret-key>\"}} ]"
] | https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.18/html/managing_and_allocating_storage_resources/adding-file-and-object-storage-to-an-existing-external-ocs-cluster |
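The verification steps above can also be run from the command line. These commands only read state and assume the default openshift-storage namespace and the external-mode storage class names used in this procedure.

# CephFS CSI pods that appear once the MDS-backed file storage is detected
oc get pods -n openshift-storage -l app=csi-cephfsplugin

# Storage classes created for file and object storage in external mode
oc get storageclass ocs-external-storagecluster-cephfs
oc get storageclass ocs-external-storagecluster-ceph-rgw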
19.2. Remove Red Hat JBoss Data Grid from Your Windows System | 19.2. Remove Red Hat JBoss Data Grid from Your Windows System The following procedures contain instructions to remove Red Hat JBoss Data Grid from your Microsoft Windows system. Warning Once deleted, all JBoss Data Grid configuration and settings are permanently lost. Procedure 19.2. Remove JBoss Data Grid from Your Windows System Shut Down Server Ensure that the JBoss Data Grid server is shut down. Navigate to the JBoss Data Grid Home Directory Use the Windows Explorer to navigate to the directory in which the USDJDG_HOME folder is located. Delete the JBoss Data Grid Home Directory Select the USDJDG_HOME folder and delete it. Report a bug | null | https://docs.redhat.com/en/documentation/red_hat_data_grid/6.6/html/getting_started_guide/remove_red_hat_jboss_data_grid_from_your_windows_system |
Installing on a single node | Installing on a single node OpenShift Container Platform 4.14 Installing OpenShift Container Platform on a single node Red Hat OpenShift Documentation Team | [
"example.com",
"<cluster_name>.example.com",
"export OCP_VERSION=<ocp_version> 1",
"export ARCH=<architecture> 1",
"curl -k https://mirror.openshift.com/pub/openshift-v4/clients/ocp/USDOCP_VERSION/openshift-client-linux.tar.gz -o oc.tar.gz",
"tar zxf oc.tar.gz",
"chmod +x oc",
"curl -k https://mirror.openshift.com/pub/openshift-v4/clients/ocp/USDOCP_VERSION/openshift-install-linux.tar.gz -o openshift-install-linux.tar.gz",
"tar zxvf openshift-install-linux.tar.gz",
"chmod +x openshift-install",
"export ISO_URL=USD(./openshift-install coreos print-stream-json | grep location | grep USDARCH | grep iso | cut -d\\\" -f4)",
"curl -L USDISO_URL -o rhcos-live.iso",
"apiVersion: v1 baseDomain: <domain> 1 compute: - name: worker replicas: 0 2 controlPlane: name: master replicas: 1 3 metadata: name: <name> 4 networking: 5 clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 6 networkType: OVNKubernetes serviceNetwork: - 172.30.0.0/16 platform: none: {} bootstrapInPlace: installationDisk: /dev/disk/by-id/<disk_id> 7 pullSecret: '<pull_secret>' 8 sshKey: | <ssh_key> 9",
"mkdir ocp",
"cp install-config.yaml ocp",
"./openshift-install --dir=ocp create single-node-ignition-config",
"alias coreos-installer='podman run --privileged --pull always --rm -v /dev:/dev -v /run/udev:/run/udev -v USDPWD:/data -w /data quay.io/coreos/coreos-installer:release'",
"coreos-installer iso ignition embed -fi ocp/bootstrap-in-place-for-live-iso.ign rhcos-live.iso",
"./openshift-install --dir=ocp wait-for install-complete",
"export KUBECONFIG=ocp/auth/kubeconfig",
"oc get nodes",
"NAME STATUS ROLES AGE VERSION control-plane.example.com Ready master,worker 10m v1.27.3",
"dd if=<path_to_iso> of=<path_to_usb> status=progress",
"curl -k -u <bmc_username>:<bmc_password> -d '{\"Image\":\"<hosted_iso_file>\", \"Inserted\": true}' -H \"Content-Type: application/json\" -X POST <host_bmc_address>/redfish/v1/Managers/iDRAC.Embedded.1/VirtualMedia/CD/Actions/VirtualMedia.InsertMedia",
"curl -k -u <bmc_username>:<bmc_password> -X PATCH -H 'Content-Type: application/json' -d '{\"Boot\": {\"BootSourceOverrideTarget\": \"Cd\", \"BootSourceOverrideMode\": \"UEFI\", \"BootSourceOverrideEnabled\": \"Once\"}}' <host_bmc_address>/redfish/v1/Systems/System.Embedded.1",
"curl -k -u <bmc_username>:<bmc_password> -d '{\"ResetType\": \"ForceRestart\"}' -H 'Content-type: application/json' -X POST <host_bmc_address>/redfish/v1/Systems/System.Embedded.1/Actions/ComputerSystem.Reset",
"curl -k -u <bmc_username>:<bmc_password> -d '{\"ResetType\": \"On\"}' -H 'Content-type: application/json' -X POST <host_bmc_address>/redfish/v1/Systems/System.Embedded.1/Actions/ComputerSystem.Reset",
"variant: openshift version: 4.14.0 metadata: name: sshd labels: machineconfiguration.openshift.io/role: worker passwd: users: - name: core 1 ssh_authorized_keys: - '<ssh_key>'",
"butane -pr embedded.yaml -o embedded.ign",
"coreos-installer iso ignition embed -i embedded.ign rhcos-4.14.0-x86_64-live.x86_64.iso -o rhcos-sshd-4.14.0-x86_64-live.x86_64.iso",
"coreos-installer iso ignition show rhcos-sshd-4.14.0-x86_64-live.x86_64.iso",
"{ \"ignition\": { \"version\": \"3.2.0\" }, \"passwd\": { \"users\": [ { \"name\": \"core\", \"sshAuthorizedKeys\": [ \"ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQCZnG8AIzlDAhpyENpK2qKiTT8EbRWOrz7NXjRzopbPu215mocaJgjjwJjh1cYhgPhpAp6M/ttTk7I4OI7g4588Apx4bwJep6oWTU35LkY8ZxkGVPAJL8kVlTdKQviDv3XX12l4QfnDom4tm4gVbRH0gNT1wzhnLP+LKYm2Ohr9D7p9NBnAdro6k++XWgkDeijLRUTwdEyWunIdW1f8G0Mg8Y1Xzr13BUo3+8aey7HLKJMDtobkz/C8ESYA/f7HJc5FxF0XbapWWovSSDJrr9OmlL9f4TfE+cQk3s+eoKiz2bgNPRgEEwihVbGsCN4grA+RzLCAOpec+2dTJrQvFqsD [email protected]\" ] } ] } }",
"OCP_VERSION=<ocp_version> 1",
"ARCH=<architecture> 1",
"curl -k https://mirror.openshift.com/pub/openshift-v4/USD{ARCH}/clients/ocp/USD{OCP_VERSION}/openshift-client-linux.tar.gz -o oc.tar.gz",
"tar zxf oc.tar.gz",
"chmod +x oc",
"curl -k https://mirror.openshift.com/pub/openshift-v4/USD{ARCH}/clients/ocp/USD{OCP_VERSION}/openshift-install-linux.tar.gz -o openshift-install-linux.tar.gz",
"tar zxvf openshift-install-linux.tar.gz",
"chmod +x openshift-install",
"apiVersion: v1 baseDomain: <domain> 1 compute: - name: worker replicas: 0 2 controlPlane: name: master replicas: 1 3 metadata: name: <name> 4 networking: 5 clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 6 networkType: OVNKubernetes serviceNetwork: - 172.30.0.0/16 platform: none: {} bootstrapInPlace: installationDisk: /dev/disk/by-id/<disk_id> 7 pullSecret: '<pull_secret>' 8 sshKey: | <ssh_key> 9",
"mkdir ocp",
"cp install-config.yaml ocp",
"./openshift-install --dir=ocp create single-node-ignition-config",
"rd.neednet=1 console=ttysclp0 coreos.live.rootfs_url=<rhcos_liveos>:8080/rootfs.img \\ 1 ignition.firstboot ignition.platform.id=metal ignition.config.url=<rhcos_ign>:8080/ignition/bootstrap-in-place-for-live-iso.ign \\ 2 ip=encbdd0:dhcp::02:00:00:02:34:02 3 rd.znet=qeth,0.0.bdd0,0.0.bdd1,0.0.bdd2,layer2=1 rd.dasd=0.0.4411 \\ 4 rd.zfcp=0.0.8001,0x50050763040051e3,0x4000406300000000 \\ 5 zfcp.allow_lun_scan=0 rd.luks.options=discard",
"cp ipl c",
"cp i <devno> clear loadparm prompt",
"cp vi vmsg 0 <kernel_parameters>",
"cp set loaddev portname <wwpn> lun <lun>",
"cp set loaddev bootprog <n>",
"cp set loaddev scpdata {APPEND|NEW} '<kernel_parameters>'",
"cp set loaddev scpdata 'rd.zfcp=0.0.8001,0x500507630a0350a4,0x4000409D00000000 ip=encbdd0:dhcp::02:00:00:02:34:02 rd.neednet=1'",
"cp i <devno>",
"OCP_VERSION=<ocp_version> 1",
"ARCH=<architecture> 1",
"curl -k https://mirror.openshift.com/pub/openshift-v4/USD{ARCH}/clients/ocp/USD{OCP_VERSION}/openshift-client-linux.tar.gz -o oc.tar.gz",
"tar zxf oc.tar.gz",
"chmod +x oc",
"curl -k https://mirror.openshift.com/pub/openshift-v4/USD{ARCH}/clients/ocp/USD{OCP_VERSION}/openshift-install-linux.tar.gz -o openshift-install-linux.tar.gz",
"tar zxvf openshift-install-linux.tar.gz",
"chmod +x openshift-install",
"apiVersion: v1 baseDomain: <domain> 1 compute: - name: worker replicas: 0 2 controlPlane: name: master replicas: 1 3 metadata: name: <name> 4 networking: 5 clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 6 networkType: OVNKubernetes serviceNetwork: - 172.30.0.0/16 platform: none: {} bootstrapInPlace: installationDisk: /dev/disk/by-id/<disk_id> 7 pullSecret: '<pull_secret>' 8 sshKey: | <ssh_key> 9",
"mkdir ocp",
"cp install-config.yaml ocp",
"./openshift-install --dir=ocp create single-node-ignition-config",
"virt-install --name <vm_name> --autostart --memory=<memory_mb> --cpu host --vcpus <vcpus> --location <media_location>,kernel=<rhcos_kernel>,initrd=<rhcos_initrd> \\ 1 --disk size=100 --network network=<virt_network_parm> --graphics none --noautoconsole --extra-args \"ip=<ip>::<gateway>:<mask>:<hostname>::none\" --extra-args \"nameserver=<name_server>\" --extra-args \"ip=dhcp rd.neednet=1 ignition.platform.id=metal ignition.firstboot\" --extra-args \"coreos.live.rootfs_url=<rhcos_liveos>\" \\ 2 --extra-args \"ignition.config.url=<rhcos_ign>\" \\ 3 --extra-args \"random.trust_cpu=on rd.luks.options=discard\" --extra-args \"console=ttysclp0\" --wait",
"grub2-mknetdir --net-directory=/var/lib/tftpboot",
"default=0 fallback=1 timeout=1 if [ USD{net_default_mac} == fa:b0:45:27:43:20 ]; then menuentry \"CoreOS (BIOS)\" { echo \"Loading kernel\" linux \"/rhcos/kernel\" ip=dhcp rd.neednet=1 ignition.platform.id=metal ignition.firstboot coreos.live.rootfs_url=http://192.168.10.5:8000/install/rootfs.img ignition.config.url=http://192.168.10.5:8000/ignition/sno.ign echo \"Loading initrd\" initrd \"/rhcos/initramfs.img\" } fi",
"export RHCOS_URL=https://mirror.openshift.com/pub/openshift-v4/ppc64le/dependencies/rhcos/4.12/latest/",
"cd /var/lib/tftpboot/rhcos",
"wget USD{RHCOS_URL}/rhcos-live-kernel-ppc64le -o kernel",
"wget USD{RHCOS_URL}/rhcos-live-initramfs.ppc64le.img -o initramfs.img",
"cd /var//var/www/html/install/",
"wget USD{RHCOS_URL}/rhcos-live-rootfs.ppc64le.img -o rootfs.img",
"mkdir -p ~/sno-work",
"cd ~/sno-work",
"apiVersion: v1 baseDomain: <domain> 1 compute: - name: worker replicas: 0 2 controlPlane: name: master replicas: 1 3 metadata: name: <name> 4 networking: 5 clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 6 networkType: OVNKubernetes serviceNetwork: - 172.30.0.0/16 platform: none: {} bootstrapInPlace: installationDisk: /dev/disk/by-id/<disk_id> 7 pullSecret: '<pull_secret>' 8 sshKey: | <ssh_key> 9",
"wget https://mirror.openshift.com/pub/openshift-v4/ppc64le/clients/ocp/4.12.0/openshift-install-linux-4.12.0.tar.gz",
"tar xzvf openshift-install-linux-4.12.0.tar.gz",
"./openshift-install --dir=~/sno-work create create single-node-ignition-config",
"cp ~/sno-work/single-node-ignition-config.ign /var/www/html/ignition/sno.ign",
"restorecon -vR /var/www/html || true",
"lpar_netboot -i -D -f -t ent -m <sno_mac> -s auto -d auto -S <server_ip> -C <sno_ip> -G <gateway> <lpar_name> default_profile <cec_name>",
"./openshift-install wait-for bootstrap-complete",
"./openshift-install wait-for install-complete"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.14/html-single/installing_on_a_single_node/index |
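While openshift-install wait-for install-complete runs, progress can also be followed on the node itself. The host name and user below come from the examples in this guide and are otherwise placeholders.

# Follow the bootstrap-in-place services on the single node
ssh core@control-plane.example.com \
    "journalctl -b -f -u release-image.service -u bootkube.service"

# Once the kubeconfig exists, confirm that the cluster Operators settle
export KUBECONFIG=ocp/auth/kubeconfig
oc get clusteroperators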
Providing feedback on Red Hat documentation | Providing feedback on Red Hat documentation We appreciate your input on our documentation. Tell us how we can make it better. Using the Direct Documentation Feedback (DDF) function Use the Add Feedback DDF function for direct comments on specific sentences, paragraphs, or code blocks. View the documentation in the HTML format. Ensure that you see the Feedback button in the upper right corner of the document. Highlight the part of text that you want to comment on. Click Add Feedback . Complete the Add Feedback field with your comments. Optional: Add your email address so that the documentation team can contact you for clarification on your issue. Click Submit . | null | https://docs.redhat.com/en/documentation/red_hat_openstack_platform/17.0/html/configuration_reference/proc_providing-feedback-on-red-hat-documentation |
4.7. Displaying Cluster Status | 4.7. Displaying Cluster Status The following command displays the current status of the cluster and the cluster resources. You can display a subset of information about the current status of the cluster with the following commands. The following command displays the status of the cluster, but not the cluster resources. The following command displays the status of the cluster resources. | [
"pcs status",
"pcs cluster status",
"pcs status resources"
] | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/high_availability_add-on_reference/s1-clusterstat-haar |
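Depending on the pcs release, further read-only subsets are commonly available as well; verify with pcs status help on your system.

# Show only node membership information
pcs status nodes

# Show the corosync membership view of the cluster
pcs status corosync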
Making open source more inclusive | Making open source more inclusive Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright's message . | null | https://docs.redhat.com/en/documentation/red_hat_build_of_openjdk/11/html/release_notes_for_eclipse_temurin_11.0.22/making-open-source-more-inclusive |
1.3. DM-Multipath Components | 1.3. DM-Multipath Components Table 1.1, "DM-Multipath Components" describes the components of DM-Multipath. Table 1.1. DM-Multipath Components Component Description dm-multipath kernel module Reroutes I/O and supports failover for paths and path groups. multipath command Lists and configures multipath devices. Normally started up with /etc/rc.sysinit , it can also be started up by a udev program whenever a block device is added or it can be run by the initramfs file system. multipathd daemon Monitors paths; as paths fail and come back, it may initiate path group switches. Provides for interactive changes to multipath devices. This must be restarted for any changes to the /etc/multipath.conf file. kpartx command Creates device mapper devices for the partitions on a device. It is necessary to use this command for DOS-based partitions with DM-MP. The kpartx command is provided in its own package, but the device-mapper-multipath package depends on it. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/dm_multipath/mpio_components |
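The components in Table 1.1 are usually exercised with a handful of commands. The device name below is illustrative; multipath device names depend on your configuration.

# Show the current multipath topology and the state of each path
multipath -ll

# Restart the daemon so that changes to /etc/multipath.conf take effect
service multipathd restart

# Create device mapper devices for DOS partitions on a multipathed device
kpartx -a /dev/mapper/mpath0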
Chapter 23. Manipulating the Domain XML | Chapter 23. Manipulating the Domain XML This chapter explains in detail the components of guest virtual machine XML configuration files, also known as domain XML . In this chapter, the term domain refers to the root <domain> element required for all guest virtual machines. The domain XML has two attributes: type and id . type specifies the hypervisor used for running the domain. The allowed values are driver-specific, but include KVM and others. id is a unique integer identifier for the running guest virtual machine. Inactive machines have no id value. The sections in this chapter will describe the components of the domain XML. Additional chapters in this manual may see this chapter when manipulation of the domain XML is required. Important Use only supported management interfaces (such as virsh and the Virtual Machine Manager ) and commands (such as virt-xml ) to edit the components of the domain XML file. Do not open and edit the domain XML file directly with a text editor. If you absolutely must edit the domain XML file directly, use the virsh edit command. Note This chapter is based on the libvirt upstream documentation . 23.1. General Information and Metadata This information is in this part of the domain XML: <domain type='kvm' id='3'> <name>fv0</name> <uuid>4dea22b31d52d8f32516782e98ab3fa0</uuid> <title>A short description - title - of the domain</title> <description>A human readable description</description> <metadata> <app1:foo xmlns:app1="http://app1.org/app1/">..</app1:foo> <app2:bar xmlns:app2="http://app1.org/app2/">..</app2:bar> </metadata> ... </domain> Figure 23.1. Domain XML metadata The components of this section of the domain XML are as follows: Table 23.1. General metadata elements Element Description <name> Assigns a name for the virtual machine. This name should consist only of alpha-numeric characters and is required to be unique within the scope of a single host physical machine. It is often used to form the file name for storing the persistent configuration files. <uuid> Assigns a globally unique identifier for the virtual machine. The format must be RFC 4122-compliant, for example 3e3fce45-4f53-4fa7-bb32-11f34168b82b . If omitted when defining or creating a new machine, a random UUID is generated. It is also possible to provide the UUID using a sysinfo specification. <title> Creates space for a short description of the domain. The title should not contain any new lines. <description> Different from the title, this data is not used by libvirt. It can contain any information the user chooses to display. <metadata> Can be used by applications to store custom metadata in the form of XML nodes/trees. Applications must use custom name spaces on XML nodes/trees, with only one top-level element per name space (if the application needs structure, they should have sub-elements to their name space element). | [
"<domain type='kvm' id='3'> <name>fv0</name> <uuid>4dea22b31d52d8f32516782e98ab3fa0</uuid> <title>A short description - title - of the domain</title> <description>A human readable description</description> <metadata> <app1:foo xmlns:app1=\"http://app1.org/app1/\">..</app1:foo> <app2:bar xmlns:app2=\"http://app1.org/app2/\">..</app2:bar> </metadata> </domain>"
] | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/virtualization_deployment_and_administration_guide/chap-Manipulating_the_domain_xml |
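Because editing the domain XML file directly is unsupported, changes to the elements in Table 23.1 are normally made with virsh or virt-xml. The guest name below matches the fv0 example; the --metadata suboptions shown are taken from the virt-xml manual page and should be verified on your system.

# Open the domain XML of guest fv0 in an editor; libvirt validates it on save
virsh edit fv0

# Set the title and description elements without opening an editor
virt-xml fv0 --edit --metadata title="A short description - title - of the domain"
virt-xml fv0 --edit --metadata description="A human readable description"

# Dump the resulting XML, including the id attribute of the running domain
virsh dumpxml fv0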
Chapter 4. Technology previews | Chapter 4. Technology previews This section describes the technology preview features introduced in Red Hat OpenShift Data Foundation 4.14 under Technology Preview support limitations. Important Technology Preview features are not supported with Red Hat production service level agreements (SLAs), might not be functionally complete, and Red Hat does not recommend using them for production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. Technology Preview features are provided with a limited support scope, as detailed on the Customer Portal: Technology Preview Features Support Scope . 4.1. Allow storage classes to use non resilient pool single replica OpenShift Data Foundation allows the option to create a new storage class using non resilient pool with a single replica. This avoids redundant data copies and allows resiliency management on the application level. For more information, see the deployment guide for your platform on the customer portal . 4.2. Metro-DR support for workloads based on OpenShift Virtualization You can now easily set up Metro-DR to protect your workloads based on OpenShift Virtualization using OpenShift Data Foundation. For more information, see Knowledgebase article . | null | https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.14/html/4.14_release_notes/technology_previews |
Chapter 7. Federal Information Processing Standard (FIPS) | Chapter 7. Federal Information Processing Standard (FIPS) Red Hat Ceph Storage uses FIPS validated cryptography modules when run on Red Hat Enterprise Linux 8.4. Enable FIPS mode on Red Hat Enterprise Linux either during system installation or after it. For container deployments, follow the instructions in the Red Hat Enterprise Linux 8 Security Hardening Guide . Additional Resources Refer to the US Government Standards to know the latest information on FIPS validations. | null | https://docs.redhat.com/en/documentation/red_hat_ceph_storage/5/html/data_security_and_hardening_guide/federal-information-processing-standard-fips-sec |
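Enabling FIPS mode after installation uses standard RHEL 8 tooling rather than anything Ceph-specific; a reboot is required before the change takes effect.

# Enable FIPS mode on the host and reboot for it to take effect
fips-mode-setup --enable
reboot

# After the reboot, verify that FIPS mode is active
fips-mode-setup --check
cat /proc/sys/crypto/fips_enabled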
Chapter 3. Consumer configuration properties | Chapter 3. Consumer configuration properties key.deserializer Type: class Importance: high Deserializer class for key that implements the org.apache.kafka.common.serialization.Deserializer interface. value.deserializer Type: class Importance: high Deserializer class for value that implements the org.apache.kafka.common.serialization.Deserializer interface. bootstrap.servers Type: list Default: "" Valid Values: non-null string Importance: high A list of host/port pairs to use for establishing the initial connection to the Kafka cluster. The client will make use of all servers irrespective of which servers are specified here for bootstrapping-this list only impacts the initial hosts used to discover the full set of servers. This list should be in the form host1:port1,host2:port2,... . Since these servers are just used for the initial connection to discover the full cluster membership (which may change dynamically), this list need not contain the full set of servers (you may want more than one, though, in case a server is down). fetch.min.bytes Type: int Default: 1 Valid Values: [0,... ] Importance: high The minimum amount of data the server should return for a fetch request. If insufficient data is available the request will wait for that much data to accumulate before answering the request. The default setting of 1 byte means that fetch requests are answered as soon as that many byte(s) of data is available or the fetch request times out waiting for data to arrive. Setting this to a larger value will cause the server to wait for larger amounts of data to accumulate which can improve server throughput a bit at the cost of some additional latency. group.id Type: string Default: null Importance: high A unique string that identifies the consumer group this consumer belongs to. This property is required if the consumer uses either the group management functionality by using subscribe(topic) or the Kafka-based offset management strategy. heartbeat.interval.ms Type: int Default: 3000 (3 seconds) Importance: high The expected time between heartbeats to the consumer coordinator when using Kafka's group management facilities. Heartbeats are used to ensure that the consumer's session stays active and to facilitate rebalancing when new consumers join or leave the group. The value must be set lower than session.timeout.ms , but typically should be set no higher than 1/3 of that value. It can be adjusted even lower to control the expected time for normal rebalances. max.partition.fetch.bytes Type: int Default: 1048576 (1 mebibyte) Valid Values: [0,... ] Importance: high The maximum amount of data per-partition the server will return. Records are fetched in batches by the consumer. If the first record batch in the first non-empty partition of the fetch is larger than this limit, the batch will still be returned to ensure that the consumer can make progress. The maximum record batch size accepted by the broker is defined via message.max.bytes (broker config) or max.message.bytes (topic config). See fetch.max.bytes for limiting the consumer request size. session.timeout.ms Type: int Default: 45000 (45 seconds) Importance: high The timeout used to detect client failures when using Kafka's group management facility. The client sends periodic heartbeats to indicate its liveness to the broker. If no heartbeats are received by the broker before the expiration of this session timeout, then the broker will remove this client from the group and initiate a rebalance. 
Note that the value must be in the allowable range as configured in the broker configuration by group.min.session.timeout.ms and group.max.session.timeout.ms . ssl.key.password Type: password Default: null Importance: high The password of the private key in the key store file or the PEM key specified in 'ssl.keystore.key'. ssl.keystore.certificate.chain Type: password Default: null Importance: high Certificate chain in the format specified by 'ssl.keystore.type'. Default SSL engine factory supports only PEM format with a list of X.509 certificates. ssl.keystore.key Type: password Default: null Importance: high Private key in the format specified by 'ssl.keystore.type'. Default SSL engine factory supports only PEM format with PKCS#8 keys. If the key is encrypted, key password must be specified using 'ssl.key.password'. ssl.keystore.location Type: string Default: null Importance: high The location of the key store file. This is optional for client and can be used for two-way authentication for client. ssl.keystore.password Type: password Default: null Importance: high The store password for the key store file. This is optional for client and only needed if 'ssl.keystore.location' is configured. Key store password is not supported for PEM format. ssl.truststore.certificates Type: password Default: null Importance: high Trusted certificates in the format specified by 'ssl.truststore.type'. Default SSL engine factory supports only PEM format with X.509 certificates. ssl.truststore.location Type: string Default: null Importance: high The location of the trust store file. ssl.truststore.password Type: password Default: null Importance: high The password for the trust store file. If a password is not set, trust store file configured will still be used, but integrity checking is disabled. Trust store password is not supported for PEM format. allow.auto.create.topics Type: boolean Default: true Importance: medium Allow automatic topic creation on the broker when subscribing to or assigning a topic. A topic being subscribed to will be automatically created only if the broker allows for it using auto.create.topics.enable broker configuration. This configuration must be set to false when using brokers older than 0.11.0. auto.offset.reset Type: string Default: latest Valid Values: [latest, earliest, none] Importance: medium What to do when there is no initial offset in Kafka or if the current offset does not exist any more on the server (e.g. because that data has been deleted): earliest: automatically reset the offset to the earliest offset latest: automatically reset the offset to the latest offset none: throw exception to the consumer if no offset is found for the consumer's group anything else: throw exception to the consumer. client.dns.lookup Type: string Default: use_all_dns_ips Valid Values: [use_all_dns_ips, resolve_canonical_bootstrap_servers_only] Importance: medium Controls how the client uses DNS lookups. If set to use_all_dns_ips , connect to each returned IP address in sequence until a successful connection is established. After a disconnection, the IP is used. Once all IPs have been used once, the client resolves the IP(s) from the hostname again (both the JVM and the OS cache DNS name lookups, however). If set to resolve_canonical_bootstrap_servers_only , resolve each bootstrap address into a list of canonical names. After the bootstrap phase, this behaves the same as use_all_dns_ips . 
connections.max.idle.ms Type: long Default: 540000 (9 minutes) Importance: medium Close idle connections after the number of milliseconds specified by this config. default.api.timeout.ms Type: int Default: 60000 (1 minute) Valid Values: [0,... ] Importance: medium Specifies the timeout (in milliseconds) for client APIs. This configuration is used as the default timeout for all client operations that do not specify a timeout parameter. enable.auto.commit Type: boolean Default: true Importance: medium If true the consumer's offset will be periodically committed in the background. exclude.internal.topics Type: boolean Default: true Importance: medium Whether internal topics matching a subscribed pattern should be excluded from the subscription. It is always possible to explicitly subscribe to an internal topic. fetch.max.bytes Type: int Default: 52428800 (50 mebibytes) Valid Values: [0,... ] Importance: medium The maximum amount of data the server should return for a fetch request. Records are fetched in batches by the consumer, and if the first record batch in the first non-empty partition of the fetch is larger than this value, the record batch will still be returned to ensure that the consumer can make progress. As such, this is not a absolute maximum. The maximum record batch size accepted by the broker is defined via message.max.bytes (broker config) or max.message.bytes (topic config). Note that the consumer performs multiple fetches in parallel. group.instance.id Type: string Default: null Valid Values: non-empty string Importance: medium A unique identifier of the consumer instance provided by the end user. Only non-empty strings are permitted. If set, the consumer is treated as a static member, which means that only one instance with this ID is allowed in the consumer group at any time. This can be used in combination with a larger session timeout to avoid group rebalances caused by transient unavailability (e.g. process restarts). If not set, the consumer will join the group as a dynamic member, which is the traditional behavior. isolation.level Type: string Default: read_uncommitted Valid Values: [read_committed, read_uncommitted] Importance: medium Controls how to read messages written transactionally. If set to read_committed , consumer.poll() will only return transactional messages which have been committed. If set to read_uncommitted (the default), consumer.poll() will return all messages, even transactional messages which have been aborted. Non-transactional messages will be returned unconditionally in either mode. Messages will always be returned in offset order. Hence, in read_committed mode, consumer.poll() will only return messages up to the last stable offset (LSO), which is the one less than the offset of the first open transaction. In particular any messages appearing after messages belonging to ongoing transactions will be withheld until the relevant transaction has been completed. As a result, read_committed consumers will not be able to read up to the high watermark when there are in flight transactions. max.poll.interval.ms Type: int Default: 300000 (5 minutes) Valid Values: [1,... ] Importance: medium The maximum delay between invocations of poll() when using consumer group management. This places an upper bound on the amount of time that the consumer can be idle before fetching more records. 
If poll() is not called before expiration of this timeout, then the consumer is considered failed and the group will rebalance in order to reassign the partitions to another member. For consumers using a non-null group.instance.id which reach this timeout, partitions will not be immediately reassigned. Instead, the consumer will stop sending heartbeats and partitions will be reassigned after expiration of session.timeout.ms . This mirrors the behavior of a static consumer which has shutdown. max.poll.records Type: int Default: 500 Valid Values: [1,... ] Importance: medium The maximum number of records returned in a single call to poll(). Note, that max.poll.records does not impact the underlying fetching behavior. The consumer will cache the records from each fetch request and returns them incrementally from each poll. partition.assignment.strategy Type: list Default: class org.apache.kafka.clients.consumer.RangeAssignor,class org.apache.kafka.clients.consumer.CooperativeStickyAssignor Valid Values: non-null string Importance: medium A list of class names or class types, ordered by preference, of supported partition assignment strategies that the client will use to distribute partition ownership amongst consumer instances when group management is used. Available options are: org.apache.kafka.clients.consumer.RangeAssignor : Assigns partitions on a per-topic basis. org.apache.kafka.clients.consumer.RoundRobinAssignor : Assigns partitions to consumers in a round-robin fashion. org.apache.kafka.clients.consumer.StickyAssignor : Guarantees an assignment that is maximally balanced while preserving as many existing partition assignments as possible. org.apache.kafka.clients.consumer.CooperativeStickyAssignor : Follows the same StickyAssignor logic, but allows for cooperative rebalancing. The default assignor is [RangeAssignor, CooperativeStickyAssignor], which will use the RangeAssignor by default, but allows upgrading to the CooperativeStickyAssignor with just a single rolling bounce that removes the RangeAssignor from the list. Implementing the org.apache.kafka.clients.consumer.ConsumerPartitionAssignor interface allows you to plug in a custom assignment strategy. receive.buffer.bytes Type: int Default: 65536 (64 kibibytes) Valid Values: [-1,... ] Importance: medium The size of the TCP receive buffer (SO_RCVBUF) to use when reading data. If the value is -1, the OS default will be used. request.timeout.ms Type: int Default: 30000 (30 seconds) Valid Values: [0,... ] Importance: medium The configuration controls the maximum amount of time the client will wait for the response of a request. If the response is not received before the timeout elapses the client will resend the request if necessary or fail the request if retries are exhausted. sasl.client.callback.handler.class Type: class Default: null Importance: medium The fully qualified name of a SASL client callback handler class that implements the AuthenticateCallbackHandler interface. sasl.jaas.config Type: password Default: null Importance: medium JAAS login context parameters for SASL connections in the format used by JAAS configuration files. JAAS configuration file format is described here . The format for the value is: loginModuleClass controlFlag (optionName=optionValue)*; . For brokers, the config must be prefixed with listener prefix and SASL mechanism name in lower-case. For example, listener.name.sasl_ssl.scram-sha-256.sasl.jaas.config=com.example.ScramLoginModule required;. 
sasl.kerberos.service.name Type: string Default: null Importance: medium The Kerberos principal name that Kafka runs as. This can be defined either in Kafka's JAAS config or in Kafka's config. sasl.login.callback.handler.class Type: class Default: null Importance: medium The fully qualified name of a SASL login callback handler class that implements the AuthenticateCallbackHandler interface. For brokers, login callback handler config must be prefixed with listener prefix and SASL mechanism name in lower-case. For example, listener.name.sasl_ssl.scram-sha-256.sasl.login.callback.handler.class=com.example.CustomScramLoginCallbackHandler. sasl.login.class Type: class Default: null Importance: medium The fully qualified name of a class that implements the Login interface. For brokers, login config must be prefixed with listener prefix and SASL mechanism name in lower-case. For example, listener.name.sasl_ssl.scram-sha-256.sasl.login.class=com.example.CustomScramLogin. sasl.mechanism Type: string Default: GSSAPI Importance: medium SASL mechanism used for client connections. This may be any mechanism for which a security provider is available. GSSAPI is the default mechanism. sasl.oauthbearer.jwks.endpoint.url Type: string Default: null Importance: medium The OAuth/OIDC provider URL from which the provider's JWKS (JSON Web Key Set) can be retrieved. The URL can be HTTP(S)-based or file-based. If the URL is HTTP(S)-based, the JWKS data will be retrieved from the OAuth/OIDC provider via the configured URL on broker startup. All then-current keys will be cached on the broker for incoming requests. If an authentication request is received for a JWT that includes a "kid" header claim value that isn't yet in the cache, the JWKS endpoint will be queried again on demand. However, the broker polls the URL every sasl.oauthbearer.jwks.endpoint.refresh.ms milliseconds to refresh the cache with any forthcoming keys before any JWT requests that include them are received. If the URL is file-based, the broker will load the JWKS file from a configured location on startup. In the event that the JWT includes a "kid" header value that isn't in the JWKS file, the broker will reject the JWT and authentication will fail. sasl.oauthbearer.token.endpoint.url Type: string Default: null Importance: medium The URL for the OAuth/OIDC identity provider. If the URL is HTTP(S)-based, it is the issuer's token endpoint URL to which requests will be made to login based on the configuration in sasl.jaas.config. If the URL is file-based, it specifies a file containing an access token (in JWT serialized form) issued by the OAuth/OIDC identity provider to use for authorization. security.protocol Type: string Default: PLAINTEXT Valid Values: [PLAINTEXT, SSL, SASL_PLAINTEXT, SASL_SSL] Importance: medium Protocol used to communicate with brokers. Valid values are: PLAINTEXT, SSL, SASL_PLAINTEXT, SASL_SSL. send.buffer.bytes Type: int Default: 131072 (128 kibibytes) Valid Values: [-1,... ] Importance: medium The size of the TCP send buffer (SO_SNDBUF) to use when sending data. If the value is -1, the OS default will be used. socket.connection.setup.timeout.max.ms Type: long Default: 30000 (30 seconds) Importance: medium The maximum amount of time the client will wait for the socket connection to be established. The connection setup timeout will increase exponentially for each consecutive connection failure up to this maximum. 
To avoid connection storms, a randomization factor of 0.2 will be applied to the timeout resulting in a random range between 20% below and 20% above the computed value. socket.connection.setup.timeout.ms Type: long Default: 10000 (10 seconds) Importance: medium The amount of time the client will wait for the socket connection to be established. If the connection is not built before the timeout elapses, clients will close the socket channel. ssl.enabled.protocols Type: list Default: TLSv1.2,TLSv1.3 Importance: medium The list of protocols enabled for SSL connections. The default is 'TLSv1.2,TLSv1.3' when running with Java 11 or newer, 'TLSv1.2' otherwise. With the default value for Java 11, clients and servers will prefer TLSv1.3 if both support it and fallback to TLSv1.2 otherwise (assuming both support at least TLSv1.2). This default should be fine for most cases. Also see the config documentation for ssl.protocol . ssl.keystore.type Type: string Default: JKS Importance: medium The file format of the key store file. This is optional for client. The values currently supported by the default ssl.engine.factory.class are [JKS, PKCS12, PEM]. ssl.protocol Type: string Default: TLSv1.3 Importance: medium The SSL protocol used to generate the SSLContext. The default is 'TLSv1.3' when running with Java 11 or newer, 'TLSv1.2' otherwise. This value should be fine for most use cases. Allowed values in recent JVMs are 'TLSv1.2' and 'TLSv1.3'. 'TLS', 'TLSv1.1', 'SSL', 'SSLv2' and 'SSLv3' may be supported in older JVMs, but their usage is discouraged due to known security vulnerabilities. With the default value for this config and 'ssl.enabled.protocols', clients will downgrade to 'TLSv1.2' if the server does not support 'TLSv1.3'. If this config is set to 'TLSv1.2', clients will not use 'TLSv1.3' even if it is one of the values in ssl.enabled.protocols and the server only supports 'TLSv1.3'. ssl.provider Type: string Default: null Importance: medium The name of the security provider used for SSL connections. Default value is the default security provider of the JVM. ssl.truststore.type Type: string Default: JKS Importance: medium The file format of the trust store file. The values currently supported by the default ssl.engine.factory.class are [JKS, PKCS12, PEM]. auto.commit.interval.ms Type: int Default: 5000 (5 seconds) Valid Values: [0,... ] Importance: low The frequency in milliseconds that the consumer offsets are auto-committed to Kafka if enable.auto.commit is set to true . auto.include.jmx.reporter Type: boolean Default: true Importance: low Deprecated. Whether to automatically include JmxReporter even if it's not listed in metric.reporters . This configuration will be removed in Kafka 4.0, users should instead include org.apache.kafka.common.metrics.JmxReporter in metric.reporters in order to enable the JmxReporter. check.crcs Type: boolean Default: true Importance: low Automatically check the CRC32 of the records consumed. This ensures no on-the-wire or on-disk corruption to the messages occurred. This check adds some overhead, so it may be disabled in cases seeking extreme performance. client.id Type: string Default: "" Importance: low An id string to pass to the server when making requests. The purpose of this is to be able to track the source of requests beyond just ip/port by allowing a logical application name to be included in server-side request logging. client.rack Type: string Default: "" Importance: low A rack identifier for this client. 
This can be any string value which indicates where this client is physically located. It corresponds with the broker config 'broker.rack'. fetch.max.wait.ms Type: int Default: 500 Valid Values: [0,... ] Importance: low The maximum amount of time the server will block before answering the fetch request if there isn't sufficient data to immediately satisfy the requirement given by fetch.min.bytes. interceptor.classes Type: list Default: "" Valid Values: non-null string Importance: low A list of classes to use as interceptors. Implementing the org.apache.kafka.clients.consumer.ConsumerInterceptor interface allows you to intercept (and possibly mutate) records received by the consumer. By default, there are no interceptors. metadata.max.age.ms Type: long Default: 300000 (5 minutes) Valid Values: [0,... ] Importance: low The period of time in milliseconds after which we force a refresh of metadata even if we haven't seen any partition leadership changes to proactively discover any new brokers or partitions. metric.reporters Type: list Default: "" Valid Values: non-null string Importance: low A list of classes to use as metrics reporters. Implementing the org.apache.kafka.common.metrics.MetricsReporter interface allows plugging in classes that will be notified of new metric creation. The JmxReporter is always included to register JMX statistics. metrics.num.samples Type: int Default: 2 Valid Values: [1,... ] Importance: low The number of samples maintained to compute metrics. metrics.recording.level Type: string Default: INFO Valid Values: [INFO, DEBUG, TRACE] Importance: low The highest recording level for metrics. metrics.sample.window.ms Type: long Default: 30000 (30 seconds) Valid Values: [0,... ] Importance: low The window of time a metrics sample is computed over. reconnect.backoff.max.ms Type: long Default: 1000 (1 second) Valid Values: [0,... ] Importance: low The maximum amount of time in milliseconds to wait when reconnecting to a broker that has repeatedly failed to connect. If provided, the backoff per host will increase exponentially for each consecutive connection failure, up to this maximum. After calculating the backoff increase, 20% random jitter is added to avoid connection storms. reconnect.backoff.ms Type: long Default: 50 Valid Values: [0,... ] Importance: low The base amount of time to wait before attempting to reconnect to a given host. This avoids repeatedly connecting to a host in a tight loop. This backoff applies to all connection attempts by the client to a broker. retry.backoff.ms Type: long Default: 100 Valid Values: [0,... ] Importance: low The amount of time to wait before attempting to retry a failed request to a given topic partition. This avoids repeatedly sending requests in a tight loop under some failure scenarios. sasl.kerberos.kinit.cmd Type: string Default: /usr/bin/kinit Importance: low Kerberos kinit command path. sasl.kerberos.min.time.before.relogin Type: long Default: 60000 Importance: low Login thread sleep time between refresh attempts. sasl.kerberos.ticket.renew.jitter Type: double Default: 0.05 Importance: low Percentage of random jitter added to the renewal time. sasl.kerberos.ticket.renew.window.factor Type: double Default: 0.8 Importance: low Login thread will sleep until the specified window factor of time from last refresh to ticket's expiry has been reached, at which time it will try to renew the ticket. 
sasl.login.connect.timeout.ms Type: int Default: null Importance: low The (optional) value in milliseconds for the external authentication provider connection timeout. Currently applies only to OAUTHBEARER. sasl.login.read.timeout.ms Type: int Default: null Importance: low The (optional) value in milliseconds for the external authentication provider read timeout. Currently applies only to OAUTHBEARER. sasl.login.refresh.buffer.seconds Type: short Default: 300 Valid Values: [0,... ,3600] Importance: low The amount of buffer time before credential expiration to maintain when refreshing a credential, in seconds. If a refresh would otherwise occur closer to expiration than the number of buffer seconds then the refresh will be moved up to maintain as much of the buffer time as possible. Legal values are between 0 and 3600 (1 hour); a default value of 300 (5 minutes) is used if no value is specified. This value and sasl.login.refresh.min.period.seconds are both ignored if their sum exceeds the remaining lifetime of a credential. Currently applies only to OAUTHBEARER. sasl.login.refresh.min.period.seconds Type: short Default: 60 Valid Values: [0,... ,900] Importance: low The desired minimum time for the login refresh thread to wait before refreshing a credential, in seconds. Legal values are between 0 and 900 (15 minutes); a default value of 60 (1 minute) is used if no value is specified. This value and sasl.login.refresh.buffer.seconds are both ignored if their sum exceeds the remaining lifetime of a credential. Currently applies only to OAUTHBEARER. sasl.login.refresh.window.factor Type: double Default: 0.8 Valid Values: [0.5,... ,1.0] Importance: low Login refresh thread will sleep until the specified window factor relative to the credential's lifetime has been reached, at which time it will try to refresh the credential. Legal values are between 0.5 (50%) and 1.0 (100%) inclusive; a default value of 0.8 (80%) is used if no value is specified. Currently applies only to OAUTHBEARER. sasl.login.refresh.window.jitter Type: double Default: 0.05 Valid Values: [0.0,... ,0.25] Importance: low The maximum amount of random jitter relative to the credential's lifetime that is added to the login refresh thread's sleep time. Legal values are between 0 and 0.25 (25%) inclusive; a default value of 0.05 (5%) is used if no value is specified. Currently applies only to OAUTHBEARER. sasl.login.retry.backoff.max.ms Type: long Default: 10000 (10 seconds) Importance: low The (optional) value in milliseconds for the maximum wait between login attempts to the external authentication provider. Login uses an exponential backoff algorithm with an initial wait based on the sasl.login.retry.backoff.ms setting and will double in wait length between attempts up to a maximum wait length specified by the sasl.login.retry.backoff.max.ms setting. Currently applies only to OAUTHBEARER. sasl.login.retry.backoff.ms Type: long Default: 100 Importance: low The (optional) value in milliseconds for the initial wait between login attempts to the external authentication provider. Login uses an exponential backoff algorithm with an initial wait based on the sasl.login.retry.backoff.ms setting and will double in wait length between attempts up to a maximum wait length specified by the sasl.login.retry.backoff.max.ms setting. Currently applies only to OAUTHBEARER. 
sasl.oauthbearer.clock.skew.seconds Type: int Default: 30 Importance: low The (optional) value in seconds to allow for differences between the time of the OAuth/OIDC identity provider and the broker. sasl.oauthbearer.expected.audience Type: list Default: null Importance: low The (optional) comma-delimited setting for the broker to use to verify that the JWT was issued for one of the expected audiences. The JWT will be inspected for the standard OAuth "aud" claim and if this value is set, the broker will match the value from JWT's "aud" claim to see if there is an exact match. If there is no match, the broker will reject the JWT and authentication will fail. sasl.oauthbearer.expected.issuer Type: string Default: null Importance: low The (optional) setting for the broker to use to verify that the JWT was created by the expected issuer. The JWT will be inspected for the standard OAuth "iss" claim and if this value is set, the broker will match it exactly against what is in the JWT's "iss" claim. If there is no match, the broker will reject the JWT and authentication will fail. sasl.oauthbearer.jwks.endpoint.refresh.ms Type: long Default: 3600000 (1 hour) Importance: low The (optional) value in milliseconds for the broker to wait between refreshing its JWKS (JSON Web Key Set) cache that contains the keys to verify the signature of the JWT. sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms Type: long Default: 10000 (10 seconds) Importance: low The (optional) value in milliseconds for the maximum wait between attempts to retrieve the JWKS (JSON Web Key Set) from the external authentication provider. JWKS retrieval uses an exponential backoff algorithm with an initial wait based on the sasl.oauthbearer.jwks.endpoint.retry.backoff.ms setting and will double in wait length between attempts up to a maximum wait length specified by the sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms setting. sasl.oauthbearer.jwks.endpoint.retry.backoff.ms Type: long Default: 100 Importance: low The (optional) value in milliseconds for the initial wait between JWKS (JSON Web Key Set) retrieval attempts from the external authentication provider. JWKS retrieval uses an exponential backoff algorithm with an initial wait based on the sasl.oauthbearer.jwks.endpoint.retry.backoff.ms setting and will double in wait length between attempts up to a maximum wait length specified by the sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms setting. sasl.oauthbearer.scope.claim.name Type: string Default: scope Importance: low The OAuth claim for the scope is often named "scope", but this (optional) setting can provide a different name to use for the scope included in the JWT payload's claims if the OAuth/OIDC provider uses a different name for that claim. sasl.oauthbearer.sub.claim.name Type: string Default: sub Importance: low The OAuth claim for the subject is often named "sub", but this (optional) setting can provide a different name to use for the subject included in the JWT payload's claims if the OAuth/OIDC provider uses a different name for that claim. security.providers Type: string Default: null Importance: low A list of configurable creator classes each returning a provider implementing security algorithms. These classes should implement the org.apache.kafka.common.security.auth.SecurityProviderCreator interface. ssl.cipher.suites Type: list Default: null Importance: low A list of cipher suites. 
This is a named combination of authentication, encryption, MAC and key exchange algorithm used to negotiate the security settings for a network connection using TLS or SSL network protocol. By default all the available cipher suites are supported. ssl.endpoint.identification.algorithm Type: string Default: https Importance: low The endpoint identification algorithm to validate server hostname using server certificate. ssl.engine.factory.class Type: class Default: null Importance: low The class of type org.apache.kafka.common.security.auth.SslEngineFactory to provide SSLEngine objects. Default value is org.apache.kafka.common.security.ssl.DefaultSslEngineFactory. ssl.keymanager.algorithm Type: string Default: SunX509 Importance: low The algorithm used by key manager factory for SSL connections. Default value is the key manager factory algorithm configured for the Java Virtual Machine. ssl.secure.random.implementation Type: string Default: null Importance: low The SecureRandom PRNG implementation to use for SSL cryptography operations. ssl.trustmanager.algorithm Type: string Default: PKIX Importance: low The algorithm used by trust manager factory for SSL connections. Default value is the trust manager factory algorithm configured for the Java Virtual Machine. | [
"Further, when in `read_committed` the seekToEnd method will return the LSO ."
] | https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.5/html/kafka_configuration_properties/consumer-configuration-properties-str |
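As a rough illustration of how a few of the properties documented above fit together, here is a sketch that writes a small consumer configuration file and passes it to the console consumer shipped with a standard Kafka distribution; the bootstrap address, group id, topic name, and file path are placeholders, and only a handful of representative properties are overridden while everything else keeps the defaults listed above.

```bash
# Override a handful of the defaults described above
cat > /tmp/consumer.properties <<'EOF'
group.id=my-group
auto.offset.reset=earliest
enable.auto.commit=false
isolation.level=read_committed
max.poll.records=200
session.timeout.ms=30000
EOF

# Run the console consumer with the file; unset properties keep their defaults
bin/kafka-console-consumer.sh \
  --bootstrap-server my-cluster-kafka-bootstrap:9092 \
  --topic my-topic \
  --consumer.config /tmp/consumer.properties
```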
Chapter 6. Kubernetes Configuration | Chapter 6. Kubernetes Configuration 6.1. Overview Important Procedures and software described in this chapter for manually configuring and using Kubernetes are deprecated and, therefore, no longer supported. For information on which software and documentation are impacted, see the Red Hat Enterprise Linux Atomic Host Release Notes . For information on Red Hat's officially supported Kubernetes-based products, refer to Red Hat OpenShift Container Platform , OpenShift Online , OpenShift Dedicated , OpenShift.io , Container Development Kit or Development Suite . Kubernetes reads YAML files to configure services, pods and replication controllers. This document describes the similarities and differences between these areas and details the names and expected data types of the various files. 6.2. Design Strategy Kubernetes uses environment variables whose names are partially specified by the service configuration, so normally you would design the services first, followed by the pods, followed by the replication controllers. This ordering is "outside-in", moving from the user-facing portion to the internal management portion. Of course, you are free to design the system in any order you wish. However, you might find that you require more iterations to arrive at a good set of configuration files if you don't start with services. 6.3. Conventions In this document, we say field name and field value instead of key and value , respectively. For brevity and consistency with upstream Kubernetes documentation, we say map instead of mapping . As the field value can often be a complex structure , we call the combination of field name and value together a <field> tree or <field> structure , regardless of the complexity of the field value. For example, here is a map, with two top-level structures, one and two : one: a: [ x, y, z ] b: [ q, r, s ] two: 42 The one tree is a map with two elements, the a tree and the b tree, while the two tree is very simple: field name is two and field value is 42. The field values for both a and b are lists. 6.3.1. Extended Types We conceptually extend the YAML type system to include some sub-types of string and some more precise sub-types of number : symbol This is a string that has no internal whitespace, comma, colon, curly-braces or square-braces. As such, it does not require double-quotes. For example, all the field names and the first two values in the following map are symbols. a: one-is-a-lonely-number b: "two-s-company" c: 3's a crowd Note that the field value for b is a symbol even though it is written with double-quotes. The double-quotes are unnecessary, but not invalid. The other way to think about it is: If you need to use double-quotes, you are not writing a symbol. enum This is a symbol taken from a pre-specified, finite, set. v4addr This is an IPv4 address in dots-and-numbers notatation (e.g., 127.0.0.1 ). opt-v4addr This is either a v4addr or the symbol None . integer This is a number with neither fractional part nor decimal point. resource-quantity This is a number optionally followed by a scaling suffix. 
Suffix Scale Example Equivalence (none) 1 19 19 ( 19 * 1 ) m 1e-3 200m 0.2 ( 200 * 1e-3 ) K 1e+3 4K 4000 ( 4 * 1e+3 ) Ki 2^10 4Ki 4096 ( 4 * 2^10 ) M 1e+6 6.5M 6500000 ( 6.5 * 1e+6 ) Mi 2^20 6.5Mi 6815744 ( 6.5 * 2^20 ) G 1e+9 0.4G 400000000 ( 0.4 * 1e+9 ) Gi 2^30 0.4Gi 429496729 ( 0.4 * 2^30 ) T 1e+12 37T 37000000000000 ( 37 * 1e+12 ) Ti 2^40 37Ti 40681930227712 ( 37 * 2^40 ) P 1e+15 9.8P 9800000000000000 ( 9.8 * 1e+15 ) Pi 2^50 9.8Pi 11033819087057716 ( 9.8 * 2^50 ) E 1e+18 0.42E 420000000000000000 ( 0.42 * 1e+18 ) Ei 2^60 0.42Ei 484227031934875712 ( 0.42 * 2^60 ) Note: The suffix is case-sensitive; m and M differ. 6.3.2. Full Name The last convention relates to the full name of a field. All field names are symbols. At the top-level, the full name of a field is identical to the field name. At each sub-level, the full name of a field is the full name of the parent structure followed by a " . " (period, U+2E ) followed by the name of the field. Here is a map with two top-level items, one and two : one: a: x: 9 y: 10 z: 11 b: q: 19 r: 20 s: 21 two: 42 The value of one is a sub-map, while the value of two is a simple number. The following table shows all the field names and full names. Name Full Name Depth one one 0 (top-level) two two 0 a one.a 1 b one.b 1 x one.a.x 2 y one.a.y 2 z one.a.z 2 q one.b.q 2 r one.b.r 2 s one.b.s 2 6.4. Common Structures All configuration files are maps at the top-level, with a few required fields and a series of optional ones. In most cases field values are basic data elements, but sometimes the value is a list or a sub-map. In a map, the order does not matter, although it is traditional to place the required fields first. 6.4.1. Top-Level The top-level fields are kind , apiVersion , metadata , and spec . kind (enum, one of: Service , Pod , ReplicationController ) This specifies what the configuration file is trying to configure. Although Kubernetes can usually infer kind from context, the slight redundancy of specifying it in the configuration file ensures that type errors are caught early. apiVersion (enum) This specifies which version of the API is used in the configuration file. In this document all examples use apiVersion: v1 . metadata (map) This is a top-level field for Service, Pod and ReplicationController files and additionally found as a member of the ReplicationController's template map. Common sub-fields (all optional unless otherwise indicated) are: Field Type Comment name symbol Required namespace symbol Default is default labels map See individual types Strictly speaking, metadata is optional. However, we recommend including it along with the others, anyway, because name and labels facilitate later manipulation of the Service, Pod or ReplicationController. spec (map) This field is the subject of the rest of this document. 6.4.2. Elsewhere The other fields described in this section are common, in the sense of being found in more than one context, but not at top-level. labels (map) This is often one of the fields in the metadata map. Valid label keys have two segements: The prefix and " / " (slash, U+2F ) portions are optional. The name portion is required and must be 1-63 characters in length. It must begin and end with with an alphanumeric character (i.e., [0-9A-Za-z] ). The internal characters of name may include hyphen, dot and underscore. Here are some label keys, valid and invalid: Label Keys Prefix Name Comments prefix/name prefix name just-a-name (n/a) just-a-name -simply-wrong! 
(n/a) (n/a) beg and end not alphanumeric example.org/service example.org service looks like a domain! In the following example, labels and name comprise the map value of field metadata and the map value of labels has only one key/value pair. metadata: labels: name: rabbitmq name: rabbitmq-controller Note that in this example metadata.labels.name and metadata.name differ. selector (map) This is often one of the fields in the spec map of a Service or ReplicationController, but is also found at top-level. The map specifies field names and values that must match in order for the configured object to receive traffic. For example, the following fragment matches the labels example above. spec: selector: name: rabbitmq protocol (enum, one of: TCP , UDP ) This specifies an IP protocol. port (integer) The field value is the TCP/UDP port where the service, pod or replication controller can be contacted for administration and control purposes. Similar fields are containerPort , hostPort and targetPort . Often, port is found in the same map with name and protocol . For example, here is a fragment that shows a list of two such maps as the value for field ports : ports: - name: dns port: 53 protocol: UDP - name: dns-tcp port: 53 protocol: TCP In this example, the port in both maps is identical, while the name and protocol differ. limits (map) The field value is a sub-map associating resource types with resource-quantity values. For limits the quantities describe maximum allowable values. A similar field is request , which describes desired values. Valid resource types are cpu and memory . The units for cpu are Kubernetes Compute Unit seconds/second (i.e., CPU cores normalized to a canonical "Kubernetes CPU"). The units for memory are bytes. In the following fragment, cpu is limited to 0.1 KCU and memory to 2GiB. resources: limits: cpu: 100m memory: 2Gi As shown here, the limits field is often found as part of the map value for the resources field. 6.5. Specific Structures The following subsections list fields found in the various configuration files apart from those in Common Structures . A field value's type is either one of the elemental data types, including those listed in Conventions , map or list . Each subsection also discusses pitfalls for that particular file. 6.5.1. Service At the most basic level, Kubernetes can be configured with one Service YAML and one Pod YAML. In the service YAML, the required field kind has value Service . The spec tree should include ports , and optionally, selector and type . The value of type is an enum, one of: ClusterIP (the default if type is unspecified), NodePort , LoadBalancer . Here is an example of a basic Service YAML: kind: Service apiVersion: v1 metadata: name: blog spec: ports: - containerPort: 4567 targetPort: 80 selector: name: blog type: LoadBalancer Note that name: blog is indented by two columns to signify it being part of the sub-map value of both metadata and selector trees. Warning Omitting the indentation of metadata.name places name at top-level and gives metadata a nil value. Each container's port 4567 is visible externally as port 80, and they are accessed in a round-robin manner because of `type: LoadBalancer'. 6.5.2. Pod In the pod YAML, the required field kind has value Pod . The spec tree should include containers and optionally volumes fields. Their values are both a list of maps. 
Each element of containers specifies an image , with a name and other fields that describe how the image is to be run (e.g., privileged , resources ), what ports it exposes, and what volume mounts it requires. Each element of volumes specfies a hostPath , with a name . apiVersion: v1 kind: Pod metadata: name: host-test spec: containers: - image: nginx name: host-test privileged: false volumeMounts: - mountPath: /usr/share/nginx/html name: srv readOnly: false volumes: - hostPath: path: /srv/my-data name: srv This example specifies the webserver nginx to be run unprivileged and with access to the host directory /srv/my-data visible internally as /usr/share/nginx/html . 6.5.3. Replication Controller In the replication controller YAML, the required field kind has value ReplicationController . The spec.replicas field specifies how the pod should be horizontally scaled , that is, how many copies of a pod should be active simultaneously. The spec tree also has a template tree, which in turn has a sub- spec tree that resembles the spec tree from a Pod YAML. apiVersion: v1 kind: ReplicationController metadata: name: my-nginx spec: replicas: 3 1 template: metadata: labels: app: nginx spec: 2 volumes: - name: secret-volume secret: secretName: nginxsecret containers: - name: nginxhttps image: bprashanth/nginxhttps:1.0 ports: - containerPort: 443 - containerPort: 80 volumeMounts: - mountPath: /etc/nginx/ssl name: secret-volume 1 Kubernetes will try to maintain three active copies. 2 This sub- spec tree is essentially a Pod spec tree. 6.6. Field Reference The following table lists all fields found the files, apart from those in Common Structures . A field value's type is either one of the elemental data types (including those listed in Conventions ), map, or list. For the Context column, the code is s for services, p for pods, r for replication controllers. Field Type Context Example / Comment desiredState map clusterIP opt-v4addr s 10.254.100.50 selector map s one element, key name replicas integer r 2 replicaSelector map r one field: selectorname podTemplate map r two fields: desiredState , labels manifest map r version string r like apiVersion containers map pr image string pr selectorname string r deprecatedPublicIPs list s each element is an v4addr privileged boolean pr resources map pr imagePullPolicy enum pr Always , Never , IfNotPresent command list of strings pr for docker run | [
"one: a: [ x, y, z ] b: [ q, r, s ] two: 42",
"a: one-is-a-lonely-number b: \"two-s-company\" c: 3's a crowd",
"one: a: x: 9 y: 10 z: 11 b: q: 19 r: 20 s: 21 two: 42",
"[prefix/]name",
"metadata: labels: name: rabbitmq name: rabbitmq-controller",
"spec: selector: name: rabbitmq",
"ports: - name: dns port: 53 protocol: UDP - name: dns-tcp port: 53 protocol: TCP",
"resources: limits: cpu: 100m memory: 2Gi",
"kind: Service apiVersion: v1 metadata: name: blog spec: ports: - containerPort: 4567 targetPort: 80 selector: name: blog type: LoadBalancer",
"apiVersion: v1 kind: Pod metadata: name: host-test spec: containers: - image: nginx name: host-test privileged: false volumeMounts: - mountPath: /usr/share/nginx/html name: srv readOnly: false volumes: - hostPath: path: /srv/my-data name: srv",
"apiVersion: v1 kind: ReplicationController metadata: name: my-nginx spec: replicas: 3 1 template: metadata: labels: app: nginx spec: 2 volumes: - name: secret-volume secret: secretName: nginxsecret containers: - name: nginxhttps image: bprashanth/nginxhttps:1.0 ports: - containerPort: 443 - containerPort: 80 volumeMounts: - mountPath: /etc/nginx/ssl name: secret-volume"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux_atomic_host/7/html/getting_started_with_kubernetes/kubernetes_configuration |
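The chapter explains the structure of the YAML files but not how they are loaded, so here is a brief, hedged sketch of the usual workflow with kubectl; the file names are placeholders for the Service, Pod, and ReplicationController examples shown above, and a kubectl client already configured against the cluster is assumed.

```bash
# Create the service first ("outside-in"), then the pod and replication controller
kubectl create -f blog-service.yaml
kubectl create -f host-test-pod.yaml
kubectl create -f my-nginx-rc.yaml

# Verify that the objects exist and that the service selector matches running pods
kubectl get services,pods,replicationcontrollers
kubectl describe service blog
```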
Chapter 11. Uninstalling a cluster on IBM Cloud | Chapter 11. Uninstalling a cluster on IBM Cloud You can remove a cluster that you deployed to IBM Cloud(R). 11.1. Removing a cluster that uses installer-provisioned infrastructure You can remove a cluster that uses installer-provisioned infrastructure from your cloud. Note After uninstallation, check your cloud provider for any resources not removed properly, especially with User Provisioned Infrastructure (UPI) clusters. There might be resources that the installer did not create or that the installer is unable to access. Prerequisites You have a copy of the installation program that you used to deploy the cluster. You have the files that the installation program generated when you created your cluster. You have configured the ccoctl binary. You have installed the IBM Cloud(R) CLI and installed or updated the VPC infrastructure service plugin. For more information see "Prerequisites" in the IBM Cloud(R) CLI documentation . Procedure If the following conditions are met, this step is required: The installer created a resource group as part of the installation process. You or one of your applications created persistent volume claims (PVCs) after the cluster was deployed. In which case, the PVCs are not removed when uninstalling the cluster, which might prevent the resource group from being successfully removed. To prevent a failure: Log in to the IBM Cloud(R) using the CLI. To list the PVCs, run the following command: USD ibmcloud is volumes --resource-group-name <infrastructure_id> For more information about listing volumes, see the IBM Cloud(R) CLI documentation . To delete the PVCs, run the following command: USD ibmcloud is volume-delete --force <volume_id> For more information about deleting volumes, see the IBM Cloud(R) CLI documentation . Export the API key that was created as part of the installation process. USD export IC_API_KEY=<api_key> Note You must set the variable name exactly as specified. The installation program expects the variable name to be present to remove the service IDs that were created when the cluster was installed. From the directory that contains the installation program on the computer that you used to install the cluster, run the following command: USD ./openshift-install destroy cluster \ --dir <installation_directory> --log-level info 1 2 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. 2 To view different details, specify warn , debug , or error instead of info . Note You must specify the directory that contains the cluster definition files for your cluster. The installation program requires the metadata.json file in this directory to delete the cluster. Remove the manual CCO credentials that were created for the cluster: USD ccoctl ibmcloud delete-service-id \ --credentials-requests-dir <path_to_credential_requests_directory> \ --name <cluster_name> Note If your cluster uses Technology Preview features that are enabled by the TechPreviewNoUpgrade feature set, you must include the --enable-tech-preview parameter. Optional: Delete the <installation_directory> directory and the OpenShift Container Platform installation program. | [
"ibmcloud is volumes --resource-group-name <infrastructure_id>",
"ibmcloud is volume-delete --force <volume_id>",
"export IC_API_KEY=<api_key>",
"./openshift-install destroy cluster --dir <installation_directory> --log-level info 1 2",
"ccoctl ibmcloud delete-service-id --credentials-requests-dir <path_to_credential_requests_directory> --name <cluster_name>"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.15/html/installing_on_ibm_cloud/uninstalling-cluster-ibm-cloud |
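If several PVC-backed volumes are left behind, the listing and deletion steps above can be chained into one pass; the --output JSON flag and the jq parsing below are assumptions about the VPC plugin's output options, so check ibmcloud is volumes --help before relying on this sketch.

```bash
# Log in with the same API key the installer used
ibmcloud login --apikey "${IC_API_KEY}"

# Delete every leftover volume in the cluster's resource group so the
# resource group itself can be removed after the cluster is destroyed
for volume_id in $(ibmcloud is volumes \
    --resource-group-name "<infrastructure_id>" \
    --output JSON | jq -r '.[].id'); do
  ibmcloud is volume-delete --force "${volume_id}"
done
```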
Chapter 15. Installing an IdM client with Kickstart | Chapter 15. Installing an IdM client with Kickstart A Kickstart enrollment automatically adds a new system to the Identity Management (IdM) domain at the time Red Hat Enterprise Linux is installed. 15.1. Installing a client with Kickstart Follow this procedure to use a Kickstart file to install an Identity Management (IdM) client. Prerequisites Do not start the sshd service prior to the kickstart enrollment. Starting sshd before enrolling the client generates the SSH keys automatically, but the Kickstart file in Section 15.2, "Kickstart file for client installation" uses a script for the same purpose, which is the preferred solution. Procedure Pre-create the host entry on the IdM server, and set a temporary password for the entry: The password is used by Kickstart to authenticate during the client installation and expires after the first authentication attempt. After the client is successfully installed, it authenticates using its keytab. Create a Kickstart file with the contents described in Section 15.2, "Kickstart file for client installation" . Make sure that network is configured properly in the Kickstart file using the network command. Use the Kickstart file to install the IdM client. 15.2. Kickstart file for client installation You can use a Kickstart file to install an Identity Management (IdM) client. The contents of the Kickstart file must meet certain requirements as outlined here. The ipa-client package in the list of packages to install Add the ipa-client package to the %packages section of the Kickstart file. For example: Post-installation instructions for the IdM client The post-installation instructions must include: An instruction for ensuring SSH keys are generated before enrollment An instruction to run the ipa-client-install utility, while specifying: All the required information to access and configure the IdM domain services The password which you set when pre-creating the client host on the IdM server. in Section 15.1, "Installing a client with Kickstart" . For example, the post-installation instructions for a Kickstart installation that uses a one-time password and retrieves the required options from the command line rather than via DNS can look like this: Optionally, you can also include other options in the Kickstart file, such as: For a non-interactive installation, add the --unattended option to ipa-client-install . To let the client installation script request a certificate for the machine: Add the --request-cert option to ipa-client-install . Set the system bus address to /dev/null for both the getcert and ipa-client-install utility in the Kickstart chroot environment. To do this, add these lines to the post-installation instructions in the Kickstart file before the ipa-client-install instruction: 15.3. Testing an IdM client The command line informs you that the ipa-client-install was successful, but you can also do your own test. To test that the Identity Management (IdM) client can obtain information about users defined on the server, check that you are able to resolve a user defined on the server. For example, to check the default admin user: To test that authentication works correctly, su to a root user from a non-root user: | [
"ipa host-add client.example.com --password= secret",
"%packages ipa-client",
"%post --log=/root/ks-post.log Generate SSH keys; ipa-client-install uploads them to the IdM server by default /usr/libexec/openssh/sshd-keygen rsa Run the client install script /usr/sbin/ipa-client-install --hostname= client.example.com --domain= EXAMPLE.COM --enable-dns-updates --mkhomedir -w secret --realm= EXAMPLE.COM --server= server.example.com",
"env DBUS_SYSTEM_BUS_ADDRESS=unix:path=/dev/null getcert list env DBUS_SYSTEM_BUS_ADDRESS=unix:path=/dev/null ipa-client-install",
"[user@client ~]USD id admin uid=1254400000(admin) gid=1254400000(admins) groups=1254400000(admins)",
"[user@client ~]USD su - Last login: Thu Oct 18 18:39:11 CEST 2018 from 192.168.122.1 on pts/0"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/installing_identity_management/installing-an-ipa-client-with-kickstart_installing-identity-management |
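Before handing the Kickstart file to an installation, it can be worth linting it and confirming the pre-created host entry; ksvalidator comes from the pykickstart package and the ipa host-show step runs on the IdM server, neither of which this chapter mentions, so treat this as an optional, assumed extra check with a placeholder file path.

```bash
# Lint the Kickstart file for syntax errors before using it
sudo yum install -y pykickstart
ksvalidator /path/to/idm-client-ks.cfg

# On the IdM server, confirm the host entry created for the one-time password
kinit admin
ipa host-show client.example.com
```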