title | content | commands | url |
---|---|---|---|
Chapter 32. Externalize Sessions | Chapter 32. Externalize Sessions 32.1. Externalize HTTP Session from JBoss EAP 6.4 and later to JBoss Data Grid Red Hat JBoss Data Grid can be used as an external cache container for application-specific data in JBoss Enterprise Application Platform (EAP), such as HTTP sessions. This allows the data layer to scale independently of the application, and enables different EAP clusters, which may reside in various domains, to access data from the same JBoss Data Grid cluster. Additionally, other applications can interface with the caches presented by Red Hat JBoss Data Grid. Note The following procedures have been tested and confirmed to function on JBoss EAP 6.4 and JBoss Data Grid 6.5. Externalizing HTTP sessions should only be used on these, or later, versions of each product. The following procedure applies to both standalone and domain mode of EAP; however, in domain mode each server group requires a unique remote cache to be configured. While multiple server groups can utilize the same Red Hat JBoss Data Grid cluster, the respective remote caches will be unique to the EAP server group. Note For each distributable application, an entirely new cache must be created. It can be created in an existing cache container, for example, web. Procedure 32.1. Externalize HTTP Sessions Ensure the remote cache containers are defined in EAP's infinispan subsystem; in the example below, the cache attribute in the remote-store element defines the cache name on the remote JBoss Data Grid server: Define the location of the remote Red Hat JBoss Data Grid server by adding the networking information to the socket-binding-group : Repeat the above steps for each cache-container and each Red Hat JBoss Data Grid server. Each server defined must have a separate <outbound-socket-binding> element defined. Add passivation and cache information to the application's jboss-web.xml . In the following example, cacheContainer is the name of the cache container, and default-cache is the name of the default cache located in this container. An example file is shown below: Note The passivation timeouts above assume that a typical session is abandoned within 15 minutes and that the application uses the default JBoss EAP HTTP session timeout of 30 minutes. These values may need to be adjusted based on each application's workload. | [
"<subsystem xmlns=\"urn:jboss:domain:infinispan:1.5\"> [...] <cache-container name=\"cacheContainer\" default-cache=\"default-cache\" module=\"org.jboss.as.clustering.web.infinispan\" statistics-enabled=\"true\"> <transport lock-timeout=\"60000\"/> <replicated-cache name=\"default-cache\" mode=\"SYNC\" batching=\"true\"> <remote-store cache=\"default\" socket-timeout=\"60000\" preload=\"true\" passivation=\"false\" purge=\"false\" shared=\"true\"> <remote-server outbound-socket-binding=\"remote-jdg-server1\"/> <remote-server outbound-socket-binding=\"remote-jdg-server2\"/> </remote-store> </replicated-cache> </cache-container> </subsystem>",
"<socket-binding-group ...> <outbound-socket-binding name=\"remote-jdg-server1\"> <remote-destination host=\"JDGHostName1\" port=\"11222\"/> </outbound-socket-binding> <outbound-socket-binding name=\"remote-jdg-server2\"> <remote-destination host=\"JDGHostName2\" port=\"11222\"/> </outbound-socket-binding> </socket-binding-group>",
"<?xml version=\"1.0\" encoding=\"UTF-8\"?> <jboss-web version=\"6.0\" xmlns=\"http://www.jboss.com/xml/ns/javaee\" xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\" xsi:schemaLocation=\"http://www.jboss.com/xml/ns/javaee http://www.jboss.org/j2ee/schema/jboss-web_6_0.xsd\"> <replication-config> <replication-trigger>SET</replication-trigger> <replication-granularity>SESSION</replication-granularity> <cache-name>cacheContainer.default-cache</cache-name> </replication-config> <passivation-config> <use-session-passivation>true</use-session-passivation> <passivation-min-idle-time>900</passivation-min-idle-time> <passivation-max-idle-time>1800</passivation-max-idle-time> </passivation-config> </jboss-web>"
]
| https://docs.redhat.com/en/documentation/red_hat_data_grid/6.6/html/administration_and_configuration_guide/chap-externalize_sessions |
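The externalization procedure above relies on EAP reaching the JBoss Data Grid Hot Rod endpoints named in the outbound-socket-binding elements. As a quick connectivity sanity check that is not part of the documented procedure, you can probe those endpoints from each EAP host before deploying the application; the hostnames and port below are the placeholder values from the example configuration.

# verify that each JBoss Data Grid Hot Rod endpoint is reachable from the EAP host
nc -zv JDGHostName1 11222
nc -zv JDGHostName2 11222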
Chapter 17. Airgapped environment | Chapter 17. Airgapped environment An air-gapped environment ensures security by being physically isolated from other networks and systems. You can install director Operator in an air-gapped environment to ensure security and meet certain regulatory requirements. 17.1. Prerequisites An operational Red Hat OpenShift Container Platform (RHOCP) cluster, version 4.12, 4.14, or 4.16. The cluster must contain a provisioning network and the following Operators: A baremetal cluster Operator, which must be enabled. For more information on baremetal cluster Operators, see Bare-metal cluster Operators . OpenShift Virtualization Operator. For more information on installing the OpenShift Virtualization Operator, see Installing OpenShift Virtualization using the web console . SR-IOV Network Operator. You have a disconnected registry that adheres to the Docker v2 schema. For more information, see Mirroring images for a disconnected installation . You have access to a Satellite server or any other repository used to register the overcloud nodes and install packages. You have access to a local git repository to store deployment artifacts. The following command line tools are installed on your workstation: podman skopeo oc jq 17.2. Configuring an airgapped environment To configure an airgapped environment, you must have access to both registry.redhat.io and the registry for your air-gapped environment. For more information on how to access both registries, see Mirroring catalog contents to airgapped registries . Procedure Create the openstack namespace: Create the index image and push it to your registry: Note You can get the latest bundle image from: Certified container images . Search for osp-director-operator-bundle . Retrieve the digest of the index image you created in the previous step: Mirror the relevant images based on the operator index image: After mirroring is complete, a directory named manifests-osp-director-operator-index-<random_number> is generated in your current directory. Apply the created ImageContentSourcePolicy to your cluster: Replace <random_number> with the randomly generated number. Create a file named osp-director-operator.yaml and include the following YAML content to configure the three resources required to install director Operator: Create the new resources in the openstack namespace: Copy the required overcloud images to the repository: Note You can refer to Preparing a Satellite server for container images if Red Hat Satellite is used as the local registry. You can now proceed with Installing and preparing director Operator . Verification Confirm that you have successfully installed director Operator: Additional Resources Installing from OperatorHub using the CLI . Mirroring Operator catalogs for use with disconnected clusters . Mirroring catalog contents to airgapped registries . Preparing a Satellite server for container images . Obtaining container images from private registries | [
"oc new-project openstack",
"podman login registry.redhat.io podman login your.registry.local BUNDLE_IMG=\"registry.redhat.io/rhosp-rhel9/osp-director-operator-bundle@sha256:<bundle digest>\" INDEX_IMG=\"quay.io/<account>/osp-director-operator-index:x.y.z-a\" opm index add --bundles USD{BUNDLE_IMG} --tag USD{INDEX_IMG} -u podman --pull-tool podman",
"INDEX_DIGEST=\"USD(skopeo inspect docker://quay.io/<account>/osp-director-operator-index:x.y.z-a | jq '.Digest' -r)\"",
"oc adm catalog mirror quay.io/<account>/osp-director-operator-index@USD{INDEX_DIGEST} your.registry.local --insecure --index-filter-by-os='Linux/x86_64'",
"os apply -f manifests-osp-director-operator-index-<random_number>/imageContentSourcePolicy.yaml",
"apiVersion: operators.coreos.com/v1alpha1 kind: CatalogSource metadata: name: osp-director-operator-index namespace: openstack spec: sourceType: grpc image: your.registry.local/osp-director-operator-index:1.3.x-y --- apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: \"osp-director-operator-group\" namespace: openstack spec: targetNamespaces: - openstack --- apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: osp-director-operator-subscription namespace: openstack spec: config: env: - name: WATCH_NAMESPACE value: openstack,openshift-machine-api,openshift-sriov-network-operator source: osp-director-operator-index sourceNamespace: openstack name: osp-director-operator",
"oc apply -f osp-director-operator.yaml",
"for i in USD(podman search --limit 1000 \"registry.redhat.io/rhosp-rhel9/openstack\" --format=\"{{ .Name }}\" | awk '{print USD1 \":\" \"17.1.0\"}' | awk -F \"/\" '{print USD2 \"/\" USD3}'); do skopeo copy --all docker://registry.redhat.io/USDi docker://your.registry.local/USDi;done",
"oc get operators NAME AGE osp-director-operator.openstack 5m"
]
| https://docs.redhat.com/en/documentation/red_hat_openstack_platform/17.1/html/deploying_an_overcloud_in_a_red_hat_openshift_container_platform_cluster_with_director_operator/assembly_airgapped-environment_change-resources-on-vms-using-ospdo |
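For the air-gapped installation above, a few extra checks beyond oc get operators can help confirm that the mirrored catalog is being consumed. This is a minimal sketch that is not part of the documented procedure; it assumes the resource names used in the osp-director-operator.yaml example.

# confirm the mirrored catalog, the subscription, and the installed operator are healthy
oc get catalogsource osp-director-operator-index -n openstack
oc get subscription osp-director-operator-subscription -n openstack
oc get csv -n openstack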
Chapter 3. Manually scaling a machine set | Chapter 3. Manually scaling a machine set You can add or remove an instance of a machine in a machine set. Note If you need to modify aspects of a machine set outside of scaling, see Modifying a machine set . 3.1. Prerequisites If you enabled the cluster-wide proxy and scale up workers not included in networking.machineNetwork[].cidr from the installation configuration, you must add the workers to the Proxy object's noProxy field to prevent connection issues. Important You can use the advanced machine management and scaling capabilities only in clusters where the Machine API is operational. Clusters with user-provisioned infrastructure require additional validation and configuration to use the Machine API. Clusters with the infrastructure platform type none cannot use the Machine API. This limitation applies even if the compute machines that are attached to the cluster are installed on a platform that supports the feature. This parameter cannot be changed after installation. To view the platform type for your cluster, run the following command: USD oc get infrastructure cluster -o jsonpath='{.status.platform}' 3.2. Scaling a machine set manually To add or remove an instance of a machine in a machine set, you can manually scale the machine set. This guidance is relevant to fully automated, installer-provisioned infrastructure installations. Customized, user-provisioned infrastructure installations do not have machine sets. Prerequisites Install an OpenShift Container Platform cluster and the oc command line. Log in to oc as a user with cluster-admin permission. Procedure View the machine sets that are in the cluster: USD oc get machinesets -n openshift-machine-api The machine sets are listed in the form of <clusterid>-worker-<aws-region-az> . View the machines that are in the cluster: USD oc get machine -n openshift-machine-api Set the annotation on the machine that you want to delete: USD oc annotate machine/<machine_name> -n openshift-machine-api machine.openshift.io/cluster-api-delete-machine="true" Scale the compute machine set by running one of the following commands: USD oc scale --replicas=2 machineset <machineset> -n openshift-machine-api Or: USD oc edit machineset <machineset> -n openshift-machine-api Tip You can alternatively apply the following YAML to scale the machine set: apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: name: <machineset> namespace: openshift-machine-api spec: replicas: 2 You can scale the compute machine set up or down. It takes several minutes for the new machines to be available. Important By default, the machine controller tries to drain the node that is backed by the machine until it succeeds. In some situations, such as with a misconfigured pod disruption budget, the drain operation might not be able to succeed. If the drain operation fails, the machine controller cannot proceed removing the machine. You can skip draining the node by annotating machine.openshift.io/exclude-node-draining in a specific machine. Verification Verify the deletion of the intended machine: USD oc get machines 3.3. The machine set deletion policy Random , Newest , and Oldest are the three supported deletion options. The default is Random , meaning that random machines are chosen and deleted when scaling machine sets down. 
The deletion policy can be set according to the use case by modifying the particular machine set: spec: deletePolicy: <delete_policy> replicas: <desired_replica_count> Specific machines can also be prioritized for deletion by adding the annotation machine.openshift.io/cluster-api-delete-machine=true to the machine of interest, regardless of the deletion policy. Important By default, the OpenShift Container Platform router pods are deployed on workers. Because the router is required to access some cluster resources, including the web console, do not scale the worker machine set to 0 unless you first relocate the router pods. Note Custom machine sets can be used for use cases requiring that services run on specific nodes and that those services are ignored by the controller when the worker machine sets are scaling down. This prevents service disruption. | [
"oc get infrastructure cluster -o jsonpath='{.status.platform}'",
"oc get machinesets -n openshift-machine-api",
"oc get machine -n openshift-machine-api",
"oc annotate machine/<machine_name> -n openshift-machine-api machine.openshift.io/cluster-api-delete-machine=\"true\"",
"oc scale --replicas=2 machineset <machineset> -n openshift-machine-api",
"oc edit machineset <machineset> -n openshift-machine-api",
"apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: name: <machineset> namespace: openshift-machine-api spec: replicas: 2",
"oc get machines",
"spec: deletePolicy: <delete_policy> replicas: <desired_replica_count>"
]
| https://docs.redhat.com/en/documentation/openshift_container_platform/4.10/html/machine_management/manually-scaling-machineset |
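The deletion policy fragment shown in the chapter above can also be applied without opening an editor. The following sketch uses oc patch; <machineset> is a placeholder, and Oldest can be replaced with Random or Newest.

# set the deletion policy on an existing compute machine set
oc patch machineset <machineset> -n openshift-machine-api \
  --type merge -p '{"spec":{"deletePolicy":"Oldest"}}'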
Chapter 14. Deploy an AWS Lambda to disable a non-responding site | Chapter 14. Deploy an AWS Lambda to disable a non-responding site This chapter explains how to resolve split-brain scenarios between two sites in a multi-site deployment. The deployment described here also disables replication if one site fails, so that the other site can continue to serve requests. This deployment is intended to be used with the setup described in the Concepts for multi-site deployments chapter. Use this deployment with the other building blocks outlined in the Building blocks multi-site deployments chapter. Note We provide these blueprints to show a minimal functionally complete example with a good baseline performance for regular installations. You would still need to adapt it to your environment and your organization's standards and security best practices. 14.1. Architecture In the event of a network communication failure between sites in a multi-site deployment, it is no longer possible for the two sites to continue to replicate the data between them. The Data Grid is configured with a FAIL failure policy, which ensures consistency over availability. Consequently, all user requests are served with an error message until the failure is resolved, either by restoring the network connection or by disabling cross-site replication. In such scenarios, a quorum is commonly used to determine which sites are marked as online or offline. However, as multi-site deployments only consist of two sites, this is not possible. Instead, we leverage "fencing" to ensure that when one of the sites is unable to connect to the other site, only one site remains in the load balancer configuration, and hence only this site is able to serve subsequent user requests. In addition to the load balancer configuration, the fencing procedure disables replication between the two Data Grid clusters to allow serving user requests from the site that remains in the load balancer configuration. As a result, the sites will be out-of-sync once the replication has been disabled. To recover from the out-of-sync state, a manual re-sync is necessary as described in Synchronize Sites . This is why a site which is removed via fencing will not be re-added automatically when the network communication failure is resolved. The removed site should only be re-added once the two sites have been synchronized using the outlined procedure Bring site online . In this chapter we describe how to implement fencing using a combination of Prometheus Alerts and AWS Lambda functions. A Prometheus Alert is triggered when split-brain is detected by the Data Grid server metrics, which results in the Prometheus AlertManager calling the AWS Lambda-based webhook. The triggered Lambda function inspects the current Global Accelerator configuration and removes the site reported to be offline. In a true split-brain scenario, where both sites are still up but network communication is down, it is possible that both sites will trigger the webhook simultaneously. We guard against this by ensuring that only a single Lambda instance can be executed at a given time. The logic in the AWS Lambda ensures that one site entry always remains in the load balancer configuration. 14.2. Prerequisites ROSA HCP-based multi-site Keycloak deployment AWS CLI installed AWS Global Accelerator load balancer jq tool installed 14.3. 
Procedure Enable Openshift user alert routing Command: oc apply -f - << EOF apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | alertmanager: enabled: true enableAlertmanagerConfig: true EOF oc -n openshift-user-workload-monitoring rollout status --watch statefulset.apps/alertmanager-user-workload Decide upon a username/password combination which will be used to authenticate the Lambda webhook and create an AWS Secret storing the password Command: aws secretsmanager create-secret \ --name webhook-password \ 1 --secret-string changeme \ 2 --region eu-west-1 3 1 The name of the secret 2 The password to be used for authentication 3 The AWS region that hosts the secret Create the Role used to execute the Lambda. Command: FUNCTION_NAME= 1 ROLE_ARN=USD(aws iam create-role \ --role-name USD{FUNCTION_NAME} \ --assume-role-policy-document \ '{ "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Principal": { "Service": "lambda.amazonaws.com" }, "Action": "sts:AssumeRole" } ] }' \ --query 'Role.Arn' \ --region eu-west-1 \ 2 --output text ) 1 A name of your choice to associate with the Lambda and related resources 2 The AWS Region hosting your Kubernetes clusters Create and attach the 'LambdaSecretManager' Policy so that the Lambda can access AWS Secrets Command: POLICY_ARN=USD(aws iam create-policy \ --policy-name LambdaSecretManager \ --policy-document \ '{ "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Action": [ "secretsmanager:GetSecretValue" ], "Resource": "*" } ] }' \ --query 'Policy.Arn' \ --output text ) aws iam attach-role-policy \ --role-name USD{FUNCTION_NAME} \ --policy-arn USD{POLICY_ARN} Attach the ElasticLoadBalancingReadOnly policy so that the Lambda can query the provisioned Network Load Balancers Command: aws iam attach-role-policy \ --role-name USD{FUNCTION_NAME} \ --policy-arn arn:aws:iam::aws:policy/ElasticLoadBalancingReadOnly Attach the GlobalAcceleratorFullAccess policy so that the Lambda can update the Global Accelerator EndpointGroup Command: aws iam attach-role-policy \ --role-name USD{FUNCTION_NAME} \ --policy-arn arn:aws:iam::aws:policy/GlobalAcceleratorFullAccess Create a Lambda ZIP file containing the required fencing logic Command: LAMBDA_ZIP=/tmp/lambda.zip cat << EOF > /tmp/lambda.py from urllib.error import HTTPError import boto3 import jmespath import json import os import urllib3 from base64 import b64decode from urllib.parse import unquote # Prevent unverified HTTPS connection warning urllib3.disable_warnings(urllib3.exceptions.InsecureRequestWarning) class MissingEnvironmentVariable(Exception): pass class MissingSiteUrl(Exception): pass def env(name): if name in os.environ: return os.environ[name] raise MissingEnvironmentVariable(f"Environment Variable '{name}' must be set") def handle_site_offline(labels): a_client = boto3.client('globalaccelerator', region_name='us-west-2') acceleratorDNS = labels['accelerator'] accelerator = jmespath.search(f"Accelerators[?(DnsName=='{acceleratorDNS}'|| DualStackDnsName=='{acceleratorDNS}')]", a_client.list_accelerators()) if not accelerator: print(f"Ignoring SiteOffline alert as accelerator with DnsName '{acceleratorDNS}' not found") return accelerator_arn = accelerator[0]['AcceleratorArn'] listener_arn = a_client.list_listeners(AcceleratorArn=accelerator_arn)['Listeners'][0]['ListenerArn'] endpoint_group = a_client.list_endpoint_groups(ListenerArn=listener_arn)['EndpointGroups'][0] endpoints = 
endpoint_group['EndpointDescriptions'] # Only update accelerator endpoints if two entries exist if len(endpoints) > 1: # If the reporter endpoint is not healthy then do nothing for now # A Lambda will eventually be triggered by the other offline site for this reporter reporter = labels['reporter'] reporter_endpoint = [e for e in endpoints if endpoint_belongs_to_site(e, reporter)][0] if reporter_endpoint['HealthState'] == 'UNHEALTHY': print(f"Ignoring SiteOffline alert as reporter '{reporter}' endpoint is marked UNHEALTHY") return offline_site = labels['site'] endpoints = [e for e in endpoints if not endpoint_belongs_to_site(e, offline_site)] del reporter_endpoint['HealthState'] a_client.update_endpoint_group( EndpointGroupArn=endpoint_group['EndpointGroupArn'], EndpointConfigurations=endpoints ) print(f"Removed site={offline_site} from Accelerator EndpointGroup") take_infinispan_site_offline(reporter, offline_site) print(f"Backup site={offline_site} caches taken offline") else: print("Ignoring SiteOffline alert only one Endpoint defined in the EndpointGroup") def endpoint_belongs_to_site(endpoint, site): lb_arn = endpoint['EndpointId'] region = lb_arn.split(':')[3] client = boto3.client('elbv2', region_name=region) tags = client.describe_tags(ResourceArns=[lb_arn])['TagDescriptions'][0]['Tags'] for tag in tags: if tag['Key'] == 'site': return tag['Value'] == site return false def take_infinispan_site_offline(reporter, offlinesite): endpoints = json.loads(INFINISPAN_SITE_ENDPOINTS) if reporter not in endpoints: raise MissingSiteUrl(f"Missing URL for site '{reporter}' in 'INFINISPAN_SITE_ENDPOINTS' json") endpoint = endpoints[reporter] password = get_secret(INFINISPAN_USER_SECRET) url = f"https://{endpoint}/rest/v2/container/x-site/backups/{offlinesite}?action=take-offline" http = urllib3.PoolManager(cert_reqs='CERT_NONE') headers = urllib3.make_headers(basic_auth=f"{INFINISPAN_USER}:{password}") try: rsp = http.request("POST", url, headers=headers) if rsp.status >= 400: raise HTTPError(f"Unexpected response status '%d' when taking site offline", rsp.status) rsp.release_conn() except HTTPError as e: print(f"HTTP error encountered: {e}") def get_secret(secret_name): session = boto3.session.Session() client = session.client( service_name='secretsmanager', region_name=SECRETS_REGION ) return client.get_secret_value(SecretId=secret_name)['SecretString'] def decode_basic_auth_header(encoded_str): split = encoded_str.strip().split(' ') if len(split) == 2: if split[0].strip().lower() == 'basic': try: username, password = b64decode(split[1]).decode().split(':', 1) except: raise DecodeError else: raise DecodeError else: raise DecodeError return unquote(username), unquote(password) def handler(event, context): print(json.dumps(event)) authorization = event['headers'].get('authorization') if authorization is None: print("'Authorization' header missing from request") return { "statusCode": 401 } expectedPass = get_secret(WEBHOOK_USER_SECRET) username, password = decode_basic_auth_header(authorization) if username != WEBHOOK_USER and password != expectedPass: print('Invalid username/password combination') return { "statusCode": 403 } body = event.get('body') if body is None: raise Exception('Empty request body') body = json.loads(body) print(json.dumps(body)) if body['status'] != 'firing': print("Ignoring alert as status is not 'firing', status was: '%s'" % body['status']) return { "statusCode": 204 } for alert in body['alerts']: labels = alert['labels'] if labels['alertname'] == 'SiteOffline': 
handle_site_offline(labels) return { "statusCode": 204 } INFINISPAN_USER = env('INFINISPAN_USER') INFINISPAN_USER_SECRET = env('INFINISPAN_USER_SECRET') INFINISPAN_SITE_ENDPOINTS = env('INFINISPAN_SITE_ENDPOINTS') SECRETS_REGION = env('SECRETS_REGION') WEBHOOK_USER = env('WEBHOOK_USER') WEBHOOK_USER_SECRET = env('WEBHOOK_USER_SECRET') EOF zip -FS --junk-paths USD{LAMBDA_ZIP} /tmp/lambda.py Create the Lambda function. Command: aws lambda create-function \ --function-name USD{FUNCTION_NAME} \ --zip-file fileb://USD{LAMBDA_ZIP} \ --handler lambda.handler \ --runtime python3.12 \ --role USD{ROLE_ARN} \ --region eu-west-1 1 1 The AWS Region hosting your Kubernetes clusters Expose a Function URL so the Lambda can be triggered as webhook Command: aws lambda create-function-url-config \ --function-name USD{FUNCTION_NAME} \ --auth-type NONE \ --region eu-west-1 1 1 The AWS Region hosting your Kubernetes clusters Allow public invocations of the Function URL Command: aws lambda add-permission \ --action "lambda:InvokeFunctionUrl" \ --function-name USD{FUNCTION_NAME} \ --principal "*" \ --statement-id FunctionURLAllowPublicAccess \ --function-url-auth-type NONE \ --region eu-west-1 1 1 The AWS Region hosting your Kubernetes clusters Configure the Lambda's Environment variables: In each Kubernetes cluster, retrieve the exposed Data Grid URL endpoint: oc -n USD{NAMESPACE} get route infinispan-external -o jsonpath='{.status.ingress[].host}' 1 1 Replace USD{NAMESPACE} with the namespace containing your Data Grid server Upload the desired Environment variables ACCELERATOR_NAME= 1 LAMBDA_REGION= 2 CLUSTER_1_NAME= 3 CLUSTER_1_ISPN_ENDPOINT= 4 CLUSTER_2_NAME= 5 CLUSTER_2_ISPN_ENDPOINT= 6 INFINISPAN_USER= 7 INFINISPAN_USER_SECRET= 8 WEBHOOK_USER= 9 WEBHOOK_USER_SECRET= 10 INFINISPAN_SITE_ENDPOINTS=USD(echo "{\"USD{CLUSTER_NAME_1}\":\"USD{CLUSTER_1_ISPN_ENDPOINT}\",\"USD{CLUSTER_2_NAME}\":\"USD{CLUSTER_2_ISPN_ENDPOINT\"}" | jq tostring) aws lambda update-function-configuration \ --function-name USD{ACCELERATOR_NAME} \ --region USD{LAMBDA_REGION} \ --environment "{ \"Variables\": { \"INFINISPAN_USER\" : \"USD{INFINISPAN_USER}\", \"INFINISPAN_USER_SECRET\" : \"USD{INFINISPAN_USER_SECRET}\", \"INFINISPAN_SITE_ENDPOINTS\" : USD{INFINISPAN_SITE_ENDPOINTS}, \"WEBHOOK_USER\" : \"USD{WEBHOOK_USER}\", \"WEBHOOK_USER_SECRET\" : \"USD{WEBHOOK_USER_SECERT}\", \"SECRETS_REGION\" : \"eu-central-1\" } }" 1 The name of the AWS Global Accelerator used by your deployment 2 The AWS Region hosting your Kubernetes cluster and Lambda function 3 The name of one of your Data Grid sites as defined in Deploy Data Grid for HA with the Data Grid Operator 4 The Data Grid endpoint URL associated with the CLUSER_1_NAME site 5 The name of the second Data Grid site 6 The Data Grid endpoint URL associated with the CLUSER_2_NAME site 7 The username of a Data Grid user which has sufficient privileges to perform REST requests on the server 8 The name of the AWS secret containing the password associated with the Data Grid user 9 The username used to authenticate requests to the Lambda Function 10 The name of the AWS secret containing the password used to authenticate requests to the Lambda function Retrieve the Lambda Function URL Command: aws lambda get-function-url-config \ --function-name USD{FUNCTION_NAME} \ --query "FunctionUrl" \ --region eu-west-1 \ 1 --output text 1 The AWS region where the Lambda was created Output: https://tjqr2vgc664b6noj6vugprakoq0oausj.lambda-url.eu-west-1.on.aws In each Kubernetes cluster, configure a Prometheus Alert 
routing to trigger the Lambda on split-brain Command: NAMESPACE= # The namespace containing your deployments oc apply -n USD{NAMESPACE} -f - << EOF apiVersion: v1 kind: Secret type: kubernetes.io/basic-auth metadata: name: webhook-credentials stringData: username: 'keycloak' 1 password: 'changme' 2 --- apiVersion: monitoring.coreos.com/v1beta1 kind: AlertmanagerConfig metadata: name: example-routing spec: route: receiver: default groupBy: - accelerator groupInterval: 90s groupWait: 60s matchers: - matchType: = name: alertname value: SiteOffline receivers: - name: default webhookConfigs: - url: 'https://tjqr2vgc664b6noj6vugprakoq0oausj.lambda-url.eu-west-1.on.aws/' 3 httpConfig: basicAuth: username: key: username name: webhook-credentials password: key: password name: webhook-credentials tlsConfig: insecureSkipVerify: true --- apiVersion: monitoring.coreos.com/v1 kind: PrometheusRule metadata: name: xsite-status spec: groups: - name: xsite-status rules: - alert: SiteOffline expr: 'min by (namespace, site) (vendor_jgroups_site_view_status{namespace="default",site="site-b"}) == 0' 4 labels: severity: critical reporter: site-a 5 accelerator: a3da6a6cbd4e27b02.awsglobalaccelerator.com 6 1 The username required to authenticate Lambda requests 2 The password required to authenticate Lambda requests 3 The Lambda Function URL 4 The namespace value should be the namespace hosting the Infinispan CR and the site should be the remote site defined by spec.service.sites.locations[0].name in your Infinispan CR 5 The name of your local site defined by spec.service.sites.local.name in your Infinispan CR 6 The DNS of your Global Accelerator 14.4. Verify To test that the Prometheus alert triggers the webhook as expected, perform the following steps to simulate a split-brain: In each of your clusters execute the following: Command: oc -n openshift-operators scale --replicas=0 deployment/infinispan-operator-controller-manager 1 oc -n openshift-operators rollout status -w deployment/infinispan-operator-controller-manager oc -n USD{NAMESPACE} scale --replicas=0 deployment/infinispan-router 2 oc -n USD{NAMESPACE} rollout status -w deployment/infinispan-router 1 Scale down the Data Grid Operator so that the step does not result in the deployment being recreated by the operator 2 Scale down the Gossip Router deployment.Replace USD{NAMESPACE} with the namespace containing your Data Grid server Verify the SiteOffline event has been fired on a cluster by inspecting the Observe Alerting menu in the Openshift console Inspect the Global Accelerator EndpointGroup in the AWS console and there should only be a single endpoint present Scale up the Data Grid Operator and Gossip Router to re-establish a connection between sites: Command: oc -n openshift-operators scale --replicas=1 deployment/infinispan-operator-controller-manager oc -n openshift-operators rollout status -w deployment/infinispan-operator-controller-manager oc -n USD{NAMESPACE} scale --replicas=1 deployment/infinispan-router 1 oc -n USD{NAMESPACE} rollout status -w deployment/infinispan-router 1 Replace USD{NAMESPACE} with the namespace containing your Data Grid server Inspect the vendor_jgroups_site_view_status metric in each site. A value of 1 indicates that the site is reachable. Update the Accelerator EndpointGroup to contain both Endpoints. See the Bring site online chapter for details. 14.5. Further reading Bring site online Take site offline | [
"apply -f - << EOF apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | alertmanager: enabled: true enableAlertmanagerConfig: true EOF -n openshift-user-workload-monitoring rollout status --watch statefulset.apps/alertmanager-user-workload",
"aws secretsmanager create-secret --name webhook-password \\ 1 --secret-string changeme \\ 2 --region eu-west-1 3",
"FUNCTION_NAME= 1 ROLE_ARN=USD(aws iam create-role --role-name USD{FUNCTION_NAME} --assume-role-policy-document '{ \"Version\": \"2012-10-17\", \"Statement\": [ { \"Effect\": \"Allow\", \"Principal\": { \"Service\": \"lambda.amazonaws.com\" }, \"Action\": \"sts:AssumeRole\" } ] }' --query 'Role.Arn' --region eu-west-1 \\ 2 --output text )",
"POLICY_ARN=USD(aws iam create-policy --policy-name LambdaSecretManager --policy-document '{ \"Version\": \"2012-10-17\", \"Statement\": [ { \"Effect\": \"Allow\", \"Action\": [ \"secretsmanager:GetSecretValue\" ], \"Resource\": \"*\" } ] }' --query 'Policy.Arn' --output text ) aws iam attach-role-policy --role-name USD{FUNCTION_NAME} --policy-arn USD{POLICY_ARN}",
"aws iam attach-role-policy --role-name USD{FUNCTION_NAME} --policy-arn arn:aws:iam::aws:policy/ElasticLoadBalancingReadOnly",
"aws iam attach-role-policy --role-name USD{FUNCTION_NAME} --policy-arn arn:aws:iam::aws:policy/GlobalAcceleratorFullAccess",
"LAMBDA_ZIP=/tmp/lambda.zip cat << EOF > /tmp/lambda.py from urllib.error import HTTPError import boto3 import jmespath import json import os import urllib3 from base64 import b64decode from urllib.parse import unquote Prevent unverified HTTPS connection warning urllib3.disable_warnings(urllib3.exceptions.InsecureRequestWarning) class MissingEnvironmentVariable(Exception): pass class MissingSiteUrl(Exception): pass def env(name): if name in os.environ: return os.environ[name] raise MissingEnvironmentVariable(f\"Environment Variable '{name}' must be set\") def handle_site_offline(labels): a_client = boto3.client('globalaccelerator', region_name='us-west-2') acceleratorDNS = labels['accelerator'] accelerator = jmespath.search(f\"Accelerators[?(DnsName=='{acceleratorDNS}'|| DualStackDnsName=='{acceleratorDNS}')]\", a_client.list_accelerators()) if not accelerator: print(f\"Ignoring SiteOffline alert as accelerator with DnsName '{acceleratorDNS}' not found\") return accelerator_arn = accelerator[0]['AcceleratorArn'] listener_arn = a_client.list_listeners(AcceleratorArn=accelerator_arn)['Listeners'][0]['ListenerArn'] endpoint_group = a_client.list_endpoint_groups(ListenerArn=listener_arn)['EndpointGroups'][0] endpoints = endpoint_group['EndpointDescriptions'] # Only update accelerator endpoints if two entries exist if len(endpoints) > 1: # If the reporter endpoint is not healthy then do nothing for now # A Lambda will eventually be triggered by the other offline site for this reporter reporter = labels['reporter'] reporter_endpoint = [e for e in endpoints if endpoint_belongs_to_site(e, reporter)][0] if reporter_endpoint['HealthState'] == 'UNHEALTHY': print(f\"Ignoring SiteOffline alert as reporter '{reporter}' endpoint is marked UNHEALTHY\") return offline_site = labels['site'] endpoints = [e for e in endpoints if not endpoint_belongs_to_site(e, offline_site)] del reporter_endpoint['HealthState'] a_client.update_endpoint_group( EndpointGroupArn=endpoint_group['EndpointGroupArn'], EndpointConfigurations=endpoints ) print(f\"Removed site={offline_site} from Accelerator EndpointGroup\") take_infinispan_site_offline(reporter, offline_site) print(f\"Backup site={offline_site} caches taken offline\") else: print(\"Ignoring SiteOffline alert only one Endpoint defined in the EndpointGroup\") def endpoint_belongs_to_site(endpoint, site): lb_arn = endpoint['EndpointId'] region = lb_arn.split(':')[3] client = boto3.client('elbv2', region_name=region) tags = client.describe_tags(ResourceArns=[lb_arn])['TagDescriptions'][0]['Tags'] for tag in tags: if tag['Key'] == 'site': return tag['Value'] == site return false def take_infinispan_site_offline(reporter, offlinesite): endpoints = json.loads(INFINISPAN_SITE_ENDPOINTS) if reporter not in endpoints: raise MissingSiteUrl(f\"Missing URL for site '{reporter}' in 'INFINISPAN_SITE_ENDPOINTS' json\") endpoint = endpoints[reporter] password = get_secret(INFINISPAN_USER_SECRET) url = f\"https://{endpoint}/rest/v2/container/x-site/backups/{offlinesite}?action=take-offline\" http = urllib3.PoolManager(cert_reqs='CERT_NONE') headers = urllib3.make_headers(basic_auth=f\"{INFINISPAN_USER}:{password}\") try: rsp = http.request(\"POST\", url, headers=headers) if rsp.status >= 400: raise HTTPError(f\"Unexpected response status '%d' when taking site offline\", rsp.status) rsp.release_conn() except HTTPError as e: print(f\"HTTP error encountered: {e}\") def get_secret(secret_name): session = boto3.session.Session() client = session.client( service_name='secretsmanager', 
region_name=SECRETS_REGION ) return client.get_secret_value(SecretId=secret_name)['SecretString'] def decode_basic_auth_header(encoded_str): split = encoded_str.strip().split(' ') if len(split) == 2: if split[0].strip().lower() == 'basic': try: username, password = b64decode(split[1]).decode().split(':', 1) except: raise DecodeError else: raise DecodeError else: raise DecodeError return unquote(username), unquote(password) def handler(event, context): print(json.dumps(event)) authorization = event['headers'].get('authorization') if authorization is None: print(\"'Authorization' header missing from request\") return { \"statusCode\": 401 } expectedPass = get_secret(WEBHOOK_USER_SECRET) username, password = decode_basic_auth_header(authorization) if username != WEBHOOK_USER and password != expectedPass: print('Invalid username/password combination') return { \"statusCode\": 403 } body = event.get('body') if body is None: raise Exception('Empty request body') body = json.loads(body) print(json.dumps(body)) if body['status'] != 'firing': print(\"Ignoring alert as status is not 'firing', status was: '%s'\" % body['status']) return { \"statusCode\": 204 } for alert in body['alerts']: labels = alert['labels'] if labels['alertname'] == 'SiteOffline': handle_site_offline(labels) return { \"statusCode\": 204 } INFINISPAN_USER = env('INFINISPAN_USER') INFINISPAN_USER_SECRET = env('INFINISPAN_USER_SECRET') INFINISPAN_SITE_ENDPOINTS = env('INFINISPAN_SITE_ENDPOINTS') SECRETS_REGION = env('SECRETS_REGION') WEBHOOK_USER = env('WEBHOOK_USER') WEBHOOK_USER_SECRET = env('WEBHOOK_USER_SECRET') EOF zip -FS --junk-paths USD{LAMBDA_ZIP} /tmp/lambda.py",
"aws lambda create-function --function-name USD{FUNCTION_NAME} --zip-file fileb://USD{LAMBDA_ZIP} --handler lambda.handler --runtime python3.12 --role USD{ROLE_ARN} --region eu-west-1 1",
"aws lambda create-function-url-config --function-name USD{FUNCTION_NAME} --auth-type NONE --region eu-west-1 1",
"aws lambda add-permission --action \"lambda:InvokeFunctionUrl\" --function-name USD{FUNCTION_NAME} --principal \"*\" --statement-id FunctionURLAllowPublicAccess --function-url-auth-type NONE --region eu-west-1 1",
"-n USD{NAMESPACE} get route infinispan-external -o jsonpath='{.status.ingress[].host}' 1",
"ACCELERATOR_NAME= 1 LAMBDA_REGION= 2 CLUSTER_1_NAME= 3 CLUSTER_1_ISPN_ENDPOINT= 4 CLUSTER_2_NAME= 5 CLUSTER_2_ISPN_ENDPOINT= 6 INFINISPAN_USER= 7 INFINISPAN_USER_SECRET= 8 WEBHOOK_USER= 9 WEBHOOK_USER_SECRET= 10 INFINISPAN_SITE_ENDPOINTS=USD(echo \"{\\\"USD{CLUSTER_NAME_1}\\\":\\\"USD{CLUSTER_1_ISPN_ENDPOINT}\\\",\\\"USD{CLUSTER_2_NAME}\\\":\\\"USD{CLUSTER_2_ISPN_ENDPOINT\\\"}\" | jq tostring) aws lambda update-function-configuration --function-name USD{ACCELERATOR_NAME} --region USD{LAMBDA_REGION} --environment \"{ \\\"Variables\\\": { \\\"INFINISPAN_USER\\\" : \\\"USD{INFINISPAN_USER}\\\", \\\"INFINISPAN_USER_SECRET\\\" : \\\"USD{INFINISPAN_USER_SECRET}\\\", \\\"INFINISPAN_SITE_ENDPOINTS\\\" : USD{INFINISPAN_SITE_ENDPOINTS}, \\\"WEBHOOK_USER\\\" : \\\"USD{WEBHOOK_USER}\\\", \\\"WEBHOOK_USER_SECRET\\\" : \\\"USD{WEBHOOK_USER_SECERT}\\\", \\\"SECRETS_REGION\\\" : \\\"eu-central-1\\\" } }\"",
"aws lambda get-function-url-config --function-name USD{FUNCTION_NAME} --query \"FunctionUrl\" --region eu-west-1 \\ 1 --output text",
"https://tjqr2vgc664b6noj6vugprakoq0oausj.lambda-url.eu-west-1.on.aws",
"NAMESPACE= # The namespace containing your deployments apply -n USD{NAMESPACE} -f - << EOF apiVersion: v1 kind: Secret type: kubernetes.io/basic-auth metadata: name: webhook-credentials stringData: username: 'keycloak' 1 password: 'changme' 2 --- apiVersion: monitoring.coreos.com/v1beta1 kind: AlertmanagerConfig metadata: name: example-routing spec: route: receiver: default groupBy: - accelerator groupInterval: 90s groupWait: 60s matchers: - matchType: = name: alertname value: SiteOffline receivers: - name: default webhookConfigs: - url: 'https://tjqr2vgc664b6noj6vugprakoq0oausj.lambda-url.eu-west-1.on.aws/' 3 httpConfig: basicAuth: username: key: username name: webhook-credentials password: key: password name: webhook-credentials tlsConfig: insecureSkipVerify: true --- apiVersion: monitoring.coreos.com/v1 kind: PrometheusRule metadata: name: xsite-status spec: groups: - name: xsite-status rules: - alert: SiteOffline expr: 'min by (namespace, site) (vendor_jgroups_site_view_status{namespace=\"default\",site=\"site-b\"}) == 0' 4 labels: severity: critical reporter: site-a 5 accelerator: a3da6a6cbd4e27b02.awsglobalaccelerator.com 6",
"-n openshift-operators scale --replicas=0 deployment/infinispan-operator-controller-manager 1 -n openshift-operators rollout status -w deployment/infinispan-operator-controller-manager -n USD{NAMESPACE} scale --replicas=0 deployment/infinispan-router 2 -n USD{NAMESPACE} rollout status -w deployment/infinispan-router",
"-n openshift-operators scale --replicas=1 deployment/infinispan-operator-controller-manager -n openshift-operators rollout status -w deployment/infinispan-operator-controller-manager -n USD{NAMESPACE} scale --replicas=1 deployment/infinispan-router 1 -n USD{NAMESPACE} rollout status -w deployment/infinispan-router"
]
| https://docs.redhat.com/en/documentation/red_hat_build_of_keycloak/26.0/html/high_availability_guide/deploy-aws-accelerator-fencing-lambda- |
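To confirm that the Lambda webhook above is reachable and its authentication works before relying on it during a real failure, you can send it a harmless payload with curl. This is a sketch that is not part of the documented procedure: the URL is the example Function URL from the output above, and keycloak/changeme stand in for your WEBHOOK_USER value and the password stored in the AWS secret. An authenticated request whose status is not firing should return 204, and a request without credentials should return 401.

# authenticated, non-firing payload: expect HTTP 204
curl -s -o /dev/null -w "%{http_code}\n" \
  -u "keycloak:changeme" \
  -H "Content-Type: application/json" \
  -d '{"status":"resolved","alerts":[]}' \
  https://tjqr2vgc664b6noj6vugprakoq0oausj.lambda-url.eu-west-1.on.aws/

# no credentials: expect HTTP 401
curl -s -o /dev/null -w "%{http_code}\n" \
  -H "Content-Type: application/json" \
  -d '{"status":"resolved","alerts":[]}' \
  https://tjqr2vgc664b6noj6vugprakoq0oausj.lambda-url.eu-west-1.on.aws/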
12.3. Adding Host Entries | 12.3. Adding Host Entries 12.3.1. Adding Host Entries from the Web UI Open the Identity tab, and select the Hosts subtab. Click Add at the top of the hosts list. Figure 12.1. Adding Host Entries Fill in the machine name and select the domain from the configured zones in the drop-down list. If the host has already been assigned a static IP address, then include that with the host entry so that the DNS entry is fully created. Optionally, to add an extra value to the host for some use cases, use the Class field. The semantics of this attribute are left to local interpretation. Figure 12.2. Add Host Wizard DNS zones can be created in IdM, which is described in Section 33.4.1, "Adding and Removing Master DNS Zones" . If the IdM server does not manage the DNS server, the zone can be entered manually in the menu area, like a regular text field. Note Select the Force check box if you want to skip checking whether the host is resolvable via DNS. Click the Add and Edit button to go directly to the expanded entry page and fill in more attribute information. Information about the host hardware and physical location can be included with the host entry. Figure 12.3. Expanded Entry Page 12.3.2. Adding Host Entries from the Command Line Host entries are created using the host-add command. This command adds the host entry to the IdM Directory Server. The full list of host-add options is listed in the ipa host man page. At its most basic, an add operation only requires the client host name to add the client to the Kerberos realm and to create an entry in the IdM LDAP server: If the IdM server is configured to manage DNS, then the host can also be added to the DNS resource records using the --ip-address and --force options. Example 12.1. Creating Host Entries with Static IP Addresses Commonly, hosts may not have a static IP address or the IP address may not be known at the time the client is configured. For example, laptops may be preconfigured as Identity Management clients, but they do not have IP addresses at the time they are configured. Hosts which use DHCP can still be configured with a DNS entry by using --force . This essentially creates a placeholder entry in the IdM DNS service. When the DNS service dynamically updates its records, the host's current IP address is detected and its DNS record is updated. Example 12.2. Creating Host Entries with DHCP Host records are deleted using the host-del command. If the IdM domain uses DNS, then the --updatedns option also removes the associated records of any kind for the host from the DNS. | [
"ipa host-add client1.example.com",
"ipa host-add --force --ip-address=192.168.166.31 client1.example.com",
"ipa host-add --force client1.example.com",
"ipa host-del --updatedns client1.example.com"
]
| https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/linux_domain_identity_authentication_and_policy_guide/adding-host-entry |
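After adding a host entry with the commands above, it can be useful to confirm the result. This is a sketch that is not part of the documented procedure; it assumes the client1.example.com host and an IdM-managed example.com zone from the examples.

# display the host entry and its attributes
ipa host-show client1.example.com

# if IdM manages DNS, list the records in the zone to confirm the host's A record
ipa dnsrecord-find example.com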
Chapter 9. Using the Stream Control Transmission Protocol (SCTP) on a bare metal cluster | Chapter 9. Using the Stream Control Transmission Protocol (SCTP) on a bare metal cluster As a cluster administrator, you can use the Stream Control Transmission Protocol (SCTP) on a cluster. 9.1. Support for Stream Control Transmission Protocol (SCTP) on OpenShift Container Platform As a cluster administrator, you can enable SCTP on the hosts in the cluster. On Red Hat Enterprise Linux CoreOS (RHCOS), the SCTP module is disabled by default. SCTP is a reliable message based protocol that runs on top of an IP network. When enabled, you can use SCTP as a protocol with pods, services, and network policy. A Service object must be defined with the type parameter set to either the ClusterIP or NodePort value. 9.1.1. Example configurations using SCTP protocol You can configure a pod or service to use SCTP by setting the protocol parameter to the SCTP value in the pod or service object. In the following example, a pod is configured to use SCTP: apiVersion: v1 kind: Pod metadata: namespace: project1 name: example-pod spec: containers: - name: example-pod ... ports: - containerPort: 30100 name: sctpserver protocol: SCTP In the following example, a service is configured to use SCTP: apiVersion: v1 kind: Service metadata: namespace: project1 name: sctpserver spec: ... ports: - name: sctpserver protocol: SCTP port: 30100 targetPort: 30100 type: ClusterIP In the following example, a NetworkPolicy object is configured to apply to SCTP network traffic on port 80 from any pods with a specific label: kind: NetworkPolicy apiVersion: networking.k8s.io/v1 metadata: name: allow-sctp-on-http spec: podSelector: matchLabels: role: web ingress: - ports: - protocol: SCTP port: 80 9.2. Enabling Stream Control Transmission Protocol (SCTP) As a cluster administrator, you can load and enable the blacklisted SCTP kernel module on worker nodes in your cluster. Prerequisites Install the OpenShift CLI ( oc ). Access to the cluster as a user with the cluster-admin role. Procedure Create a file named load-sctp-module.yaml that contains the following YAML definition: apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: name: load-sctp-module labels: machineconfiguration.openshift.io/role: worker spec: config: ignition: version: 3.2.0 storage: files: - path: /etc/modprobe.d/sctp-blacklist.conf mode: 0644 overwrite: true contents: source: data:, - path: /etc/modules-load.d/sctp-load.conf mode: 0644 overwrite: true contents: source: data:,sctp To create the MachineConfig object, enter the following command: USD oc create -f load-sctp-module.yaml Optional: To watch the status of the nodes while the MachineConfig Operator applies the configuration change, enter the following command. When the status of a node transitions to Ready , the configuration update is applied. USD oc get nodes 9.3. Verifying Stream Control Transmission Protocol (SCTP) is enabled You can verify that SCTP is working on a cluster by creating a pod with an application that listens for SCTP traffic, associating it with a service, and then connecting to the exposed service. Prerequisites Access to the Internet from the cluster to install the nc package. Install the OpenShift CLI ( oc ). Access to the cluster as a user with the cluster-admin role. 
Procedure Create a pod that starts an SCTP listener: Create a file named sctp-server.yaml that defines a pod with the following YAML: apiVersion: v1 kind: Pod metadata: name: sctpserver labels: app: sctpserver spec: containers: - name: sctpserver image: registry.access.redhat.com/ubi8/ubi command: ["/bin/sh", "-c"] args: ["dnf install -y nc && sleep inf"] ports: - containerPort: 30102 name: sctpserver protocol: SCTP Create the pod by entering the following command: USD oc create -f sctp-server.yaml Create a service for the SCTP listener pod. Create a file named sctp-service.yaml that defines a service with the following YAML: apiVersion: v1 kind: Service metadata: name: sctpservice labels: app: sctpserver spec: type: NodePort selector: app: sctpserver ports: - name: sctpserver protocol: SCTP port: 30102 targetPort: 30102 To create the service, enter the following command: USD oc create -f sctp-service.yaml Create a pod for the SCTP client. Create a file named sctp-client.yaml with the following YAML: apiVersion: v1 kind: Pod metadata: name: sctpclient labels: app: sctpclient spec: containers: - name: sctpclient image: registry.access.redhat.com/ubi8/ubi command: ["/bin/sh", "-c"] args: ["dnf install -y nc && sleep inf"] To create the Pod object, enter the following command: USD oc apply -f sctp-client.yaml Run an SCTP listener on the server. To connect to the server pod, enter the following command: USD oc rsh sctpserver To start the SCTP listener, enter the following command: USD nc -l 30102 --sctp Connect to the SCTP listener on the server. Open a new terminal window or tab in your terminal program. Obtain the IP address of the sctpservice service. Enter the following command: USD oc get services sctpservice -o go-template='{{.spec.clusterIP}}{{"\n"}}' To connect to the client pod, enter the following command: USD oc rsh sctpclient To start the SCTP client, enter the following command. Replace <cluster_IP> with the cluster IP address of the sctpservice service. # nc <cluster_IP> 30102 --sctp | [
"apiVersion: v1 kind: Pod metadata: namespace: project1 name: example-pod spec: containers: - name: example-pod ports: - containerPort: 30100 name: sctpserver protocol: SCTP",
"apiVersion: v1 kind: Service metadata: namespace: project1 name: sctpserver spec: ports: - name: sctpserver protocol: SCTP port: 30100 targetPort: 30100 type: ClusterIP",
"kind: NetworkPolicy apiVersion: networking.k8s.io/v1 metadata: name: allow-sctp-on-http spec: podSelector: matchLabels: role: web ingress: - ports: - protocol: SCTP port: 80",
"apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: name: load-sctp-module labels: machineconfiguration.openshift.io/role: worker spec: config: ignition: version: 3.2.0 storage: files: - path: /etc/modprobe.d/sctp-blacklist.conf mode: 0644 overwrite: true contents: source: data:, - path: /etc/modules-load.d/sctp-load.conf mode: 0644 overwrite: true contents: source: data:,sctp",
"oc create -f load-sctp-module.yaml",
"oc get nodes",
"apiVersion: v1 kind: Pod metadata: name: sctpserver labels: app: sctpserver spec: containers: - name: sctpserver image: registry.access.redhat.com/ubi8/ubi command: [\"/bin/sh\", \"-c\"] args: [\"dnf install -y nc && sleep inf\"] ports: - containerPort: 30102 name: sctpserver protocol: SCTP",
"oc create -f sctp-server.yaml",
"apiVersion: v1 kind: Service metadata: name: sctpservice labels: app: sctpserver spec: type: NodePort selector: app: sctpserver ports: - name: sctpserver protocol: SCTP port: 30102 targetPort: 30102",
"oc create -f sctp-service.yaml",
"apiVersion: v1 kind: Pod metadata: name: sctpclient labels: app: sctpclient spec: containers: - name: sctpclient image: registry.access.redhat.com/ubi8/ubi command: [\"/bin/sh\", \"-c\"] args: [\"dnf install -y nc && sleep inf\"]",
"oc apply -f sctp-client.yaml",
"oc rsh sctpserver",
"nc -l 30102 --sctp",
"oc get services sctpservice -o go-template='{{.spec.clusterIP}}{{\"\\n\"}}'",
"oc rsh sctpclient",
"nc <cluster_IP> 30102 --sctp"
]
| https://docs.redhat.com/en/documentation/openshift_container_platform/4.7/html/networking/using-sctp |
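Beyond the nc-based end-to-end test above, you can also confirm directly that the MachineConfig loaded the sctp module on a worker. This is a sketch that is not part of the documented procedure; <node_name> is a placeholder for one of your worker nodes.

# check that the sctp kernel module is loaded on a worker node
oc debug node/<node_name> -- chroot /host sh -c 'lsmod | grep sctp'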
Installing | Installing Red Hat Enterprise Linux AI 1.2 Installation documentation on various platforms Red Hat RHEL AI Documentation Team | [
"use the embedded container image ostreecontainer --url=/run/install/repo/container --transport=oci --no-signature-verification switch bootc to point to Red Hat container image for upgrades %post bootc switch --mutate-in-place --transport registry registry.redhat.io/rhelai1/bootc-nvidia-rhel9:1.1 touch /etc/cloud/cloud-init.disabled %end ## user customizations follow customize this for your target system network environment network --bootproto=dhcp --device=link --activate customize this for your target system desired disk partitioning clearpart --all --initlabel --disklabel=gpt reqpart --add-boot part / --grow --fstype xfs services can also be customized via Kickstart firewall --disabled services --enabled=sshd optionally add a user user --name=cloud-user --groups=wheel --plaintext --password <password> sshkey --username cloud-user \"ssh-ed25519 AAAAC3Nza.....\" if desired, inject an SSH key for root rootpw --iscrypted locked sshkey --username root \"ssh-ed25519 AAAAC3Nza...\" reboot",
"mkksiso rhelai-bootc.ks <downloaded-iso-image> rhelai-bootc-ks.iso",
"customize this for your target system network environment network --bootproto=dhcp --device=link --activate customize this for your target system desired disk partitioning clearpart --all --initlabel --disklabel=gpt reqpart --add-boot part / --grow --fstype xfs customize this to include your own bootc container ostreecontainer --url quay.io/<your-user-name>/nvidia-bootc:latest services can also be customized via Kickstart firewall --disabled services --enabled=sshd optionally add a user user --name=cloud-user --groups=wheel --plaintext --password <password> sshkey --username cloud-user \"ssh-ed25519 AAAAC3Nza.....\" if desired, inject an SSH key for root rootpw --iscrypted locked sshkey --username root \"ssh-ed25519 AAAAC3Nza...\" reboot",
"mkksiso rhelai-bootc.ks <downloaded-iso-image> rhelai-bootc-ks.iso",
"ilab",
"ilab Usage: ilab [OPTIONS] COMMAND [ARGS] CLI for interacting with InstructLab. If this is your first time running ilab, it's best to start with `ilab config init` to create the environment. Options: --config PATH Path to a configuration file. [default: /home/auser/.config/instructlab/config.yaml] -v, --verbose Enable debug logging (repeat for even more verbosity) --version Show the version and exit. --help Show this message and exit. Commands: config Command Group for Interacting with the Config of InstructLab. data Command Group for Interacting with the Data generated by model Command Group for Interacting with the Models in InstructLab. system Command group for all system-related command calls taxonomy Command Group for Interacting with the Taxonomy of InstructLab. Aliases: chat model chat convert model convert diff taxonomy diff download model download evaluate model evaluate generate data generate init config init list model list serve model serve sysinfo system info test model test train model train",
"export BUCKET=<custom_bucket_name> export RAW_AMI=nvidia-bootc.ami export AMI_NAME=\"rhel-ai\" export DEFAULT_VOLUME_SIZE=1000",
"aws s3 mb s3://USDBUCKET",
"printf '{ \"Version\": \"2012-10-17\", \"Statement\": [ { \"Effect\": \"Allow\", \"Principal\": { \"Service\": \"vmie.amazonaws.com\" }, \"Action\": \"sts:AssumeRole\", \"Condition\": { \"StringEquals\":{ \"sts:Externalid\": \"vmimport\" } } } ] }' > trust-policy.json",
"aws iam create-role --role-name vmimport --assume-role-policy-document file://trust-policy.json",
"printf '{ \"Version\":\"2012-10-17\", \"Statement\":[ { \"Effect\":\"Allow\", \"Action\":[ \"s3:GetBucketLocation\", \"s3:GetObject\", \"s3:ListBucket\" ], \"Resource\":[ \"arn:aws:s3:::%s\", \"arn:aws:s3:::%s/*\" ] }, { \"Effect\":\"Allow\", \"Action\":[ \"ec2:ModifySnapshotAttribute\", \"ec2:CopySnapshot\", \"ec2:RegisterImage\", \"ec2:Describe*\" ], \"Resource\":\"*\" } ] }' USDBUCKET USDBUCKET > role-policy.json",
"aws iam put-role-policy --role-name vmimport --policy-name vmimport-USDBUCKET --policy-document file://role-policy.json",
"curl -Lo disk.raw <link-to-raw-file>",
"aws s3 cp disk.raw s3://USDBUCKET/USDRAW_AMI",
"printf '{ \"Description\": \"my-image\", \"Format\": \"raw\", \"UserBucket\": { \"S3Bucket\": \"%s\", \"S3Key\": \"%s\" } }' USDBUCKET USDRAW_AMI > containers.json",
"task_id=USD(aws ec2 import-snapshot --disk-container file://containers.json | jq -r .ImportTaskId)",
"aws ec2 describe-import-snapshot-tasks --filters Name=task-state,Values=active",
"snapshot_id=USD(aws ec2 describe-import-snapshot-tasks | jq -r '.ImportSnapshotTasks[] | select(.ImportTaskId==\"'USD{task_id}'\") | .SnapshotTaskDetail.SnapshotId')",
"aws ec2 create-tags --resources USDsnapshot_id --tags Key=Name,Value=\"USDAMI_NAME\"",
"ami_id=USD(aws ec2 register-image --name \"USDAMI_NAME\" --description \"USDAMI_NAME\" --architecture x86_64 --root-device-name /dev/sda1 --block-device-mappings \"DeviceName=/dev/sda1,Ebs={VolumeSize=USD{DEFAULT_VOLUME_SIZE},SnapshotId=USD{snapshot_id}}\" --virtualization-type hvm --ena-support | jq -r .ImageId)",
"aws ec2 create-tags --resources USDami_id --tags Key=Name,Value=\"USDAMI_NAME\"",
"aws ec2 describe-images --owners self",
"aws ec2 describe-security-groups",
"aws ec2 describe-subnets",
"instance_name=rhel-ai-instance ami=<ami-id> instance_type=<instance-type-size> key_name=<key-pair-name> security_group=<sg-id> disk_size=<size-of-disk>",
"aws ec2 run-instances --image-id USDami --instance-type USDinstance_type --key-name USDkey_name --security-group-ids USDsecurity_group --subnet-id USDsubnet --block-device-mappings DeviceName=/dev/sda1,Ebs='{VolumeSize='USDdisk_size'}' --tag-specifications 'ResourceType=instance,Tags=[{Key=Name,Value='USDinstance_name'}]'",
"ilab",
"ilab Usage: ilab [OPTIONS] COMMAND [ARGS] CLI for interacting with InstructLab. If this is your first time running ilab, it's best to start with `ilab config init` to create the environment. Options: --config PATH Path to a configuration file. [default: /home/cloud--user/.config/instructlab/config.yaml] -v, --verbose Enable debug logging (repeat for even more verbosity) --version Show the version and exit. --help Show this message and exit. Commands: config Command Group for Interacting with the Config of InstructLab. data Command Group for Interacting with the Data generated by model Command Group for Interacting with the Models in InstructLab. system Command group for all system-related command calls. taxonomy Command Group for Interacting with the Taxonomy of InstructLab. Aliases: chat model chat convert model convert diff taxonomy diff download model download evaluate model evaluate generate data generate init config init list model list serve model serve sysinfo system info test model test train model train",
"ibmcloud login",
"ibmcloud login API endpoint: https://cloud.ibm.com Region: us-east Get a one-time code from https://identity-1.eu-central.iam.cloud.ibm.com/identity/passcode to proceed. Open the URL in the default browser? [Y/n] > One-time code > Authenticating OK Select an account: 1. <account-name> 2. <account-name-2> API endpoint: https://cloud.ibm.com Region: us-east User: <user-name> Account: <selected-account> Resource group: No resource group targeted, use 'ibmcloud target -g RESOURCE_GROUP'",
"ibmcloud plugin install cloud-object-storage infrastructure-service",
"ibmcloud target -g Default",
"ibmcloud target -r us-east",
"ibmcloud catalog service cloud-object-storage --output json | jq -r '.[].children[] | select(.children != null) | .children[].name'",
"cos_deploy_plan=premium-global-deployment",
"cos_si_name=THE_NAME_OF_YOUR_SERVICE_INSTANCE",
"ibmcloud resource service-instance-create USD{cos_si_name} cloud-object-storage standard global -d USD{cos_deploy_plan}",
"cos_crn=USD(ibmcloud resource service-instance USD{cos_si_name} --output json| jq -r '.[] | select(.crn | contains(\"cloud-object-storage\")) | .crn')",
"ibmcloud cos config crn --crn USD{cos_crn} --force",
"bucket_name=NAME_OF_MY_BUCKET",
"ibmcloud cos bucket-create --bucket USD{bucket_name}",
"cos_si_guid=USD(ibmcloud resource service-instance USD{cos_si_name} --output json| jq -r '.[] | select(.crn | contains(\"cloud-object-storage\")) | .guid')",
"ibmcloud iam authorization-policy-create is cloud-object-storage Reader --source-resource-type image --target-service-instance-id USD{cos_si_guid}",
"curl -Lo disk.qcow2 \"PASTE_HERE_THE_LINK_OF_THE_QCOW2_FILE\"",
"image_name=rhel-ai-20240703v0",
"ibmcloud cos upload --bucket USD{bucket_name} --key USD{image_name}.qcow2 --file disk.qcow2 --region <region>",
"ibmcloud is image-create USD{image_name} --file cos://<region>/USD{bucket_name}/USD{image_name}.qcow2 --os-name red-ai-9-amd64-nvidia-byol",
"image_id=USD(ibmcloud is images --visibility private --output json | jq -r '.[] | select(.name==\"'USDimage_name'\") | .id')",
"while ibmcloud is image --output json USD{image_id} | jq -r .status | grep -xq pending; do sleep 1; done",
"ibmcloud is image USD{image_id}",
"ibmcloud login -c <ACCOUNT_ID> -r <REGION> -g <RESOURCE_GROUP>",
"ibmcloud plugin install infrastructure-service",
"ssh-keygen -f ibmcloud -t ed25519",
"ibmcloud is key-create my-ssh-key @ibmcloud.pub --key-type ed25519",
"ibmcloud is floating-ip-reserve my-public-ip --zone <region>",
"ibmcloud is instance-profiles",
"name=my-rhelai-instance vpc=my-vpc-in-us-east zone=us-east-1 subnet=my-subnet-in-us-east-1 instance_profile=gx3-64x320x4l4 image=my-custom-rhelai-image sshkey=my-ssh-key floating_ip=my-public-ip disk_size=250",
"ibmcloud is instance-create USDname USDvpc USDzone USDinstance_profile USDsubnet --image USDimage --keys USDsshkey --boot-volume '{\"name\": \"'USD{name}'-boot\", \"volume\": {\"name\": \"'USD{name}'-boot\", \"capacity\": 'USD{disk_size}', \"profile\": {\"name\": \"general-purpose\"}}}' --allow-ip-spoofing false",
"ibmcloud is floating-ip-update USDfloating_ip --nic primary --in USDname",
"ilab",
"ilab Usage: ilab [OPTIONS] COMMAND [ARGS] CLI for interacting with InstructLab. If this is your first time running ilab, it's best to start with `ilab config init` to create the environment. Options: --config PATH Path to a configuration file. [default: /home/auser/.config/instructlab/config.yaml] -v, --verbose Enable debug logging (repeat for even more verbosity) --version Show the version and exit. --help Show this message and exit. Commands: config Command Group for Interacting with the Config of InstructLab. data Command Group for Interacting with the Data generated by model Command Group for Interacting with the Models in InstructLab. system Command group for all system-related command calls taxonomy Command Group for Interacting with the Taxonomy of InstructLab. Aliases: chat model chat convert model convert diff taxonomy diff download model download evaluate model evaluate generate data generate init config init list model model_list serve model serve sysinfo system info test model test train model train",
"name=my-rhelai-instance",
"data_volume_size=1000",
"ibmcloud is instance-volume-attachment-add data USD{name} --new-volume-name USD{name}-data --profile general-purpose --capacity USD{data_volume_size}",
"lsblk",
"disk=/dev/vdb",
"sgdisk -n 1:0:0 USDdisk",
"mkfs.xfs -L ilab-data USD{disk}1",
"echo LABEL=ilab-data /mnt xfs defaults 0 0 >> /etc/fstab",
"systemctl daemon-reload",
"mount -a",
"chmod 1777 /mnt/",
"echo 'export ILAB_HOME=/mnt' >> USDHOME/.bash_profile",
"source USDHOME/.bash_profile",
"gcloud auth login",
"gcloud auth login Your browser has been opened to visit: https://accounts.google.com/o/oauth2/auth?XXXXXXXXXXXXXXXXXXXX You are now logged in as [[email protected]]. Your current project is [your-project]. You can change this setting by running: USD gcloud config set project PROJECT_ID",
"gcloud_project=your-gcloud-project gcloud config set project USDgcloud_project",
"gcloud_region=us-central1",
"gcloud_bucket=name-for-your-bucket gsutil mb -l USDgcloud_region gs://USDgcloud_bucket",
"FROM registry.redhat.io/rhelai1/bootc-nvidia-rhel9:1.2 RUN eval USD(grep VERSION_ID /etc/os-release) && echo -e \"[google-compute-engine]\\nname=Google Compute Engine\\nbaseurl=https://packages.cloud.google.com/yum/repos/google-compute-engine-elUSD{VERSION_ID/.*}-x86_64-stable\\nenabled=1\\ngpgcheck=1\\nrepo_gpgcheck=0\\ngpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg\\n https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg\" > /etc/yum.repos.d/google-cloud.repo && dnf install -y --nobest acpid cloud-init google-compute-engine google-osconfig-agent langpacks-en rng-tools timedatex tuned vim && curl -sSo /tmp/add-google-cloud-ops-agent-repo.sh https://dl.google.com/cloudagents/add-google-cloud-ops-agent-repo.sh && bash /tmp/add-google-cloud-ops-agent-repo.sh --also-install --remove-repo && rm /tmp/add-google-cloud-ops-agent-repo.sh && mkdir -p /var/lib/rpm-state && dnf remove -y irqbalance microcode_ctl && rmdir /var/lib/rpm-state && rm -f /etc/yum.repos.d/google-cloud.repo && sed -i -e '/^pool /c\\server metadata.google.internal iburst' /etc/chrony.conf && echo -e 'PermitRootLogin no\\nPasswordAuthentication no\\nClientAliveInterval 420' >> /etc/ssh/sshd_config && echo -e '[InstanceSetup]\\nset_boto_config = false' > /etc/default/instance_configs.cfg && echo 'blacklist floppy' > /etc/modprobe.d/blacklist_floppy.conf && echo -e '[install]\\nkargs = [\"net.ifnames=0\", \"biosdevname=0\", \"scsi_mod.use_blk_mq=Y\", \"console=ttyS0,38400n8d\", \"cloud-init=disabled\"]' > /usr/lib/bootc/install/05-cloud-kargs.toml",
"GCP_BOOTC_IMAGE=quay.io/yourquayusername/bootc-nvidia-rhel9-gcp podman build --file Containerfile --tag USD{GCP_BOOTC_IMAGE} .",
"[customizations.kernel] name = \"gcp\" append = \"net.ifnames=0 biosdevname=0 scsi_mod.use_blk_mq=Y console=ttyS0,38400n8d cloud-init=disabled\"",
"mkdir -p build/store build/output podman run --rm -ti --privileged --pull newer -v /var/lib/containers/storage:/var/lib/containers/storage -v ./build/store:/store -v ./build/output:/output -v ./config.toml:/config.toml quay.io/centos-bootc/bootc-image-builder --config /config.toml --chown 0:0 --local --type raw --target-arch x86_64 USD{GCP_BOOTC_IMAGE}",
"image_name=rhel-ai-1-2",
"raw_file=<path-to-raw-file> tar cf rhelai_gcp.tar.gz --transform \"s|USDraw_file|disk.raw|\" --use-compress-program=pigz \"USDraw_file\"",
"gsutil cp rhelai_gcp.tar.gz \"gs://USD{gcloud_bucket}/USDimage_name.tar.gz\"",
"gcloud compute images create \"USDimage_name\" --source-uri=\"gs://USD{gcloud_bucket}/USDimage_name.tar.gz\" --family \"rhel-ai\" --guest-os-features=GVNIC",
"gcloud auth login",
"gcloud compute machine-types list --zones=<zone>",
"name=my-rhelai-instance zone=us-central1-a machine_type=a3-highgpu-8g accelerator=\"type=nvidia-h100-80gb,count=8\" image=my-custom-rhelai-image disk_size=1024 subnet=default",
"gcloud config set compute/zone USDzone",
"gcloud compute instances create USD{name} --machine-type USD{machine_type} --image USDimage --zone USDzone --subnet USDsubnet --boot-disk-size USD{disk_size} --boot-disk-device-name USD{name} --accelerator=USDaccelerator",
"ilab",
"ilab Usage: ilab [OPTIONS] COMMAND [ARGS] CLI for interacting with InstructLab. If this is your first time running ilab, it's best to start with `ilab config init` to create the environment. Options: --config PATH Path to a configuration file. [default: /home/auser/.config/instructlab/config.yaml] -v, --verbose Enable debug logging (repeat for even more verbosity) --version Show the version and exit. --help Show this message and exit. Commands: config Command Group for Interacting with the Config of InstructLab. data Command Group for Interacting with the Data generated by model Command Group for Interacting with the Models in InstructLab. system Command group for all system-related command calls taxonomy Command Group for Interacting with the Taxonomy of InstructLab. Aliases: chat model chat convert model convert diff taxonomy diff download model download evaluate model evaluate generate data generate init config init list model list serve model serve sysinfo system info test model test train model train",
"az login",
"az login A web browser has been opened at https://login.microsoftonline.com/organizations/oauth2/v2.0/authorize. Please continue the login in the web browser. If no web browser is available or if the web browser fails to open, use device code flow with `az login --use-device-code`. [ { \"cloudName\": \"AzureCloud\", \"homeTenantId\": \"c7b976df-89ce-42ec-b3b2-a6b35fd9c0be\", \"id\": \"79d7df51-39ec-48b9-a15e-dcf59043c84e\", \"isDefault\": true, \"managedByTenants\": [], \"name\": \"Team Name\", \"state\": \"Enabled\", \"tenantId\": \"0a873aea-428f-47bd-9120-73ce0c5cc1da\", \"user\": { \"name\": \"[email protected]\", \"type\": \"user\" } } ]",
"keyctl new_session azcopy login",
"az_location=eastus",
"az_resource_group=Default az group create --name USD{az_resource_group} --location USD{az_location}",
"az_storage_account=THE_NAME_OF_YOUR_STORAGE_ACCOUNT",
"az storage account create --name USD{az_storage_account} --resource-group USD{az_resource_group} --location USD{az_location} --sku Standard_LRS",
"az_storage_container=NAME_OF_MY_BUCKET az storage container create --name USD{az_storage_container} --account-name USD{az_storage_account} --public-access off",
"az account list --output table",
"az_subscription_id=46c08fb3-83c5-4b59-8372-bf9caf15a681",
"az role assignment create --assignee [email protected] --role \"Storage Blob Data Contributor\" --scope /subscriptions/USD{az_subscription_id}/resourceGroups/USD{az_resource_group}/providers/Microsoft.Storage/storageAccounts/USD{az_storage_account}/blobServices/default/containers/USD{az_storage_container}",
"image_name=rhel-ai-1.2",
"az_vhd_url=\"https://USD{az_storage_account}.blob.core.windows.net/USD{az_storage_container}/USD(basename USD{vhd_file})\" azcopy copy \"USDvhd_file\" \"USDaz_vhd_url\"",
"az image create --resource-group USDaz_resource_group --name \"USDimage_name\" --source \"USD{az_vhd_url}\" --location USD{az_location} --os-type Linux --hyper-v-generation V2",
"az login",
"az vm list-sizes --location <region> --output table",
"name=my-rhelai-instance az_location=eastus az_resource_group=my_resource_group az_admin_username=azureuser az_vm_size=Standard_ND96isr_H100_v5 az_image=my-custom-rhelai-image sshpubkey=USDHOME/.ssh/id_rsa.pub disk_size=1024",
"az vm create --resource-group USDaz_resource_group --name USD{name} --image USD{az_image} --size USD{az_vm_size} --location USD{az_location} --admin-username USD{az_admin_username} --ssh-key-values @USDsshpubkey --authentication-type ssh --nic-delete-option Delete --accelerated-networking true --os-disk-size-gb 1024 --os-disk-name USD{name}-USD{az_location}",
"ilab",
"ilab Usage: ilab [OPTIONS] COMMAND [ARGS] CLI for interacting with InstructLab. If this is your first time running ilab, it's best to start with `ilab config init` to create the environment. Options: --config PATH Path to a configuration file. [default: /home/auser/.config/instructlab/config.yaml] -v, --verbose Enable debug logging (repeat for even more verbosity) --version Show the version and exit. --help Show this message and exit. Commands: config Command Group for Interacting with the Config of InstructLab. data Command Group for Interacting with the Data generated by model Command Group for Interacting with the Models in InstructLab. system Command group for all system-related command calls taxonomy Command Group for Interacting with the Taxonomy of InstructLab. Aliases: chat model chat convert model convert diff taxonomy diff download model download evaluate model evaluate generate data generate init config init list model list serve model serve sysinfo system info test model test train model train"
]
| https://docs.redhat.com/en/documentation/red_hat_enterprise_linux_ai/1.2/html-single/installing/installing_overview |
Chapter 4. Distribution of content | Chapter 4. Distribution of content RHEL 8 for SAP Solutions is installed using ISO images. For more information, see Installing RHEL 8 for SAP Solutions . For information on RHEL for SAP Solutions offerings on Certified Cloud Providers, see SAP Offerings on Certified Cloud Providers . Installation Steps for Red Hat Enterprise Linux for SAP Solutions After downloading, perform your installation of Red Hat Enterprise Linux . Register and attach your server to a repository source - either a local Red Hat Satellite instance or the Customer Portal Subscription Management service. Apply the release lock and activate the SAP repositories in the Red Hat subscription manager to get access to the additional packages provided by the Red Hat Enterprise Linux for SAP Solutions subscription. Execute the Red Hat Enterprise Linux system roles for SAP to automatically perform all required OS preconfiguration tasks to get started with the SAP workload installation afterwards. When your Red Hat Enterprise Linux for SAP Solutions system is ready, you can start your SAP installation , for example SAP HANA Express Edition. Get predictive IT analytics with connecting your system to Red Hat Insights. This is included with your subscription. If you need help installing your product, contact Red Hat Customer Service or Technical Support . SAP specific content is available on separate SAP repositories and ISOs and only for SAP-supported architectures (Intel x86_64, IBM Power LE). See How to subscribe SAP HANA systems to the Update Services for SAP Solutions . Additional resources Performing a standard RHEL installation Package manifest Considerations in adopting RHEL 8 | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux_for_sap_solutions/8/html/8.x_release_notes/distribution-of-content_8.x_release_notes |
4.2. Expiration Operations | 4.2. Expiration Operations Expiration in Red Hat JBoss Data Grid allows you to set a life span or maximum idle time value for each key/value pair stored in the cache. The life span or maximum idle time can be set to apply cache-wide or defined for each key/value pair using the cache API. The life span ( lifespan ) or maximum idle time ( maxIdle in Library Mode and max-idle in Remote Client-Server Mode) defined for an individual key/value pair overrides the cache-wide default for the entry in question. Report a bug | null | https://docs.redhat.com/en/documentation/red_hat_data_grid/6.6/html/administration_and_configuration_guide/expiration_operations |
8.11. busybox | 8.11. busybox 8.11.1. RHSA-2013:1732 - Low: busybox security and bug fix update Updated busybox packages that fix one security issue and several bugs are now available for Red Hat Enterprise Linux 6. The Red Hat Security Response Team has rated this update as having low security impact. A Common Vulnerability Scoring System (CVSS) base score, which gives a detailed severity rating, is available from the CVE link associated with the description below. BusyBox provides a single binary that includes versions of a large number of system commands, including a shell. This can be very useful for recovering from certain types of system failures, particularly those involving broken shared libraries. Security Fix CVE-2013-1813 It was found that the mdev BusyBox utility could create certain directories within /dev with world-writable permissions. A local unprivileged user could use this flaw to manipulate portions of the /dev directory tree. Bug Fixes BZ# 820097 Previously, due to a too eager string size optimization on the IBM System z architecture, the "wc" BusyBox command failed after processing standard input with the following error: wc: : No such file or directory This bug was fixed by disabling the string size optimization and the "wc" command works properly on IBM System z architectures. BZ# 859817 Prior to this update, the "mknod" command was unable to create device nodes with a major or minor number larger than 255. Consequently, the kdump utility failed to handle such a device. The underlying source code has been modified, and it is now possible to use the "mknod" command to create device nodes with a major or minor number larger than 255. BZ# 855832 If a network installation from an NFS server was selected, the "mount" command used the UDP protocol by default. If only TCP mounts were supported by the server, this led to a failure of the mount command. As a result, Anaconda could not continue with the installation. This bug is now fixed and NFS mount operations default to the TCP protocol. All busybox users are advised to upgrade to these updated packages, which contain backported patches to correct these issues. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.5_technical_notes/busybox |
8.3.8. Allowing Access: audit2allow | 8.3.8. Allowing Access: audit2allow Do not use the example in this section in production. It is used only to demonstrate the use of the audit2allow utility. From the audit2allow (1) manual page: " audit2allow - generate SELinux policy allow rules from logs of denied operations" [16] . After analyzing denials as per Section 8.3.7, "sealert Messages" , and if no label changes or Booleans allowed access, use audit2allow to create a local policy module. After access is denied by SELinux, running the audit2allow command presents Type Enforcement rules that allow the previously denied access. The following example demonstrates using audit2allow to create a policy module: A denial and the associated system call are logged to /var/log/audit/audit.log : In this example, certwatch ( comm="certwatch" ) was denied write access ( { write } ) to a directory labeled with the var_t type ( tcontext=system_u:object_r:var_t:s0 ). Analyze the denial as per Section 8.3.7, "sealert Messages" . If no label changes or Booleans allowed access, use audit2allow to create a local policy module. With a denial logged, such as the certwatch denial in step 1, run the audit2allow -w -a command to produce a human-readable description of why access was denied. The -a option causes all audit logs to be read. The -w option produces the human-readable description. The audit2allow utility accesses /var/log/audit/audit.log , and as such, must be run as the Linux root user: As shown, access was denied due to a missing Type Enforcement rule. Run the audit2allow -a command to view the Type Enforcement rule that allows the denied access: Important Missing Type Enforcement rules are usually caused by bugs in SELinux policy, and should be reported in Red Hat Bugzilla . For Red Hat Enterprise Linux, create bugs against the Red Hat Enterprise Linux product, and select the selinux-policy component. Include the output of the audit2allow -w -a and audit2allow -a commands in such bug reports. To use the rule displayed by audit2allow -a , run the audit2allow -a -M mycertwatch command as the Linux root user to create custom module. The -M option creates a Type Enforcement file ( .te ) with the name specified with -M , in your current working directory: Also, audit2allow compiles the Type Enforcement rule into a policy package ( .pp ). To install the module, run the semodule -i mycertwatch.pp command as the Linux root user. Important Modules created with audit2allow may allow more access than required. It is recommended that policy created with audit2allow be posted to an SELinux list, such as fedora-selinux-list , for review. If you believe their is a bug in policy, create a bug in Red Hat Bugzilla . If you have multiple denials from multiple processes, but only want to create a custom policy for a single process, use the grep command to narrow down the input for audit2allow . The following example demonstrates using grep to only send denials related to certwatch through audit2allow : Refer to Dan Walsh's "Using audit2allow to build policy modules. Revisited." blog entry for further information about using audit2allow to build policy modules. [16] From the audit2allow (1) manual page, which is available when the policycoreutils-sandbox package in Red Hat Enterprise Linux 6 is installed. | [
"type=AVC msg=audit(1226270358.848:238): avc: denied { write } for pid=13349 comm=\"certwatch\" name=\"cache\" dev=dm-0 ino=218171 scontext=system_u:system_r:certwatch_t:s0 tcontext=system_u:object_r:var_t:s0 tclass=dir type=SYSCALL msg=audit(1226270358.848:238): arch=40000003 syscall=39 success=no exit=-13 a0=39a2bf a1=3ff a2=3a0354 a3=94703c8 items=0 ppid=13344 pid=13349 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm=\"certwatch\" exe=\"/usr/bin/certwatch\" subj=system_u:system_r:certwatch_t:s0 key=(null)",
"~]# audit2allow -w -a type=AVC msg=audit(1226270358.848:238): avc: denied { write } for pid=13349 comm=\"certwatch\" name=\"cache\" dev=dm-0 ino=218171 scontext=system_u:system_r:certwatch_t:s0 tcontext=system_u:object_r:var_t:s0 tclass=dir Was caused by: Missing type enforcement (TE) allow rule. You can use audit2allow to generate a loadable module to allow this access.",
"~]# audit2allow -a #============= certwatch_t ============== allow certwatch_t var_t:dir write;",
"~]# audit2allow -a -M mycertwatch ******************** IMPORTANT *********************** To make this policy package active, execute: semodule -i mycertwatch.pp ~]# ls mycertwatch.pp mycertwatch.te",
"~]# grep certwatch /var/log/audit/audit.log | audit2allow -M mycertwatch2 ******************** IMPORTANT *********************** To make this policy package active, execute: ~]# semodule -i mycertwatch2.pp"
]
| https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/security-enhanced_linux/sect-security-enhanced_linux-fixing_problems-allowing_access_audit2allow |
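A possible follow-up to the audit2allow walkthrough above (the mycertwatch module name comes from that example; this check is not in the original text): confirm the custom module is loaded and remove it once the policy fix ships.
semodule -l | grep mycertwatch
semodule -r mycertwatch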
Chapter 2. cinder | Chapter 2. cinder The following chapter contains information about the configuration options in the cinder service. 2.1. cinder.conf This section contains options for the /etc/cinder/cinder.conf file. 2.1.1. DEFAULT The following table outlines the options available under the [DEFAULT] group in the /etc/cinder/cinder.conf file. . Configuration option = Default value Type Description acs5000_copy_interval = 5 integer value When volume copy task is going on,refresh volume status interval acs5000_volpool_name = ['pool01'] list value Comma separated list of storage system storage pools for volumes. allocated_capacity_weight_multiplier = -1.0 floating point value Multiplier used for weighing allocated capacity. Positive numbers mean to stack vs spread. allow_availability_zone_fallback = False boolean value If the requested Cinder availability zone is unavailable, fall back to the value of default_availability_zone, then storage_availability_zone, instead of failing. allow_compression_on_image_upload = False boolean value The strategy to use for image compression on upload. Default is disallow compression. allowed_direct_url_schemes = [] list value A list of url schemes that can be downloaded directly via the direct_url. Currently supported schemes: [file, cinder]. api_paste_config = api-paste.ini string value File name for the paste.deploy config for api service api_rate_limit = True boolean value Enables or disables rate limit of the API. as13000_ipsan_pools = ['Pool0'] list value The Storage Pools Cinder should use, a comma separated list. as13000_meta_pool = None string value The pool which is used as a meta pool when creating a volume, and it should be a replication pool at present. If not set, the driver will choose a replication pool from the value of as13000_ipsan_pools. as13000_token_available_time = 3300 integer value The effective time of token validity in seconds. auth_strategy = keystone string value The strategy to use for auth. Supports noauth or keystone. az_cache_duration = 3600 integer value Cache volume availability zones in memory for the provided duration in seconds backdoor_port = None string value Enable eventlet backdoor. Acceptable values are 0, <port>, and <start>:<end>, where 0 results in listening on a random tcp port number; <port> results in listening on the specified port number (and not enabling backdoor if that port is in use); and <start>:<end> results in listening on the smallest unused port number within the specified range of port numbers. The chosen port is displayed in the service's log file. backdoor_socket = None string value Enable eventlet backdoor, using the provided path as a unix socket that can receive connections. This option is mutually exclusive with backdoor_port in that only one should be provided. If both are provided then the existence of this option overrides the usage of that option. Inside the path {pid} will be replaced with the PID of the current process. backend_availability_zone = None string value Availability zone for this volume backend. If not set, the storage_availability_zone option value is used as the default for all backends. backend_stats_polling_interval = 60 integer value Time in seconds between requests for usage statistics from the backend. Be aware that generating usage statistics is expensive for some backends, so setting this value too low may adversely affect performance. 
backup_api_class = cinder.backup.api.API string value The full class name of the volume backup API class backup_ceph_chunk_size = 134217728 integer value The chunk size, in bytes, that a backup is broken into before transfer to the Ceph object store. backup_ceph_conf = /etc/ceph/ceph.conf string value Ceph configuration file to use. backup_ceph_image_journals = False boolean value If True, apply JOURNALING and EXCLUSIVE_LOCK feature bits to the backup RBD objects to allow mirroring backup_ceph_pool = backups string value The Ceph pool where volume backups are stored. backup_ceph_stripe_count = 0 integer value RBD stripe count to use when creating a backup image. backup_ceph_stripe_unit = 0 integer value RBD stripe unit to use when creating a backup image. backup_ceph_user = cinder string value The Ceph user to connect with. Default here is to use the same user as for Cinder volumes. If not using cephx this should be set to None. backup_compression_algorithm = zlib string value Compression algorithm ("none" to disable) backup_container = None string value Custom directory to use for backups. backup_driver = cinder.backup.drivers.swift.SwiftBackupDriver string value Driver to use for backups. backup_driver_init_check_interval = 60 integer value Time in seconds between checks to see if the backup driver has been successfully initialized, any time the driver is restarted. backup_driver_stats_polling_interval = 60 integer value Time in seconds between checks of the backup driver status. If does not report as working, it is restarted. backup_enable_progress_timer = True boolean value Enable or Disable the timer to send the periodic progress notifications to Ceilometer when backing up the volume to the backend storage. The default value is True to enable the timer. backup_file_size = 1999994880 integer value The maximum size in bytes of the files used to hold backups. If the volume being backed up exceeds this size, then it will be backed up into multiple files.backup_file_size must be a multiple of backup_sha_block_size_bytes. backup_manager = cinder.backup.manager.BackupManager string value Full class name for the Manager for volume backup backup_max_operations = 15 integer value Maximum number of concurrent memory heavy operations: backup and restore. Value of 0 means unlimited backup_metadata_version = 2 integer value Backup metadata version to be used when backing up volume metadata. If this number is bumped, make sure the service doing the restore supports the new version. backup_mount_attempts = 3 integer value The number of attempts to mount NFS shares before raising an error. backup_mount_options = None string value Mount options passed to the NFS client. See NFS man page for details. backup_mount_point_base = USDstate_path/backup_mount string value Base dir containing mount point for NFS share. backup_name_template = backup-%s string value Template string to be used to generate backup names backup_native_threads_pool_size = 60 integer value Size of the native threads pool for the backups. Most backup drivers rely heavily on this, it can be decreased for specific drivers that don't. backup_object_number_per_notification = 10 integer value The number of chunks or objects, for which one Ceilometer notification will be sent backup_posix_path = USDstate_path/backup string value Path specifying where to store backups. backup_s3_block_size = 32768 integer value The size in bytes that changes are tracked for incremental backups. backup_s3_object_size has to be multiple of backup_s3_block_size. 
backup_s3_ca_cert_file = None string value path/to/cert/bundle.pem - A filename of the CA cert bundle to use. backup_s3_enable_progress_timer = True boolean value Enable or Disable the timer to send the periodic progress notifications to Ceilometer when backing up the volume to the S3 backend storage. The default value is True to enable the timer. backup_s3_endpoint_url = None string value The url where the S3 server is listening. `backup_s3_http_proxy = ` string value Address or host for the http proxy server. `backup_s3_https_proxy = ` string value Address or host for the https proxy server. backup_s3_max_pool_connections = 10 integer value The maximum number of connections to keep in a connection pool. backup_s3_md5_validation = True boolean value Enable or Disable md5 validation in the s3 backend. backup_s3_object_size = 52428800 integer value The size in bytes of S3 backup objects backup_s3_retry_max_attempts = 4 integer value An integer representing the maximum number of retry attempts that will be made on a single request. backup_s3_retry_mode = legacy string value A string representing the type of retry mode. e.g: legacy, standard, adaptive backup_s3_sse_customer_algorithm = None string value The SSECustomerAlgorithm. backup_s3_sse_customer_key must be set at the same time to enable SSE. backup_s3_sse_customer_key = None string value The SSECustomerKey. backup_s3_sse_customer_algorithm must be set at the same time to enable SSE. backup_s3_store_access_key = None string value The S3 query token access key. backup_s3_store_bucket = volumebackups string value The S3 bucket to be used to store the Cinder backup data. backup_s3_store_secret_key = None string value The S3 query token secret key. backup_s3_timeout = 60 floating point value The time in seconds till a timeout exception is thrown. backup_s3_verify_ssl = True boolean value Enable or Disable ssl verify. backup_service_inithost_offload = True boolean value Offload pending backup delete during backup service startup. If false, the backup service will remain down until all pending backups are deleted. backup_sha_block_size_bytes = 32768 integer value The size in bytes that changes are tracked for incremental backups. backup_file_size has to be multiple of backup_sha_block_size_bytes. backup_share = None string value NFS share in hostname:path, ipv4addr:path, or "[ipv6addr]:path" format. backup_swift_auth = per_user string value Swift authentication mechanism (per_user or single_user). backup_swift_auth_insecure = False boolean value Bypass verification of server certificate when making SSL connection to Swift. backup_swift_auth_url = None uri value The URL of the Keystone endpoint backup_swift_auth_version = 1 string value Swift authentication version. Specify "1" for auth 1.0, or "2" for auth 2.0 or "3" for auth 3.0 backup_swift_block_size = 32768 integer value The size in bytes that changes are tracked for incremental backups. backup_swift_object_size has to be multiple of backup_swift_block_size. backup_swift_ca_cert_file = None string value Location of the CA certificate file to use for swift client requests. backup_swift_container = volumebackups string value The default Swift container to use backup_swift_enable_progress_timer = True boolean value Enable or Disable the timer to send the periodic progress notifications to Ceilometer when backing up the volume to the Swift backend storage. The default value is True to enable the timer. 
backup_swift_key = None string value Swift key for authentication backup_swift_object_size = 52428800 integer value The size in bytes of Swift backup objects backup_swift_project = None string value Swift project/account name. Required when connecting to an auth 3.0 system backup_swift_project_domain = None string value Swift project domain name. Required when connecting to an auth 3.0 system backup_swift_retry_attempts = 3 integer value The number of retries to make for Swift operations backup_swift_retry_backoff = 2 integer value The backoff time in seconds between Swift retries backup_swift_tenant = None string value Swift tenant/account name. Required when connecting to an auth 2.0 system backup_swift_url = None uri value The URL of the Swift endpoint backup_swift_user = None string value Swift user name backup_swift_user_domain = None string value Swift user domain name. Required when connecting to an auth 3.0 system backup_timer_interval = 120 integer value Interval, in seconds, between two progress notifications reporting the backup status backup_use_same_host = False boolean value Backup services use same backend. backup_use_temp_snapshot = False boolean value If this is set to True, a temporary snapshot will be created for performing non-disruptive backups. Otherwise a temporary volume will be cloned in order to perform a backup. backup_workers = 1 integer value Number of backup processes to launch. Improves performance with concurrent backups. capacity_weight_multiplier = 1.0 floating point value Multiplier used for weighing free capacity. Negative numbers mean to stack vs spread. `chap_password = ` string value Password for specified CHAP account name. chap_password_len = 12 integer value Length of the random string for CHAP password. `chap_username = ` string value CHAP user name. chiscsi_conf = /etc/chelsio-iscsi/chiscsi.conf string value Chiscsi (CXT) global defaults configuration file cinder_internal_tenant_project_id = None string value ID of the project which will be used as the Cinder internal tenant. cinder_internal_tenant_user_id = None string value ID of the user to be used in volume operations as the Cinder internal tenant. client_socket_timeout = 900 integer value Timeout for client connections' socket operations. If an incoming connection is idle for this number of seconds it will be closed. A value of 0 means wait forever. clone_volume_timeout = 680 integer value Create clone volume timeout Deprecated since: 14.0.0 *Reason:*FusionStorage cinder driver refactored the code with Restful method and the old CLI mode has been abandon. So those configuration items are no longer used. cloned_volume_same_az = True boolean value Ensure that the new volumes are the same AZ as snapshot or source volume cluster = None string value Name of this cluster. Used to group volume hosts that share the same backend configurations to work in HA Active-Active mode. compression_format = gzip string value Image compression format on image upload compute_api_class = cinder.compute.nova.API string value The full class name of the compute API class to use config-dir = ['~/.project/project.conf.d/', '~/project.conf.d/', '/etc/project/project.conf.d/', '/etc/project.conf.d/'] list value Path to a config directory to pull *.conf files from. This file set is sorted, so as to provide a predictable parse order if individual options are over-ridden. The set is parsed after the file(s) specified via --config-file, arguments hence over-ridden options in the directory take precedence. 
This option must be set from the command-line. config-file = ['~/.project/project.conf', '~/project.conf', '/etc/project/project.conf', '/etc/project.conf'] unknown value Path to a config file to use. Multiple config files can be specified, with values in later files taking precedence. Defaults to %(default)s. This option must be set from the command-line. config_source = [] list value Lists configuration groups that provide more details for accessing configuration settings from locations other than local files. conn_pool_min_size = 2 integer value The pool size limit for connections expiration policy conn_pool_ttl = 1200 integer value The time-to-live in sec of idle connections in the pool consistencygroup_api_class = cinder.consistencygroup.api.API string value The full class name of the consistencygroup API class control_exchange = openstack string value The default exchange under which topics are scoped. May be overridden by an exchange name specified in the transport_url option. datera_503_interval = 5 integer value Interval between 503 retries datera_503_timeout = 120 integer value Timeout for HTTP 503 retry messages datera_api_port = 7717 string value Datera API port. datera_api_version = 2.2 string value Datera API version. datera_debug = False boolean value True to set function arg and return logging datera_debug_replica_count_override = False boolean value ONLY FOR DEBUG/TESTING PURPOSES True to set replica_count to 1 datera_disable_extended_metadata = False boolean value Set to True to disable sending additional metadata to the Datera backend datera_disable_profiler = False boolean value Set to True to disable profiling in the Datera driver datera_disable_template_override = False boolean value Set to True to disable automatic template override of the size attribute when creating from a template datera_enable_image_cache = False boolean value Set to True to enable Datera backend image caching datera_image_cache_volume_type_id = None string value Cinder volume type id to use for cached volumes datera_ldap_server = None string value LDAP authentication server datera_tenant_id = None string value If set to Map --> OpenStack project ID will be mapped implicitly to Datera tenant ID If set to None --> Datera tenant ID will not be used during volume provisioning If set to anything else --> Datera tenant ID will be the provided value datera_volume_type_defaults = {} dict value Settings here will be used as volume-type defaults if the volume-type setting is not provided. This can be used, for example, to set a very low total_iops_max value if none is specified in the volume-type to prevent accidental overusage. Options are specified via the following format, WITHOUT ANY DF: PREFIX: datera_volume_type_defaults=iops_per_gb:100,bandwidth_per_gb:200... etc . db_driver = cinder.db string value Driver to use for database access debug = False boolean value If set to true, the logging level will be set to DEBUG instead of the default INFO level. default_availability_zone = None string value Default availability zone for new volumes. If not set, the storage_availability_zone option value is used as the default for new volumes. 
default_group_type = None string value Default group type to use default_log_levels = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO'] list value List of package logging levels in logger=LEVEL pairs. This option is ignored if log_config_append is set. default_sandstone_target_ips = [] list value SandStone default target ip. default_volume_type = __DEFAULT__ string value Default volume type to use driver_client_cert = None string value The path to the client certificate for verification, if the driver supports it. driver_client_cert_key = None string value The path to the client certificate key for verification, if the driver supports it. driver_data_namespace = None string value Namespace for driver private data values to be saved in. driver_ssl_cert_path = None string value Can be used to specify a non default path to a CA_BUNDLE file or directory with certificates of trusted CAs, which will be used to validate the backend driver_ssl_cert_verify = False boolean value If set to True the http client will validate the SSL certificate of the backend endpoint. driver_use_ssl = False boolean value Tell driver to use SSL for connection to backend storage if the driver supports it. dsware_isthin = False boolean value The flag of thin storage allocation. Deprecated since: 14.0.0 *Reason:*FusionStorage cinder driver refactored the code with Restful method and the old CLI mode has been abandon. So those configuration items are no longer used. `dsware_manager = ` string value Fusionstorage manager ip addr for cinder-volume. Deprecated since: 14.0.0 *Reason:*FusionStorage cinder driver refactored the code with Restful method and the old CLI mode has been abandon. So those configuration items are no longer used. `dsware_rest_url = ` string value The address of FusionStorage array. For example, "dsware_rest_url=xxx" `dsware_storage_pools = ` string value The list of pools on the FusionStorage array, the semicolon(;) was used to split the storage pools, "dsware_storage_pools = xxx1; xxx2; xxx3" enable_force_upload = False boolean value Enables the Force option on upload_to_image. This enables running upload_volume on in-use volumes for backends that support it. enable_new_services = True boolean value Services to be added to the available pool on create enable_unsupported_driver = False boolean value Set this to True when you want to allow an unsupported driver to start. Drivers that haven't maintained a working CI system and testing are marked as unsupported until CI is working again. This also marks a driver as deprecated and may be removed in the release. enable_v2_api = True boolean value DEPRECATED: Deploy v2 of the Cinder API. enable_v3_api = True boolean value Deploy v3 of the Cinder API. enabled_backends = None list value A list of backend names to use. These backend names should be backed by a unique [CONFIG] group with its options enforce_multipath_for_image_xfer = False boolean value If this is set to True, attachment of volumes for image transfer will be aborted when multipathd is not running. Otherwise, it will fallback to single path. 
This parameter needs to be configured for each backend section or in [backend_defaults] section as a common configuration for all backends. executor_thread_pool_size = 64 integer value Size of executor thread pool when executor is threading or eventlet. fatal_deprecations = False boolean value Enables or disables fatal status of deprecations. filter_function = None string value String representation for an equation that will be used to filter hosts. Only used when the driver filter is set to be used by the Cinder scheduler. `fusionstorageagent = ` string value Fusionstorage agent ip addr range Deprecated since: 14.0.0 *Reason:*FusionStorage cinder driver refactored the code with Restful method and the old CLI mode has been abandon. So those configuration items are no longer used. glance_api_insecure = False boolean value Allow to perform insecure SSL (https) requests to glance (https will be used but cert validation will not be performed). glance_api_servers = None list value A list of the URLs of glance API servers available to cinder ([http[s]://][hostname|ip]:port). If protocol is not specified it defaults to http. glance_api_ssl_compression = False boolean value Enables or disables negotiation of SSL layer compression. In some cases disabling compression can improve data throughput, such as when high network bandwidth is available and you use compressed image formats like qcow2. glance_ca_certificates_file = None string value Location of ca certificates file to use for glance client requests. glance_catalog_info = image:glance:publicURL string value Info to match when looking for glance in the service catalog. Format is: separated values of the form: <service_type>:<service_name>:<endpoint_type> - Only used if glance_api_servers are not provided. glance_certfile = None string value Location of certificate file to use for glance client requests. glance_core_properties = ['checksum', 'container_format', 'disk_format', 'image_name', 'image_id', 'min_disk', 'min_ram', 'name', 'size'] list value Default core properties of image glance_keyfile = None string value Location of certificate key file to use for glance client requests. glance_num_retries = 3 integer value Number retries when downloading an image from glance glance_request_timeout = None integer value http/https timeout value for glance operations. If no value (None) is supplied here, the glanceclient default value is used. glusterfs_backup_mount_point = USDstate_path/backup_mount string value Base dir containing mount point for gluster share. glusterfs_backup_share = None string value GlusterFS share in <hostname|ipv4addr|ipv6addr>:<gluster_vol_name> format. Eg: 1.2.3.4:backup_vol goodness_function = None string value String representation for an equation that will be used to determine the goodness of a host. Only used when using the goodness weigher is set to be used by the Cinder scheduler. graceful_shutdown_timeout = 60 integer value Specify a timeout after which a gracefully shutdown server will exit. Zero value means endless wait. 
group_api_class = cinder.group.api.API string value The full class name of the group API class hitachi_mirror_auth_password = None string value iSCSI authentication password hitachi_mirror_auth_user = None string value iSCSI authentication username hitachi_mirror_compute_target_ports = [] list value Target port names of compute node for host group or iSCSI target hitachi_mirror_ldev_range = None string value Logical device range of secondary storage system hitachi_mirror_pair_target_number = 0 integer value Pair target name of the host group or iSCSI target hitachi_mirror_pool = None string value Pool of secondary storage system hitachi_mirror_rest_api_ip = None string value IP address of REST API server hitachi_mirror_rest_api_port = 443 port value Port number of REST API server hitachi_mirror_rest_pair_target_ports = [] list value Target port names for pair of the host group or iSCSI target hitachi_mirror_rest_password = None string value Password of secondary storage system for REST API hitachi_mirror_rest_user = None string value Username of secondary storage system for REST API hitachi_mirror_snap_pool = None string value Thin pool of secondary storage system hitachi_mirror_ssl_cert_path = None string value Can be used to specify a non default path to a CA_BUNDLE file or directory with certificates of trusted CAs, which will be used to validate the backend hitachi_mirror_ssl_cert_verify = False boolean value If set to True the http client will validate the SSL certificate of the backend endpoint. hitachi_mirror_storage_id = None string value ID of secondary storage system hitachi_mirror_target_ports = [] list value Target port names for host group or iSCSI target hitachi_mirror_use_chap_auth = False boolean value Whether or not to use iSCSI authentication hitachi_path_group_id = 0 integer value Path group ID assigned to the remote connection for remote replication hitachi_quorum_disk_id = None integer value ID of the Quorum disk used for global-active device hitachi_replication_copy_speed = 3 integer value Remote copy speed of storage system. 1 or 2 indicates low speed, 3 indicates middle speed, and a value between 4 and 15 indicates high speed. hitachi_replication_number = 0 integer value Instance number for REST API hitachi_replication_status_check_long_interval = 600 integer value Interval at which remote replication pair status is checked. This parameter is applied if the status has not changed to the expected status after the time indicated by this parameter has elapsed. hitachi_replication_status_check_short_interval = 5 integer value Initial interval at which remote replication pair status is checked hitachi_replication_status_check_timeout = 86400 integer value Maximum wait time before the remote replication pair status changes to the expected status hitachi_set_mirror_reserve_attribute = True boolean value Whether or not to set the mirror reserve attribute host = <based on operating system> string value Name of this node. This can be an opaque identifier. It is not necessarily a host name, FQDN, or IP address. 
iet_conf = /etc/iet/ietd.conf string value DEPRECATED: IET configuration file image_compress_on_upload = True boolean value When possible, compress images uploaded to the image service image_conversion_address_space_limit = 1 integer value Address space limit in gigabytes to convert the image image_conversion_cpu_limit = 60 integer value CPU time limit in seconds to convert the image image_conversion_dir = USDstate_path/conversion string value Directory used for temporary storage during image conversion image_upload_use_cinder_backend = False boolean value If set to True, upload-to-image in raw format will create a cloned volume and register its location to the image service, instead of uploading the volume content. The cinder backend and locations support must be enabled in the image service. image_upload_use_internal_tenant = False boolean value If set to True, the image volume created by upload-to-image will be placed in the internal tenant. Otherwise, the image volume is created in the current context's tenant. image_volume_cache_enabled = False boolean value Enable the image volume cache for this backend. image_volume_cache_max_count = 0 integer value Max number of entries allowed in the image volume cache. 0 ⇒ unlimited. image_volume_cache_max_size_gb = 0 integer value Max size of the image volume cache for this backend in GB. 0 ⇒ unlimited. infortrend_cli_cache = False boolean value The Infortrend CLI cache. While set True, the RAID status report will use cache stored in the CLI. Never enable this unless the RAID is managed only by Openstack and only by one infortrend cinder-volume backend. Otherwise, CLI might report out-dated status to cinder and thus there might be some race condition among all backend/CLIs. infortrend_cli_max_retries = 5 integer value The maximum retry times if a command fails. infortrend_cli_path = /opt/bin/Infortrend/raidcmd_ESDS10.jar string value The Infortrend CLI absolute path. infortrend_cli_timeout = 60 integer value The timeout for CLI in seconds. infortrend_iqn_prefix = iqn.2002-10.com.infortrend string value Infortrend iqn prefix for iSCSI. `infortrend_pools_name = ` list value The Infortrend logical volumes name list. It is separated with comma. `infortrend_slots_a_channels_id = ` list value Infortrend raid channel ID list on Slot A for OpenStack usage. It is separated with comma. `infortrend_slots_b_channels_id = ` list value Infortrend raid channel ID list on Slot B for OpenStack usage. It is separated with comma. init_host_max_objects_retrieval = 0 integer value Max number of volumes and snapshots to be retrieved per batch during volume manager host initialization. Query results will be obtained in batches from the database and not in one shot to avoid extreme memory usage. Set 0 to turn off this functionality. initiator_assign_sandstone_target_ip = {} dict value Support initiator assign target with assign ip. `instance_format = [instance: %(uuid)s] ` string value The format for an instance that is passed with the log message. `instance_uuid_format = [instance: %(uuid)s] ` string value The format for an instance UUID that is passed with the log message. instorage_mcs_allow_tenant_qos = False boolean value Allow tenants to specify QOS on create instorage_mcs_iscsi_chap_enabled = True boolean value Configure CHAP authentication for iSCSI connections (Default: Enabled) instorage_mcs_localcopy_rate = 50 integer value Specifies the InStorage LocalCopy copy rate to be used when creating a full volume copy. 
The default rate is 50, and the valid rates are 1-100. instorage_mcs_localcopy_timeout = 120 integer value Maximum number of seconds to wait for LocalCopy to be prepared. instorage_mcs_vol_autoexpand = True boolean value Storage system autoexpand parameter for volumes (True/False) instorage_mcs_vol_compression = False boolean value Storage system compression option for volumes instorage_mcs_vol_grainsize = 256 integer value Storage system grain size parameter for volumes (32/64/128/256) instorage_mcs_vol_intier = True boolean value Enable InTier for volumes instorage_mcs_vol_iogrp = 0 string value The I/O group in which to allocate volumes. It can be a comma-separated list in which case the driver will select an io_group based on least number of volumes associated with the io_group. instorage_mcs_vol_rsize = 2 integer value Storage system space-efficiency parameter for volumes (percentage) instorage_mcs_vol_warning = 0 integer value Storage system threshold for volume capacity warnings (percentage) instorage_mcs_volpool_name = ['volpool'] list value Comma separated list of storage system storage pools for volumes. instorage_san_secondary_ip = None string value Specifies secondary management IP or hostname to be used if san_ip is invalid or becomes inaccessible. iscsi_iotype = fileio string value Sets the behavior of the iSCSI target to either perform blockio or fileio optionally, auto can be set and Cinder will autodetect type of backing device `iscsi_target_flags = ` string value Sets the target-specific flags for the iSCSI target. Only used for tgtadm to specify backing device flags using bsoflags option. The specified string is passed as is to the underlying tool. iscsi_write_cache = on string value Sets the behavior of the iSCSI target to either perform write-back(on) or write-through(off). This parameter is valid if target_helper is set to tgtadm. iser_helper = tgtadm string value The name of the iSER target user-land tool to use iser_ip_address = USDmy_ip string value The IP address that the iSER daemon is listening on iser_port = 3260 port value The port that the iSER daemon is listening on iser_target_prefix = iqn.2010-10.org.openstack: string value Prefix for iSER volumes java_path = /usr/bin/java string value The Java absolute path. jovian_block_size = 64K string value Block size can be: 32K, 64K, 128K, 256K, 512K, 1M jovian_ignore_tpath = [] list value List of multipath ip addresses to ignore. jovian_pool = Pool-0 string value JovianDSS pool that holds all cinder volumes jovian_recovery_delay = 60 integer value Time before HA cluster failure. keystone_catalog_info = identity:Identity Service:publicURL string value Info to match when looking for keystone in the service catalog. Format is: separated values of the form: <service_type>:<service_name>:<endpoint_type> - Only used if backup_swift_auth_url is unset kioxia_block_size = 4096 integer value Volume block size in bytes - 512 or 4096 (Default). kioxia_cafile = None string value Cert for provisioner REST API SSL kioxia_desired_bw_per_gb = 0 integer value Desired bandwidth in B/s per GB. kioxia_desired_iops_per_gb = 0 integer value Desired IOPS/GB. kioxia_max_bw_per_gb = 0 integer value Upper limit for bandwidth in B/s per GB. kioxia_max_iops_per_gb = 0 integer value Upper limit for IOPS/GB. kioxia_max_replica_down_time = 0 integer value Replicated volume max downtime for replica in minutes. kioxia_num_replicas = 1 integer value Number of volume replicas. 
kioxia_provisioning_type = THICK string value Thin or thick volume, Default thick. kioxia_same_rack_allowed = False boolean value Can more than one replica be allocated to same rack. kioxia_snap_reserved_space_percentage = 0 integer value Percentage of the parent volume to be used for log. kioxia_snap_vol_reserved_space_percentage = 0 integer value Writable snapshot percentage of parent volume used for log. kioxia_snap_vol_span_allowed = True boolean value Allow span in snapshot volume - Default True. kioxia_span_allowed = True boolean value Allow span - Default True. kioxia_token = None string value KumoScale Provisioner auth token. kioxia_url = None string value KumoScale provisioner REST API URL kioxia_vol_reserved_space_percentage = 0 integer value Thin volume reserved capacity allocation percentage. kioxia_writable = False boolean value Volumes from snapshot writeable or not. log-config-append = None string value The name of a logging configuration file. This file is appended to any existing logging configuration files. For details about logging configuration files, see the Python logging module documentation. Note that when logging configuration files are used then all logging configuration is set in the configuration file and other logging configuration options are ignored (for example, log-date-format). log-date-format = %Y-%m-%d %H:%M:%S string value Defines the format string for %%(asctime)s in log records. Default: %(default)s . This option is ignored if log_config_append is set. log-dir = None string value (Optional) The base directory used for relative log_file paths. This option is ignored if log_config_append is set. log-file = None string value (Optional) Name of log file to send logging output to. If no default is set, logging will go to stderr as defined by use_stderr. This option is ignored if log_config_append is set. log_options = True boolean value Enables or disables logging values of all registered options when starting a service (at DEBUG level). log_rotate_interval = 1 integer value The amount of time before the log files are rotated. This option is ignored unless log_rotation_type is setto "interval". log_rotate_interval_type = days string value Rotation interval type. The time of the last file change (or the time when the service was started) is used when scheduling the rotation. log_rotation_type = none string value Log rotation type. logging_context_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(request_id)s %(user_identity)s] %(instance)s%(message)s string value Format string to use for log messages with context. Used by oslo_log.formatters.ContextFormatter logging_debug_format_suffix = %(funcName)s %(pathname)s:%(lineno)d string value Additional data to append to log message when logging level for the message is DEBUG. Used by oslo_log.formatters.ContextFormatter logging_default_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s string value Format string to use for log messages when context is undefined. Used by oslo_log.formatters.ContextFormatter logging_exception_prefix = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s string value Prefix each line of exception output with this format. Used by oslo_log.formatters.ContextFormatter logging_user_identity_format = %(user)s %(tenant)s %(domain)s %(user_domain)s %(project_domain)s string value Defines the format string for %(user_identity)s that is used in logging_context_format_string. 
Used by oslo_log.formatters.ContextFormatter manager_ips = {} dict value This option is to support the FSA to mount across the different nodes. The parameters takes the standard dict config form, manager_ips = host1:ip1, host2:ip2... max_age = 0 integer value Number of seconds between subsequent usage refreshes max_header_line = 16384 integer value Maximum line size of message headers to be accepted. max_header_line may need to be increased when using large tokens (typically those generated when keystone is configured to use PKI tokens with big service catalogs). max_logfile_count = 30 integer value Maximum number of rotated log files. max_logfile_size_mb = 200 integer value Log file maximum size in MB. This option is ignored if "log_rotation_type" is not set to "size". max_over_subscription_ratio = 20.0 string value Representation of the over subscription ratio when thin provisioning is enabled. Default ratio is 20.0, meaning provisioned capacity can be 20 times of the total physical capacity. If the ratio is 10.5, it means provisioned capacity can be 10.5 times of the total physical capacity. A ratio of 1.0 means provisioned capacity cannot exceed the total physical capacity. If ratio is auto , Cinder will automatically calculate the ratio based on the provisioned capacity and the used space. If not set to auto, the ratio has to be a minimum of 1.0. message_reap_interval = 86400 integer value interval between periodic task runs to clean expired messages in seconds. message_ttl = 2592000 integer value message minimum life in seconds. migration_create_volume_timeout_secs = 300 integer value Timeout for creating the volume to migrate to when performing volume migration (seconds) monkey_patch = False boolean value Enable monkey patching monkey_patch_modules = [] list value List of modules/decorators to monkey patch my_ip = <based on operating system> host address value IP address of this host no_snapshot_gb_quota = False boolean value Whether snapshots count against gigabyte quota num_iser_scan_tries = 3 integer value The maximum number of times to rescan iSER target to find volume num_shell_tries = 3 integer value Number of times to attempt to run flakey shell commands num_volume_device_scan_tries = 3 integer value The maximum number of times to rescan targets to find volume nvmeof_conn_info_version = 1 integer value NVMe os-brick connector has 2 different connection info formats, this allows some NVMe-oF drivers that use the original format (version 1), such as spdk and LVM-nvmet, to send the newer format. nvmet_ns_id = 10 integer value Namespace id for the subsystem for the LVM volume when not sharing targets. The minimum id value when sharing.Maximum supported value in Linux is 8192 nvmet_port_id = 1 port value The id of the NVMe target port definition when not sharing targets. The starting port id value when sharing, incremented for each secondary ip address. 
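The log rotation options above only take effect in combination: max_logfile_size_mb applies only when log_rotation_type is "size", while log_rotate_interval and log_rotate_interval_type apply only when it is "interval". A sketch with illustrative values:

[DEFAULT]
# Time-based rotation: rotate once per day, keep at most 30 rotated files
log_rotation_type = interval
log_rotate_interval = 1
log_rotate_interval_type = days
max_logfile_count = 30
# Size-based rotation instead (switch log_rotation_type and set a size):
# log_rotation_type = size
# max_logfile_size_mb = 200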
osapi_max_limit = 1000 integer value The maximum number of items that a collection resource returns in a single response osapi_volume_ext_list = [] list value Specify list of extensions to load when using osapi_volume_extension option with cinder.api.contrib.select_extensions osapi_volume_extension = ['cinder.api.contrib.standard_extensions'] multi valued osapi volume extension to load osapi_volume_listen = 0.0.0.0 string value IP address on which OpenStack Volume API listens osapi_volume_listen_port = 8776 port value Port on which OpenStack Volume API listens osapi_volume_use_ssl = False boolean value Wraps the socket in an SSL context if True is set. A certificate file and key file must be specified. osapi_volume_workers = None integer value Number of workers for OpenStack Volume API service. The default is equal to the number of CPUs available. per_volume_size_limit = -1 integer value Max size allowed per volume, in gigabytes periodic_fuzzy_delay = 60 integer value Range, in seconds, to randomly delay when starting the periodic task scheduler to reduce stampeding. (Disable by setting to 0) periodic_interval = 60 integer value Interval, in seconds, between running periodic tasks pool_id_filter = [] list value Pool IDs permitted for use. Deprecated since: 14.0.0. Reason: The FusionStorage Cinder driver was refactored to use the RESTful method and the old CLI mode has been abandoned, so these configuration items are no longer used. pool_type = default string value Pool type, like sata-2copy. Deprecated since: 14.0.0. Reason: The FusionStorage Cinder driver was refactored to use the RESTful method and the old CLI mode has been abandoned, so these configuration items are no longer used. public_endpoint = None string value Public URL to use for the versions endpoint. The default is None, which will use the request's host_url attribute to populate the URL base. If Cinder is operating behind a proxy, you will want to change this to represent the proxy's URL. publish_errors = False boolean value Enables or disables publication of error events. quota_backup_gigabytes = 1000 integer value Total amount of storage, in gigabytes, allowed for backups per project quota_backups = 10 integer value Number of volume backups allowed per project quota_consistencygroups = 10 integer value Number of consistencygroups allowed per project quota_driver = cinder.quota.DbQuotaDriver string value Default driver to use for quota checks quota_gigabytes = 1000 integer value Total amount of storage, in gigabytes, allowed for volumes and snapshots per project quota_groups = 10 integer value Number of groups allowed per project quota_snapshots = 10 integer value Number of volume snapshots allowed per project quota_volumes = 10 integer value Number of volumes allowed per project rate_limit_burst = 0 integer value Maximum number of logged messages per rate_limit_interval. rate_limit_except_level = CRITICAL string value Log level name used by rate limiting: CRITICAL, ERROR, INFO, WARNING, DEBUG or empty string. Logs with a level greater than or equal to rate_limit_except_level are not filtered. An empty string means that all levels are filtered. rate_limit_interval = 0 integer value Interval, in seconds, of log rate limiting. reinit_driver_count = 3 integer value Maximum number of times to reinitialize the driver if volume initialization fails. The retry interval uses exponential backoff: 1s, 2s, 4s, and so on. replication_device = None dict value Multi opt of dictionaries to represent a replication target device.
This option may be specified multiple times in a single config section to specify multiple replication target devices. Each entry takes the standard dict config form: replication_device = target_device_id:<required>,key1:value1,key2:value2... report_discard_supported = False boolean value Report to clients of Cinder that the backend supports discard (aka. trim/unmap). This will not actually change the behavior of the backend or the client directly, it will only notify that it can be used. report_interval = 10 integer value Interval, in seconds, between nodes reporting state to datastore reservation_clean_interval = USDreservation_expire integer value Interval between periodic task runs to clean expired reservations in seconds. reservation_expire = 86400 integer value Number of seconds until a reservation expires reserved_percentage = 0 integer value The percentage of backend capacity is reserved resource_query_filters_file = /etc/cinder/resource_filters.json string value Json file indicating user visible filter parameters for list queries. restore_discard_excess_bytes = True boolean value If True, always discard excess bytes when restoring volumes i.e. pad with zeroes. rootwrap_config = /etc/cinder/rootwrap.conf string value Path to the rootwrap configuration file to use for running commands as root rpc_conn_pool_size = 30 integer value Size of RPC connection pool. rpc_ping_enabled = False boolean value Add an endpoint to answer to ping calls. Endpoint is named oslo_rpc_server_ping rpc_response_timeout = 60 integer value Seconds to wait for a response from a call. run_external_periodic_tasks = True boolean value Some periodic tasks can be run in a separate process. Should we run them here? `san_hosts = ` list value IP address of Open-E JovianDSS SA `sandstone_pool = ` string value SandStone storage pool resource name. scheduler_default_filters = ['AvailabilityZoneFilter', 'CapacityFilter', 'CapabilitiesFilter'] list value Which filter class names to use for filtering hosts when not specified in the request. scheduler_default_weighers = ['CapacityWeigher'] list value Which weigher class names to use for weighing hosts. scheduler_driver = cinder.scheduler.filter_scheduler.FilterScheduler string value Default scheduler driver to use scheduler_driver_init_wait_time = 60 integer value Maximum time in seconds to wait for the driver to report as ready scheduler_host_manager = cinder.scheduler.host_manager.HostManager string value The scheduler host manager class to use `scheduler_json_config_location = ` string value Absolute path to scheduler configuration JSON file. scheduler_manager = cinder.scheduler.manager.SchedulerManager string value Full class name for the Manager for scheduler scheduler_max_attempts = 3 integer value Maximum number of attempts to schedule a volume scheduler_weight_handler = cinder.scheduler.weights.OrderedHostWeightHandler string value Which handler to use for selecting the host/pool after weighing scst_target_driver = iscsi string value SCST target implementation can choose from multiple SCST target drivers. scst_target_iqn_name = None string value Certain ISCSI targets have predefined target names, SCST target driver uses this name. 
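As described above, replication_device is a multi-valued dict option; repeating the line adds another replication target. A hedged sketch following the documented target_device_id:<required>,key1:value1 form (the device IDs and the extra keys are placeholders, since the valid keys are driver specific):

# In the relevant backend section; repeat the option once per target device
replication_device = target_device_id:site_b,key1:value1,key2:value2
replication_device = target_device_id:site_c,key1:value1,key2:value2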
service_down_time = 60 integer value Maximum time since last check-in for a service to be considered up snapshot_name_template = snapshot-%s string value Template string to be used to generate snapshot names snapshot_same_host = True boolean value Create volume from snapshot at the host where snapshot resides split_loggers = False boolean value Log requests to multiple loggers. ssh_hosts_key_file = USDstate_path/ssh_known_hosts string value File containing SSH host keys for the systems with which Cinder needs to communicate. OPTIONAL: Default=USDstate_path/ssh_known_hosts state_path = /var/lib/cinder string value Top-level directory for maintaining cinder's state storage_availability_zone = nova string value Availability zone of this node. Can be overridden per volume backend with the option "backend_availability_zone". storage_protocol = iscsi string value Protocol for transferring data between host and storage back-end. strict_ssh_host_key_policy = False boolean value Option to enable strict host key checking. When set to "True" Cinder will only connect to systems with a host key present in the configured "ssh_hosts_key_file". When set to "False" the host key will be saved upon first connection and used for subsequent connections. Default=False swift_catalog_info = object-store:swift:publicURL string value Info to match when looking for swift in the service catalog. Format is: separated values of the form: <service_type>:<service_name>:<endpoint_type> - Only used if backup_swift_url is unset syslog-log-facility = LOG_USER string value Syslog facility to receive log lines. This option is ignored if log_config_append is set. target_helper = tgtadm string value Target user-land tool to use. tgtadm is default, use lioadm for LIO iSCSI support, scstadmin for SCST target support, ietadm for iSCSI Enterprise Target, iscsictl for Chelsio iSCSI Target, nvmet for NVMEoF support, spdk-nvmeof for SPDK NVMe-oF, or fake for testing. Note: The IET driver is deprecated and will be removed in the V release. target_ip_address = USDmy_ip string value The IP address that the iSCSI/NVMEoF daemon is listening on target_port = 3260 port value The port that the iSCSI/NVMEoF daemon is listening on target_prefix = iqn.2010-10.org.openstack: string value Prefix for iSCSI/NVMEoF volumes target_protocol = iscsi string value Determines the target protocol for new volumes, created with tgtadm, lioadm and nvmet target helpers. In order to enable RDMA, this parameter should be set with the value "iser". The supported iSCSI protocol values are "iscsi" and "iser", in case of nvmet target set to "nvmet_rdma" or "nvmet_tcp". target_secondary_ip_addresses = [] list value The list of secondary IP addresses of the iSCSI/NVMEoF daemon tcp_keepalive = True boolean value Sets the value of TCP_KEEPALIVE (True/False) for each server socket. tcp_keepalive_count = None integer value Sets the value of TCP_KEEPCNT for each server socket. Not supported on OS X. tcp_keepalive_interval = None integer value Sets the value of TCP_KEEPINTVL in seconds for each server socket. Not supported on OS X. tcp_keepidle = 600 integer value Sets the value of TCP_KEEPIDLE in seconds for each server socket. Not supported on OS X. trace_flags = None list value List of options that control which trace info is written to the DEBUG log level to assist developers. Valid values are method and api. 
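The target_* options above control how volumes are exported from the node running the volume service. An illustrative iSCSI configuration using the LIO helper (the addresses are placeholders), with the NVMe over TCP alternative noted in comments:

[DEFAULT]
target_helper = lioadm
target_protocol = iscsi
target_ip_address = 192.0.2.10
target_port = 3260
target_prefix = iqn.2010-10.org.openstack:
# For NVMe over TCP, the options described above would instead be:
# target_helper = nvmet
# target_protocol = nvmet_tcp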
transfer_api_class = cinder.transfer.api.API string value The full class name of the volume transfer API class transport_url = rabbit:// string value The network address and optional user credentials for connecting to the messaging backend, in URL format. The expected format is: driver://[user:pass@]host:port[,[userN:passN@]hostN:portN]/virtual_host?query Example: rabbit://rabbitmq:[email protected]:5672// For full details on the fields in the URL see the documentation of oslo_messaging.TransportURL at https://docs.openstack.org/oslo.messaging/latest/reference/transport.html until_refresh = 0 integer value Count of reservations until usage is refreshed use-journal = False boolean value Enable journald for logging. If running in a systemd environment you may wish to enable journal support. Doing so will use the journal native protocol which includes structured metadata in addition to log messages.This option is ignored if log_config_append is set. use-json = False boolean value Use JSON formatting for logging. This option is ignored if log_config_append is set. use-syslog = False boolean value Use syslog for logging. Existing syslog format is DEPRECATED and will be changed later to honor RFC5424. This option is ignored if log_config_append is set. use_chap_auth = False boolean value Option to enable/disable CHAP authentication for targets. use_default_quota_class = True boolean value Enables or disables use of default quota class with default quota. use_eventlog = False boolean value Log output to Windows Event Log. use_forwarded_for = False boolean value Treat X-Forwarded-For as the canonical remote address. Only enable this if you have a sanitizing proxy. use_multipath_for_image_xfer = False boolean value Do we attach/detach volumes in cinder using multipath for volume to image and image to volume transfers? This parameter needs to be configured for each backend section or in [backend_defaults] section as a common configuration for all backends. use_stderr = False boolean value Log output to standard error. This option is ignored if log_config_append is set. verify_glance_signatures = enabled string value Enable image signature verification. Cinder uses the image signature metadata from Glance and verifies the signature of a signed image while downloading that image. There are two options here. enabled : verify when image has signature metadata. disabled : verification is turned off. If the image signature cannot be verified or if the image signature metadata is incomplete when required, then Cinder will not create the volume and update it into an error state. This provides end users with stronger assurances of the integrity of the image data they are using to create volumes. vmdk_allowed_types = ['streamOptimized', 'monolithicSparse'] list value A list of strings describing the VMDK createType subformats that are allowed. We recommend that you only include single-file-with-sparse-header variants to avoid potential host file exposure when processing named extents when an image is converted to raw format as it is written to a volume. If this list is empty, no VMDK images are allowed. 
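The transport_url format shown above can carry one or more hosts, each with its own credentials, plus a virtual host. For example (host names and credentials are placeholders):

[DEFAULT]
# Single RabbitMQ host with a virtual host
transport_url = rabbit://cinder:secret@msg01.example.com:5672/openstack
# Multiple hosts follow the driver://[user:pass@]host:port[,userN:passN@hostN:portN]/virtual_host form:
# transport_url = rabbit://cinder:secret@msg01.example.com:5672,cinder:secret@msg02.example.com:5672/openstack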
volume_api_class = cinder.volume.api.API string value The full class name of the volume API class to use volume_backend_name = None string value The backend name for a given driver implementation volume_clear = zero string value Method used to wipe old volumes volume_clear_ionice = None string value The flag to pass to ionice to alter the i/o priority of the process used to zero a volume after deletion, for example "-c3" for idle only priority. volume_clear_size = 0 integer value Size in MiB to wipe at start of old volumes. 1024 MiB at max. 0 ⇒ all volume_copy_blkio_cgroup_name = cinder-volume-copy string value The blkio cgroup name to be used to limit bandwidth of volume copy volume_copy_bps_limit = 0 integer value The upper limit of bandwidth of volume copy. 0 ⇒ unlimited volume_dd_blocksize = 1M string value The default block size used when copying/clearing volumes volume_manager = cinder.volume.manager.VolumeManager string value Full class name for the Manager for volume volume_name_template = volume-%s string value Template string to be used to generate volume names volume_number_multiplier = -1.0 floating point value Multiplier used for weighing volume number. Negative numbers mean to spread vs stack. volume_service_inithost_offload = False boolean value Offload pending volume delete during volume service startup volume_transfer_key_length = 16 integer value The number of characters in the autogenerated auth key. volume_transfer_salt_length = 8 integer value The number of characters in the salt. volume_usage_audit_period = month string value Time period for which to generate volume usages. The options are hour, day, month, or year. volumes_dir = $state_path/volumes string value Volume configuration file storage directory vrts_lun_sparse = True boolean value Create sparse LUN. vrts_target_config = /etc/cinder/vrts_target.xml string value VA config file. watch-log-file = False boolean value Uses logging handler designed to watch file system. When log file is moved or removed this handler will open a new log file with specified path instantaneously. It makes sense only if log_file option is specified and Linux platform is used. This option is ignored if log_config_append is set. wsgi_default_pool_size = 100 integer value Size of the pool of greenthreads used by wsgi wsgi_keep_alive = True boolean value If False, closes the client socket connection explicitly. wsgi_log_format = %(client_ip)s "%(request_line)s" status: %(status_code)s len: %(body_length)s time: %(wall_seconds).7f string value A python format string that is used as the template to generate log lines. The following values can be formatted into it: client_ip, date_time, request_line, status_code, body_length, wall_seconds. wsgi_server_debug = False boolean value True if the server should send exception tracebacks to the clients on 500 errors. If False, the server will respond with empty bodies. zoning_mode = None string value FC Zoning mode configured, only fabric is supported now.
2.1.2. backend
The following table outlines the options available under the [backend] group in the /etc/cinder/cinder.conf file.
Table 2.1. backend
Configuration option = Default value Type Description backend_host = None string value Backend override of host value.
2.1.3. backend_defaults
The following table outlines the options available under the [backend_defaults] group in the /etc/cinder/cinder.conf file.
Table 2.2.
backend_defaults Configuration option = Default value Type Description auto_calc_max_oversubscription_ratio = False boolean value K2 driver will calculate max_oversubscription_ratio on setting this option as True. backend_availability_zone = None string value Availability zone for this volume backend. If not set, the storage_availability_zone option value is used as the default for all backends. backend_native_threads_pool_size = 20 integer value Size of the native threads pool for the backend. Increase for backends that heavily rely on this, like the RBD driver. chap = disabled string value CHAP authentication mode, effective only for iscsi (disabled|enabled) `chap_password = ` string value Password for specified CHAP account name. `chap_username = ` string value CHAP user name. check_max_pool_luns_threshold = False boolean value DEPRECATED: Report free_capacity_gb as 0 when the limit to maximum number of pool LUNs is reached. By default, the value is False. chiscsi_conf = /etc/chelsio-iscsi/chiscsi.conf string value Chiscsi (CXT) global defaults configuration file cinder_eternus_config_file = /etc/cinder/cinder_fujitsu_eternus_dx.xml string value Config file for cinder eternus_dx volume driver. cinder_huawei_conf_file = /etc/cinder/cinder_huawei_conf.xml string value The configuration file for the Cinder Huawei driver. connection_type = iscsi string value Connection type to the IBM Storage Array cycle_period_seconds = 300 integer value This defines an optional cycle period that applies to Global Mirror relationships with a cycling mode of multi. A Global Mirror relationship using the multi cycling_mode performs a complete cycle at most once each period. The default is 300 seconds, and the valid seconds are 60-86400. datacore_api_timeout = 300 integer value Seconds to wait for a response from a DataCore API call. datacore_disk_failed_delay = 300 integer value Seconds to wait for DataCore virtual disk to come out of the "Failed" state. datacore_disk_pools = [] list value List of DataCore disk pools that can be used by volume driver. datacore_disk_type = single string value DataCore virtual disk type (single/mirrored). Mirrored virtual disks require two storage servers in the server group. datacore_fc_unallowed_targets = [] list value List of FC targets that cannot be used to attach volume. To prevent the DataCore FibreChannel volume driver from using some front-end targets in volume attachment, specify this option and list the iqn and target machine for each target as the value, such as <wwpns:target name>, <wwpns:target name>, <wwpns:target name>. datacore_iscsi_chap_storage = USDstate_path/.datacore_chap string value Fully qualified file name where dynamically generated iSCSI CHAP secrets are stored. datacore_iscsi_unallowed_targets = [] list value List of iSCSI targets that cannot be used to attach volume. To prevent the DataCore iSCSI volume driver from using some front-end targets in volume attachment, specify this option and list the iqn and target machine for each target as the value, such as <iqn:target name>, <iqn:target name>, <iqn:target name>. datacore_storage_profile = None string value DataCore virtual disk storage profile. default_timeout = 31536000 integer value Default timeout for CLI operations in minutes. For example, LUN migration is a typical long running operation, which depends on the LUN size and the load of the array. An upper bound in the specific deployment can be set to avoid unnecessary long wait. By default, it is 365 days long. 
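Putting the groups together: values in [backend_defaults] apply to every backend section unless a backend overrides them, and the [backend] group carries the backend_host override from Table 2.1. A hedged sketch (the enabled_backends line and the lvm-1 section name are assumptions added only to make the example complete):

[DEFAULT]
enabled_backends = lvm-1

[backend_defaults]
# Shared defaults for all backend sections
backend_availability_zone = nova
backend_native_threads_pool_size = 20

[backend]
# Backend override of the host value
backend_host = cinder-backend-01

[lvm-1]
volume_backend_name = lvm-1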
deferred_deletion_delay = 0 integer value Time delay in seconds before a volume is eligible for permanent removal after being tagged for deferred deletion. deferred_deletion_purge_interval = 60 integer value Number of seconds between runs of the periodic task to purge volumes tagged for deletion. dell_api_async_rest_timeout = 15 integer value Dell SC API async call default timeout in seconds. dell_api_sync_rest_timeout = 30 integer value Dell SC API sync call default timeout in seconds. dell_sc_api_port = 3033 port value Dell API port dell_sc_server_folder = openstack string value Name of the server folder to use on the Storage Center dell_sc_ssn = 64702 integer value Storage Center System Serial Number dell_sc_verify_cert = False boolean value Enable HTTPS SC certificate verification dell_sc_volume_folder = openstack string value Name of the volume folder to use on the Storage Center dell_server_os = Red Hat Linux 6.x string value Server OS type to use when creating a new server on the Storage Center. destroy_empty_storage_group = False boolean value To destroy storage group when the last LUN is removed from it. By default, the value is False. disable_discovery = False boolean value Disabling iSCSI discovery (sendtargets) for multipath connections on K2 driver. `dpl_pool = ` string value DPL pool uuid in which DPL volumes are stored. dpl_port = 8357 port value DPL port number. driver_client_cert = None string value The path to the client certificate for verification, if the driver supports it. driver_client_cert_key = None string value The path to the client certificate key for verification, if the driver supports it. driver_data_namespace = None string value Namespace for driver private data values to be saved in. driver_ssl_cert_path = None string value Can be used to specify a non default path to a CA_BUNDLE file or directory with certificates of trusted CAs, which will be used to validate the backend driver_ssl_cert_verify = False boolean value If set to True the http client will validate the SSL certificate of the backend endpoint. driver_use_ssl = False boolean value Tell driver to use SSL for connection to backend storage if the driver supports it. `ds8k_devadd_unitadd_mapping = ` string value Mapping between IODevice address and unit address. ds8k_host_type = auto string value Set to zLinux if your OpenStack version is prior to Liberty and you're connecting to zLinux systems. Otherwise set to auto. Valid values for this parameter are: auto , AMDLinuxRHEL , AMDLinuxSuse , AppleOSX , Fujitsu , Hp , HpTru64 , HpVms , LinuxDT , LinuxRF , LinuxRHEL , LinuxSuse , Novell , SGI , SVC , SanFsAIX , SanFsLinux , Sun , VMWare , Win2000 , Win2003 , Win2008 , Win2012 , iLinux , nSeries , pLinux , pSeries , pSeriesPowerswap , zLinux , iSeries . ds8k_ssid_prefix = FF string value Set the first two digits of SSID. enable_deferred_deletion = False boolean value Enable deferred deletion. Upon deletion, volumes are tagged for deletion but will only be removed asynchronously at a later time. enable_unsupported_driver = False boolean value Set this to True when you want to allow an unsupported driver to start. Drivers that haven't maintained a working CI system and testing are marked as unsupported until CI is working again. This also marks a driver as deprecated and may be removed in the release. enforce_multipath_for_image_xfer = False boolean value If this is set to True, attachment of volumes for image transfer will be aborted when multipathd is not running. 
Otherwise, it will fallback to single path. This parameter needs to be configured for each backend section or in [backend_defaults] section as a common configuration for all backends. excluded_domain_ip = None IP address value DEPRECATED: Fault Domain IP to be excluded from iSCSI returns. Deprecated since: Stein *Reason:*Replaced by excluded_domain_ips option excluded_domain_ips = [] list value Comma separated Fault Domain IPs to be excluded from iSCSI returns. expiry_thres_minutes = 720 integer value This option specifies the threshold for last access time for images in the NFS image cache. When a cache cleaning cycle begins, images in the cache that have not been accessed in the last M minutes, where M is the value of this parameter, will be deleted from the cache to create free space on the NFS share. extra_capabilities = {} string value User defined capabilities, a JSON formatted string specifying key/value pairs. The key/value pairs can be used by the CapabilitiesFilter to select between backends when requests specify volume types. For example, specifying a service level or the geographical location of a backend, then creating a volume type to allow the user to select by these different properties. filter_function = None string value String representation for an equation that will be used to filter hosts. Only used when the driver filter is set to be used by the Cinder scheduler. flashsystem_connection_protocol = FC string value Connection protocol should be FC. (Default is FC.) flashsystem_iscsi_portid = 0 integer value Default iSCSI Port ID of FlashSystem. (Default port is 0.) flashsystem_multihostmap_enabled = True boolean value Allows vdisk to multi host mapping. (Default is True) force_delete_lun_in_storagegroup = True boolean value Delete a LUN even if it is in Storage Groups. goodness_function = None string value String representation for an equation that will be used to determine the goodness of a host. Only used when using the goodness weigher is set to be used by the Cinder scheduler. gpfs_hosts = [] list value Comma-separated list of IP address or hostnames of GPFS nodes. gpfs_hosts_key_file = USDstate_path/ssh_known_hosts string value File containing SSH host keys for the gpfs nodes with which driver needs to communicate. Default=USDstate_path/ssh_known_hosts gpfs_images_dir = None string value Specifies the path of the Image service repository in GPFS. Leave undefined if not storing images in GPFS. gpfs_images_share_mode = None string value Specifies the type of image copy to be used. Set this when the Image service repository also uses GPFS so that image files can be transferred efficiently from the Image service to the Block Storage service. There are two valid values: "copy" specifies that a full copy of the image is made; "copy_on_write" specifies that copy-on-write optimization strategy is used and unmodified blocks of the image file are shared efficiently. gpfs_max_clone_depth = 0 integer value Specifies an upper limit on the number of indirections required to reach a specific block due to snapshots or clones. A lengthy chain of copy-on-write snapshots or clones can have a negative impact on performance, but improves space utilization. 0 indicates unlimited clone depth. gpfs_mount_point_base = None string value Specifies the path of the GPFS directory where Block Storage volume and snapshot files are stored. `gpfs_private_key = ` string value Filename of private key to use for SSH authentication. 
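The extra_capabilities option described above is a JSON string whose key/value pairs can be matched by the CapabilitiesFilter when requests specify volume types. A short sketch (the keys and values are invented for illustration):

[backend_defaults]
# Advertise a service level and location for this backend
extra_capabilities = {"service_level": "gold", "region": "emea"}

A volume type that references these keys would then only be scheduled to backends advertising matching values.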
gpfs_sparse_volumes = True boolean value Specifies that volumes are created as sparse files which initially consume no space. If set to False, the volume is created as a fully allocated file, in which case creation may take a significantly longer time. gpfs_ssh_port = 22 port value SSH port to use. gpfs_storage_pool = system string value Specifies the storage pool that volumes are assigned to. By default, the system storage pool is used. gpfs_strict_host_key_policy = False boolean value Option to enable strict gpfs host key checking while connecting to gpfs nodes. Default=False gpfs_user_login = root string value Username for GPFS nodes. `gpfs_user_password = ` string value Password for GPFS node user. hitachi_async_copy_check_interval = 10 integer value Interval in seconds to check asynchronous copying status during a copy pair deletion or data restoration. hitachi_compute_target_ports = [] list value IDs of the storage ports used to attach volumes to compute nodes. To specify multiple ports, connect them by commas (e.g. CL1-A,CL2-A). hitachi_copy_check_interval = 3 integer value Interval in seconds to check copying status during a volume copy. hitachi_copy_speed = 3 integer value Copy speed of storage system. 1 or 2 indicates low speed, 3 indicates middle speed, and a value between 4 and 15 indicates high speed. hitachi_discard_zero_page = True boolean value Enable or disable zero page reclamation in a DP-VOL. hitachi_exec_retry_interval = 5 integer value Retry interval in seconds for REST API execution. hitachi_extend_timeout = 600 integer value Maximum wait time in seconds for a volume extension to complete. hitachi_group_create = False boolean value If True, the driver will create host groups or iSCSI targets on storage ports as needed. hitachi_group_delete = False boolean value If True, the driver will delete host groups or iSCSI targets on storage ports as needed. hitachi_group_name_format = None string value Format of host groups, iSCSI targets, and server objects. hitachi_host_mode_options = [] list value Host mode option for host group or iSCSI target. hitachi_ldev_range = None string value Range of the LDEV numbers in the format of xxxx-yyyy that can be used by the driver. Values can be in decimal format (e.g. 1000) or in colon-separated hexadecimal format (e.g. 00:03:E8). hitachi_lock_timeout = 7200 integer value Maximum wait time in seconds for the storage to be logged in or unlocked. hitachi_lun_retry_interval = 1 integer value Retry interval in seconds for REST API adding a LUN mapping to the server. hitachi_lun_timeout = 50 integer value Maximum wait time in seconds for adding a LUN mapping to the server. hitachi_pair_target_number = 0 integer value Pair target name of the host group or iSCSI target hitachi_pools = [] list value Pool number[s] or pool name[s] of the DP pool. hitachi_port_scheduler = False boolean value Enable port scheduling of WWNs to the configured ports so that WWNs are registered to ports in a round-robin fashion. hitachi_rest_another_ldev_mapped_retry_timeout = 600 integer value Retry time in seconds when new LUN allocation request fails. hitachi_rest_connect_timeout = 30 integer value Maximum wait time in seconds for connecting to REST API session. hitachi_rest_disable_io_wait = True boolean value This option will allow detaching volume immediately. If set False, storage may take a few minutes to detach volume after I/O.
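Drawing on the GPFS options above and at the end of the previous block, a minimal backend section might look like the following (host names, key path, and the section name are placeholders):

[gpfs-1]
# GPFS nodes reached over SSH (addresses and key path are placeholders)
gpfs_hosts = gpfs-node1.example.com,gpfs-node2.example.com
gpfs_user_login = root
gpfs_private_key = /etc/cinder/gpfs_rsa
gpfs_ssh_port = 22
gpfs_strict_host_key_policy = False
# Where volume files live and which storage pool they are assigned to
gpfs_mount_point_base = /gpfs/cinder-volumes
gpfs_storage_pool = system
gpfs_sparse_volumes = True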
hitachi_rest_get_api_response_timeout = 1800 integer value Maximum wait time in seconds for a response against sync methods, for example GET hitachi_rest_job_api_response_timeout = 1800 integer value Maximum wait time in seconds for a response against async methods from REST API, for example PUT and DELETE. hitachi_rest_keep_session_loop_interval = 180 integer value Loop interval in seconds for keeping REST API session. hitachi_rest_pair_target_ports = [] list value Target port names for pair of the host group or iSCSI target hitachi_rest_server_busy_timeout = 7200 integer value Maximum wait time in seconds when REST API returns busy. hitachi_rest_tcp_keepalive = True boolean value Enables or disables use of REST API tcp keepalive hitachi_rest_tcp_keepcnt = 4 integer value Maximum number of transmissions for TCP keepalive packet. hitachi_rest_tcp_keepidle = 60 integer value Wait time in seconds for sending a first TCP keepalive packet. hitachi_rest_tcp_keepintvl = 15 integer value Interval of transmissions in seconds for TCP keepalive packet. hitachi_rest_timeout = 30 integer value Maximum wait time in seconds for each REST API request. hitachi_restore_timeout = 86400 integer value Maximum wait time in seconds for the restore operation to complete. hitachi_snap_pool = None string value Pool number or pool name of the snapshot pool. hitachi_state_transition_timeout = 900 integer value Maximum wait time in seconds for a volume transition to complete. hitachi_storage_id = None string value Product number of the storage system. hitachi_target_ports = [] list value IDs of the storage ports used to attach volumes to the controller node. To specify multiple ports, connect them by commas (e.g. CL1-A,CL2-A). hitachi_zoning_request = False boolean value If True, the driver will configure FC zoning between the server and the storage system provided that FC zoning manager is enabled. `hpe3par_api_url = ` string value WSAPI Server URL. This setting applies to both 3PAR and Primera. Example 1: for 3PAR, URL is: https://<3par ip>:8080/api/v1 Example 2: for Primera, URL is: https://<primera ip>:443/api/v1 hpe3par_cpg = ['OpenStack'] list value List of the 3PAR / Primera CPG(s) to use for volume creation `hpe3par_cpg_snap = ` string value The 3PAR / Primera CPG to use for snapshots of volumes. If empty the userCPG will be used. hpe3par_debug = False boolean value Enable HTTP debugging to 3PAR / Primera hpe3par_iscsi_chap_enabled = False boolean value Enable CHAP authentication for iSCSI connections. hpe3par_iscsi_ips = [] list value List of target iSCSI addresses to use. `hpe3par_password = ` string value 3PAR / Primera password for the user specified in hpe3par_username `hpe3par_snapshot_expiration = ` string value The time in hours when a snapshot expires and is deleted. This must be larger than expiration `hpe3par_snapshot_retention = ` string value The time in hours to retain a snapshot. You can't delete it before this expires. `hpe3par_target_nsp = ` string value The nsp of 3PAR backend to be used when: (1) multipath is not enabled in cinder.conf. (2) Fiber Channel Zone Manager is not used. (3) the 3PAR backend is prezoned with this specific nsp only. 
For example if nsp is 2 1 2, the format of the option's value is 2:1:2 `hpe3par_username = ` string value 3PAR / Primera username with the edit role hpexp_async_copy_check_interval = 10 integer value Interval in seconds to check copy asynchronously hpexp_compute_target_ports = [] list value IDs of the storage ports used to attach volumes to compute nodes. To specify multiple ports, connect them by commas (e.g. CL1-A,CL2-A). hpexp_copy_check_interval = 3 integer value Interval in seconds to check copy hpexp_copy_speed = 3 integer value Copy speed of storage system. 1 or 2 indicates low speed, 3 indicates middle speed, and a value between 4 and 15 indicates high speed. hpexp_discard_zero_page = True boolean value Enable or disable zero page reclamation in a THP V-VOL. hpexp_exec_retry_interval = 5 integer value Retry interval in seconds for REST API execution. hpexp_extend_timeout = 600 integer value Maximum wait time in seconds for a volume extention to complete. hpexp_group_create = False boolean value If True, the driver will create host groups or iSCSI targets on storage ports as needed. hpexp_group_delete = False boolean value If True, the driver will delete host groups or iSCSI targets on storage ports as needed. hpexp_host_mode_options = [] list value Host mode option for host group or iSCSI target. hpexp_ldev_range = None string value Range of the LDEV numbers in the format of xxxx-yyyy that can be used by the driver. Values can be in decimal format (e.g. 1000) or in colon-separated hexadecimal format (e.g. 00:03:E8). hpexp_lock_timeout = 7200 integer value Maximum wait time in seconds for storage to be unlocked. hpexp_lun_retry_interval = 1 integer value Retry interval in seconds for REST API adding a LUN. hpexp_lun_timeout = 50 integer value Maximum wait time in seconds for adding a LUN to complete. hpexp_pools = [] list value Pool number[s] or pool name[s] of the THP pool. hpexp_rest_another_ldev_mapped_retry_timeout = 600 integer value Retry time in seconds when new LUN allocation request fails. hpexp_rest_connect_timeout = 30 integer value Maximum wait time in seconds for REST API connection to complete. hpexp_rest_disable_io_wait = True boolean value It may take some time to detach volume after I/O. This option will allow detaching volume to complete immediately. hpexp_rest_get_api_response_timeout = 1800 integer value Maximum wait time in seconds for a response against GET method of REST API. hpexp_rest_job_api_response_timeout = 1800 integer value Maximum wait time in seconds for a response from REST API. hpexp_rest_keep_session_loop_interval = 180 integer value Loop interval in seconds for keeping REST API session. hpexp_rest_server_busy_timeout = 7200 integer value Maximum wait time in seconds when REST API returns busy. hpexp_rest_tcp_keepalive = True boolean value Enables or disables use of REST API tcp keepalive hpexp_rest_tcp_keepcnt = 4 integer value Maximum number of transmissions for TCP keepalive packet. hpexp_rest_tcp_keepidle = 60 integer value Wait time in seconds for sending a first TCP keepalive packet. hpexp_rest_tcp_keepintvl = 15 integer value Interval of transmissions in seconds for TCP keepalive packet. hpexp_rest_timeout = 30 integer value Maximum wait time in seconds for REST API execution to complete. hpexp_restore_timeout = 86400 integer value Maximum wait time in seconds for the restore operation to complete. hpexp_snap_pool = None string value Pool number or pool name of the snapshot pool. 
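The HPE 3PAR / Primera options listed above fit together as a backend section roughly like this (the URL, credentials, CPG, and iSCSI addresses are placeholders):

[3par-iscsi]
# WSAPI endpoint: port 8080 for 3PAR, 443 for Primera, as noted above
hpe3par_api_url = https://198.51.100.20:8080/api/v1
hpe3par_username = 3paradm
hpe3par_password = secret
hpe3par_cpg = OpenStack
hpe3par_iscsi_ips = 198.51.100.21,198.51.100.22
hpe3par_iscsi_chap_enabled = False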
hpexp_state_transition_timeout = 900 integer value Maximum wait time in seconds for a volume transition to complete. hpexp_storage_id = None string value Product number of the storage system. hpexp_target_ports = [] list value IDs of the storage ports used to attach volumes to the controller node. To specify multiple ports, connect them by commas (e.g. CL1-A,CL2-A). hpexp_zoning_request = False boolean value If True, the driver will configure FC zoning between the server and the storage system provided that FC zoning manager is enabled. hpmsa_api_protocol = https string value HPMSA API interface protocol. hpmsa_iscsi_ips = [] list value List of comma-separated target iSCSI IP addresses. hpmsa_pool_name = A string value Pool or Vdisk name to use for volume creation. hpmsa_pool_type = virtual string value linear (for Vdisk) or virtual (for Pool). hpmsa_verify_certificate = False boolean value Whether to verify HPMSA array SSL certificate. hpmsa_verify_certificate_path = None string value HPMSA array SSL certificate path. hypermetro_devices = None string value The remote device hypermetro will use. iet_conf = /etc/iet/ietd.conf string value DEPRECATED: IET configuration file ignore_pool_full_threshold = False boolean value Force LUN creation even if the full threshold of pool is reached. By default, the value is False. image_upload_use_cinder_backend = False boolean value If set to True, upload-to-image in raw format will create a cloned volume and register its location to the image service, instead of uploading the volume content. The cinder backend and locations support must be enabled in the image service. image_upload_use_internal_tenant = False boolean value If set to True, the image volume created by upload-to-image will be placed in the internal tenant. Otherwise, the image volume is created in the current context's tenant. image_volume_cache_enabled = False boolean value Enable the image volume cache for this backend. image_volume_cache_max_count = 0 integer value Max number of entries allowed in the image volume cache. 0 ⇒ unlimited. image_volume_cache_max_size_gb = 0 integer value Max size of the image volume cache for this backend in GB. 0 ⇒ unlimited. included_domain_ips = [] list value Comma separated Fault Domain IPs to be included from iSCSI returns. infinidat_iscsi_netspaces = [] list value List of names of network spaces to use for iSCSI connectivity infinidat_pool_name = None string value Name of the pool from which volumes are allocated infinidat_storage_protocol = fc string value Protocol for transferring data between host and storage back-end. infinidat_use_compression = False boolean value Specifies whether to turn on compression for newly created volumes. initiator_auto_deregistration = False boolean value Automatically deregister initiators after the related storage group is destroyed. By default, the value is False. initiator_auto_registration = False boolean value Automatically register initiators. By default, the value is False. initiator_check = False boolean value Use this value to enable the initiator_check. interval = 3 integer value Use this value to specify length of the interval in seconds. io_port_list = None list value Comma separated iSCSI or FC ports to be used in Nova or Cinder. iscsi_initiators = None string value Mapping between hostname and its iSCSI initiator IP addresses. 
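Similarly, the HPMSA options above map onto a backend section such as the following (protocol, pool, certificate path, and addresses are illustrative):

[hpmsa-1]
hpmsa_api_protocol = https
hpmsa_verify_certificate = True
hpmsa_verify_certificate_path = /etc/cinder/hpmsa_ca.pem
# virtual (for Pool) or linear (for Vdisk)
hpmsa_pool_type = virtual
hpmsa_pool_name = A
hpmsa_iscsi_ips = 192.0.2.31,192.0.2.32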
iscsi_iotype = fileio string value Sets the behavior of the iSCSI target to either perform blockio or fileio optionally, auto can be set and Cinder will autodetect type of backing device `iscsi_target_flags = ` string value Sets the target-specific flags for the iSCSI target. Only used for tgtadm to specify backing device flags using bsoflags option. The specified string is passed as is to the underlying tool. iscsi_write_cache = on string value Sets the behavior of the iSCSI target to either perform write-back(on) or write-through(off). This parameter is valid if target_helper is set to tgtadm. iser_helper = tgtadm string value The name of the iSER target user-land tool to use iser_ip_address = USDmy_ip string value The IP address that the iSER daemon is listening on iser_port = 3260 port value The port that the iSER daemon is listening on iser_target_prefix = iqn.2010-10.org.openstack: string value Prefix for iSER volumes lenovo_api_protocol = https string value Lenovo api interface protocol. lenovo_iscsi_ips = [] list value List of comma-separated target iSCSI IP addresses. lenovo_pool_name = A string value Pool or Vdisk name to use for volume creation. lenovo_pool_type = virtual string value linear (for VDisk) or virtual (for Pool). lenovo_verify_certificate = False boolean value Whether to verify Lenovo array SSL certificate. lenovo_verify_certificate_path = None string value Lenovo array SSL certificate path. linstor_autoplace_count = 0 integer value Autoplace replication count on volume deployment. 0 = Full cluster replication without autoplace, 1 = Single node deployment without replication, 2 or greater = Replicated deployment with autoplace. linstor_controller_diskless = True boolean value True means Cinder node is a diskless LINSTOR node. linstor_default_blocksize = 4096 integer value Default Block size for Image restoration. When using iSCSI transport, this option specifies the block size. linstor_default_storage_pool_name = DfltStorPool string value Default Storage Pool name for LINSTOR. linstor_default_uri = linstor://localhost string value Default storage URI for LINSTOR. linstor_default_volume_group_name = drbd-vg string value Default Volume Group name for LINSTOR. Not Cinder Volume. linstor_volume_downsize_factor = 4096 floating point value Default volume downscale size in KiB = 4 MiB. load_balance = False boolean value Enable/disable load balancing for a PowerMax backend. load_balance_real_time = False boolean value Enable/disable real-time performance metrics for Port level load balancing for a PowerMax backend. load_data_format = Avg string value Performance data format, not applicable for real-time metrics. Available options are "avg" and "max". load_look_back = 60 integer value How far in minutes to look back for diagnostic performance metrics in load calculation, minimum of 0 maximum of 1440 (24 hours). load_look_back_real_time = 1 integer value How far in minutes to look back for real-time performance metrics in load calculation, minimum of 1 maximum of 10. `lss_range_for_cg = ` string value Reserve LSSs for consistency group. lvm_conf_file = /etc/cinder/lvm.conf string value LVM conf file to use for the LVM driver in Cinder; this setting is ignored if the specified file does not exist (You can also specify None to not use a conf file even if one exists). lvm_mirrors = 0 integer value If >0, create LVs with multiple mirrors. 
Note that this requires lvm_mirrors + 2 PVs with available space lvm_share_target = False boolean value Whether to share the same target for all LUNs or not (currently only supported by nvmet. lvm_suppress_fd_warnings = False boolean value Suppress leaked file descriptor warnings in LVM commands. lvm_type = auto string value Type of LVM volumes to deploy; (default, thin, or auto). Auto defaults to thin if thin is supported. macrosan_client = None list value Macrosan iscsi_clients list. You can configure multiple clients. You can configure it in this format: (host; client_name; sp1_iscsi_port; sp2_iscsi_port), (host; client_name; sp1_iscsi_port; sp2_iscsi_port) Important warning, Client_name has the following requirements: [a-zA-Z0-9.-_:], the maximum number of characters is 31 E.g: (controller1; device1; eth-1:0; eth-2:0), (controller2; device2; eth-1:0/eth-1:1; eth-2:0/eth-2:1), macrosan_client_default = None string value This is the default connection ports' name for iscsi. This default configuration is used when no host related information is obtained.E.g: eth-1:0/eth-1:1; eth-2:0/eth-2:1 macrosan_fc_keep_mapped_ports = True boolean value In the case of an FC connection, the configuration item associated with the port is maintained. macrosan_fc_use_sp_port_nr = 1 integer value The use_sp_port_nr parameter is the number of online FC ports used by the single-ended memory when the FC connection is established in the switch non-all-pass mode. The maximum is 4 macrosan_force_unmap_itl = True boolean value Force disconnect while deleting volume macrosan_log_timing = True boolean value Whether enable log timing macrosan_pool = None string value Pool to use for volume creation macrosan_replication_destination_ports = None list value Slave device macrosan_replication_ipaddrs = None list value MacroSAN replication devices' ip addresses macrosan_replication_password = None string value MacroSAN replication devices' password macrosan_replication_username = None string value MacroSAN replication devices' username macrosan_sdas_ipaddrs = None list value MacroSAN sdas devices' ip addresses macrosan_sdas_password = None string value MacroSAN sdas devices' password macrosan_sdas_username = None string value MacroSAN sdas devices' username macrosan_snapshot_resource_ratio = 1.0 floating point value Set snapshot's resource ratio macrosan_thin_lun_extent_size = 8 integer value Set the thin lun's extent size macrosan_thin_lun_high_watermark = 20 integer value Set the thin lun's high watermark macrosan_thin_lun_low_watermark = 5 integer value Set the thin lun's low watermark `management_ips = ` string value List of Management IP addresses (separated by commas) max_luns_per_storage_group = 255 integer value Default max number of LUNs in a storage group. By default, the value is 255. max_over_subscription_ratio = 20.0 string value Representation of the over subscription ratio when thin provisioning is enabled. Default ratio is 20.0, meaning provisioned capacity can be 20 times of the total physical capacity. If the ratio is 10.5, it means provisioned capacity can be 10.5 times of the total physical capacity. A ratio of 1.0 means provisioned capacity cannot exceed the total physical capacity. If ratio is auto , Cinder will automatically calculate the ratio based on the provisioned capacity and the used space. If not set to auto, the ratio has to be a minimum of 1.0. metro_domain_name = None string value The remote metro device domain name. 
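For the LVM options above, a thin-provisioned backend section could look like the following (the section name is illustrative; the driver class path is the standard in-tree LVM driver):

[lvm-thin]
volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
# auto would pick thin automatically when thin provisioning is supported
lvm_type = thin
lvm_mirrors = 0
lvm_conf_file = /etc/cinder/lvm.conf
lvm_suppress_fd_warnings = True
# Let Cinder derive the ratio from provisioned and used capacity
max_over_subscription_ratio = auto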
metro_san_address = None string value The remote metro device request url. metro_san_password = None string value The remote metro device san password. metro_san_user = None string value The remote metro device san user. metro_storage_pools = None string value The remote metro device pool names. `nas_host = ` string value IP address or Hostname of NAS system. nas_login = admin string value User name to connect to NAS system. nas_mount_options = None string value Options used to mount the storage backend file system where Cinder volumes are stored. `nas_password = ` string value Password to connect to NAS system. `nas_private_key = ` string value Filename of private key to use for SSH authentication. nas_secure_file_operations = auto string value Allow network-attached storage systems to operate in a secure environment where root level access is not permitted. If set to False, access is as the root user and insecure. If set to True, access is not as root. If set to auto, a check is done to determine if this is a new installation: True is used if so, otherwise False. Default is auto. nas_secure_file_permissions = auto string value Set more secure file permissions on network-attached storage volume files to restrict broad other/world access. If set to False, volumes are created with open permissions. If set to True, volumes are created with permissions for the cinder user and group (660). If set to auto, a check is done to determine if this is a new installation: True is used if so, otherwise False. Default is auto. `nas_share_path = ` string value Path to the share to use for storing Cinder volumes. For example: "/srv/export1" for an NFS server export available at 10.0.5.10:/srv/export1 . nas_ssh_port = 22 port value SSH port to use to connect to NAS system. nas_volume_prov_type = thin string value Provisioning type that will be used when creating volumes. naviseccli_path = None string value Naviseccli Path. nec_v_async_copy_check_interval = 10 integer value Interval in seconds to check asynchronous copying status during a copy pair deletion or data restoration. nec_v_compute_target_ports = [] list value IDs of the storage ports used to attach volumes to compute nodes. To specify multiple ports, connect them by commas (e.g. CL1-A,CL2-A). nec_v_copy_check_interval = 3 integer value Interval in seconds to check copying status during a volume copy. nec_v_copy_speed = 3 integer value Copy speed of storage system. 1 or 2 indicates low speed, 3 indicates middle speed, and a value between 4 and 15 indicates high speed. nec_v_discard_zero_page = True boolean value Enable or disable zero page reclamation in a DP-VOL. nec_v_exec_retry_interval = 5 integer value Retry interval in seconds for REST API execution. nec_v_extend_timeout = 600 integer value Maximum wait time in seconds for a volume extention to complete. nec_v_group_create = False boolean value If True, the driver will create host groups or iSCSI targets on storage ports as needed. nec_v_group_delete = False boolean value If True, the driver will delete host groups or iSCSI targets on storage ports as needed. nec_v_host_mode_options = [] list value Host mode option for host group or iSCSI target nec_v_ldev_range = None string value Range of the LDEV numbers in the format of xxxx-yyyy that can be used by the driver. Values can be in decimal format (e.g. 1000) or in colon-separated hexadecimal format (e.g. 00:03:E8). nec_v_lock_timeout = 7200 integer value Maximum wait time in seconds for storage to be unlocked. 
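The nas_* options above describe how a NAS-backed driver reaches its export; an illustrative section follows (the host, export path, and section name are placeholders, with the export path borrowed from the nas_share_path example above):

[nfs-1]
nas_host = 10.0.5.10
nas_share_path = /srv/export1
nas_login = admin
nas_ssh_port = 22
# auto decides secure operation/permission handling based on whether this is a new installation
nas_secure_file_operations = auto
nas_secure_file_permissions = auto
nas_volume_prov_type = thin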
nec_v_lun_retry_interval = 1 integer value Retry interval in seconds for REST API adding a LUN. nec_v_lun_timeout = 50 integer value Maximum wait time in seconds for adding a LUN to complete. nec_v_pools = [] list value Pool number[s] or pool name[s] of the DP pool. nec_v_rest_another_ldev_mapped_retry_timeout = 600 integer value Retry time in seconds when new LUN allocation request fails. nec_v_rest_connect_timeout = 30 integer value Maximum wait time in seconds for REST API connection to complete. nec_v_rest_disable_io_wait = True boolean value It may take some time to detach volume after I/O. This option will allow detaching volume to complete immediately. nec_v_rest_get_api_response_timeout = 1800 integer value Maximum wait time in seconds for a response against GET method of REST API. nec_v_rest_job_api_response_timeout = 1800 integer value Maximum wait time in seconds for a response from REST API. nec_v_rest_keep_session_loop_interval = 180 integer value Loop interval in seconds for keeping REST API session. nec_v_rest_server_busy_timeout = 7200 integer value Maximum wait time in seconds when REST API returns busy. nec_v_rest_tcp_keepalive = True boolean value Enables or disables use of REST API tcp keepalive nec_v_rest_tcp_keepcnt = 4 integer value Maximum number of transmissions for TCP keepalive packet. nec_v_rest_tcp_keepidle = 60 integer value Wait time in seconds for sending a first TCP keepalive packet. nec_v_rest_tcp_keepintvl = 15 integer value Interval of transmissions in seconds for TCP keepalive packet. nec_v_rest_timeout = 30 integer value Maximum wait time in seconds for REST API execution to complete. nec_v_restore_timeout = 86400 integer value Maximum wait time in seconds for the restore operation to complete. nec_v_snap_pool = None string value Pool number or pool name of the snapshot pool. nec_v_state_transition_timeout = 900 integer value Maximum wait time in seconds for a volume transition to complete. nec_v_storage_id = None string value Product number of the storage system. nec_v_target_ports = [] list value IDs of the storage ports used to attach volumes to the controller node. To specify multiple ports, connect them by commas (e.g. CL1-A,CL2-A). nec_v_zoning_request = False boolean value If True, the driver will configure FC zoning between the server and the storage system provided that FC zoning manager is enabled. netapp_api_trace_pattern = (.*) string value A regular expression to limit the API tracing. This option is honored only if enabling api tracing with the trace_flags option. By default, all APIs will be traced. netapp_copyoffload_tool_path = None string value This option specifies the path of the NetApp copy offload tool binary. Ensure that the binary has execute permissions set which allow the effective user of the cinder-volume process to execute the file. netapp_host_type = None string value This option defines the type of operating system for all initiators that can access a LUN. This information is used when mapping LUNs to individual hosts or groups of hosts. netapp_login = None string value Administrative user account name used to access the storage system or proxy server. netapp_lun_ostype = None string value This option defines the type of operating system that will access a LUN exported from Data ONTAP; it is assigned to the LUN at the time it is created. netapp_lun_space_reservation = enabled string value This option determines if storage space is reserved for LUN allocation. If enabled, LUNs are thick provisioned. 
If space reservation is disabled, storage space is allocated on demand. netapp_nfs_image_cache_cleanup_interval = 600 integer value Sets time in seconds between NFS image cache cleanup tasks. netapp_password = None string value Password for the administrative user account specified in the netapp_login option. netapp_pool_name_search_pattern = (.+) string value This option is used to restrict provisioning to the specified pools. Specify the value of this option to be a regular expression which will be applied to the names of objects from the storage backend which represent pools in Cinder. This option is only utilized when the storage protocol is configured to use iSCSI or FC. netapp_replication_aggregate_map = None dict value Multi opt of dictionaries to represent the aggregate mapping between source and destination back ends when using whole back end replication. For every source aggregate associated with a cinder pool (NetApp FlexVol/FlexGroup), you would need to specify the destination aggregate on the replication target device. A replication target device is configured with the configuration option replication_device. Specify this option as many times as you have replication devices. Each entry takes the standard dict config form: netapp_replication_aggregate_map = backend_id:<name_of_replication_device_section>,src_aggr_name1:dest_aggr_name1,src_aggr_name2:dest_aggr_name2,... netapp_replication_volume_online_timeout = 360 integer value Sets time in seconds to wait for a replication volume create to complete and go online. netapp_server_hostname = None string value The hostname (or IP address) for the storage system or proxy server. netapp_server_port = None integer value The TCP port to use for communication with the storage system or proxy server. If not specified, Data ONTAP drivers will use 80 for HTTP and 443 for HTTPS. netapp_size_multiplier = 1.2 floating point value The quantity to be multiplied by the requested volume size to ensure enough space is available on the virtual storage server (Vserver) to fulfill the volume creation request. Note: this option is deprecated and will be removed in favor of "reserved_percentage" in the Mitaka release. netapp_snapmirror_quiesce_timeout = 3600 integer value The maximum time in seconds to wait for existing SnapMirror transfers to complete before aborting during a failover. netapp_storage_family = ontap_cluster string value The storage family type used on the storage system; the only valid value is ontap_cluster for using clustered Data ONTAP. netapp_storage_protocol = None string value The storage protocol to be used on the data path with the storage system. netapp_transport_type = http string value The transport protocol used when communicating with the storage system or proxy server. netapp_vserver = None string value This option specifies the virtual storage server (Vserver) name on the storage cluster on which provisioning of block storage volumes should occur. nexenta_blocksize = 4096 integer value Block size for datasets nexenta_chunksize = 32768 integer value NexentaEdge iSCSI LUN object chunk size `nexenta_client_address = ` string value NexentaEdge iSCSI Gateway client address for non-VIP service nexenta_dataset_compression = on string value Compression value for new ZFS folders. nexenta_dataset_dedup = off string value Deduplication value for new ZFS folders. `nexenta_dataset_description = ` string value Human-readable description for the folder. 
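A sketch of an ONTAP backend using the netapp_* options listed above; the section name, hostname, credentials, and Vserver name are assumptions, and the driver class shown is the commonly used NetApp unified driver, which should be verified against your release:

[netapp-nfs]
# Assumed section name and placeholder credentials.
volume_driver = cinder.volume.drivers.netapp.common.NetAppDriver
netapp_storage_family = ontap_cluster
netapp_storage_protocol = nfs
netapp_transport_type = https
netapp_server_hostname = netapp.example.com
netapp_login = admin
netapp_password = secret
netapp_vserver = cinder-svm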
nexenta_encryption = False boolean value Defines whether NexentaEdge iSCSI LUN object has encryption enabled. `nexenta_folder = ` string value A folder where cinder created datasets will reside. nexenta_group_snapshot_template = group-snapshot-%s string value Template string to generate group snapshot name `nexenta_host = ` string value IP address of NexentaStor Appliance nexenta_host_group_prefix = cinder string value Prefix for iSCSI host groups on NexentaStor nexenta_iops_limit = 0 integer value NexentaEdge iSCSI LUN object IOPS limit `nexenta_iscsi_service = ` string value NexentaEdge iSCSI service name nexenta_iscsi_target_host_group = all string value Group of hosts which are allowed to access volumes `nexenta_iscsi_target_portal_groups = ` string value NexentaStor target portal groups nexenta_iscsi_target_portal_port = 3260 integer value Nexenta appliance iSCSI target portal port `nexenta_iscsi_target_portals = ` string value Comma separated list of portals for NexentaStor5, in format of IP1:port1,IP2:port2. Port is optional, default=3260. Example: 10.10.10.1:3267,10.10.1.2 nexenta_lu_writebackcache_disabled = False boolean value Postponed write to backing store or not `nexenta_lun_container = ` string value NexentaEdge logical path of bucket for LUNs nexenta_luns_per_target = 100 integer value Amount of LUNs per iSCSI target nexenta_mount_point_base = USDstate_path/mnt string value Base directory that contains NFS share mount points nexenta_nbd_symlinks_dir = /dev/disk/by-path string value NexentaEdge logical path of directory to store symbolic links to NBDs nexenta_nms_cache_volroot = True boolean value If set True cache NexentaStor appliance volroot option value. nexenta_ns5_blocksize = 32 integer value Block size for datasets nexenta_origin_snapshot_template = origin-snapshot-%s string value Template string to generate origin name of clone nexenta_password = nexenta string value Password to connect to NexentaStor management REST API server nexenta_qcow2_volumes = False boolean value Create volumes as QCOW2 files rather than raw files nexenta_replication_count = 3 integer value NexentaEdge iSCSI LUN object replication count. `nexenta_rest_address = ` string value IP address of NexentaStor management REST API endpoint nexenta_rest_backoff_factor = 0.5 floating point value Specifies the backoff factor to apply between connection attempts to NexentaStor management REST API server nexenta_rest_connect_timeout = 30 floating point value Specifies the time limit (in seconds), within which the connection to NexentaStor management REST API server must be established nexenta_rest_password = nexenta string value Password to connect to NexentaEdge. nexenta_rest_port = 0 integer value HTTP(S) port to connect to NexentaStor management REST API server. If it is equal zero, 8443 for HTTPS and 8080 for HTTP is used nexenta_rest_protocol = auto string value Use http or https for NexentaStor management REST API connection (default auto) nexenta_rest_read_timeout = 300 floating point value Specifies the time limit (in seconds), within which NexentaStor management REST API server must send a response nexenta_rest_retry_count = 3 integer value Specifies the number of times to repeat NexentaStor management REST API call in case of connection errors and NexentaStor appliance EBUSY or ENOENT errors nexenta_rest_user = admin string value User name to connect to NexentaEdge. nexenta_rrmgr_compression = 0 integer value Enable stream compression, level 1..9. 1 - gives best speed; 9 - gives best compression. 
nexenta_rrmgr_connections = 2 integer value Number of TCP connections. nexenta_rrmgr_tcp_buf_size = 4096 integer value TCP Buffer size in KiloBytes. nexenta_shares_config = /etc/cinder/nfs_shares string value File with the list of available nfs shares nexenta_sparse = False boolean value Enables or disables the creation of sparse datasets nexenta_sparsed_volumes = True boolean value Enables or disables the creation of volumes as sparsed files that take no space. If disabled (False), volume is created as a regular file, which takes a long time. nexenta_target_group_prefix = cinder string value Prefix for iSCSI target groups on NexentaStor nexenta_target_prefix = iqn.1986-03.com.sun:02:cinder string value iqn prefix for NexentaStor iSCSI targets nexenta_use_https = True boolean value Use HTTP secure protocol for NexentaStor management REST API connections nexenta_user = admin string value User name to connect to NexentaStor management REST API server nexenta_volume = cinder string value NexentaStor pool name that holds all volumes nexenta_volume_group = iscsi string value Volume group for NexentaStor5 iSCSI nfs_mount_attempts = 3 integer value The number of attempts to mount NFS shares before raising an error. At least one attempt will be made to mount an NFS share, regardless of the value specified. nfs_mount_options = None string value Mount options passed to the NFS client. See the NFS(5) man page for details. nfs_mount_point_base = USDstate_path/mnt string value Base dir containing mount points for NFS shares. nfs_qcow2_volumes = False boolean value Create volumes as QCOW2 files rather than raw files. nfs_shares_config = /etc/cinder/nfs_shares string value File with the list of available NFS shares. nfs_snapshot_support = False boolean value Enable support for snapshots on the NFS driver. Platforms using libvirt <1.2.7 will encounter issues with this feature. nfs_sparsed_volumes = True boolean value Create volumes as sparsed files which take no space. If set to False volume is created as regular file. In such case volume creation takes a lot of time. nimble_pool_name = default string value Nimble Controller pool name nimble_subnet_label = * string value Nimble Subnet Label nimble_verify_cert_path = None string value Path to Nimble Array SSL certificate nimble_verify_certificate = False boolean value Whether to verify Nimble SSL Certificate num_iser_scan_tries = 3 integer value The maximum number of times to rescan iSER target to find volume num_shell_tries = 3 integer value Number of times to attempt to run flakey shell commands num_volume_device_scan_tries = 3 integer value The maximum number of times to rescan targets to find volume nvmeof_conn_info_version = 1 integer value NVMe os-brick connector has 2 different connection info formats, this allows some NVMe-oF drivers that use the original format (version 1), such as spdk and LVM-nvmet, to send the newer format. nvmet_ns_id = 10 integer value Namespace id for the subsystem for the LVM volume when not sharing targets. The minimum id value when sharing.Maximum supported value in Linux is 8192 nvmet_port_id = 1 port value The id of the NVMe target port definition when not sharing targets. The starting port id value when sharing, incremented for each secondary ip address. port_group_load_metric = PercentBusy string value Metric used for port group load calculation. port_load_metric = PercentBusy string value Metric used for port load calculation. 
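For the generic NFS driver options above, a minimal sketch under assumed names; the shares file path matches the documented default, while the section name and share contents are placeholders:

[nfs-1]
# Assumed section name.
volume_driver = cinder.volume.drivers.nfs.NfsDriver
# One share per line in this file, e.g. 192.0.2.20:/export/cinder (placeholder).
nfs_shares_config = /etc/cinder/nfs_shares
nfs_mount_point_base = $state_path/mnt
# Sparse files speed up creation; set to False for fully preallocated volumes.
nfs_sparsed_volumes = True
nfs_snapshot_support = False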
powerflex_allow_migration_during_rebuild = False boolean value Allow volume migration during rebuild. powerflex_allow_non_padded_volumes = False boolean value Allow volumes to be created in Storage Pools when zero padding is disabled. This option should not be enabled if multiple tenants will utilize volumes from a shared Storage Pool. powerflex_max_over_subscription_ratio = 10.0 floating point value max_over_subscription_ratio setting for the driver. Maximum value allowed is 10.0. powerflex_rest_server_port = 443 port value Gateway REST server port. powerflex_round_volume_capacity = True boolean value Round volume sizes up to 8GB boundaries. PowerFlex/VxFlex OS requires volumes to be sized in multiples of 8GB. If set to False, volume creation will fail for volumes not sized properly powerflex_server_api_version = None string value PowerFlex/ScaleIO API version. This value should be left as the default value unless otherwise instructed by technical support. powerflex_storage_pools = None string value Storage Pools. Comma separated list of storage pools used to provide volumes. Each pool should be specified as a protection_domain_name:storage_pool_name value powerflex_unmap_volume_before_deletion = False boolean value Unmap volumes before deletion. powermax_array = None string value Serial number of the array to connect to. powermax_array_tag_list = None list value List of user assigned name for storage array. powermax_port_group_name_template = portGroupName string value User defined override for port group name. powermax_port_groups = None list value List of port groups containing frontend ports configured prior for server connection. powermax_service_level = None string value Service level to use for provisioning storage. Setting this as an extra spec in pool_name is preferable. powermax_short_host_name_template = shortHostName string value User defined override for short host name. powermax_srp = None string value Storage resource pool on array to use for provisioning. powerstore_appliances = [] list value Appliances names. Comma separated list of PowerStore appliances names used to provision volumes. Deprecated since: Wallaby *Reason:*Is not used anymore. PowerStore Load Balancer is used to provision volumes instead. powerstore_ports = [] list value Allowed ports. Comma separated list of PowerStore iSCSI IPs or FC WWNs (ex. 58:cc:f0:98:49:22:07:02) to be used. If option is not set all ports are allowed. proxy = cinder.volume.drivers.ibm.ibm_storage.proxy.IBMStorageProxy string value Proxy driver that connects to the IBM Storage Array pure_api_token = None string value REST API authorization token. pure_automatic_max_oversubscription_ratio = True boolean value Automatically determine an oversubscription ratio based on the current total data reduction values. If used this calculated value will override the max_over_subscription_ratio config option. pure_eradicate_on_delete = False boolean value When enabled, all Pure volumes, snapshots, and protection groups will be eradicated at the time of deletion in Cinder. Data will NOT be recoverable after a delete with this set to True! When disabled, volumes and snapshots will go into pending eradication state and can be recovered. pure_host_personality = None string value Determines how the Purity system tunes the protocol used between the array and the initiator. pure_iscsi_cidr = 0.0.0.0/0 string value CIDR of FlashArray iSCSI targets hosts are allowed to connect to. Default will allow connection to any IPv4 address. 
This parameter now supports IPv6 subnets. Ignored when pure_iscsi_cidr_list is set. pure_iscsi_cidr_list = None list value Comma-separated list of CIDR of FlashArray iSCSI targets hosts are allowed to connect to. It supports IPv4 and IPv6 subnets. This parameter supersedes pure_iscsi_cidr. pure_replica_interval_default = 3600 integer value Snapshot replication interval in seconds. pure_replica_retention_long_term_default = 7 integer value Retain snapshots per day on target for this time (in days.) pure_replica_retention_long_term_per_day_default = 3 integer value Retain how many snapshots for each day. pure_replica_retention_short_term_default = 14400 integer value Retain all snapshots on target for this time (in seconds.) pure_replication_pg_name = cinder-group string value Pure Protection Group name to use for async replication (will be created if it does not exist). pure_replication_pod_name = cinder-pod string value Pure Pod name to use for sync replication (will be created if it does not exist). pvme_iscsi_ips = [] list value List of comma-separated target iSCSI IP addresses. pvme_pool_name = A string value Pool or Vdisk name to use for volume creation. qnap_management_url = None uri value The URL to management QNAP Storage. Driver does not support IPv6 address in URL. qnap_poolname = None string value The pool name in the QNAP Storage qnap_storage_protocol = iscsi string value Communication protocol to access QNAP storage quobyte_client_cfg = None string value Path to a Quobyte Client configuration file. quobyte_mount_point_base = USDstate_path/mnt string value Base dir containing the mount point for the Quobyte volume. quobyte_overlay_volumes = False boolean value Create new volumes from the volume_from_snapshot_cache by creating overlay files instead of full copies. This speeds up the creation of volumes from this cache. This feature requires the options quobyte_qcow2_volumes and quobyte_volume_from_snapshot_cache to be set to True. If one of these is set to False this option is ignored. quobyte_qcow2_volumes = True boolean value Create volumes as QCOW2 files rather than raw files. quobyte_sparsed_volumes = True boolean value Create volumes as sparse files which take no space. If set to False, volume is created as regular file. quobyte_volume_from_snapshot_cache = False boolean value Create a cache of volumes from merged snapshots to speed up creation of multiple volumes from a single snapshot. quobyte_volume_url = None string value Quobyte URL to the Quobyte volume using e.g. a DNS SRV record (preferred) or a host list (alternatively) like quobyte://<DIR host1>, <DIR host2>/<volume name> rados_connect_timeout = -1 integer value Timeout value (in seconds) used when connecting to ceph cluster. If value < 0, no timeout is set and default librados value is used. rados_connection_interval = 5 integer value Interval value (in seconds) between connection retries to ceph cluster. rados_connection_retries = 3 integer value Number of retries if connection to ceph cluster failed. `rbd_ceph_conf = ` string value Path to the ceph configuration file rbd_cluster_name = ceph string value The name of ceph cluster rbd_exclusive_cinder_pool = True boolean value Set to False if the pool is shared with other usages. On exclusive use driver won't query images' provisioned size as they will match the value calculated by the Cinder core code for allocated_capacity_gb. This reduces the load on the Ceph cluster as well as on the volume service. 
On non-exclusive use the driver will query the Ceph cluster for per-image used disk, which is an intensive operation with an independent request for each image. rbd_flatten_volume_from_snapshot = False boolean value Flatten volumes created from snapshots to remove dependency from volume to snapshot rbd_iscsi_api_debug = False boolean value Enable client request debugging. `rbd_iscsi_api_password = ` string value The password for the rbd_target_api service `rbd_iscsi_api_url = ` string value The URL to the rbd_target_api service `rbd_iscsi_api_user = ` string value The username for the rbd_target_api service rbd_iscsi_target_iqn = None string value The preconfigured target_iqn on the iscsi gateway. rbd_max_clone_depth = 5 integer value Maximum number of nested volume clones that are taken before a flatten occurs. Set to 0 to disable cloning. Note: lowering this value will not affect existing volumes whose clone depth exceeds the new value. rbd_pool = rbd string value The RADOS pool where rbd volumes are stored rbd_secret_uuid = None string value The libvirt uuid of the secret for the rbd_user volumes rbd_store_chunk_size = 4 integer value Volumes will be chunked into objects of this size (in megabytes). rbd_user = None string value The RADOS client name for accessing rbd volumes - only set when using cephx authentication remove_empty_host = False boolean value To remove the host from Unity when the last LUN is detached from it. By default, it is False. replication_connect_timeout = 5 integer value Timeout value (in seconds) used when connecting to ceph cluster to do a demotion/promotion of volumes. If value < 0, no timeout is set and default librados value is used. replication_device = None dict value Multi opt of dictionaries to represent a replication target device. This option may be specified multiple times in a single config section to specify multiple replication target devices. Each entry takes the standard dict config form: replication_device = target_device_id:<required>,key1:value1,key2:value2... report_discard_supported = False boolean value Report to clients of Cinder that the backend supports discard (aka. trim/unmap). This will not actually change the behavior of the backend or the client directly, it will only notify that it can be used. report_dynamic_total_capacity = True boolean value Set to True for driver to report total capacity as a dynamic value (used + current free) and to False to report a static value (quota max bytes if defined and global size of cluster if not). reserved_percentage = 0 integer value The percentage of backend capacity that is reserved retries = 200 integer value Use this value to specify number of retries. san_api_port = None port value Port to use to access the SAN API `san_clustername = ` string value Cluster name to use for creating volumes `san_ip = ` string value IP address of SAN controller san_is_local = False boolean value Execute commands locally instead of over SSH; use if the volume service is running on the SAN device san_login = admin string value Username for SAN controller `san_password = ` string value Password for SAN controller `san_private_key = ` string value Filename of private key to use for SSH authentication san_ssh_port = 22 port value SSH port to use with SAN san_thin_provision = True boolean value Use thin provisioning for SAN volumes? scst_target_driver = iscsi string value SCST target implementation can choose from multiple SCST target drivers.
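A hedged Ceph RBD backend sketch using the rbd_* options above; the section name, pool, client name, and libvirt secret UUID are placeholders:

[ceph]
# Assumed section name and placeholder identifiers.
volume_driver = cinder.volume.drivers.rbd.RBDDriver
rbd_ceph_conf = /etc/ceph/ceph.conf
rbd_pool = volumes
rbd_user = cinder
rbd_secret_uuid = 457eb676-33da-42ec-9a8c-9293d545c337
# Advertise discard/trim support to attached clients.
report_discard_supported = True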
scst_target_iqn_name = None string value Certain ISCSI targets have predefined target names, SCST target driver uses this name. seagate_iscsi_ips = [] list value List of comma-separated target iSCSI IP addresses. seagate_pool_name = A string value Pool or vdisk name to use for volume creation. seagate_pool_type = virtual string value linear (for vdisk) or virtual (for virtual pool). `secondary_san_ip = ` string value IP address of secondary DSM controller secondary_san_login = Admin string value Secondary DSM user name `secondary_san_password = ` string value Secondary DSM user password name secondary_sc_api_port = 3033 port value Secondary Dell API port sf_account_prefix = None string value Create SolidFire accounts with this prefix. Any string can be used here, but the string "hostname" is special and will create a prefix using the cinder node hostname ( default behavior). The default is NO prefix. sf_allow_tenant_qos = False boolean value Allow tenants to specify QOS on create sf_api_port = 443 port value SolidFire API port. Useful if the device api is behind a proxy on a different port. sf_api_request_timeout = 30 integer value Sets time in seconds to wait for an api request to complete. sf_cluster_pairing_timeout = 60 integer value Sets time in seconds to wait for clusters to complete pairing. sf_emulate_512 = True boolean value Set 512 byte emulation on volume creation; sf_enable_vag = False boolean value Utilize volume access groups on a per-tenant basis. sf_provisioning_calc = maxProvisionedSpace string value Change how SolidFire reports used space and provisioning calculations. If this parameter is set to usedSpace , the driver will report correct values as expected by Cinder thin provisioning. sf_svip = None string value Overrides default cluster SVIP with the one specified. This is required or deployments that have implemented the use of VLANs for iSCSI networks in their cloud. sf_volume_clone_timeout = 600 integer value Sets time in seconds to wait for a clone of a volume or snapshot to complete. sf_volume_create_timeout = 60 integer value Sets time in seconds to wait for a create volume operation to complete. sf_volume_pairing_timeout = 3600 integer value Sets time in seconds to wait for a migrating volume to complete pairing and sync. sf_volume_prefix = UUID- string value Create SolidFire volumes with this prefix. Volume names are of the form <sf_volume_prefix><cinder-volume-id>. The default is to use a prefix of UUID- . smbfs_default_volume_format = vhd string value Default format that will be used when creating volumes if no volume format is specified. smbfs_mount_point_base = C:\OpenStack\_mnt string value Base dir containing mount points for smbfs shares. smbfs_pool_mappings = {} dict value Mappings between share locations and pool names. If not specified, the share names will be used as pool names. Example: //addr/share:pool_name,//addr/share2:pool_name2 smbfs_shares_config = C:\OpenStack\smbfs_shares.txt string value File with the list of available smbfs shares. spdk_max_queue_depth = 64 integer value Queue depth for rdma transport. spdk_rpc_ip = None string value The NVMe target remote configuration IP address. spdk_rpc_password = None string value The NVMe target remote configuration password. spdk_rpc_port = 8000 port value The NVMe target remote configuration port. spdk_rpc_protocol = http string value Protocol to be used with SPDK RPC proxy spdk_rpc_username = None string value The NVMe target remote configuration username. 
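A sketch of a SolidFire backend that combines the san_* options earlier on this page with the sf_* options above; the section name, management and SVIP addresses, and credentials are placeholders:

[solidfire]
# Assumed section name; addresses and credentials are placeholders.
volume_driver = cinder.volume.drivers.solidfire.SolidFireDriver
san_ip = 192.0.2.30
san_login = admin
san_password = secret
# Override the cluster SVIP when iSCSI traffic runs on a dedicated VLAN.
sf_svip = 198.51.100.5
sf_allow_tenant_qos = False
sf_volume_prefix = UUID-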
ssh_conn_timeout = 30 integer value SSH connection timeout in seconds ssh_max_pool_conn = 5 integer value Maximum ssh connections in the pool ssh_min_pool_conn = 1 integer value Minimum ssh connections in the pool storage_protocol = iscsi string value Protocol for transferring data between host and storage back-end. storage_vnx_authentication_type = global string value VNX authentication scope type. By default, the value is global. storage_vnx_pool_names = None list value Comma-separated list of storage pool names to be used. storage_vnx_security_file_dir = None string value Directory path that contains the VNX security file. Make sure the security file is generated first. storpool_replication = 3 integer value The default StorPool chain replication value. Used when creating a volume with no specified type if storpool_template is not set. Also used for calculating the apparent free space reported in the stats. storpool_template = None string value The StorPool template for volumes with no type. storwize_peer_pool = None string value Specifies the name of the peer pool for hyperswap volume, the peer pool must exist on the other site. storwize_portset = None string value Specifies the name of the portset in which host to be created. storwize_preferred_host_site = {} dict value Specifies the site information for host. One WWPN or multi WWPNs used in the host can be specified. For example: storwize_preferred_host_site=site1:wwpn1,site2:wwpn2&wwpn3 or storwize_preferred_host_site=site1:iqn1,site2:iqn2 storwize_san_secondary_ip = None string value Specifies secondary management IP or hostname to be used if san_ip is invalid or becomes inaccessible. storwize_svc_allow_tenant_qos = False boolean value Allow tenants to specify QOS on create storwize_svc_flashcopy_rate = 50 integer value Specifies the Storwize FlashCopy copy rate to be used when creating a full volume copy. The default is rate is 50, and the valid rates are 1-150. storwize_svc_flashcopy_timeout = 120 integer value Maximum number of seconds to wait for FlashCopy to be prepared. storwize_svc_iscsi_chap_enabled = True boolean value Configure CHAP authentication for iSCSI connections (Default: Enabled) storwize_svc_mirror_pool = None string value Specifies the name of the pool in which mirrored copy is stored. Example: "pool2" storwize_svc_multihostmap_enabled = True boolean value This option no longer has any affect. It is deprecated and will be removed in the release. storwize_svc_multipath_enabled = False boolean value Connect with multipath (FC only; iSCSI multipath is controlled by Nova) storwize_svc_retain_aux_volume = False boolean value Enable or disable retaining of aux volume on secondary storage during delete of the volume on primary storage or moving the primary volume from mirror to non-mirror with replication enabled. This option is valid for Spectrum Virtualize Family. storwize_svc_stretched_cluster_partner = None string value If operating in stretched cluster mode, specify the name of the pool in which mirrored copies are stored.Example: "pool2" storwize_svc_vol_autoexpand = True boolean value Storage system autoexpand parameter for volumes (True/False) storwize_svc_vol_compression = False boolean value Storage system compression option for volumes storwize_svc_vol_easytier = True boolean value Enable Easy Tier for volumes storwize_svc_vol_grainsize = 256 integer value Storage system grain size parameter for volumes (8/32/64/128/256) storwize_svc_vol_iogrp = 0 string value The I/O group in which to allocate volumes. 
It can be a comma-separated list in which case the driver will select an io_group based on least number of volumes associated with the io_group. storwize_svc_vol_nofmtdisk = False boolean value Specifies that the volume not be formatted during creation. storwize_svc_vol_rsize = 2 integer value Storage system space-efficiency parameter for volumes (percentage) storwize_svc_vol_warning = 0 integer value Storage system threshold for volume capacity warnings (percentage) storwize_svc_volpool_name = ['volpool'] list value Comma separated list of storage system storage pools for volumes. suppress_requests_ssl_warnings = False boolean value Suppress requests library SSL certificate warnings. synology_admin_port = 5000 port value Management port for Synology storage. synology_device_id = None string value Device id for skip one time password check for logging in Synology storage if OTP is enabled. synology_one_time_pass = None string value One time password of administrator for logging in Synology storage if OTP is enabled. `synology_password = ` string value Password of administrator for logging in Synology storage. `synology_pool_name = ` string value Volume on Synology storage to be used for creating lun. synology_ssl_verify = True boolean value Do certificate validation or not if USDdriver_use_ssl is True synology_username = admin string value Administrator of Synology storage. target_helper = tgtadm string value Target user-land tool to use. tgtadm is default, use lioadm for LIO iSCSI support, scstadmin for SCST target support, ietadm for iSCSI Enterprise Target, iscsictl for Chelsio iSCSI Target, nvmet for NVMEoF support, spdk-nvmeof for SPDK NVMe-oF, or fake for testing. Note: The IET driver is deprecated and will be removed in the V release. target_ip_address = USDmy_ip string value The IP address that the iSCSI/NVMEoF daemon is listening on target_port = 3260 port value The port that the iSCSI/NVMEoF daemon is listening on target_prefix = iqn.2010-10.org.openstack: string value Prefix for iSCSI/NVMEoF volumes target_protocol = iscsi string value Determines the target protocol for new volumes, created with tgtadm, lioadm and nvmet target helpers. In order to enable RDMA, this parameter should be set with the value "iser". The supported iSCSI protocol values are "iscsi" and "iser", in case of nvmet target set to "nvmet_rdma" or "nvmet_tcp". target_secondary_ip_addresses = [] list value The list of secondary IP addresses of the iSCSI/NVMEoF daemon thres_avl_size_perc_start = 20 integer value If the percentage of available space for an NFS share has dropped below the value specified by this option, the NFS image cache will be cleaned. thres_avl_size_perc_stop = 60 integer value When the percentage of available space on an NFS share has reached the percentage specified by this option, the driver will stop clearing files from the NFS image cache that have not been accessed in the last M minutes, where M is the value of the expiry_thres_minutes configuration option. trace_flags = None list value List of options that control which trace info is written to the DEBUG log level to assist developers. Valid values are method and api. u4p_failover_autofailback = True boolean value If the driver should automatically failback to the primary instance of Unisphere when a successful connection is re-established. u4p_failover_backoff_factor = 1 integer value A backoff factor to apply between attempts after the second try (most errors are resolved immediately by a second try without a delay). 
Retries will sleep for: {backoff factor} * (2 ^ ({number of total retries} - 1)) seconds. u4p_failover_retries = 3 integer value The maximum number of retries each connection should attempt. Note, this applies only to failed DNS lookups, socket connections and connection timeouts, never to requests where data has made it to the server. u4p_failover_target = None dict value Dictionary of Unisphere failover target info. u4p_failover_timeout = 20.0 integer value How long to wait for the server to send data before giving up. unique_fqdn_network = True boolean value Whether or not our private network has unique FQDN on each initiator or not. For example networks with QA systems usually have multiple servers/VMs with the same FQDN. When true this will create host entries on 3PAR using the FQDN, when false it will use the reversed IQN/WWNN. unity_io_ports = [] list value A comma-separated list of iSCSI or FC ports to be used. Each port can be Unix-style glob expressions. unity_storage_pool_names = [] list value A comma-separated list of storage pool names to be used. use_chap_auth = False boolean value Option to enable/disable CHAP authentication for targets. use_multipath_for_image_xfer = False boolean value Do we attach/detach volumes in cinder using multipath for volume to image and image to volume transfers? This parameter needs to be configured for each backend section or in [backend_defaults] section as a common configuration for all backends. vmax_workload = None string value Workload, setting this as an extra spec in pool_name is preferable. vmware_adapter_type = lsiLogic string value Default adapter type to be used for attaching volumes. vmware_api_retry_count = 10 integer value Number of times VMware vCenter server API must be retried upon connection related issues. vmware_ca_file = None string value CA bundle file to use in verifying the vCenter server certificate. vmware_cluster_name = None multi valued Name of a vCenter compute cluster where volumes should be created. vmware_connection_pool_size = 10 integer value Maximum number of connections in http connection pool. vmware_datastore_regex = None string value Regular expression pattern to match the name of datastores where backend volumes are created. vmware_enable_volume_stats = False boolean value If true, this enables the fetching of the volume stats from the backend. This has potential performance issues at scale. When False, the driver will not collect ANY stats about the backend. vmware_host_ip = None string value IP address for connecting to VMware vCenter server. vmware_host_password = None string value Password for authenticating with VMware vCenter server. vmware_host_port = 443 port value Port number for connecting to VMware vCenter server. vmware_host_username = None string value Username for authenticating with VMware vCenter server. vmware_host_version = None string value Optional string specifying the VMware vCenter server version. The driver attempts to retrieve the version from VMware vCenter server. Set this configuration only if you want to override the vCenter server version. vmware_image_transfer_timeout_secs = 7200 integer value Timeout in seconds for VMDK volume transfer between Cinder and Glance. vmware_insecure = False boolean value If true, the vCenter server certificate is not verified. If false, then the default CA truststore is used for verification. This option is ignored if "vmware_ca_file" is set. 
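For the VMware VMDK driver options above, a hedged sketch; the vCenter address, credentials, CA bundle path, and cluster name are placeholders, and the driver class should be verified for your release:

[vmware-1]
# Assumed section name; all values below are placeholders.
volume_driver = cinder.volume.drivers.vmware.vmdk.VMwareVcVmdkDriver
vmware_host_ip = vcenter.example.com
vmware_host_username = administrator@vsphere.local
vmware_host_password = secret
# Verify the vCenter certificate with a CA bundle instead of disabling checks.
vmware_ca_file = /etc/ssl/certs/vcenter-ca.pem
vmware_cluster_name = compute-cluster-1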
vmware_lazy_create = True boolean value If true, the backend volume in vCenter server is created lazily when the volume is created without any source. The backend volume is created when the volume is attached, uploaded to image service or during backup. vmware_max_objects_retrieval = 100 integer value Max number of objects to be retrieved per batch. Query results will be obtained in batches from the server and not in one shot. Server may still limit the count to something less than the configured value. vmware_snapshot_format = template string value Volume snapshot format in vCenter server. vmware_storage_profile = None multi valued Names of storage profiles to be monitored. Only used when vmware_enable_volume_stats is True. vmware_task_poll_interval = 2.0 floating point value The interval (in seconds) for polling remote tasks invoked on VMware vCenter server. vmware_tmp_dir = /tmp string value Directory where virtual disks are stored during volume backup and restore. vmware_volume_folder = Volumes string value Name of the vCenter inventory folder that will contain Cinder volumes. This folder will be created under "OpenStack/<project_folder>", where project_folder is of format "Project (<volume_project_id>)". vmware_wsdl_location = None string value Optional VIM service WSDL Location e.g http://<server>/vimService.wsdl . Optional over-ride to default location for bug work-arounds. vnx_async_migrate = True boolean value Always use asynchronous migration during volume cloning and creating from snapshot. As described in configuration doc, async migration has some constraints. Besides using metadata, customers could use this option to disable async migration. Be aware that async_migrate in metadata overrides this option when both are set. By default, the value is True. volume_backend_name = None string value The backend name for a given driver implementation volume_clear = zero string value Method used to wipe old volumes volume_clear_ionice = None string value The flag to pass to ionice to alter the i/o priority of the process used to zero a volume after deletion, for example "-c3" for idle only priority. volume_clear_size = 0 integer value Size in MiB to wipe at start of old volumes. 1024 MiB at max. 0 ⇒ all volume_copy_blkio_cgroup_name = cinder-volume-copy string value The blkio cgroup name to be used to limit bandwidth of volume copy volume_copy_bps_limit = 0 integer value The upper limit of bandwidth of volume copy. 0 ⇒ unlimited volume_dd_blocksize = 1M string value The default block size used when copying/clearing volumes volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver string value Driver to use for volume creation volume_group = cinder-volumes string value Name for the VG that will contain exported volumes volumes_dir = USDstate_path/volumes string value Volume configuration file storage directory vxflexos_allow_migration_during_rebuild = False boolean value renamed to powerflex_allow_migration_during_rebuild. vxflexos_allow_non_padded_volumes = False boolean value renamed to powerflex_allow_non_padded_volumes. vxflexos_max_over_subscription_ratio = 10.0 floating point value renamed to powerflex_max_over_subscription_ratio. vxflexos_rest_server_port = 443 port value renamed to powerflex_rest_server_port. vxflexos_round_volume_capacity = True boolean value renamed to powerflex_round_volume_capacity. vxflexos_server_api_version = None string value renamed to powerflex_server_api_version. vxflexos_storage_pools = None string value renamed to powerflex_storage_pools. 
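To complement the volume_* options above, a sketch of the iSCSI export side (the target_* options listed earlier in this table) for the same assumed LVM backend section used in the earlier sketch; the target IP address is a placeholder:

[lvm-1]
# Same assumed backend section as the LVM sketch above.
volume_backend_name = lvm-1
volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
volume_group = cinder-volumes
# Export volumes over iSCSI with the LIO target helper.
target_helper = lioadm
target_protocol = iscsi
target_ip_address = 192.0.2.40
# Wipe old volumes with zeros before returning space to the volume group.
volume_clear = zero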
vxflexos_unmap_volume_before_deletion = False boolean value renamed to powerflex_unmap_volume_before_deletion. vzstorage_default_volume_format = raw string value Default format that will be used when creating volumes if no volume format is specified. vzstorage_mount_options = None list value Mount options passed to the vzstorage client. See the pstorage-mount man page for details. vzstorage_mount_point_base = $state_path/mnt string value Base dir containing mount points for vzstorage shares. vzstorage_shares_config = /etc/cinder/vzstorage_shares string value File with the list of available vzstorage shares. vzstorage_sparsed_volumes = True boolean value Create volumes as sparsed files which take no space rather than regular files when using raw format, in which case volume creation takes a lot of time. vzstorage_used_ratio = 0.95 floating point value Percent of ACTUAL usage of the underlying volume before no new volumes can be allocated to the volume destination. windows_iscsi_lun_path = C:\iSCSIVirtualDisks string value Path to store VHD backed volumes xtremio_array_busy_retry_count = 5 integer value Number of retries in case array is busy xtremio_array_busy_retry_interval = 5 integer value Interval between retries in case array is busy xtremio_clean_unused_ig = False boolean value Whether the driver should remove initiator groups with no volumes after the last connection is terminated. Since the behavior till now was to leave the IG be, we default to False (not deleting IGs without connected volumes); setting this parameter to True will remove any IG after terminating its connection to the last volume. `xtremio_cluster_name = ` string value XMS cluster id in multi-cluster environment xtremio_ports = [] list value Allowed ports. Comma separated list of XtremIO iSCSI IPs or FC WWNs (ex. 58:cc:f0:98:49:22:07:02) to be used. If option is not set all ports are allowed. xtremio_volumes_per_glance_cache = 100 integer value Number of volumes created from each cached glance image zadara_access_key = None string value VPSA access key zadara_default_snap_policy = False boolean value VPSA - Attach snapshot policy for volumes. If the option is neither configured nor provided as metadata, the VPSA will inherit the default value. zadara_gen3_vol_compress = False boolean value VPSA - Enable compression for volumes. If the option is neither configured nor provided as metadata, the VPSA will inherit the default value. zadara_gen3_vol_dedupe = False boolean value VPSA - Enable deduplication for volumes. If the option is neither configured nor provided as metadata, the VPSA will inherit the default value. zadara_ssl_cert_verify = True boolean value If set to True the http client will validate the SSL certificate of the VPSA endpoint. zadara_vol_encrypt = False boolean value VPSA - Default encryption policy for volumes. If the option is neither configured nor provided as metadata, the VPSA will inherit the default value. zadara_vpsa_host = None host address value VPSA - Management Host name or IP address zadara_vpsa_poolname = None string value VPSA - Storage Pool assigned for volumes zadara_vpsa_port = None port value VPSA - Port number zadara_vpsa_use_ssl = False boolean value VPSA - Use SSL connection 2.1.4. barbican The following table outlines the options available under the [barbican] group in the /etc/cinder/cinder.conf file. Table 2.3.
barbican Configuration option = Default value Type Description auth_endpoint = http://localhost/identity/v3 string value Use this endpoint to connect to Keystone barbican_api_version = None string value Version of the Barbican API, for example: "v1" barbican_endpoint = None string value Use this endpoint to connect to Barbican, for example: "http://localhost:9311/" barbican_endpoint_type = public string value Specifies the type of endpoint. Allowed values are: public, private, and admin number_of_retries = 60 integer value Number of times to retry poll for key creation completion retry_delay = 1 integer value Number of seconds to wait before retrying poll for key creation completion verify_ssl = True boolean value Specifies whether TLS (https) requests are verified. If False, the server's certificate will not be validated; if True, the verify_ssl_path option may also be set. verify_ssl_path = None string value A path to a bundle or CA certs to check against, or None for requests to attempt to locate and use certificates when verify_ssl is True. If verify_ssl is False, this is ignored. 2.1.5. brcd_fabric_example The following table outlines the options available under the [brcd_fabric_example] group in the /etc/cinder/cinder.conf file. Table 2.4. brcd_fabric_example Configuration option = Default value Type Description `fc_fabric_address = ` string value Management IP of fabric. `fc_fabric_password = ` string value Password for user. fc_fabric_port = 22 port value Connecting port `fc_fabric_ssh_cert_path = ` string value Local SSH certificate path. `fc_fabric_user = ` string value Fabric user ID. fc_southbound_protocol = REST_HTTP string value South bound connector for the fabric. fc_virtual_fabric_id = None string value Virtual Fabric ID. zone_activate = True boolean value Overridden zoning activation state. zone_name_prefix = openstack string value Overridden zone name prefix. zoning_policy = initiator-target string value Overridden zoning policy. 2.1.6. cisco_fabric_example The following table outlines the options available under the [cisco_fabric_example] group in the /etc/cinder/cinder.conf file. Table 2.5. cisco_fabric_example Configuration option = Default value Type Description `cisco_fc_fabric_address = ` string value Management IP of fabric `cisco_fc_fabric_password = ` string value Password for user cisco_fc_fabric_port = 22 port value Connecting port `cisco_fc_fabric_user = ` string value Fabric user ID cisco_zone_activate = True boolean value overridden zoning activation state cisco_zone_name_prefix = None string value overridden zone name prefix cisco_zoning_policy = initiator-target string value overridden zoning policy cisco_zoning_vsan = None string value VSAN of the Fabric 2.1.7. coordination The following table outlines the options available under the [coordination] group in the /etc/cinder/cinder.conf file. Table 2.6. coordination Configuration option = Default value Type Description backend_url = file://$state_path string value The backend URL to use for distributed coordination. 2.1.8. cors The following table outlines the options available under the [cors] group in the /etc/cinder/cinder.conf file. Table 2.7.
cors Configuration option = Default value Type Description allow_credentials = True boolean value Indicate that the actual request can include user credentials allow_headers = ['X-Auth-Token', 'X-Identity-Status', 'X-Roles', 'X-Service-Catalog', 'X-User-Id', 'X-Tenant-Id', 'X-OpenStack-Request-ID', 'X-Trace-Info', 'X-Trace-HMAC', 'OpenStack-API-Version'] list value Indicate which header field names may be used during the actual request. allow_methods = ['GET', 'PUT', 'POST', 'DELETE', 'PATCH', 'HEAD'] list value Indicate which methods can be used during the actual request. allowed_origin = None list value Indicate whether this resource may be shared with the domain received in the requests "origin" header. Format: "<protocol>://<host>[:<port>]", no trailing slash. Example: https://horizon.example.com expose_headers = ['X-Auth-Token', 'X-Subject-Token', 'X-Service-Token', 'X-OpenStack-Request-ID', 'OpenStack-API-Version'] list value Indicate which headers are safe to expose to the API. Defaults to HTTP Simple Headers. max_age = 3600 integer value Maximum cache age of CORS preflight requests. 2.1.9. database The following table outlines the options available under the [database] group in the /etc/cinder/cinder.conf file. Table 2.8. database Configuration option = Default value Type Description backend = sqlalchemy string value The back end to use for the database. connection = None string value The SQLAlchemy connection string to use to connect to the database. connection_debug = 0 integer value Verbosity of SQL debugging information: 0=None, 100=Everything. `connection_parameters = ` string value Optional URL parameters to append onto the connection URL at connect time; specify as param1=value1¶m2=value2&... connection_recycle_time = 3600 integer value Connections which have been present in the connection pool longer than this number of seconds will be replaced with a new one the time they are checked out from the pool. connection_trace = False boolean value Add Python stack traces to SQL as comment strings. db_inc_retry_interval = True boolean value If True, increases the interval between retries of a database operation up to db_max_retry_interval. db_max_retries = 20 integer value Maximum retries in case of connection error or deadlock error before error is raised. Set to -1 to specify an infinite retry count. db_max_retry_interval = 10 integer value If db_inc_retry_interval is set, the maximum seconds between retries of a database operation. db_retry_interval = 1 integer value Seconds between retries of a database transaction. max_overflow = 50 integer value If set, use this value for max_overflow with SQLAlchemy. max_pool_size = 5 integer value Maximum number of SQL connections to keep open in a pool. Setting a value of 0 indicates no limit. max_retries = 10 integer value Maximum number of database connection retries during startup. Set to -1 to specify an infinite retry count. mysql_enable_ndb = False boolean value If True, transparently enables support for handling MySQL Cluster (NDB). mysql_sql_mode = TRADITIONAL string value The SQL mode to be used for MySQL sessions. This option, including the default, overrides any server-set SQL mode. To use whatever SQL mode is set by the server configuration, set this to no value. Example: mysql_sql_mode= pool_timeout = None integer value If set, use this value for pool_timeout with SQLAlchemy. retry_interval = 10 integer value Interval between retries of opening a SQL connection. 
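A short sketch of the [database] options in Table 2.8 above; the MySQL host, database name, and credentials are placeholders, and any SQLAlchemy connection URL is accepted:

[database]
# Placeholder credentials and host.
connection = mysql+pymysql://cinder:CINDER_DBPASS@controller/cinder
max_pool_size = 5
max_retries = 10
connection_recycle_time = 3600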
slave_connection = None string value The SQLAlchemy connection string to use to connect to the slave database. sqlite_synchronous = True boolean value If True, SQLite uses synchronous mode. use_db_reconnect = False boolean value Enable the experimental use of database reconnect on connection loss. 2.1.10. fc-zone-manager The following table outlines the options available under the [fc-zone-manager] group in the /etc/cinder/cinder.conf file. Table 2.9. fc-zone-manager Configuration option = Default value Type Description brcd_sb_connector = HTTP string value South bound connector for zoning operation cisco_sb_connector = cinder.zonemanager.drivers.cisco.cisco_fc_zone_client_cli.CiscoFCZoneClientCLI string value Southbound connector for zoning operation enable_unsupported_driver = False boolean value Set this to True when you want to allow an unsupported zone manager driver to start. Drivers that haven't maintained a working CI system and testing are marked as unsupported until CI is working again. This also marks a driver as deprecated and may be removed in the release. fc_fabric_names = None string value Comma separated list of Fibre Channel fabric names. This list of names is used to retrieve other SAN credentials for connecting to each SAN fabric fc_san_lookup_service = cinder.zonemanager.drivers.brocade.brcd_fc_san_lookup_service.BrcdFCSanLookupService string value FC SAN Lookup Service zone_driver = cinder.zonemanager.drivers.brocade.brcd_fc_zone_driver.BrcdFCZoneDriver string value FC Zone Driver responsible for zone management zoning_policy = initiator-target string value Zoning policy configured by user; valid values include "initiator-target" or "initiator" 2.1.11. healthcheck The following table outlines the options available under the [healthcheck] group in the /etc/cinder/cinder.conf file. Table 2.10. healthcheck Configuration option = Default value Type Description backends = [] list value Additional backends that can perform health checks and report that information back as part of a request. detailed = False boolean value Show more detailed information as part of the response. Security note: Enabling this option may expose sensitive details about the service being monitored. Be sure to verify that it will not violate your security policies. disable_by_file_path = None string value Check the presence of a file to determine if an application is running on a port. Used by DisableByFileHealthcheck plugin. disable_by_file_paths = [] list value Check the presence of a file based on a port to determine if an application is running on a port. Expects a "port:path" list of strings. Used by DisableByFilesPortsHealthcheck plugin. path = /healthcheck string value The path to respond to healthcheck requests on. 2.1.12. key_manager The following table outlines the options available under the [key_manager] group in the /etc/cinder/cinder.conf file. Table 2.11. key_manager Configuration option = Default value Type Description auth_type = None string value The type of authentication credential to create. Possible values are token, password, keystone_token, and keystone_password. Required if no context is passed to the credential factory. auth_url = None string value Use this endpoint to connect to Keystone. backend = barbican string value Specify the key manager implementation. Options are "barbican" and "vault". Default is "barbican". Will support the values earlier set using [key_manager]/api_class for some time. domain_id = None string value Domain ID for domain scoping.
Optional for keystone_token and keystone_password auth_type. domain_name = None string value Domain name for domain scoping. Optional for keystone_token and keystone_password auth_type. fixed_key = None string value Fixed key returned by key manager, specified in hex password = None string value Password for authentication. Required for password and keystone_password auth_type. project_domain_id = None string value Project's domain ID for project. Optional for keystone_token and keystone_password auth_type. project_domain_name = None string value Project's domain name for project. Optional for keystone_token and keystone_password auth_type. project_id = None string value Project ID for project scoping. Optional for keystone_token and keystone_password auth_type. project_name = None string value Project name for project scoping. Optional for keystone_token and keystone_password auth_type. reauthenticate = True boolean value Allow fetching a new token if the current one is going to expire. Optional for keystone_token and keystone_password auth_type. token = None string value Token for authentication. Required for token and keystone_token auth_type if no context is passed to the credential factory. trust_id = None string value Trust ID for trust scoping. Optional for keystone_token and keystone_password auth_type. user_domain_id = None string value User's domain ID for authentication. Optional for keystone_token and keystone_password auth_type. user_domain_name = None string value User's domain name for authentication. Optional for keystone_token and keystone_password auth_type. user_id = None string value User ID for authentication. Optional for keystone_token and keystone_password auth_type. username = None string value Username for authentication. Required for password auth_type. Optional for the keystone_password auth_type. 2.1.13. keystone_authtoken The following table outlines the options available under the [keystone_authtoken] group in the /etc/cinder/cinder.conf file. Table 2.12. keystone_authtoken Configuration option = Default value Type Description auth_section = None string value Config Section from which to load plugin specific options auth_type = None string value Authentication type to load auth_uri = None string value Complete "public" Identity API endpoint. This endpoint should not be an "admin" endpoint, as it should be accessible by all end users. Unauthenticated clients are redirected to this endpoint to authenticate. Although this endpoint should ideally be unversioned, client support in the wild varies. If you're using a versioned v2 endpoint here, then this should not be the same endpoint the service user utilizes for validating tokens, because normal end users may not be able to reach that endpoint. This option is deprecated in favor of www_authenticate_uri and will be removed in the S release. Deprecated since: Queens *Reason:*The auth_uri option is deprecated in favor of www_authenticate_uri and will be removed in the S release. auth_version = None string value API version of the Identity API endpoint. cache = None string value Request environment key where the Swift cache object is stored. When auth_token middleware is deployed with a Swift cache, use this option to have the middleware share a caching backend with swift. Otherwise, use the memcached_servers option instead. cafile = None string value A PEM encoded Certificate Authority to use when verifying HTTPs connections. Defaults to system CAs. 
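A hedged sketch of a typical [keystone_authtoken] section; the endpoint URLs and service credentials are placeholders, and the username, password, project, and domain options are supplied by the keystoneauth password plugin selected via auth_type rather than by the table above:

[keystone_authtoken]
# Placeholder endpoints and credentials.
www_authenticate_uri = http://controller:5000
auth_url = http://controller:5000
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = cinder
password = CINDER_PASS
memcached_servers = controller:11211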
certfile = None string value Required if identity server requires client certificate delay_auth_decision = False boolean value Do not handle authorization requests within the middleware, but delegate the authorization decision to downstream WSGI components. enforce_token_bind = permissive string value Used to control the use and type of token binding. Can be set to: "disabled" to not check token binding. "permissive" (default) to validate binding information if the bind type is of a form known to the server and ignore it if not. "strict" like "permissive" but if the bind type is unknown the token will be rejected. "required" any form of token binding is needed to be allowed. Finally the name of a binding method that must be present in tokens. http_connect_timeout = None integer value Request timeout value for communicating with Identity API server. http_request_max_retries = 3 integer value How many times are we trying to reconnect when communicating with Identity API Server. include_service_catalog = True boolean value (Optional) Indicate whether to set the X-Service-Catalog header. If False, middleware will not ask for service catalog on token validation and will not set the X-Service-Catalog header. insecure = False boolean value Verify HTTPS connections. interface = internal string value Interface to use for the Identity API endpoint. Valid values are "public", "internal" (default) or "admin". keyfile = None string value Required if identity server requires client certificate memcache_pool_conn_get_timeout = 10 integer value (Optional) Number of seconds that an operation will wait to get a memcached client connection from the pool. memcache_pool_dead_retry = 300 integer value (Optional) Number of seconds memcached server is considered dead before it is tried again. memcache_pool_maxsize = 10 integer value (Optional) Maximum total number of open connections to every memcached server. memcache_pool_socket_timeout = 3 integer value (Optional) Socket timeout in seconds for communicating with a memcached server. memcache_pool_unused_timeout = 60 integer value (Optional) Number of seconds a connection to memcached is held unused in the pool before it is closed. memcache_secret_key = None string value (Optional, mandatory if memcache_security_strategy is defined) This string is used for key derivation. memcache_security_strategy = None string value (Optional) If defined, indicate whether token data should be authenticated or authenticated and encrypted. If MAC, token data is authenticated (with HMAC) in the cache. If ENCRYPT, token data is encrypted and authenticated in the cache. If the value is not one of these options or empty, auth_token will raise an exception on initialization. memcache_use_advanced_pool = False boolean value (Optional) Use the advanced (eventlet safe) memcached client pool. The advanced pool will only work under python 2.x. memcached_servers = None list value Optionally specify a list of memcached server(s) to use for caching. If left undefined, tokens will instead be cached in-process. region_name = None string value The region in which the identity server can be found. service_token_roles = ['service'] list value A choice of roles that must be present in a service token. Service tokens are allowed to request that an expired token can be used and so this check should tightly control that only actual services should be sending this token. Roles here are applied as an ANY check so any role in this list must be present. 
For backwards compatibility reasons this currently only affects the allow_expired check. service_token_roles_required = False boolean value For backwards compatibility reasons we must let valid service tokens pass that don't pass the service_token_roles check as valid. Setting this true will become the default in a future release and should be enabled if possible. service_type = None string value The name or type of the service as it appears in the service catalog. This is used to validate tokens that have restricted access rules. token_cache_time = 300 integer value In order to prevent excessive effort spent validating tokens, the middleware caches previously-seen tokens for a configurable duration (in seconds). Set to -1 to disable caching completely. www_authenticate_uri = None string value Complete "public" Identity API endpoint. This endpoint should not be an "admin" endpoint, as it should be accessible by all end users. Unauthenticated clients are redirected to this endpoint to authenticate. Although this endpoint should ideally be unversioned, client support in the wild varies. If you're using a versioned v2 endpoint here, then this should not be the same endpoint the service user utilizes for validating tokens, because normal end users may not be able to reach that endpoint. 2.1.14. nova The following table outlines the options available under the [nova] group in the /etc/cinder/cinder.conf file. Table 2.13. nova Configuration option = Default value Type Description auth_section = None string value Config Section from which to load plugin specific options auth_type = None string value Authentication type to load cafile = None string value PEM encoded Certificate Authority to use when verifying HTTPs connections. certfile = None string value PEM encoded client certificate cert file collect-timing = False boolean value Collect per-API call timing information. insecure = False boolean value Verify HTTPS connections. interface = public string value Type of the nova endpoint to use. This endpoint will be looked up in the keystone catalog and should be one of public, internal or admin. keyfile = None string value PEM encoded client certificate key file region_name = None string value Name of nova region to use. Useful if keystone manages more than one region. split-loggers = False boolean value Log requests to multiple loggers. timeout = None integer value Timeout value for http requests token_auth_url = None string value The authentication URL for the nova connection when using the current users token 2.1.15. oslo_concurrency The following table outlines the options available under the [oslo_concurrency] group in the /etc/cinder/cinder.conf file. Table 2.14. oslo_concurrency Configuration option = Default value Type Description disable_process_locking = False boolean value Enables or disables inter-process locks. lock_path = None string value Directory to use for lock files. For security, the specified directory should only be writable by the user running the processes that need locking. Defaults to environment variable OSLO_LOCK_PATH. If external locks are used, a lock path must be set. 2.1.16. oslo_messaging_amqp The following table outlines the options available under the [oslo_messaging_amqp] group in the /etc/cinder/cinder.conf file. Table 2.15. oslo_messaging_amqp Configuration option = Default value Type Description addressing_mode = dynamic string value Indicates the addressing mode used by the driver. 
Permitted values: legacy - use legacy non-routable addressing routable - use routable addresses dynamic - use legacy addresses if the message bus does not support routing otherwise use routable addressing anycast_address = anycast string value Appended to the address prefix when sending to a group of consumers. Used by the message bus to identify messages that should be delivered in a round-robin fashion across consumers. broadcast_prefix = broadcast string value address prefix used when broadcasting to all servers connection_retry_backoff = 2 integer value Increase the connection_retry_interval by this many seconds after each unsuccessful failover attempt. connection_retry_interval = 1 integer value Seconds to pause before attempting to re-connect. connection_retry_interval_max = 30 integer value Maximum limit for connection_retry_interval + connection_retry_backoff container_name = None string value Name for the AMQP container. must be globally unique. Defaults to a generated UUID default_notification_exchange = None string value Exchange name used in notification addresses. Exchange name resolution precedence: Target.exchange if set else default_notification_exchange if set else control_exchange if set else notify default_notify_timeout = 30 integer value The deadline for a sent notification message delivery. Only used when caller does not provide a timeout expiry. default_reply_retry = 0 integer value The maximum number of attempts to re-send a reply message which failed due to a recoverable error. default_reply_timeout = 30 integer value The deadline for an rpc reply message delivery. default_rpc_exchange = None string value Exchange name used in RPC addresses. Exchange name resolution precedence: Target.exchange if set else default_rpc_exchange if set else control_exchange if set else rpc default_send_timeout = 30 integer value The deadline for an rpc cast or call message delivery. Only used when caller does not provide a timeout expiry. default_sender_link_timeout = 600 integer value The duration to schedule a purge of idle sender links. Detach link after expiry. group_request_prefix = unicast string value address prefix when sending to any server in group idle_timeout = 0 integer value Timeout for inactive connections (in seconds) link_retry_delay = 10 integer value Time to pause between re-connecting an AMQP 1.0 link that failed due to a recoverable error. multicast_address = multicast string value Appended to the address prefix when sending a fanout message. Used by the message bus to identify fanout messages. notify_address_prefix = openstack.org/om/notify string value Address prefix for all generated Notification addresses notify_server_credit = 100 integer value Window size for incoming Notification messages pre_settled = ['rpc-cast', 'rpc-reply'] multi valued Send messages of this type pre-settled. Pre-settled messages will not receive acknowledgement from the peer. Note well: pre-settled messages may be silently discarded if the delivery fails. Permitted values: rpc-call - send RPC Calls pre-settled rpc-reply - send RPC Replies pre-settled rpc-cast - Send RPC Casts pre-settled notify - Send Notifications pre-settled pseudo_vhost = True boolean value Enable virtual host support for those message buses that do not natively support virtual hosting (such as qpidd). When set to true the virtual host name will be added to all message bus addresses, effectively creating a private subnet per virtual host. 
Set to False if the message bus supports virtual hosting using the hostname field in the AMQP 1.0 Open performative as the name of the virtual host. reply_link_credit = 200 integer value Window size for incoming RPC Reply messages. rpc_address_prefix = openstack.org/om/rpc string value Address prefix for all generated RPC addresses rpc_server_credit = 100 integer value Window size for incoming RPC Request messages `sasl_config_dir = ` string value Path to directory that contains the SASL configuration `sasl_config_name = ` string value Name of configuration file (without .conf suffix) `sasl_default_realm = ` string value SASL realm to use if no realm present in username `sasl_mechanisms = ` string value Space separated list of acceptable SASL mechanisms server_request_prefix = exclusive string value address prefix used when sending to a specific server ssl = False boolean value Attempt to connect via SSL. If no other ssl-related parameters are given, it will use the system's CA-bundle to verify the server's certificate. `ssl_ca_file = ` string value CA certificate PEM file used to verify the server's certificate `ssl_cert_file = ` string value Self-identifying certificate PEM file for client authentication `ssl_key_file = ` string value Private key PEM file used to sign ssl_cert_file certificate (optional) ssl_key_password = None string value Password for decrypting ssl_key_file (if encrypted) ssl_verify_vhost = False boolean value By default SSL checks that the name in the server's certificate matches the hostname in the transport_url. In some configurations it may be preferable to use the virtual hostname instead, for example if the server uses the Server Name Indication TLS extension (rfc6066) to provide a certificate per virtual host. Set ssl_verify_vhost to True if the server's SSL certificate uses the virtual host name instead of the DNS name. trace = False boolean value Debug: dump AMQP frames to stdout unicast_address = unicast string value Appended to the address prefix when sending to a particular RPC/Notification server. Used by the message bus to identify messages sent to a single destination. 2.1.17. oslo_messaging_kafka The following table outlines the options available under the [oslo_messaging_kafka] group in the /etc/cinder/cinder.conf file. Table 2.16. oslo_messaging_kafka Configuration option = Default value Type Description compression_codec = none string value The compression codec for all data generated by the producer. If not set, compression will not be used. Note that the allowed values of this depend on the kafka version conn_pool_min_size = 2 integer value The pool size limit for connections expiration policy conn_pool_ttl = 1200 integer value The time-to-live in sec of idle connections in the pool consumer_group = oslo_messaging_consumer string value Group id for Kafka consumer. 
Consumers in one group will coordinate message consumption enable_auto_commit = False boolean value Enable asynchronous consumer commits kafka_consumer_timeout = 1.0 floating point value Default timeout(s) for Kafka consumers kafka_max_fetch_bytes = 1048576 integer value Max fetch bytes of Kafka consumer max_poll_records = 500 integer value The maximum number of records returned in a poll call pool_size = 10 integer value Pool Size for Kafka Consumers producer_batch_size = 16384 integer value Size of batch for the producer async send producer_batch_timeout = 0.0 floating point value Upper bound on the delay for KafkaProducer batching in seconds sasl_mechanism = PLAIN string value Mechanism when security protocol is SASL security_protocol = PLAINTEXT string value Protocol used to communicate with brokers `ssl_cafile = ` string value CA certificate PEM file used to verify the server certificate `ssl_client_cert_file = ` string value Client certificate PEM file used for authentication. `ssl_client_key_file = ` string value Client key PEM file used for authentication. `ssl_client_key_password = ` string value Client key password file used for authentication. 2.1.18. oslo_messaging_notifications The following table outlines the options available under the [oslo_messaging_notifications] group in the /etc/cinder/cinder.conf file. Table 2.17. oslo_messaging_notifications Configuration option = Default value Type Description driver = [] multi valued The driver(s) to handle sending notifications. Possible values are messaging, messagingv2, routing, log, test, noop retry = -1 integer value The maximum number of attempts to re-send a notification message which failed to be delivered due to a recoverable error. 0 - No retry, -1 - indefinite topics = ['notifications'] list value AMQP topic used for OpenStack notifications. transport_url = None string value A URL representing the messaging driver to use for notifications. If not set, we fall back to the same configuration used for RPC. 2.1.19. oslo_messaging_rabbit The following table outlines the options available under the [oslo_messaging_rabbit] group in the /etc/cinder/cinder.conf file. Table 2.18. oslo_messaging_rabbit Configuration option = Default value Type Description amqp_auto_delete = False boolean value Auto-delete queues in AMQP. amqp_durable_queues = False boolean value Use durable queues in AMQP. direct_mandatory_flag = True boolean value (DEPRECATED) Enable/Disable the RabbitMQ mandatory flag for direct send. The direct send is used as reply, so the MessageUndeliverable exception is raised in case the client queue does not exist. The MessageUndeliverable exception will be used to loop for a timeout to give the sender a chance to recover. This flag is deprecated and it will not be possible to deactivate this functionality anymore enable_cancel_on_failover = False boolean value Enable x-cancel-on-ha-failover flag so that rabbitmq server will cancel and notify consumers when the queue is down heartbeat_in_pthread = False boolean value Run the health check heartbeat thread through a native python thread by default. If this option is equal to False then the health check heartbeat will inherit the execution model from the parent process. For example if the parent process has monkey patched the stdlib by using eventlet/greenlet then the heartbeat will be run through a green thread. This option should be set to True only for the wsgi services. heartbeat_rate = 2 integer value How often during the heartbeat_timeout_threshold we check the heartbeat.
heartbeat_timeout_threshold = 60 integer value Number of seconds after which the Rabbit broker is considered down if heartbeat's keep-alive fails (0 disables heartbeat). kombu_compression = None string value EXPERIMENTAL: Possible values are: gzip, bz2. If not set compression will not be used. This option may not be available in future versions. kombu_failover_strategy = round-robin string value Determines how the RabbitMQ node is chosen in case the one we are currently connected to becomes unavailable. Takes effect only if more than one RabbitMQ node is provided in config. kombu_missing_consumer_retry_timeout = 60 integer value How long to wait a missing client before abandoning to send it its replies. This value should not be longer than rpc_response_timeout. kombu_reconnect_delay = 1.0 floating point value How long to wait before reconnecting in response to an AMQP consumer cancel notification. rabbit_ha_queues = False boolean value Try to use HA queues in RabbitMQ (x-ha-policy: all). If you change this option, you must wipe the RabbitMQ database. In RabbitMQ 3.0, queue mirroring is no longer controlled by the x-ha-policy argument when declaring a queue. If you just want to make sure that all queues (except those with auto-generated names) are mirrored across all nodes, run: "rabbitmqctl set_policy HA ^(?!amq\.).* {"ha-mode": "all"} " rabbit_interval_max = 30 integer value Maximum interval of RabbitMQ connection retries. Default is 30 seconds. rabbit_login_method = AMQPLAIN string value The RabbitMQ login method. rabbit_qos_prefetch_count = 0 integer value Specifies the number of messages to prefetch. Setting to zero allows unlimited messages. rabbit_retry_backoff = 2 integer value How long to backoff for between retries when connecting to RabbitMQ. rabbit_retry_interval = 1 integer value How frequently to retry connecting with RabbitMQ. rabbit_transient_queues_ttl = 1800 integer value Positive integer representing duration in seconds for queue TTL (x-expires). Queues which are unused for the duration of the TTL are automatically deleted. The parameter affects only reply and fanout queues. ssl = False boolean value Connect over SSL. `ssl_ca_file = ` string value SSL certification authority file (valid only if SSL enabled). `ssl_cert_file = ` string value SSL cert file (valid only if SSL enabled). `ssl_key_file = ` string value SSL key file (valid only if SSL enabled). `ssl_version = ` string value SSL version to use (valid only if SSL enabled). Valid values are TLSv1 and SSLv23. SSLv2, SSLv3, TLSv1_1, and TLSv1_2 may be available on some distributions. 2.1.20. oslo_middleware The following table outlines the options available under the [oslo_middleware] group in the /etc/cinder/cinder.conf file. Table 2.19. oslo_middleware Configuration option = Default value Type Description enable_proxy_headers_parsing = False boolean value Whether the application is behind a proxy or not. This determines if the middleware should parse the headers or not. max_request_body_size = 114688 integer value The maximum body size for each request, in bytes. secure_proxy_ssl_header = X-Forwarded-Proto string value The HTTP Header that will be used to determine what the original request protocol scheme was, even if it was hidden by a SSL termination proxy. 2.1.21. oslo_policy The following table outlines the options available under the [oslo_policy] group in the /etc/cinder/cinder.conf file. Table 2.20. 
oslo_policy Configuration option = Default value Type Description enforce_new_defaults = False boolean value This option controls whether or not to use old deprecated defaults when evaluating policies. If True , the old deprecated defaults are not going to be evaluated. This means if any existing token is allowed for old defaults but is disallowed for new defaults, it will be disallowed. It is encouraged to enable this flag along with the enforce_scope flag so that you can get the benefits of new defaults and scope_type together enforce_scope = False boolean value This option controls whether or not to enforce scope when evaluating policies. If True , the scope of the token used in the request is compared to the scope_types of the policy being enforced. If the scopes do not match, an InvalidScope exception will be raised. If False , a message will be logged informing operators that policies are being invoked with mismatching scope. policy_default_rule = default string value Default rule. Enforced when a requested rule is not found. policy_dirs = ['policy.d'] multi valued Directories where policy configuration files are stored. They can be relative to any directory in the search path defined by the config_dir option, or absolute paths. The file defined by policy_file must exist for these directories to be searched. Missing or empty directories are ignored. policy_file = policy.yaml string value The relative or absolute path of a file that maps roles to permissions for a given service. Relative paths must be specified in relation to the configuration file setting this option. remote_content_type = application/x-www-form-urlencoded string value Content Type to send and receive data for REST based policy check remote_ssl_ca_crt_file = None string value Absolute path to ca cert file for REST based policy check remote_ssl_client_crt_file = None string value Absolute path to client cert for REST based policy check remote_ssl_client_key_file = None string value Absolute path client key file REST based policy check remote_ssl_verify_server_crt = False boolean value server identity verification for REST based policy check 2.1.22. oslo_reports The following table outlines the options available under the [oslo_reports] group in the /etc/cinder/cinder.conf file. Table 2.21. oslo_reports Configuration option = Default value Type Description file_event_handler = None string value The path to a file to watch for changes to trigger the reports, instead of signals. Setting this option disables the signal trigger for the reports. If application is running as a WSGI application it is recommended to use this instead of signals. file_event_handler_interval = 1 integer value How many seconds to wait between polls when file_event_handler is set log_dir = None string value Path to a log directory where to create a file 2.1.23. oslo_versionedobjects The following table outlines the options available under the [oslo_versionedobjects] group in the /etc/cinder/cinder.conf file. Table 2.22. oslo_versionedobjects Configuration option = Default value Type Description fatal_exception_format_errors = False boolean value Make exception message format errors fatal 2.1.24. privsep The following table outlines the options available under the [privsep] group in the /etc/cinder/cinder.conf file. Table 2.23. privsep Configuration option = Default value Type Description capabilities = [] list value List of Linux capabilities retained by the privsep daemon. group = None string value Group that the privsep daemon should run as. 
helper_command = None string value Command to invoke to start the privsep daemon if not using the "fork" method. If not specified, a default is generated using "sudo privsep-helper" and arguments designed to recreate the current configuration. This command must accept suitable --privsep_context and --privsep_sock_path arguments. logger_name = oslo_privsep.daemon string value Logger name to use for this privsep context. By default all contexts log with oslo_privsep.daemon. thread_pool_size = <based on operating system> integer value The number of threads available for privsep to concurrently run processes. Defaults to the number of CPU cores in the system. user = None string value User that the privsep daemon should run as. 2.1.25. profiler The following table outlines the options available under the [profiler] group in the /etc/cinder/cinder.conf file. Table 2.24. profiler Configuration option = Default value Type Description connection_string = messaging:// string value Connection string for a notifier backend. Default value is messaging:// which sets the notifier to oslo_messaging. Examples of possible values: messaging:// - use oslo_messaging driver for sending spans. redis://127.0.0.1:6379 - use redis driver for sending spans. mongodb://127.0.0.1:27017 - use mongodb driver for sending spans. elasticsearch://127.0.0.1:9200 - use elasticsearch driver for sending spans. jaeger://127.0.0.1:6831 - use jaeger tracing as driver for sending spans. enabled = False boolean value Enable the profiling for all services on this node. Default value is False (fully disable the profiling feature). Possible values: True: Enables the feature False: Disables the feature. The profiling cannot be started via this project operations. If the profiling is triggered by another project, this project part will be empty. es_doc_type = notification string value Document type for notification indexing in elasticsearch. es_scroll_size = 10000 integer value Elasticsearch splits large requests in batches. This parameter defines maximum size of each batch (for example: es_scroll_size=10000). es_scroll_time = 2m string value This parameter is a time value parameter (for example: es_scroll_time=2m), indicating for how long the nodes that participate in the search will maintain relevant resources in order to continue and support it. filter_error_trace = False boolean value Enable filter traces that contain error/exception to a separated place. Default value is set to False. Possible values: True: Enable filter traces that contain error/exception. False: Disable the filter. hmac_keys = SECRET_KEY string value Secret key(s) to use for encrypting context data for performance profiling. This string value should have the following format: <key1>[,<key2>,... <keyn>], where each key is some random string. A user who triggers the profiling via the REST API has to set one of these keys in the headers of the REST API call to include profiling results of this node for this particular project. Both "enabled" flag and "hmac_keys" config options should be set to enable profiling. Also, to generate correct profiling information across all services at least one key needs to be consistent between OpenStack projects. This ensures it can be used from client side to generate the trace, containing information from all possible resources. sentinel_service_name = mymaster string value Redissentinel uses a service name to identify a master redis service. This parameter defines the name (for example: sentinal_service_name=mymaster ). 
socket_timeout = 0.1 floating point value Redissentinel provides a timeout option on the connections. This parameter defines that timeout (for example: socket_timeout=0.1). trace_sqlalchemy = False boolean value Enable SQL requests profiling in services. Default value is False (SQL requests won't be traced). Possible values: True: Enables SQL requests profiling. Each SQL query will be part of the trace and can the be analyzed by how much time was spent for that. False: Disables SQL requests profiling. The spent time is only shown on a higher level of operations. Single SQL queries cannot be analyzed this way. 2.1.26. sample_castellan_source The following table outlines the options available under the [sample_castellan_source] group in the /etc/cinder/cinder.conf file. Table 2.25. sample_castellan_source Configuration option = Default value Type Description config_file = None string value The path to a castellan configuration file. driver = None string value The name of the driver that can load this configuration source. mapping_file = None string value The path to a configuration/castellan_id mapping file. 2.1.27. sample_remote_file_source The following table outlines the options available under the [sample_remote_file_source] group in the /etc/cinder/cinder.conf file. Table 2.26. sample_remote_file_source Configuration option = Default value Type Description ca_path = None string value The path to a CA_BUNDLE file or directory with certificates of trusted CAs. client_cert = None string value Client side certificate, as a single file path containing either the certificate only or the private key and the certificate. client_key = None string value Client side private key, in case client_cert is specified but does not includes the private key. driver = None string value The name of the driver that can load this configuration source. uri = None uri value Required option with the URI of the extra configuration file's location. 2.1.28. service_user The following table outlines the options available under the [service_user] group in the /etc/cinder/cinder.conf file. Table 2.27. service_user Configuration option = Default value Type Description auth-url = None string value Authentication URL cafile = None string value PEM encoded Certificate Authority to use when verifying HTTPs connections. certfile = None string value PEM encoded client certificate cert file collect-timing = False boolean value Collect per-API call timing information. domain-id = None string value Domain ID to scope to domain-name = None string value Domain name to scope to insecure = False boolean value Verify HTTPS connections. keyfile = None string value PEM encoded client certificate key file password = None string value User's password project-domain-id = None string value Domain ID containing project project-domain-name = None string value Domain name containing project project-id = None string value Project ID to scope to project-name = None string value Project name to scope to send_service_user_token = False boolean value When True, if sending a user token to an REST API, also send a service token. split-loggers = False boolean value Log requests to multiple loggers. system-scope = None string value Scope for system operations timeout = None integer value Timeout value for http requests trust-id = None string value Trust ID user-domain-id = None string value User's domain id user-domain-name = None string value User's domain name user-id = None string value User ID username = None string value Username 2.1.29. 
ssl The following table outlines the options available under the [ssl] group in the /etc/cinder/cinder.conf file. Table 2.28. ssl Configuration option = Default value Type Description ca_file = None string value CA certificate file to use to verify connecting clients. cert_file = None string value Certificate file to use when starting the server securely. ciphers = None string value Sets the list of available ciphers. value should be a string in the OpenSSL cipher list format. key_file = None string value Private key file to use when starting the server securely. version = None string value SSL version to use (valid only if SSL enabled). Valid values are TLSv1 and SSLv23. SSLv2, SSLv3, TLSv1_1, and TLSv1_2 may be available on some distributions. 2.1.30. vault The following table outlines the options available under the [vault] group in the /etc/cinder/cinder.conf file. Table 2.29. vault Configuration option = Default value Type Description approle_role_id = None string value AppRole role_id for authentication with vault approle_secret_id = None string value AppRole secret_id for authentication with vault kv_mountpoint = secret string value Mountpoint of KV store in Vault to use, for example: secret kv_version = 2 integer value Version of KV store in Vault to use, for example: 2 root_token_id = None string value root token for vault ssl_ca_crt_file = None string value Absolute path to ca cert file use_ssl = False boolean value SSL Enabled/Disabled vault_url = http://127.0.0.1:8200 string value Use this endpoint to connect to Vault, for example: "http://127.0.0.1:8200" | null | https://docs.redhat.com/en/documentation/red_hat_openstack_platform/17.1/html/configuration_reference/cinder |
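To make the preceding Cinder option reference easier to apply, the following is a minimal, illustrative /etc/cinder/cinder.conf fragment showing how a few of the groups documented above ([keystone_authtoken], [oslo_messaging_notifications], [oslo_concurrency], and [vault]) might be populated. The option names come from the tables above, but every host name, credential, and path shown is a placeholder assumption rather than a value from the reference; consult the tables for defaults and valid values.

[keystone_authtoken]
# Identity endpoints used to validate tokens (placeholder URLs)
www_authenticate_uri = http://keystone.example.com:5000
auth_type = password
auth_url = http://keystone.example.com:5000
username = cinder
password = CINDER_SERVICE_PASSWORD
project_name = service
user_domain_name = Default
project_domain_name = Default
# Cache validated tokens for 5 minutes (the documented default)
token_cache_time = 300

[oslo_messaging_notifications]
# Send notifications through the messaging driver to the default topic
driver = messagingv2
topics = notifications

[oslo_concurrency]
# Lock files should live in a directory writable only by the service user
lock_path = /var/lib/cinder/tmp

[vault]
# Only relevant when [key_manager]/backend = vault (placeholder token and URL)
vault_url = http://127.0.0.1:8200
root_token_id = VAULT_ROOT_TOKEN
kv_mountpoint = secret
kv_version = 2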
Installing on GCP | Installing on GCP OpenShift Container Platform 4.16 Installing OpenShift Container Platform on Google Cloud Platform Red Hat OpenShift Documentation Team | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.16/html/installing_on_gcp/index |
Scalability and performance | Scalability and performance OpenShift Container Platform 4.7 Scaling your OpenShift Container Platform cluster and tuning performance in production environments Red Hat OpenShift Documentation Team | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.7/html/scalability_and_performance/index |
Chapter 8. JBoss EAP MBean Services | Chapter 8. JBoss EAP MBean Services A managed bean, sometimes simply referred to as an MBean, is a type of JavaBean that is created with dependency injection. MBean services are the core building blocks of the JBoss EAP server. 8.1. Writing JBoss MBean Services Writing a custom MBean service that relies on a JBoss service requires the service interface method pattern. A JBoss MBean service interface method pattern consists of a set of life cycle operations that inform an MBean service when it can create , start , stop , and destroy itself. You can manage the dependency state using any of the following approaches: If you want specific methods to be called on your MBean, declare those methods in your MBean interface. This approach allows your MBean implementation to avoid dependencies on JBoss specific classes. If you are not bothered about dependencies on JBoss specific classes, then you can have your MBean interface extend the ServiceMBean interface and ServiceMBeanSupport class. The ServiceMBeanSupport class provides implementations of the service lifecycle methods like create, start, and stop. To handle a specific event like the start() event, you need to override startService() method provided by the ServiceMBeanSupport class. 8.1.1. A Standard MBean Example This section develops two example MBean services packaged together in a service archive ( .sar ). ConfigServiceMBean interface declares specific methods like the start , getTimeout , and stop methods to start , hold , and stop the MBean correctly without using any JBoss specific classes. ConfigService class implements ConfigServiceMBean interface and consequently implements the methods used within that interface. The PlainThread class extends the ServiceMBeanSupport class and implements the PlainThreadMBean interface. PlainThread starts a thread and uses ConfigServiceMBean.getTimeout() to determine how long the thread should sleep. 
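Before the complete listing, the lifecycle hooks offered by ServiceMBeanSupport can be sketched as a bare skeleton. This sketch is illustrative only: the LifecycleSkeleton and LifecycleSkeletonMBean names are hypothetical, and while startService() and stopService() are used by the example that follows, the createService() and destroyService() hooks are shown here on the assumption that your ServiceMBeanSupport version provides them as well.

package org.jboss.example.mbean.support;

import org.jboss.system.ServiceMBean;
import org.jboss.system.ServiceMBeanSupport;

// Hypothetical names; each type would normally live in its own source file.
interface LifecycleSkeletonMBean extends ServiceMBean {
}

class LifecycleSkeleton extends ServiceMBeanSupport implements LifecycleSkeletonMBean {
    @Override
    protected void createService() throws Exception {
        // Called when the MBean is created; allocate lightweight resources here.
    }

    @Override
    protected void startService() throws Exception {
        // Called once dependencies are satisfied; start threads or open connections here.
    }

    @Override
    protected void stopService() throws Exception {
        // Called on stop; release whatever startService() acquired.
    }

    @Override
    protected void destroyService() throws Exception {
        // Called before the MBean is destroyed; perform final cleanup.
    }
}

The complete example below fills in these hooks for a real pair of services.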
Example: MBean Services Class package org.jboss.example.mbean.support; public interface ConfigServiceMBean { int getTimeout(); void start(); void stop(); } package org.jboss.example.mbean.support; public class ConfigService implements ConfigServiceMBean { int timeout; @Override public int getTimeout() { return timeout; } @Override public void start() { //Create a random number between 3000 and 6000 milliseconds timeout = (int)Math.round(Math.random() * 3000) + 3000; System.out.println("Random timeout set to " + timeout + " seconds"); } @Override public void stop() { timeout = 0; } } package org.jboss.example.mbean.support; import org.jboss.system.ServiceMBean; public interface PlainThreadMBean extends ServiceMBean { void setConfigService(ConfigServiceMBean configServiceMBean); } package org.jboss.example.mbean.support; import org.jboss.system.ServiceMBeanSupport; public class PlainThread extends ServiceMBeanSupport implements PlainThreadMBean { private ConfigServiceMBean configService; private Thread thread; private volatile boolean done; @Override public void setConfigService(ConfigServiceMBean configService) { this.configService = configService; } @Override protected void startService() throws Exception { System.out.println("Starting Plain Thread MBean"); done = false; thread = new Thread(new Runnable() { @Override public void run() { try { while (!done) { System.out.println("Sleeping...."); Thread.sleep(configService.getTimeout()); System.out.println("Slept!"); } } catch (InterruptedException e) { Thread.currentThread().interrupt(); } } }); thread.start(); } @Override protected void stopService() throws Exception { System.out.println("Stopping Plain Thread MBean"); done = true; } } The jboss-service.xml descriptor shows how the ConfigService class is injected into the PlainThread class using the inject tag. The inject tag establishes a dependency between PlainThreadMBean and ConfigServiceMBean , and thus allows PlainThreadMBean to use ConfigServiceMBean easily. Example: jboss-service.xml Service Descriptor <server xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="urn:jboss:service:7.0 jboss-service_7_0.xsd" xmlns="urn:jboss:service:7.0"> <mbean code="org.jboss.example.mbean.support.ConfigService" name="jboss.support:name=ConfigBean"/> <mbean code="org.jboss.example.mbean.support.PlainThread" name="jboss.support:name=ThreadBean"> <attribute name="configService"> <inject bean="jboss.support:name=ConfigBean"/> </attribute> </mbean> </server> After writing the MBeans example, you can package the classes and the jboss-service.xml descriptor in the META-INF/ folder of a service archive ( .sar ). 8.2. Deploying JBoss MBean Services Example: Deploy and Test MBeans in a Managed Domain Use the following command to deploy the example MBeans ( ServiceMBeanTest.sar ) in a managed domain: Example: Deploy and Test MBeans on a Standalone Server Use the following command to build and deploy the example MBeans ( ServiceMBeanTest.sar ) on a standalone server: Example: Undeploy the MBeans Archive Use the following command to undeploy the MBeans example: | [
"package org.jboss.example.mbean.support; public interface ConfigServiceMBean { int getTimeout(); void start(); void stop(); } package org.jboss.example.mbean.support; public class ConfigService implements ConfigServiceMBean { int timeout; @Override public int getTimeout() { return timeout; } @Override public void start() { //Create a random number between 3000 and 6000 milliseconds timeout = (int)Math.round(Math.random() * 3000) + 3000; System.out.println(\"Random timeout set to \" + timeout + \" seconds\"); } @Override public void stop() { timeout = 0; } } package org.jboss.example.mbean.support; import org.jboss.system.ServiceMBean; public interface PlainThreadMBean extends ServiceMBean { void setConfigService(ConfigServiceMBean configServiceMBean); } package org.jboss.example.mbean.support; import org.jboss.system.ServiceMBeanSupport; public class PlainThread extends ServiceMBeanSupport implements PlainThreadMBean { private ConfigServiceMBean configService; private Thread thread; private volatile boolean done; @Override public void setConfigService(ConfigServiceMBean configService) { this.configService = configService; } @Override protected void startService() throws Exception { System.out.println(\"Starting Plain Thread MBean\"); done = false; thread = new Thread(new Runnable() { @Override public void run() { try { while (!done) { System.out.println(\"Sleeping....\"); Thread.sleep(configService.getTimeout()); System.out.println(\"Slept!\"); } } catch (InterruptedException e) { Thread.currentThread().interrupt(); } } }); thread.start(); } @Override protected void stopService() throws Exception { System.out.println(\"Stopping Plain Thread MBean\"); done = true; } }",
"<server xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\" xsi:schemaLocation=\"urn:jboss:service:7.0 jboss-service_7_0.xsd\" xmlns=\"urn:jboss:service:7.0\"> <mbean code=\"org.jboss.example.mbean.support.ConfigService\" name=\"jboss.support:name=ConfigBean\"/> <mbean code=\"org.jboss.example.mbean.support.PlainThread\" name=\"jboss.support:name=ThreadBean\"> <attribute name=\"configService\"> <inject bean=\"jboss.support:name=ConfigBean\"/> </attribute> </mbean> </server>",
"deploy ~/Desktop/ServiceMBeanTest.sar --all-server-groups",
"deploy ~/Desktop/ServiceMBeanTest.sar",
"undeploy ServiceMBeanTest.sar"
]
| https://docs.redhat.com/en/documentation/red_hat_jboss_enterprise_application_platform/7.4/html/development_guide/jboss_eap_mbean_services |
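As a follow-up to the MBean example above, the following shell sketch shows one way to assemble the ServiceMBeanTest.sar archive before running the deploy commands shown earlier. The directory layout and the EAP_SYSTEM_JMX_JAR variable are assumptions for illustration only; point the compile classpath at whichever JAR in your JBoss EAP installation provides org.jboss.system.ServiceMBean and ServiceMBeanSupport.

# Illustrative packaging steps (paths and the classpath variable are placeholders)
mkdir -p build/classes build/sar/META-INF

# Compile the MBean sources against the JBoss system classes
javac -cp "$EAP_SYSTEM_JMX_JAR" -d build/classes org/jboss/example/mbean/support/*.java

# Lay out the service archive: compiled classes plus META-INF/jboss-service.xml
cp -r build/classes/org build/sar/
cp jboss-service.xml build/sar/META-INF/

# Create the .sar (a plain JAR file with a .sar extension)
(cd build/sar && jar cf ../../ServiceMBeanTest.sar .)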
Appendix B. ASN.1 and Distinguished Names | Appendix B. ASN.1 and Distinguished Names Abstract The OSI Abstract Syntax Notation One (ASN.1) and X.500 Distinguished Names play an important role in the security standards that define X.509 certificates and LDAP directories. B.1. ASN.1 Overview The Abstract Syntax Notation One (ASN.1) was defined by the OSI standards body in the early 1980s to provide a way of defining data types and structures that are independent of any particular machine hardware or programming language. In many ways, ASN.1 can be considered a forerunner of modern interface definition languages, such as the OMG's IDL and WSDL, which are concerned with defining platform-independent data types. ASN.1 is important, because it is widely used in the definition of standards (for example, SNMP, X.509, and LDAP). In particular, ASN.1 is ubiquitous in the field of security standards. The formal definitions of X.509 certificates and distinguished names are described using ASN.1 syntax. You do not require detailed knowledge of ASN.1 syntax to use these security standards, but you need to be aware that ASN.1 is used for the basic definitions of most security-related data types. BER The OSI's Basic Encoding Rules (BER) define how to translate an ASN.1 data type into a sequence of octets (binary representation). The role played by BER with respect to ASN.1 is, therefore, similar to the role played by GIOP with respect to the OMG IDL. DER The OSI's Distinguished Encoding Rules (DER) are a specialization of the BER. The DER consists of the BER plus some additional rules to ensure that the encoding is unique (BER encodings are not). References You can read more about ASN.1 in the following standards documents: ASN.1 is defined in X.208. BER is defined in X.209. B.2. Distinguished Names Overview Historically, distinguished names (DN) are defined as the primary keys in an X.500 directory structure. However, DNs have come to be used in many other contexts as general purpose identifiers. In Apache CXF, DNs occur in the following contexts: X.509 certificates-for example, one of the DNs in a certificate identifies the owner of the certificate (the security principal). LDAP-DNs are used to locate objects in an LDAP directory tree. String representation of DN Although a DN is formally defined in ASN.1, there is also an LDAP standard that defines a UTF-8 string representation of a DN (see RFC 2253 ). The string representation provides a convenient basis for describing the structure of a DN. Note The string representation of a DN does not provide a unique representation of DER-encoded DN. Hence, a DN that is converted from string format back to DER format does not always recover the original DER encoding. DN string example The following string is a typical example of a DN: Structure of a DN string A DN string is built up from the following basic elements: OID . Attribute Types . AVA . RDN . OID An OBJECT IDENTIFIER (OID) is a sequence of bytes that uniquely identifies a grammatical construct in ASN.1. Attribute types The variety of attribute types that can appear in a DN is theoretically open-ended, but in practice only a small subset of attribute types are used. Table B.1, "Commonly Used Attribute Types" shows a selection of the attribute types that you are most likely to encounter: Table B.1. Commonly Used Attribute Types String Representation X.500 Attribute Type Size of Data Equivalent OID C countryName 2 2.5.4.6 O organizationName 1... 64 2.5.4.10 OU organizationalUnitName 1... 
64 2.5.4.11 CN commonName 1... 64 2.5.4.3 ST stateOrProvinceName 1... 64 2.5.4.8 L localityName 1... 64 2.5.4.7 STREET streetAddress DC domainComponent UID userid AVA An attribute value assertion (AVA) assigns an attribute value to an attribute type. In the string representation, it has the following syntax: For example: Alternatively, you can use the equivalent OID to identify the attribute type in the string representation (see Table B.1, "Commonly Used Attribute Types" ). For example: RDN A relative distinguished name (RDN) represents a single node of a DN (the bit that appears between the commas in the string representation). Technically, an RDN might contain more than one AVA (it is formally defined as a set of AVAs). However, this almost never occurs in practice. In the string representation, an RDN has the following syntax: Here is an example of a (very unlikely) multiple-value RDN: Here is an example of a single-value RDN: | [
"C=US,O=IONA Technologies,OU=Engineering,CN=A. N. Other",
"<attr-type> = <attr-value>",
"CN=A. N. Other",
"2.5.4.3=A. N. Other",
"<attr-type> = <attr-value>[ + <attr-type> =<attr-value> ...]",
"OU=Eng1+OU=Eng2+OU=Eng3",
"OU=Engineering"
]
| https://docs.redhat.com/en/documentation/red_hat_fuse/7.13/html/apache_karaf_security_guide/DN |
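To complement the DN string syntax described above, the following small Java program is a sketch using the standard javax.naming.ldap API (it is not part of the Fuse documentation) that parses the example DN from the text and prints each RDN's attribute type and value.

import javax.naming.InvalidNameException;
import javax.naming.ldap.LdapName;
import javax.naming.ldap.Rdn;

public class DnParseExample {
    public static void main(String[] args) throws InvalidNameException {
        // The example DN string from the text above
        LdapName dn = new LdapName("C=US,O=IONA Technologies,OU=Engineering,CN=A. N. Other");

        // getRdns() returns the RDNs with the right-most RDN at index 0
        // (here CN=A. N. Other), so iteration starts from the least significant node.
        for (Rdn rdn : dn.getRdns()) {
            System.out.println(rdn.getType() + " = " + rdn.getValue());
        }

        // Rdn.escapeValue() escapes characters that are special in the string
        // representation, such as commas and plus signs.
        System.out.println("Escaped: " + Rdn.escapeValue("Smith, John + Co"));
    }
}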
Chapter 1. Introduction to the Load-balancing service | Chapter 1. Introduction to the Load-balancing service The Load-balancing service (octavia) provides a Load Balancing-as-a-Service (LBaaS) API version 2 implementation for Red Hat OpenStack Services on OpenShift (RHOSO) environments. The Load-balancing service manages multiple virtual machines, containers, or bare metal servers- collectively known as amphorae- which it launches on demand. The ability to provide on-demand, horizontal scaling makes the Load-balancing service a fully-featured load balancer that is appropriate for large RHOSO enterprise deployments. Section 1.1, "Load-balancing service components" Section 1.2, "Load-balancing service object model" Section 1.3, "Uses of load balancing in RHOSO" 1.1. Load-balancing service components The Red Hat OpenStack Services on OpenShift (RHOSO) Load-balancing service (octavia) uses a set of VM instances referred to as amphorae that reside on the Compute nodes. The Load-balancing service controllers communicate with the amphorae over a load-balancing management network ( lb-mgmt-net ). When using octavia, you can create load-balancer virtual IPs (VIPs) that do not require floating IPs (FIPs). Not using FIPs has the advantage of improving performance through the load balancer. Figure 1.1. Load-balancing service components Figure 1.1 shows the components of the Load-balancing service are hosted on the same nodes as the Networking API server, which by default, is on the Red Hat OpenShift worker nodes that host the RHOSO control plane. The Load-balancing service consists of the following components: Octavia API ( octavia-api pods) Provides the REST API for users to interact with octavia. Controller Worker ( octavia-worker pods) Sends configuration and configuration updates to amphorae over the load-balancing management network. Health Manager ( octavia-healthmanager pods) Monitors the health of individual amphorae and handles failover events if an amphora encounters a failure. Housekeeping Manager ( octavia-housekeeping pods) Cleans up deleted database records, and manages amphora certificate rotation. Driver agent (included within the octavia-api pods) Supports other provider drivers, such as OVN. Amphora Performs the load balancing. Amphorae are typically instances that run on Compute nodes that you configure with load balancing parameters according to the listener, pool, health monitor, L7 policies, and members' configuration. Amphorae send a periodic heartbeat to the Health Manager. 1.2. Load-balancing service object model The Red Hat OpenStack Services on OpenShift (RHOSO) Load-balancing service (octavia) uses a typical load-balancing object model. Figure 1.2. Load-balancing service object model diagram Load balancer The top API object that represents the load-balancing entity. The VIP address is allocated when you create the load balancer. When you use the amphora provider to create the load balancer one or more amphora instances launch on one or more Compute nodes. Listener The port on which the load balancer listens, for example, TCP port 80 for HTTP. Listeners also support TLS-terminated HTTPS load balancers. Health Monitor A process that performs periodic health checks on each back-end member server to pre-emptively detect failed servers and temporarily remove them from the pool. Pool A group of members that handle client requests from the load balancer. You can associate pools with more than one listener by using the API. You can share pools with L7 policies. 
Member Describes how to connect to the back-end instances or services. This description consists of the IP address and network port on which the back end member is available. L7 Rule Defines the layer 7 (L7) conditions that determine whether an L7 policy applies to the connection. L7 Policy A collection of L7 rules associated with a listener, and which might also have an association to a back-end pool. Policies describe actions that the load balancer takes if all of the rules in the policy are true. Additional resources Section 1.1, "Load-balancing service components" 1.3. Uses of load balancing in RHOSO Load balancing is essential for enabling simple or automatic delivery scaling and availability for cloud deployments. The Red Hat OpenStack Services on OpenShift (RHOSO) Load-balancing service (octavia) depends on other RHOSO services: Compute service (nova) - For managing the Load-balancing service VM instance (amphora) lifecycle, and creating compute resources on demand. Networking service (neutron) - For network connectivity between amphorae, tenant environments, and external networks. Key Manager service (barbican) - For managing TLS certificates and credentials, when TLS session termination is configured on a listener. Identity service (keystone) - For authentication requests to the octavia API, and for the Load-balancing service to authenticate with other RHOSO services. Image service (glance) - For storing the amphora virtual machine image. The Load-balancing service interacts with the other RHOSO services through a driver interface. The driver interface avoids major restructuring of the Load-balancing service if an external component requires replacement with a functionally-equivalent service. | null | https://docs.redhat.com/en/documentation/red_hat_openstack_services_on_openshift/18.0/html/configuring_load_balancing_as_a_service/understand-lb-service_rhoso-lbaas |
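The object model described above maps directly onto the Load-balancing service CLI. The following sequence is an illustrative sketch of creating a basic HTTP load balancer with a listener, pool, health monitor, and two members; the subnet name, object names, and member addresses are placeholders and are not taken from this guide.

# Create the load balancer; the VIP is allocated from the given subnet (placeholder name)
openstack loadbalancer create --name lb1 --vip-subnet-id private-subnet

# Listener: accept HTTP traffic on port 80
openstack loadbalancer listener create --name listener1 \
    --protocol HTTP --protocol-port 80 lb1

# Pool: the group of members that serves requests from the listener
openstack loadbalancer pool create --name pool1 \
    --lb-algorithm ROUND_ROBIN --listener listener1 --protocol HTTP

# Health monitor: periodically probe each member over HTTP
openstack loadbalancer healthmonitor create --name hm1 \
    --delay 5 --max-retries 3 --timeout 4 --type HTTP --url-path / pool1

# Members: the back-end servers (placeholder addresses)
openstack loadbalancer member create --subnet-id private-subnet \
    --address 192.0.2.10 --protocol-port 80 pool1
openstack loadbalancer member create --subnet-id private-subnet \
    --address 192.0.2.11 --protocol-port 80 pool1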
Chapter 8. Known Issues | This chapter documents known problems in Red Hat Enterprise Linux 7.7. 8.1. Authentication and Interoperability Inconsistent warning message when applying an ID range change In RHEL Identity Management (IdM), you can define multiple identity ranges (ID ranges) associated with a local IdM domain or a trusted Active Directory domain. The information about ID ranges is retrieved by the SSSD daemon on all enrolled systems. A change to ID range properties requires a restart of SSSD. Previously, there was no warning about the need to restart SSSD. RHEL 7.7 adds a warning that is displayed when ID range properties are modified in a way that requires a restart of SSSD. The warning message currently uses inconsistent wording. The purpose of the warning message is to ask for a restart of SSSD on any IdM system that consumes the ID range. To learn more about ID ranges, see https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/7/html/linux_domain_identity_authentication_and_policy_guide/managing-unique_uid_and_gid_attributes ( BZ#1631826 ) Potential risk when using the default value for ldap_id_use_start_tls option When using ldap:// without TLS for identity lookups, it can pose a security risk, particularly a man-in-the-middle (MITM) attack, which could allow an attacker to impersonate a user by altering, for example, the UID or GID of an object returned in an LDAP search. Currently, the SSSD configuration option to enforce TLS, ldap_id_use_start_tls , defaults to false . Ensure that your setup operates in a trusted environment and decide if it is safe to use unencrypted communication for id_provider = ldap . Note id_provider = ad and id_provider = ipa are not affected as they use encrypted connections protected by SASL and GSSAPI. If it is not safe to use unencrypted communication, enforce TLS by setting the ldap_id_use_start_tls option to true in the /etc/sssd/sssd.conf file. The default behavior is planned to be changed in a future release of RHEL. (JIRA:RHELPLAN-155168) 8.2. Compiler and Tools GCC thread sanitizer included in RHEL no longer works Due to incompatible changes in kernel memory mapping, the thread sanitizer included with the GNU C Compiler (GCC) version in RHEL no longer works. Additionally, the thread sanitizer cannot be adapted to the incompatible memory layout. As a result, it is no longer possible to use the GCC thread sanitizer included with RHEL. As a workaround, use the version of GCC included in Red Hat Developer Toolset to build code which uses the thread sanitizer. (BZ#1569484) Context variables in SystemTap not always accessible The generation of debug information in the GCC compiler has some limitations. As a consequence, when analyzing the resulting executable files with the SystemTap tool, context variables listed in the form $foo are often inaccessible. To work around this limitation, add the -P option to the $HOME/.systemtap/rc file. This causes SystemTap to always select prologue-searching heuristics. As a result, some of the context variables can become accessible. (BZ#1714480) ksh with the KEYBD trap mishandles multibyte characters The Korn Shell (KSH) is unable to correctly handle multibyte characters when the KEYBD trap is enabled. Consequently, when the user enters, for example, Japanese characters, ksh displays an incorrect string.
To work around this problem, disable the KEYBD trap in the /etc/kshrc file by commenting out the following line: For more details, see a related Knowledgebase solution . ( BZ#1503922 ) Error while upgrading PCP from the RHEL 7.6 version When you upgrade the pcp packages from the RHEL 7.6 to the RHEL 7.7 version, yum returns the following error message: It is safe to ignore this harmless message, which is caused by a bug in the RHEL 7.6 build of pcp and not by the updated package. The PCP functionality in RHEL 7.7 is not affected. ( BZ#1781692 ) 8.3. Desktop Gnome Documents cannot display some documents when installed without LibreOffice Gnome Documents uses libraries provided by the LibreOffice suite for rendering certain types of documents, such as OpenDocument Text or Open Office XML formats. However, the required libreoffice-filters libraries are missing from the dependency list of the gnome-documents package. Therefore, if you install Gnome Documents on a system that does not have LibreOffice , these document types cannot be rendered. To work around this problem, install the libreoffice-filters package manually, even if you do not plan to use LibreOffice itself. ( BZ#1695699 ) GNOME Software cannot install packages from unsigned repositories GNOME Software cannot install packages from repositories that have the following setting in the *.repo file: If you attempt to install a package from such repository, GNOME software fails with a generic error. Currently, there is no workaround available. ( BZ#1591270 ) Nautilus does not hide icons in the GNOME Classic Session The GNOME Tweak Tool setting to show or hide icons in the GNOME session, where the icons are hidden by default, is ignored in the GNOME Classic Session. As a result, it is not possible to hide icons in the GNOME Classic Session even though the GNOME Tweak Tool displays this option. ( BZ#1474852 ) 8.4. Installation and Booting RHEL 7.7 and later installations add spectre_v2=retpoline to Intel Cascade Lake systems RHEL 7.7 and later installations add the spectre_v2=retpoline kernel parameter to Intel Cascade Lake systems, and as a consequence, system performance is affected. To work around this problem and ensure the best performance, complete the following steps. Remove the kernel boot parameter on Intel Cascade Lake systems: Reboot the system: (BZ#1767612) 8.5. Kernel RHEL 7 virtual machines sometimes fail to boot on ESXi 5.5 When running Red Hat Enterprise Linux 7 guests with 12 GB RAM or above on a VMware ESXi 5.5 hypervisor, certain components currently initialize with incorrect memory type range register (MTRR) values or incorrectly reconfigure MTRR values across boots. This sometimes causes the guest kernel to panic or the guest to become unresponsive during boot. To work around this problem, add the disable_mtrr_trim option to the guest's kernel command line, which enables the guest to continue booting when MTRRs are configured incorrectly. Note that with this option, the guest prints WARNING: BIOS bug messages during boot, which you can safely ignore. (BZ#1429792) Certain NIC firmware can become unresponsive with bnx2x Due to a bug in the unload sequence of the pre-boot drivers, the firmware of some internet adapters can become unresponsive after the bnx2x driver takes over the device. The bnx2x driver detects the problem and returns the message "storm stats were not updated for 3 times" in the kernel log. To work around this problem, apply the latest NIC firmware updates provided by your hardware vendor. 
As a result, unloading of the pre-boot firmware now works as expected and the firmware no longer hangs after bnx2x takes over the device. (BZ#1315400) The i40iw module does not load automatically on boot Some i40e NICs do not support iWarp and the i40iw module does not fully support suspend and resume operations. Consequently, the i40iw module is not automatically loaded by default to ensure suspend and resume operations work properly. To work around this problem, edit the /lib/udev/rules.d/90-rdma-hw-modules.rules file to enable automated load of i40iw . Also note that if there is another RDMA device installed with an i40e device on the same machine, the non-i40e RDMA device triggers the rdma service, which loads all enabled RDMA stack modules, including the i40iw module. (BZ#1622413) The non-interleaved persistent memory configurations cannot use storage Previously, systems with persistent memory aligned to 64 MB boundaries, prevented creating of namespaces. As a consequence, the non-interleaved persistent memory configurations in some cases were not able to use storage. To work around this problem, use the interleaved mode for the persistent memory. As a result, most of the storage is available for use, however, with limited fault isolation. (BZ#1691868) System boot might fail due to persistent memory file systems Systems with a large amount of persistent memory take a long time to boot. If the /etc/fstab file configures persistent memory file systems, the system might time out waiting for the devices to become available. The boot process then fails and presents the user with an emergency prompt. To work around the problem, increase the DefaultTimeoutStartSec value in the /etc/systemd/system.conf file. Use a sufficiently large value, such as 1200s . As a result, the system boot no longer times out. (BZ#1666535) radeon fails to reset hardware correctly The radeon kernel driver currently does not reset hardware in the kexec context correctly. Instead, radeon terminates unexpectedly, which causes the rest of the kdump service to fail. To work around this bug, blacklist radeon in kdump by adding the following line to the /etc/kdump.conf file: Afterwards, restart the machine and kdump . Note that in this scenario, no graphics will be available during kdump , but kdump will complete successfully. (BZ#1509444) Certain eBPF tools can cause the system to become unresponsive on IBM Z Due to a bug in the JIT compiler, running certain eBPF tools contained in the bcc-tools package on IBM Z might cause the system to become unresponsive. To work around this problem, avoid using the dcsnoop , runqlen , and slabratetop tools from bcc-tools on IBM Z until a fix is released. (BZ#1724027) Concurrent SG_IO requests in /dev/sg might cause data corruption The /dev/sg device driver is missing synchronization of kernel data. Concurrent requests in the driver access the same data at the same time. As a consequence, the ioctl system call might sometimes erroneously use the payload of an SG_IO request for a different command that was sent at the same time as the correct one. This might lead to disk corruption in certain cases. Red Hat has observed this bug in Red Hat Virtualization (RHV). To work around the problem, use either of the following solutions: Do not send concurrent requests to the /dev/sg driver. As a result, each SG_IO request sent to /dev/sg is guaranteed to use the correct data. Alternatively, use the /dev/sd or the /dev/bsg drivers instead of /dev/sg . The bug is not present in these drivers. 
(BZ#1710533) Incorrect order for inner and outer VLAN tags The system receives the inner and outer VLAN tags in a swapped order when using QinQ (IEEE802.1Q in IEEE802.1Q standard) over representor devices when using the mlx5 driver. That happens because the rxvlan offloading switch is not effective on this path and it causes Open vSwitch (OVS) to push this error forward. There is no known workaround. (BZ#1701502) kdump fails to generate vmcore on Azure instances in RHEL 7 An underlying problem with the serial console implementation on Azure instances booted through the UEFI bootloader causes that the kdump kernel is unable to boot. Consequently, the vmcore of the crashed kernel cannot be captured in the /var/crash/ directory. To work around this problem: Add the console=ttyS0 and earlyprintk=ttyS0 parameters to the KDUMP_COMMANDLINE_REMOVE command line in the /etc/sysconfig/kdump directory. Restart the kdump service. As a result, the kdump kernel should correctly boot and vmcore is expected to be captured upon crash. Make sure there is enough space in /var/crash/ to save the vmcore, which can be up to the size of system memory. (BZ#1724993) The kdumpctl service fails to load crash kernel if KASLR is enabled An inappropriate setting of the kptr_restrict kernel tunable causes that contents of the /proc/kcore file are generated as all zeros. As a consequence, the kdumpctl service is not able to access /proc/kcore and to load the crash kernel if Kernel Address Space Layout Randomization (KASLR) is enabled. To work around this problem, keep kptr_restrict set to 1 . As a result, kdumpctl is able to load the crash kernel in the described scenario. For details, refer to the /usr/share/doc/kexec-tools/kexec-kdump-howto.txt file. (BZ#1600148) Kdump fails in the second kernel The kdump initramfs archive is a critical component for capturing the crash dump. However, it is strictly generated for the machine it runs on and has no generality. If you did a disk migration or installed a new machine with a disk image, kdump might fail in the second kernel. To work around this problem, if you did a disk migration, rebuild initramfs manually by running the following commands: # touch /etc/kdump.conf # kdumpctl restart If you are creating a disk image for installing new machines, it is strongly recommended not to include the kdump initramfs in the disk image. It helps to save space and kdump will build the initramfs automatically if it is missing. (BZ#1723492) 8.6. Networking Verification of signatures using the MD5 hash algorithm is disabled in Red Hat Enterprise Linux 7 It is impossible to connect to any Wi-Fi Protected Access (WPA) Enterprise Access Point (AP) that requires MD5 signed certificates. To work around this problem, copy the wpa_supplicant.service file from the /usr/lib/systemd/system/ directory to the /etc/systemd/system/ directory and add the following line to the Service section of the file: Then run the systemctl daemon-reload command as root to reload the service file. Important Note that MD5 certificates are highly insecure and Red Hat does not recommend using them. (BZ#1062656) Booting from a network device fails when the network driver is restarted Currently, if the boot device is mounted over the network when using iSCSI or Fibre Channel over Ethernet (FCoE), Red Hat Enterprise Linux (RHEL) fails to boot when the underlying network interface driver is restarted. 
For example, RHEL restarts the bnx2x network driver when the libvirt service starts its first virtual network and enables IP forwarding. To work around the problem in this specific example, enable IPv4 forwarding earlier in the boot sequence: Note that this workaround works only in the mentioned scenario. (BZ#1574536) freeradius might fail when upgrading from RHEL 7.3 A new configuration property, correct_escapes , in the /etc/raddb/radiusd.conf file was introduced in the freeradius version distributed since RHEL 7.4. When an administrator sets correct_escapes to true , the new regular expression syntax for backslash escaping is expected. If correct_escapes is set to false , the old syntax is expected where backslashes are also escaped. For backward compatibility reasons, false is the default value. When upgrading, configuration files in the /etc/raddb/ directory are overwritten unless modified by the administrator, so the value of correct_escapes might not always correspond to which type of syntax is used in all the configuration files. As a consequence, authentication with freeradius might fail. To prevent the problem from occurring, after upgrading from freeradius version 3.0.4 (distributed with RHEL 7.3) and earlier, make sure all configuration files in the /etc/raddb/ directory use the new escaping syntax (no double backslash characters can be found) and that the value of correct_escapes in /etc/raddb/radiusd.conf is set to true . For more information and examples, see the solution Authentication with Freeradius fails since upgrade to version >= 3.0.5 . (BZ#1489758) RHEL 7 shows the status of an 802.3ad bond as "Churned" after a switch was unavailable for an extended period of time Currently, when you configure an 802.3ad network bond and the switch is down for an extended period of time, Red Hat Enterprise Linux properly shows the status of the bond as "Churned", even after the connection returns to a working state. However, this is the intended behavior, as the "Churned" status aims to tell the administrator that a significant link outage occurred. To clear this status, restart the network bond or reboot the host. (BZ#1708807) Using client-identifier leads to IP address conflict If the client-identifier option is used, certain network switches ignore the ciaddr field of a dynamic host configuration protocol (DHCP) request. Consequently, the same IP address is assigned to multiple clients, which leads to an IP address conflict. To work around the problem, include the following line in the dhclient.conf file: As a result, the IP address conflict does not occur under the described circumstances. ( BZ#1193799 ) 8.7. Security Libreswan does not work properly with seccomp=enabled on all configurations The set of allowed syscalls in the Libreswan SECCOMP support implementation is currently not complete. Consequently, when SECCOMP is enabled in the ipsec.conf file, the syscall filtering rejects even syscalls needed for the proper functioning of the pluto daemon; the daemon is killed, and the ipsec service is restarted. To work around this problem, set the seccomp= option back to the disabled state. SECCOMP support must remain disabled to run ipsec properly. ( BZ#1544463 ) PKCS#11 devices not supporting RSA-PSS cannot be used with TLS 1.3 The TLS protocol version 1.3 requires RSA-PSS signatures, which are not supported by all PKCS#11 devices, such as hardware security modules (HSM) or smart cards. 
Currently, server applications using NSS do not check the PKCS#11 module capabilities before negotiating TLS 1.3. As a consequence, attempts to authenticate using PKCS#11 devices that do not support RSA-PSS fail. To work around this problem, use TLS 1.2 instead. ( BZ#1711438 ) TLS 1.3 does not work in NSS in FIPS mode TLS 1.3 is not supported on systems working in FIPS mode. As a consequence, connections that require TLS 1.3 for interoperability do not function on a system working in FIPS mode. To enable the connections, disable the system's FIPS mode or enable support for TLS 1.2 in the peer. (BZ#1710372) OpenSCAP inadvertently accesses remote file systems The OpenSCAP scanner cannot correctly detect whether the scanned file system is a mounted remote file system or a local file system, and the detection part contains also other bugs. Consequently, the scanner reads mounted remote file systems even if an evaluated rule applies to a local file-system only, and it might generate unwanted traffic on remote file systems. To work around this problem, unmount remote file systems before scanning. Another option is to exclude affected rules from the evaluated profile by providing a tailoring file. ( BZ#1694962 ) 8.8. Servers and Services Manual initialization of MariaDB using mysql_install_db fails The mysql_install_db script for initializing the MariaDB database calls the resolveip binary from the /usr/libexec/ directory, while the binary is located in /usr/bin/ . Consequently, manual initialization of the database using mysql_install_db fails. To work around this problem, create a symbolic link to the actual location of the resolveip binary: When the symlink is created, mysql_install_db successfully locates resolveip , and the manual database initialization is successful. Alternatively, use mysql_install_db with the --rpm option. In this case, mysql_install_db does not call the resolveip binary, and therefore does not fail. (BZ#1731062) mysql-connector-java does not work with MySQL 8.0 The mysql-connector-java database connector provided in RHEL 7 does not work with the MySQL 8.0 database server. To work around this problem, use the rh-mariadb103-mariadb-java-client database connector from Red Hat Software Collections. ( BZ#1646363 ) Harmless error messages occur when the balanced Tuned profile is used The balanced Tuned profile has been changed in the way that the cpufreq_conservative kernel module loads when this profile is applied. However, cpufreq_conservative is built-in in the kernel, and it is not available as a module. Consequently, when the balanced profile is used, the following error messages occasionally appear in /var/log/tuned/tuned.log file: Such error messages are harmless, so you can safely ignore them. However, to eliminate the errors, you can override the balanced profile, so that Tuned does not attempt to load the kernel module. For example, create the /etc/tuned/balanced/tuned.conf file with the following contents: ( BZ#1719160 ) The php-mysqlnd database connector does not work with MySQL 8.0 The default character set has been changed to utf8mb4 in MySQL 8.0 but this character set is unsupported by the php-mysqlnd database connector. Consequently, php-mysqlnd fails to connect in the default configuration. To work around this problem, specify a known character set as a parameter of the MySQL server configuration. For example, modify the /etc/opt/rh/rh-mysql80/my.cnf.d/mysql-server.cnf file to read: ( BZ#1646158 ) 8.9. 
Storage The system halts unexpectedly when using scsi-mq with software FCoE The host system halts unexpectedly when it is configured to use both multiqueue scheduling ( scsi-mq ) and software Fibre Channel over Ethernet (FCoE) at the same time. To work around the problem, disable scsi-mq when using software FCoE. As a result, the system no longer crashes. (BZ#1712664) The system boot sometimes fails on large systems During the boot process, the udev device manager sometimes generates too many rules on large systems. For example, the problem has manifested on a system with 32 TB of memory and 192 CPUs. As a consequence, the boot process becomes unresponsive or times out and switches to the emergency shell. To work around the problem, add the udev.children-max=1000 option to the kernel command line. You can experiment with different values of udev.children-max to see which value results in the fastest boot on your system. As a result, the system boots successfully. (BZ#1722855) When an image is split off from an active/active cluster mirror, the resulting new logical volume has no active component When you split off an image from an active/active cluster mirror, the resulting new logical appears active but it has no active component. To activate the newly split-off logical volume, deactivate the volume and then activate it with the following commands: ( BZ#1642162 ) 8.10. Virtualization Virtual machines sometimes enable unnecessary CPU vulnerability mitigation Currently, the MDS_NO CPU flags, which indicate that the CPU is not vulnerable to the Microarchitectural Data Sampling (MDS) vulnerability, are not exposed to guest operating systems. As a consequence, the guest operating system in some cases automatically enables CPU vulnerability mitigation features that are not necessary for the current host. If the host CPU is known not to be vulnerable to MDS and the virtual machine is not going to be migrated to hosts vulnerable to MDS, MDS vulnerability mitigation can be disabled in Linux guests by using the "mds=off" kernel command-line option. Note, however, that this option disables all MDS mitigations on the guest. Therefore, it should be used with care and should never be used if the host CPU is vulnerable to MDS. (BZ#1708465) Modifying a RHEL 8 virtual image on a RHEL 7 host sometimes fails On RHEL 7 hosts, using virtual image manipulation utilities such as guestfish , virt-sysprep , or virt-customize in some cases fails if the utility targets a virtual image that is using a RHEL 8 file system. This is because RHEL 7 is not fully compatible with certain file-system features in RHEL 8. To work around the problem, you can disable the problematic features when creating the guest file systems using the mkfs utility: For XFS file systems, use the "-m reflink=0" option. For ext4 file systems, use the "-O ^metadata_csum" option. Alternatively, use a RHEL 8 host instead of a RHEL 7 one, where the affected utilities will work as expected. (BZ#1667478) Slow connection to RHEL 7 guest console on a Windows Server 2019 host When using RHEL 7 as a guest operating system in multi-user mode on a Windows Server 2019 host, connecting to a console output of the guest currently takes significantly longer than expected. To work around this problem, connect to the guest using SSH or use Windows Server 2016 as the host. (BZ#1706522) SMT works only on AMD EPYC CPU models Currently, only the AMD EPYC CPU models support the simultaneous multithreading (SMT) feature. 
As a consequence, manually enabling the topoext feature when configuring a virtual machine (VM) with a different CPU model causes the VM not to detect the vCPU topology correctly, and the vCPU does not perform as configured. To work around this problem, do not enable topoext manually and do not use the threads vCPU option on AMD hosts unless the host is using the AMD EPYC model ( BZ#1615682 ) | [
"trap keybd_trap KEYBD",
"Failed to resolve allow statement at /etc/selinux/targeted/tmp/modules/400/pcpupstream/cil:83 semodule: Failed!",
"gpgcheck=0",
"grubby --remove-args=\"spectre_v2=retpoline\" --update-kernel=DEFAULT",
"reboot",
"dracut_args --omit-drivers \"radeon\"",
"Environment=OPENSSL_ENABLE_MD5_VERIFY=1",
"echo 'net.ipv4.ip_forward = 1' > /etc/sysctl.d/90-forwarding.conf dracut -f",
"send dhcp-client-identifier = \"\";",
"ln -s /usr/bin/resolveip /usr/libexec/resolveip",
"tuned.utils.commands: Executing modinfo error: modinfo: ERROR: Module cpufreq_conservative not found. tuned.plugins.plugin_modules: kernel module 'cpufreq_conservative' not found, skipping it tuned.plugins.plugin_modules: verify: failed: 'module 'cpufreq_conservative' is not loaded'",
"[main] include=balanced [modules] enabled=0",
"[mysqld] character-set-server=utf8",
"lvchange -an _vg_/_newly_split_lv_ lvchange -ay _vg_/_newly_split_lv_"
]
| https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/7.7_release_notes/known_issues |
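The ldap_id_use_start_tls entry in the record above recommends enforcing TLS but leaves the actual configuration change to the reader. The following is a minimal shell sketch of that change, not an official procedure: it assumes a single [domain/example.com] section in /etc/sssd/sssd.conf that uses id_provider = ldap, and the section name must be replaced with the one from your own configuration.

#!/usr/bin/env bash
# Minimal sketch (run as root): enforce STARTTLS for SSSD identity lookups as
# recommended in the ldap_id_use_start_tls known issue above.
# Assumption: the domain section is named [domain/example.com]; adjust it to
# match your own sssd.conf.
set -euo pipefail
CONF=/etc/sssd/sssd.conf

if grep -q '^ldap_id_use_start_tls' "$CONF"; then
    # Option already present -- make sure it is set to true.
    sed -i 's/^ldap_id_use_start_tls.*/ldap_id_use_start_tls = true/' "$CONF"
else
    # Append the option directly below the domain section header.
    sed -i '/^\[domain\/example\.com\]/a ldap_id_use_start_tls = true' "$CONF"
fi

systemctl restart sssd

As noted in the record, this only applies to id_provider = ldap; the ipa and ad providers already use encrypted connections.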
Chapter 21. Desktop and graphics | Chapter 21. Desktop and graphics 21.1. GNOME Shell is the default desktop environment RHEL 8 is distributed with GNOME Shell as the default desktop environment. All packages related to KDE Plasma Workspaces (KDE) have been removed, and it is no longer possible to use KDE as an alternative to the default GNOME desktop environment. Red Hat does not support migration from RHEL 7 with KDE to RHEL 8 GNOME. Users of RHEL 7 with KDE are recommended to back up their data and install RHEL 8 with GNOME Shell. 21.2. Notable changes in GNOME Shell RHEL 8 is distributed with GNOME Shell, version 3.28. This section: Highlights enhancements related to GNOME Shell, version 3.28. Informs about the change in default combination of GNOME Shell environment and display protocol. Explains how to access features that are not available by default. Explains changes in GNOME tools for software management. 21.2.1. GNOME Shell, version 3.28 in RHEL 8 GNOME Shell, version 3.28 is available in RHEL 8. Notable enhancements include: New GNOME Boxes features New on-screen keyboard Extended devices support, most significantly integration for the Thunderbolt 3 interface Improvements for GNOME Software, dconf-editor and GNOME Terminal 21.2.2. GNOME Shell environments GNOME 3 provides two essential environments: GNOME Standard GNOME Classic Both environments can use two different protocols to build a graphical user interface: The X11 protocol, which uses X.Org as the display server. The Wayland protocol, which uses GNOME Shell as the Wayland compositor and display server. This solution of display server is further referred as GNOME Shell on Wayland . The default combination in RHEL 8 is GNOME Standard environment using GNOME Shell on Wayland as the display server. However, you may want to switch to another combination of GNOME Shell environment and graphics protocol stack. For more information, see Section 21.3, "Selecting GNOME environment and display protocol" . Additional resources For more information about basics of using both GNOME Shell environments, see Overview of GNOME environments . 21.2.3. Desktop icons In RHEL 8, the Desktop icons functionality is no longer provided by the Nautilus file manager, but by the desktop icons gnome-shell extension. To be able to use the extension, you must install the gnome-shell-extension-desktop-icons package available in the Appstream repository. Additional resources For more information about Desktop icons in RHEL 8, see Managing desktop icons . 21.2.4. Fractional scaling On a GNOME Shell on Wayland session, the fractional scaling feature is available. The feature makes it possible to scale the GUI by fractions, which improves the appearance of scaled GUI on certain displays. Note that the feature is currently considered experimental and is, therefore, disabled by default. To enable fractional scaling, run the following command: 21.2.5. GNOME Software for package management The gnome-packagekit package that provided a collection of tools for package management in graphical environment on RHEL 7 is no longer available. On RHEL 8, similar functionality is provided by the GNOME Software utility, which enables you to install and update applications and gnome-shell extensions. GNOME Software is distributed in the gnome-software package. Additional resources For more information for installing applications with GNOME software , see Installing applications in GNOME . 21.2.6. 
Opening graphical applications with sudo When attempting to open a graphical application in a terminal using the sudo command, you must do the following: X11 applications If the application uses the X11 display protocol, add the local user root in the X server access control list. As a result, root is allowed to connect to Xwayland , which translates the X11 protocol into the Wayland protocol and reversely. Example 21.1. Adding root to the X server access control list to open xclock with sudo USD xhost +si:localuser:root USD sudo xclock Wayland applications If the application is Wayland native, include the -E option. Example 21.2. Opening GNOME Calculator with sudo USD sudo -E gnome-calculator Otherwise, if you type just sudo and the name of the application, the operation of opening the application fails with the following error message: 21.3. Selecting GNOME environment and display protocol For switching between various combinations of GNOME environment and graphics protocol stacks, use the following procedure. Procedure From the login screen (GDM), click the gear button to the Sign In button. Note You cannot access this option from the lock screen. The login screen appears when you first start RHEL 8 or when you log out of your current session. From the drop-down menu that appears, select the option that you prefer. Note Note that in the menu that appears on the login screen, the X.Org display server is marked as X11 display server. Important The change of GNOME environment and graphics protocol stack resulting from the above procedure is persistent across user logouts, and also when powering off or rebooting the computer. 21.4. Removed functionality gnome-terminal removed support for non-UTF8 locales in RHEL 8 The gnome-terminal application in RHEL 8 and later releases refuses to start when the system locale is set to non-UTF8 because only UTF8 locales are supported. For more information, see the The gnome-terminal application fails to start when the system locale is set to non-UTF8 Knowledgebase article. | [
"gsettings set org.gnome.mutter experimental-features \"['scale-monitor-framebuffer']\"",
"No protocol specified Unable to init server: could not connect: connection refused Failed to parse arguments: Cannot open display"
]
| https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/considerations_in_adopting_rhel_8/desktop-and-graphics_considerations-in-adopting-RHEL-8 |
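The sudo examples in the record above are easy to leave half done if the access-control grant is never revoked. The following is a small shell sketch of the full round trip; xclock is only the example client used in Example 21.1 and can be replaced with any X11 application.

#!/usr/bin/env bash
# Sketch of the X11-with-sudo workflow from Example 21.1: grant the local root
# user access to the X server, run the client, then revoke the grant again.
xhost +si:localuser:root
sudo xclock || true          # continue even on failure so the grant is revoked
xhost -si:localuser:root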
10.6. Methods | 10.6. Methods 10.6.1. Creating a Cluster Creation of a new cluster requires the name , cpu id= and datacenter elements. Identify the datacenter with either the id attribute or name element. Example 10.3. Creating a cluster 10.6.2. Updating a Cluster The name , description , cpu id= and error_handling elements are updatable post-creation. Example 10.4. Updating a cluster 10.6.3. Removing a Cluster Removal of a cluster requires a DELETE request. Example 10.5. Removing a cluster | [
"POST /ovirt-engine/api/clusters HTTP/1.1 Accept: application/xml Content-type: application/xml <cluster> <name>cluster1</name> <cpu id=\"Intel Penryn Family\"/> <data_center id=\"00000000-0000-0000-0000-000000000000\"/> </cluster>",
"PUT /ovirt-engine/api/clusters/00000000-0000-0000-0000-000000000000 HTTP/1.1 Accept: application/xml Content-type: application/xml <cluster> <description>Cluster 1</description> </cluster>",
"DELETE /ovirt-engine/api/clusters/00000000-0000-0000-0000-000000000000 HTTP/1.1 HTTP/1.1 204 No Content"
]
| https://docs.redhat.com/en/documentation/red_hat_virtualization/4.3/html/version_3_rest_api_guide/sect-methods12 |
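The examples in the record above show only the raw HTTP requests. The following curl sketch issues the same POST request; the engine host name, credentials, and CA certificate path are placeholders (assumptions), while the URL, headers, and XML body follow Example 10.3.

#!/usr/bin/env bash
# Sketch: create a cluster through the REST API with curl. Host, credentials,
# and CA path below are assumptions -- substitute the values for your engine.
set -euo pipefail
ENGINE=https://engine.example.com          # assumption
AUTH='admin@internal:password'             # assumption
CA=/etc/pki/ovirt-engine/ca.pem            # assumption

curl --cacert "$CA" -u "$AUTH" \
     -H 'Accept: application/xml' \
     -H 'Content-Type: application/xml' \
     -X POST "$ENGINE/ovirt-engine/api/clusters" \
     -d '<cluster>
           <name>cluster1</name>
           <cpu id="Intel Penryn Family"/>
           <data_center id="00000000-0000-0000-0000-000000000000"/>
         </cluster>'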
Chapter 27. Tracing latencies using ftrace | Chapter 27. Tracing latencies using ftrace The ftrace utility is one of the diagnostic facilities provided with the RHEL for Real Time kernel. ftrace can be used by developers to analyze and debug latency and performance issues that occur outside of the user-space. The ftrace utility has a variety of options that allow you to use the utility in different ways. It can be used to trace context switches, measure the time it takes for a high-priority task to wake up, the length of time interrupts are disabled, or list all the kernel functions executed during a given period. Some of the ftrace tracers, such as the function tracer, can produce exceedingly large amounts of data, which can turn trace log analysis into a time-consuming task. However, you can instruct the tracer to begin and end only when the application reaches critical code paths. Prerequisites You have administrator privileges. 27.1. Using the ftrace utility to trace latencies You can trace latencies using the ftrace utility. Procedure View the available tracers on the system. The user interface for ftrace is a series of files within debugfs . The ftrace files are also located in the /sys/kernel/debug/tracing/ directory. Move to the /sys/kernel/debug/tracing/ directory. The files in this directory can only be modified by the root user, because enabling tracing can have an impact on the performance of the system. To start a tracing session: Select a tracer you want to use from the list of available tracers in /sys/kernel/debug/tracing/available_tracers . Insert the name of the selector into the /sys/kernel/debug/tracing/current_tracer . Note If you use a single '>' with the echo command, it will override any existing value in the file. If you wish to append the value to the file, use '>>' instead. The function-trace option is useful because tracing latencies with wakeup_rt , preemptirqsoff , and so on automatically enables function tracing , which may exaggerate the overhead. Check if function and function_graph tracing are enabled: A value of 1 indicates that function and function_graph tracing are enabled. A value of 0 indicates that function and function_graph tracing are disabled. By default, function and function_graph tracing are enabled. To turn function and function_graph tracing on or off, echo the appropriate value to the /sys/kernel/debug/tracing/options/function-trace file. Important When using the echo command, ensure you place a space character in between the value and the > character. At the shell prompt, using 0> , 1> , and 2> (without a space character) refers to standard input, standard output, and standard error. Using them by mistake could result in an unexpected trace output. Adjust the details and parameters of the tracers by changing the values for the various files in the /debugfs/tracing/ directory. For example: The irqsoff , preemptoff , preempirqsoff , and wakeup tracers continuously monitor latencies. When they record a latency greater than the one recorded in tracing_max_latency the trace of that latency is recorded, and tracing_max_latency is updated to the new maximum time. In this way, tracing_max_latency always shows the highest recorded latency since it was last reset. To reset the maximum latency, echo 0 into the tracing_max_latency file: To see only latencies greater than a set amount, echo the amount in microseconds: When the tracing threshold is set, it overrides the maximum latency setting. 
When a latency is recorded that is greater than the threshold, it will be recorded regardless of the maximum latency. When reviewing the trace file, only the last recorded latency is shown. To set the threshold, echo the number of microseconds above which latencies must be recorded: View the trace logs: To store the trace logs, copy them to another file: View the functions being traced: Filter the functions being traced by editing the settings in /sys/kernel/debug/tracing/set_ftrace_filter . If no filters are specified in the file, all functions are traced. To change filter settings, echo the name of the function to be traced. The filter allows the use of a ' * ' wildcard at the beginning or end of a search term. For examples, see ftrace examples . 27.2. ftrace files The following are the main files in the /sys/kernel/debug/tracing/ directory. ftrace files trace The file that shows the output of an ftrace trace. This is really a snapshot of the trace in time, because the trace stops when this file is read, and it does not consume the events read. That is, if the user disabled tracing and reads this file, it will report the same thing every time it is read. trace_pipe The file that shows the output of an ftrace trace as it reads the trace live. It is a producer/consumer trace. That is, each read will consume the event that is read. This can be used to read an active trace without stopping the trace as it is read. available_tracers A list of ftrace tracers that have been compiled into the kernel. current_tracer Enables or disables an ftrace tracer. events A directory that contains events to trace and can be used to enable or disable events, as well as set filters for the events. tracing_on Disable and enable recording to the ftrace buffer. Disabling tracing via the tracing_on file does not disable the actual tracing that is happening inside the kernel. It only disables writing to the buffer. The work to do the trace still happens, but the data does not go anywhere. 27.3. ftrace tracers Depending on how the kernel is configured, not all tracers may be available for a given kernel. For the RHEL for Real Time kernels, the trace and debug kernels have different tracers than the production kernel does. This is because some of the tracers have a noticeable overhead when the tracer is configured into the kernel, but not active. Those tracers are only enabled for the trace and debug kernels. Tracers function One of the most widely applicable tracers. Traces the function calls within the kernel. This can cause noticeable overhead depending on the number of functions traced. When not active, it creates little overhead. function_graph The function_graph tracer is designed to present results in a more visually appealing format. This tracer also traces the exit of the function, displaying a flow of function calls in the kernel. Note This tracer has more overhead than the function tracer when enabled, but the same low overhead when disabled. wakeup A full CPU tracer that reports the activity happening across all CPUs. It records the time that it takes to wake up the highest priority task in the system, whether that task is a real time task or not. Recording the max time it takes to wake up a non-real time task hides the times it takes to wake up a real time task. wakeup_rt A full CPU tracer that reports the activity happening across all CPUs. It records the time that it takes from the current highest priority task to wake up to until the time it is scheduled. 
This tracer only records the time for real time tasks. preemptirqsoff Traces the areas that disable preemption or interrupts, and records the maximum amount of time for which preemption or interrupts were disabled. preemptoff Similar to the preemptirqsoff tracer, but traces only the maximum interval for which pre-emption was disabled. irqsoff Similar to the preemptirqsoff tracer, but traces only the maximum interval for which interrupts were disabled. nop The default tracer. It does not provide any tracing facility itself, but as events may interleave into any tracer, the nop tracer is used for specific interest in tracing events. 27.4. ftrace examples The following provides a number of examples for changing the filtering of functions being traced. You can use the * wildcard at both the beginning and end of a word. For example: *irq\* will select all functions that contain irq in the name. The wildcard cannot, however, be used inside a word. Encasing the search term and the wildcard character in double quotation marks ensures that the shell will not attempt to expand the search to the present working directory. Examples of filters Trace only the schedule function: Trace all functions that end with lock : Trace all functions that start with spin_ : Trace all functions with cpu in the name: | [
"cat /sys/kernel/debug/tracing/available_tracers function_graph wakeup_rt wakeup preemptirqsoff preemptoff irqsoff function nop",
"cd /sys/kernel/debug/tracing",
"echo preemptoff > /sys/kernel/debug/tracing/current_tracer",
"cat /sys/kernel/debug/tracing/options/function-trace 1",
"echo 0 > /sys/kernel/debug/tracing/options/function-trace echo 1 > /sys/kernel/debug/tracing/options/function-trace",
"echo 0 > /sys/kernel/debug/tracing/tracing_max_latency",
"echo 200 > /sys/kernel/debug/tracing/tracing_max_latency",
"echo 200 > /sys/kernel/debug/tracing/tracing_thresh",
"cat /sys/kernel/debug/tracing/trace",
"cat /sys/kernel/debug/tracing/trace > /tmp/lat_trace_log",
"cat /sys/kernel/debug/tracing/set_ftrace_filter",
"echo schedule > /sys/kernel/debug/tracing/set_ftrace_filter",
"echo \"*lock\" > /sys/kernel/debug/tracing/set_ftrace_filter",
"echo \"spin_*\" > /sys/kernel/debug/tracing/set_ftrace_filter",
"echo \"cpu\" > /sys/kernel/debug/tracing/set_ftrace_filter"
]
| https://docs.redhat.com/en/documentation/red_hat_enterprise_linux_for_real_time/9/html/optimizing_rhel_9_for_real_time_for_low_latency_operation/assembly_using-the-ftrace-utility-to-trace-latencies_optimizing-RHEL9-for-real-time-for-low-latency-operation |
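The procedure in the record above spreads a tracing session across many individual steps. The sketch below strings the documented files together into one short session; the sleep 10 is a stand-in (assumption) for whatever workload you actually want to observe, and the script must be run as root.

#!/usr/bin/env bash
# Minimal latency-tracing session built from the files documented above.
set -euo pipefail
cd /sys/kernel/debug/tracing

echo wakeup_rt > current_tracer      # pick one of the available tracers
echo 0 > tracing_max_latency         # reset the recorded maximum latency
echo 1 > tracing_on                  # make sure the buffer is being written

sleep 10                             # assumption: replace with your workload

echo 0 > tracing_on                  # stop recording to the buffer
cat trace > /tmp/lat_trace_log       # store the snapshot for later analysis
head -n 25 /tmp/lat_trace_log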
Chapter 44. Kafka Sink | Chapter 44. Kafka Sink Send data to Kafka topics. The Kamelet is able to understand the following headers to be set: key / ce-key : as message key partition-key / ce-partitionkey : as message partition key Both the headers are optional. 44.1. Configuration Options The following table summarizes the configuration options available for the kafka-sink Kamelet: Property Name Description Type Default Example bootstrapServers * Brokers Comma separated list of Kafka Broker URLs string password * Password Password to authenticate to kafka string topic * Topic Names Comma separated list of Kafka topic names string user * Username Username to authenticate to Kafka string saslMechanism SASL Mechanism The Simple Authentication and Security Layer (SASL) Mechanism used. string "PLAIN" securityProtocol Security Protocol Protocol used to communicate with brokers. SASL_PLAINTEXT, PLAINTEXT, SASL_SSL and SSL are supported string "SASL_SSL" Note Fields marked with an asterisk (*) are mandatory. 44.2. Dependencies At runtime, the `kafka-sink Kamelet relies upon the presence of the following dependencies: camel:kafka camel:kamelet 44.3. Usage This section describes how you can use the kafka-sink . 44.3.1. Knative Sink You can use the kafka-sink Kamelet as a Knative sink by binding it to a Knative object. kafka-sink-binding.yaml apiVersion: camel.apache.org/v1alpha1 kind: KameletBinding metadata: name: kafka-sink-binding spec: source: ref: kind: Channel apiVersion: messaging.knative.dev/v1 name: mychannel sink: ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: kafka-sink properties: bootstrapServers: "The Brokers" password: "The Password" topic: "The Topic Names" user: "The Username" 44.3.1.1. Prerequisite Make sure you have "Red Hat Integration - Camel K" installed into the OpenShift cluster you're connected to. 44.3.1.2. Procedure for using the cluster CLI Save the kafka-sink-binding.yaml file to your local drive, and then edit it as needed for your configuration. Run the sink by using the following command: oc apply -f kafka-sink-binding.yaml 44.3.1.3. Procedure for using the Kamel CLI Configure and run the sink by using the following command: kamel bind channel:mychannel kafka-sink -p "sink.bootstrapServers=The Brokers" -p "sink.password=The Password" -p "sink.topic=The Topic Names" -p "sink.user=The Username" This command creates the KameletBinding in the current namespace on the cluster. 44.3.2. Kafka Sink You can use the kafka-sink Kamelet as a Kafka sink by binding it to a Kafka topic. kafka-sink-binding.yaml apiVersion: camel.apache.org/v1alpha1 kind: KameletBinding metadata: name: kafka-sink-binding spec: source: ref: kind: KafkaTopic apiVersion: kafka.strimzi.io/v1beta1 name: my-topic sink: ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: kafka-sink properties: bootstrapServers: "The Brokers" password: "The Password" topic: "The Topic Names" user: "The Username" 44.3.2.1. Prerequisites Ensure that you've installed the AMQ Streams operator in your OpenShift cluster and created a topic named my-topic in the current namespace. Make also sure you have "Red Hat Integration - Camel K" installed into the OpenShift cluster you're connected to. 44.3.2.2. Procedure for using the cluster CLI Save the kafka-sink-binding.yaml file to your local drive, and then edit it as needed for your configuration. Run the sink by using the following command: oc apply -f kafka-sink-binding.yaml 44.3.2.3. 
Procedure for using the Kamel CLI Configure and run the sink by using the following command: kamel bind kafka.strimzi.io/v1beta1:KafkaTopic:my-topic kafka-sink -p "sink.bootstrapServers=The Brokers" -p "sink.password=The Password" -p "sink.topic=The Topic Names" -p "sink.user=The Username" This command creates the KameletBinding in the current namespace on the cluster. 44.4. Kamelet source file https://github.com/openshift-integration/kamelet-catalog/kafka-sink.kamelet.yaml | [
"apiVersion: camel.apache.org/v1alpha1 kind: KameletBinding metadata: name: kafka-sink-binding spec: source: ref: kind: Channel apiVersion: messaging.knative.dev/v1 name: mychannel sink: ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: kafka-sink properties: bootstrapServers: \"The Brokers\" password: \"The Password\" topic: \"The Topic Names\" user: \"The Username\"",
"apply -f kafka-sink-binding.yaml",
"kamel bind channel:mychannel kafka-sink -p \"sink.bootstrapServers=The Brokers\" -p \"sink.password=The Password\" -p \"sink.topic=The Topic Names\" -p \"sink.user=The Username\"",
"apiVersion: camel.apache.org/v1alpha1 kind: KameletBinding metadata: name: kafka-sink-binding spec: source: ref: kind: KafkaTopic apiVersion: kafka.strimzi.io/v1beta1 name: my-topic sink: ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: kafka-sink properties: bootstrapServers: \"The Brokers\" password: \"The Password\" topic: \"The Topic Names\" user: \"The Username\"",
"apply -f kafka-sink-binding.yaml",
"kamel bind kafka.strimzi.io/v1beta1:KafkaTopic:my-topic kafka-sink -p \"sink.bootstrapServers=The Brokers\" -p \"sink.password=The Password\" -p \"sink.topic=The Topic Names\" -p \"sink.user=The Username\""
]
| https://docs.redhat.com/en/documentation/red_hat_build_of_apache_camel_k/1.10.7/html/kamelets_reference/kafka-sink |
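The bindings in the record above are shown as standalone YAML files. The sketch below applies the Knative variant inline and then waits for it to become ready; the broker address, credentials, and topic are placeholders (assumptions), and the readiness check assumes the KameletBinding resource reports a Ready condition, which current Camel K releases do.

#!/usr/bin/env bash
# Sketch: apply the Knative KameletBinding from this section and wait for it.
set -euo pipefail

oc apply -f - <<'EOF'
apiVersion: camel.apache.org/v1alpha1
kind: KameletBinding
metadata:
  name: kafka-sink-binding
spec:
  source:
    ref:
      kind: Channel
      apiVersion: messaging.knative.dev/v1
      name: mychannel
  sink:
    ref:
      kind: Kamelet
      apiVersion: camel.apache.org/v1alpha1
      name: kafka-sink
    properties:
      bootstrapServers: "my-cluster-kafka-bootstrap:9092"   # assumption
      user: "my-user"                                       # assumption
      password: "changeit"                                  # assumption
      topic: "my-topic"                                     # assumption
EOF

oc wait kameletbinding/kafka-sink-binding --for=condition=Ready --timeout=300s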
5.5. Load Balancing Policy: Power_Saving | 5.5. Load Balancing Policy: Power_Saving Figure 5.2. Power Saving Scheduling Policy A power saving load balancing policy selects the host for a new virtual machine according to lowest CPU or highest available memory. The maximum CPU load and minimum available memory that is allowed for hosts in a cluster for a set amount of time is defined by the power saving scheduling policy's parameters. Beyond these limits the environment's performance will degrade. The power saving parameters also define the minimum CPU load and maximum available memory allowed for hosts in a cluster for a set amount of time before the continued operation of a host is considered an inefficient use of electricity. If a host has reached the maximum CPU load or minimum available memory and stays there for more than the set time, the virtual machines on that host are migrated one by one to the host that has the lowest CPU or highest available memory depending on which parameter is being utilized. Host resources are checked once per minute, and one virtual machine is migrated at a time until the host CPU load is below the defined limit or the host available memory is above the defined limit. If the host's CPU load falls below the defined minimum level or the host's available memory rises above the defined maximum level the virtual machines on that host are migrated to other hosts in the cluster as long as the other hosts in the cluster remain below maximum CPU load and above minimum available memory. When an under-utilized host is cleared of its remaining virtual machines, the Manager will automatically power down the host machine, and restart it again when load balancing requires or there are not enough free hosts in the cluster. | null | https://docs.redhat.com/en/documentation/red_hat_virtualization/4.4/html/technical_reference/load_balancing_policy_power_saving |
Chapter 32. Authentication and Interoperability | Chapter 32. Authentication and Interoperability Kerberos ticket requests are refused for short lifetimes Due to a bug in Active Directory, Kerberos ticket requests for short (generally below three minutes) lifetimes are refused. To work around this problem, request longer-lived (above five minutes) tickets instead. Replication from a Red Hat Enterprise Linux 7 machine to a Red Hat Enterprise Linux 6 machine fails Currently, the Camellia Kerberos encryption types (enctypes) are included as possible default enctypes in the krb5, krb5-libs, and krb5-server packages. As a consequence, replication from a Red Hat Enterprise Linux 7 machine to a Red Hat Enterprise Linux 6 machine fails with an error message. To work around this problem, use the default enctype controls, or tell kadmin or ipa-getkeytab which encryption types to use. A harmless error message is logged on SSSD startup If SSSD is connected to an IdM server that does not have a trust relationship established with an AD server, the following harmless error message is printed to the SSSD domain log on startup: Internal Error (Memory buffer error) To prevent the harmless error message from occurring, set subdomains_provider to none in the sssd.conf file if no trusted domains are expected in the environment. DNS zones with recently generated DNSSEC keys are not signed properly IdM does not properly sign DNS zones with recently generated DNS Security Extensions (DNSSEC) keys. The named-pkcs11 service logs the following error in this situation: The attribute does not exist: 0x00000002 The bug is caused by a race condition in the DNSSEC key generation and distribution process. The race condition prevents named-pkcs11 from accessing new DNSSEC keys. To work around this problem, restart named-pkcs11 on the affected server. After the restart, the DNS zone is properly signed. Note that the bug might reappear after the DNSSEC keys are changed again. The old realmd version is started when updating realmd while it is running The realmd daemon starts only when requested, then performs a given action, and after some time it times out. When realmd is updated while it is still running, the old version of realmd starts upon a request because realmd is not restarted after the update. To work around this problem, make sure that realmd is not running before updating it. ipa-server-install and ipa-replica-install do not validate their options The ipa-server-install and ipa-replica-install utilities currently do not validate the options supplied to them. If the user passes incorrect values to the utilities, the installation fails. To work around the problem, make sure to supply correct values, and then run the utilities again. Upgrading the ipa packages fails if the required openssl version is not installed When the user attempts to upgrade the ipa packages, Identity Management (IdM) does not automatically install the required version of the openssl packages. Consequently, if the 1.0.1e-42 version of openssl is not installed before the user runs the yum update ipa* command, the upgrade fails during the DNSKeySync service configuration. To work around this problem, update openssl manually to version 1.0.1e-42 or later before updating ipa . This prevents the upgrade failure. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/7.2_release_notes/known-issues-authentication_and_interoperability
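The last entry in the record above prescribes an ordering between the openssl and ipa updates but leaves the commands implicit. A minimal sketch of that ordering, run as root on the IdM server:

#!/usr/bin/env bash
# Sketch of the documented workaround: bring openssl to 1.0.1e-42 or later
# before touching the ipa packages, so the DNSKeySync configuration step does
# not fail during the upgrade.
set -euo pipefail
yum update -y openssl        # update openssl first
rpm -q openssl               # confirm the installed version is >= 1.0.1e-42
yum update -y 'ipa*'         # only then update the ipa packages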
Chapter 2. Installing the Virtualization Packages | Chapter 2. Installing the Virtualization Packages To use virtualization, Red Hat virtualization packages must be installed on your computer. Virtualization packages can be installed when installing Red Hat Enterprise Linux or after installation using the yum command and the Subscription Manager application. The KVM hypervisor uses the default Red Hat Enterprise Linux kernel with the kvm kernel module. 2.1. Installing Virtualization Packages During a Red Hat Enterprise Linux Installation This section provides information about installing virtualization packages while installing Red Hat Enterprise Linux. Note For detailed information about installing Red Hat Enterprise Linux, see the Red Hat Enterprise Linux 7 Installation Guide . Important The Anaconda interface only offers the option to install Red Hat virtualization packages during the installation of Red Hat Enterprise Linux Server. When installing a Red Hat Enterprise Linux Workstation, the Red Hat virtualization packages can only be installed after the workstation installation is complete. See Section 2.2, "Installing Virtualization Packages on an Existing Red Hat Enterprise Linux System" Procedure 2.1. Installing virtualization packages Select software Follow the installation procedure until the Installation Summary screen. Figure 2.1. The Installation Summary screen In the Installation Summary screen, click Software Selection . The Software Selection screen opens. Select the server type and package groups You can install Red Hat Enterprise Linux 7 with only the basic virtualization packages or with packages that allow management of guests through a graphical user interface. Do one of the following: Install a minimal virtualization host Select the Virtualization Host radio button in the Base Environment pane and the Virtualization Platform check box in the Add-Ons for Selected Environment pane. This installs a basic virtualization environment which can be run with virsh or remotely over the network. Figure 2.2. Virtualization Host selected in the Software Selection screen Install a virtualization host with a graphical user interface Select the Server with GUI radio button in the Base Environment pane and the Virtualization Client , Virtualization Hypervisor , and Virtualization Tools check boxes in the Add-Ons for Selected Environment pane. This installs a virtualization environment along with graphical tools for installing and managing guest virtual machines. Figure 2.3. Server with GUI selected in the software selection screen Finalize installation Click Done and continue with the installation. Important You need a valid Red Hat Enterprise Linux subscription to receive updates for the virtualization packages. 2.1.1. Installing KVM Packages with Kickstart Files To use a Kickstart file to install Red Hat Enterprise Linux with the virtualization packages, append the following package groups in the %packages section of your Kickstart file: For more information about installing with Kickstart files, see the Red Hat Enterprise Linux 7 Installation Guide . | [
"@virtualization-hypervisor @virtualization-client @virtualization-platform @virtualization-tools"
]
| https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/virtualization_deployment_and_administration_guide/chap-installing_the_virtualization_packages |
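The chapter above ends once the packages are selected, without a verification step. The checks below are a generic post-installation sketch rather than something taken from the chapter itself; virsh is only available when the Virtualization Client tools were selected.

#!/usr/bin/env bash
# Generic sanity checks (assumptions, not from the chapter above) that a
# freshly installed host is ready to run KVM guests.
set -euo pipefail

# The CPU must expose hardware virtualization extensions (Intel VT-x or AMD-V).
if grep -Eq 'vmx|svm' /proc/cpuinfo; then
    echo "virtualization extensions present"
else
    echo "no vmx/svm flag found -- check BIOS/firmware settings" >&2
fi

# The kvm kernel modules should be loaded on a virtualization host.
lsmod | grep -E '^kvm' || echo "kvm modules not loaded"

# libvirtd must be running before virsh can manage guests.
systemctl enable --now libvirtd
virsh list --all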
Chapter 8. Known issues in Red Hat Decision Manager 7.13.1 | Chapter 8. Known issues in Red Hat Decision Manager 7.13.1 This section lists known issues with Red Hat Decision Manager 7.13.1. 8.1. Business Central Unable to deploy Business Central using JDK version 11.0.16 [ RHPAM-4497 ] Issue: It is not possible to deploy Business Central if your installation uses JDK version 11.0.16. Actual result: Business Central does not deploy when launched. Expected result: Business Central deploys successfully. Workaround: Use a JDK version such as 11.0.5 or earlier. 8.2. Form Modeler Date type process variable is empty when the process is started using a Business Central form with showTime set to false [ RHPAM-4514 ] Issue: When you use the default form rendering in Business Central and the process variable field has showTime=false , the started process instance shows that the variable is empty. The affected types are java.time.LocalDateTime , java.time.LocalDate , java.time.LocalTime , and java.util.Date . Steps to reproduce: Define the process variable with a specific type. Generate a form. Open a form and set showTime=false for a specified field. Deploy the project. Open the process form. Specify the value in the process form. Check the process instance variables. The value for the specified variable is empty. Workaround: None. Form in KIE Server with a java.util.Date field does not allow the time to be inserted [ RHPAM-4513 ] Issue: When a process has a variable of type java.util.Date and the showTime attribute is true , the generated form does not allow inserting the time part. Then, after submitting, the Date variable shows all zeros in the time part of the datatype. Workaround: None. 8.3. Red Hat OpenShift Container Platform PostgreSQL 13 Pod won't start because of an incompatible data directory [ RHPAM-4464 ] Issue: When you start a PostgreSQL pod after you upgrade the operator, the pod fails to start and you receive the following message: Incompatible data directory. This container image provides PostgreSQL '13', but data directory is of version '10'. This image supports automatic data directory upgrade from '12', please carefully consult image documentation about how to use the '$POSTGRESQL_UPGRADE' startup option. Workaround: Check the version of PostgreSQL: If the PostgreSQL version returned is 12.x or earlier, upgrade PostgreSQL: Red Hat Decision Manager version PostgreSQL version Upgrade instructions 7.13.1 7.10 Follow the instructions in Upgrading database (by switching to newer PostgreSQL image version) to upgrade to PostgreSQL 12.x. 7.13.2 7.10 1. Follow the instructions in Upgrading database (by switching to newer PostgreSQL image version) to upgrade to PostgreSQL 12.x. 2. Follow the instructions in Upgrading database (by switching to newer PostgreSQL image version) to upgrade to PostgreSQL 13.x. 7.13.2 7.12 Follow the instructions in Upgrading database (by switching to newer PostgreSQL image version) to upgrade to PostgreSQL 13.x. Verify that PostgreSQL has been upgraded to your required version: | [
"postgres -V",
"postgres -V"
]
| https://docs.redhat.com/en/documentation/red_hat_decision_manager/7.13/html/release_notes_for_red_hat_decision_manager_7.13/rn-7.13.1-known-issues-ref |
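The PostgreSQL workaround in the record above starts from postgres -V but does not show where to run it on OpenShift. A short sketch, assuming the database runs in a pod whose name you substitute for the placeholder below:

#!/usr/bin/env bash
# Sketch: check the PostgreSQL version inside the database pod and look for the
# "Incompatible data directory" symptom described above.
# Assumption: replace the pod name with the one shown by "oc get pods".
set -euo pipefail
POD=myapp-postgresql-1-abcde    # assumption: your PostgreSQL pod name

oc exec "$POD" -- postgres -V                                   # server version
oc logs "$POD" | grep -i 'incompatible data directory' || true  # documented symptom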
Chapter 1. Customizing nodes | Chapter 1. Customizing nodes OpenShift Container Platform supports both cluster-wide and per-machine configuration via Ignition, which allows arbitrary partitioning and file content changes to the operating system. In general, if a configuration file is documented in Red Hat Enterprise Linux (RHEL), then modifying it via Ignition is supported. There are two ways to deploy machine config changes: Creating machine configs that are included in manifest files to start up a cluster during openshift-install . Creating machine configs that are passed to running OpenShift Container Platform nodes via the Machine Config Operator. Additionally, modifying the reference config, such as the Ignition config that is passed to coreos-installer when installing bare-metal nodes allows per-machine configuration. These changes are currently not visible to the Machine Config Operator. The following sections describe features that you might want to configure on your nodes in this way. 1.1. Creating machine configs with Butane Machine configs are used to configure control plane and worker machines by instructing machines how to create users and file systems, set up the network, install systemd units, and more. Because modifying machine configs can be difficult, you can use Butane configs to create machine configs for you, thereby making node configuration much easier. 1.1.1. About Butane Butane is a command-line utility that OpenShift Container Platform uses to provide convenient, short-hand syntax for writing machine configs, as well as for performing additional validation of machine configs. The format of the Butane config file that Butane accepts is defined in the OpenShift Butane config spec . 1.1.2. Installing Butane You can install the Butane tool ( butane ) to create OpenShift Container Platform machine configs from a command-line interface. You can install butane on Linux, Windows, or macOS by downloading the corresponding binary file. Tip Butane releases are backwards-compatible with older releases and with the Fedora CoreOS Config Transpiler (FCCT). Procedure Navigate to the Butane image download page at https://mirror.openshift.com/pub/openshift-v4/clients/butane/ . Get the butane binary: For the newest version of Butane, save the latest butane image to your current directory: USD curl https://mirror.openshift.com/pub/openshift-v4/clients/butane/latest/butane --output butane Optional: For a specific type of architecture you are installing Butane on, such as aarch64 or ppc64le, indicate the appropriate URL. For example: USD curl https://mirror.openshift.com/pub/openshift-v4/clients/butane/latest/butane-aarch64 --output butane Make the downloaded binary file executable: USD chmod +x butane Move the butane binary file to a directory on your PATH . To check your PATH , open a terminal and execute the following command: USD echo USDPATH Verification steps You can now use the Butane tool by running the butane command: USD butane <butane_file> 1.1.3. Creating a MachineConfig object by using Butane You can use Butane to produce a MachineConfig object so that you can configure worker or control plane nodes at installation time or via the Machine Config Operator. Prerequisites You have installed the butane utility. Procedure Create a Butane config file. 
The following example creates a file named 99-worker-custom.bu that configures the system console to show kernel debug messages and specifies custom settings for the chrony time service: variant: openshift version: 4.13.0 metadata: name: 99-worker-custom labels: machineconfiguration.openshift.io/role: worker openshift: kernel_arguments: - loglevel=7 storage: files: - path: /etc/chrony.conf mode: 0644 overwrite: true contents: inline: | pool 0.rhel.pool.ntp.org iburst driftfile /var/lib/chrony/drift makestep 1.0 3 rtcsync logdir /var/log/chrony Note The 99-worker-custom.bu file is set to create a machine config for worker nodes. To deploy on control plane nodes, change the role from worker to master . To do both, you could repeat the whole procedure using different file names for the two types of deployments. Create a MachineConfig object by giving Butane the file that you created in the step: USD butane 99-worker-custom.bu -o ./99-worker-custom.yaml A MachineConfig object YAML file is created for you to finish configuring your machines. Save the Butane config in case you need to update the MachineConfig object in the future. If the cluster is not running yet, generate manifest files and add the MachineConfig object YAML file to the openshift directory. If the cluster is already running, apply the file as follows: USD oc create -f 99-worker-custom.yaml Additional resources Adding kernel modules to nodes Encrypting and mirroring disks during installation 1.2. Adding day-1 kernel arguments Although it is often preferable to modify kernel arguments as a day-2 activity, you might want to add kernel arguments to all master or worker nodes during initial cluster installation. Here are some reasons you might want to add kernel arguments during cluster installation so they take effect before the systems first boot up: You need to do some low-level network configuration before the systems start. You want to disable a feature, such as SELinux, so it has no impact on the systems when they first come up. Warning Disabling SELinux on RHCOS in production is not supported. Once SELinux has been disabled on a node, it must be re-provisioned before re-inclusion in a production cluster. To add kernel arguments to master or worker nodes, you can create a MachineConfig object and inject that object into the set of manifest files used by Ignition during cluster setup. For a listing of arguments you can pass to a RHEL 8 kernel at boot time, see Kernel.org kernel parameters . It is best to only add kernel arguments with this procedure if they are needed to complete the initial OpenShift Container Platform installation. Procedure Change to the directory that contains the installation program and generate the Kubernetes manifests for the cluster: USD ./openshift-install create manifests --dir <installation_directory> Decide if you want to add kernel arguments to worker or control plane nodes. In the openshift directory, create a file (for example, 99-openshift-machineconfig-master-kargs.yaml ) to define a MachineConfig object to add the kernel settings. This example adds a loglevel=7 kernel argument to control plane nodes: USD cat << EOF > 99-openshift-machineconfig-master-kargs.yaml apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: master name: 99-openshift-machineconfig-master-kargs spec: kernelArguments: - loglevel=7 EOF You can change master to worker to add kernel arguments to worker nodes instead. 
Create a separate YAML file to add to both master and worker nodes. You can now continue on to create the cluster. 1.3. Adding kernel modules to nodes For most common hardware, the Linux kernel includes the device driver modules needed to use that hardware when the computer starts up. For some hardware, however, modules are not available in Linux. Therefore, you must find a way to provide those modules to each host computer. This procedure describes how to do that for nodes in an OpenShift Container Platform cluster. When a kernel module is first deployed by following these instructions, the module is made available for the current kernel. If a new kernel is installed, the kmods-via-containers software will rebuild and deploy the module so a compatible version of that module is available with the new kernel. The way that this feature is able to keep the module up to date on each node is by: Adding a systemd service to each node that starts at boot time to detect if a new kernel has been installed and If a new kernel is detected, the service rebuilds the module and installs it to the kernel For information on the software needed for this procedure, see the kmods-via-containers github site. A few important issues to keep in mind: This procedure is Technology Preview. Software tools and examples are not yet available in official RPM form and can only be obtained for now from unofficial github.com sites noted in the procedure. Third-party kernel modules you might add through these procedures are not supported by Red Hat. In this procedure, the software needed to build your kernel modules is deployed in a RHEL 8 container. Keep in mind that modules are rebuilt automatically on each node when that node gets a new kernel. For that reason, each node needs access to a yum repository that contains the kernel and related packages needed to rebuild the module. That content is best provided with a valid RHEL subscription. 1.3.1. Building and testing the kernel module container Before deploying kernel modules to your OpenShift Container Platform cluster, you can test the process on a separate RHEL system. Gather the kernel module's source code, the KVC framework, and the kmod-via-containers software. Then build and test the module. To do that on a RHEL 8 system, do the following: Procedure Register a RHEL 8 system: # subscription-manager register Attach a subscription to the RHEL 8 system: # subscription-manager attach --auto Install software that is required to build the software and container: # yum install podman make git -y Clone the kmod-via-containers repository: Create a folder for the repository: USD mkdir kmods; cd kmods Clone the repository: USD git clone https://github.com/kmods-via-containers/kmods-via-containers Install a KVC framework instance on your RHEL 8 build host to test the module. This adds a kmods-via-container systemd service and loads it: Change to the kmod-via-containers directory: USD cd kmods-via-containers/ Install the KVC framework instance: USD sudo make install Reload the systemd manager configuration: USD sudo systemctl daemon-reload Get the kernel module source code. The source code might be used to build a third-party module that you do not have control over, but is supplied by others. You will need content similar to the content shown in the kvc-simple-kmod example that can be cloned to your system as follows: USD cd .. 
; git clone https://github.com/kmods-via-containers/kvc-simple-kmod Edit the configuration file, simple-kmod.conf file, in this example, and change the name of the Dockerfile to Dockerfile.rhel : Change to the kvc-simple-kmod directory: USD cd kvc-simple-kmod Rename the Dockerfile: USD cat simple-kmod.conf Example Dockerfile KMOD_CONTAINER_BUILD_CONTEXT="https://github.com/kmods-via-containers/kvc-simple-kmod.git" KMOD_CONTAINER_BUILD_FILE=Dockerfile.rhel KMOD_SOFTWARE_VERSION=dd1a7d4 KMOD_NAMES="simple-kmod simple-procfs-kmod" Create an instance of [email protected] for your kernel module, simple-kmod in this example: USD sudo make install Enable the [email protected] instance: USD sudo kmods-via-containers build simple-kmod USD(uname -r) Enable and start the systemd service: USD sudo systemctl enable [email protected] --now Review the service status: USD sudo systemctl status [email protected] Example output ● [email protected] - Kmods Via Containers - simple-kmod Loaded: loaded (/etc/systemd/system/[email protected]; enabled; vendor preset: disabled) Active: active (exited) since Sun 2020-01-12 23:49:49 EST; 5s ago... To confirm that the kernel modules are loaded, use the lsmod command to list the modules: USD lsmod | grep simple_ Example output simple_procfs_kmod 16384 0 simple_kmod 16384 0 Optional. Use other methods to check that the simple-kmod example is working: Look for a "Hello world" message in the kernel ring buffer with dmesg : USD dmesg | grep 'Hello world' Example output [ 6420.761332] Hello world from simple_kmod. Check the value of simple-procfs-kmod in /proc : USD sudo cat /proc/simple-procfs-kmod Example output simple-procfs-kmod number = 0 Run the spkut command to get more information from the module: USD sudo spkut 44 Example output KVC: wrapper simple-kmod for 4.18.0-147.3.1.el8_1.x86_64 Running userspace wrapper using the kernel module container... + podman run -i --rm --privileged simple-kmod-dd1a7d4:4.18.0-147.3.1.el8_1.x86_64 spkut 44 simple-procfs-kmod number = 0 simple-procfs-kmod number = 44 Going forward, when the system boots this service will check if a new kernel is running. If there is a new kernel, the service builds a new version of the kernel module and then loads it. If the module is already built, it will just load it. 1.3.2. Provisioning a kernel module to OpenShift Container Platform Depending on whether or not you must have the kernel module in place when OpenShift Container Platform cluster first boots, you can set up the kernel modules to be deployed in one of two ways: Provision kernel modules at cluster install time (day-1) : You can create the content as a MachineConfig object and provide it to openshift-install by including it with a set of manifest files. Provision kernel modules via Machine Config Operator (day-2) : If you can wait until the cluster is up and running to add your kernel module, you can deploy the kernel module software via the Machine Config Operator (MCO). In either case, each node needs to be able to get the kernel packages and related software packages at the time that a new kernel is detected. There are a few ways you can set up each node to be able to obtain that content. Provide RHEL entitlements to each node. Get RHEL entitlements from an existing RHEL host, from the /etc/pki/entitlement directory and copy them to the same location as the other files you provide when you build your Ignition config. Inside the Dockerfile, add pointers to a yum repository containing the kernel and other packages. 
This must include new kernel packages as they are needed to match newly installed kernels. 1.3.2.1. Provision kernel modules via a MachineConfig object By packaging kernel module software with a MachineConfig object, you can deliver that software to worker or control plane nodes at installation time or via the Machine Config Operator. Procedure Register a RHEL 8 system: # subscription-manager register Attach a subscription to the RHEL 8 system: # subscription-manager attach --auto Install software needed to build the software: # yum install podman make git -y Create a directory to host the kernel module and tooling: USD mkdir kmods; cd kmods Get the kmods-via-containers software: Clone the kmods-via-containers repository: USD git clone https://github.com/kmods-via-containers/kmods-via-containers Clone the kvc-simple-kmod repository: USD git clone https://github.com/kmods-via-containers/kvc-simple-kmod Get your module software. In this example, kvc-simple-kmod is used. Create a fakeroot directory and populate it with files that you want to deliver via Ignition, using the repositories cloned earlier: Create the directory: USD FAKEROOT=USD(mktemp -d) Change to the kmod-via-containers directory: USD cd kmods-via-containers Install the KVC framework instance: USD make install DESTDIR=USD{FAKEROOT}/usr/local CONFDIR=USD{FAKEROOT}/etc/ Change to the kvc-simple-kmod directory: USD cd ../kvc-simple-kmod Create the instance: USD make install DESTDIR=USD{FAKEROOT}/usr/local CONFDIR=USD{FAKEROOT}/etc/ Clone the fakeroot directory, replacing any symbolic links with copies of their targets, by running the following command: USD cd .. && rm -rf kmod-tree && cp -Lpr USD{FAKEROOT} kmod-tree Create a Butane config file, 99-simple-kmod.bu , that embeds the kernel module tree and enables the systemd service. Note See "Creating machine configs with Butane" for information about Butane. variant: openshift version: 4.13.0 metadata: name: 99-simple-kmod labels: machineconfiguration.openshift.io/role: worker 1 storage: trees: - local: kmod-tree systemd: units: - name: [email protected] enabled: true 1 To deploy on control plane nodes, change worker to master . To deploy on both control plane and worker nodes, perform the remainder of these instructions once for each node type. Use Butane to generate a machine config YAML file, 99-simple-kmod.yaml , containing the files and configuration to be delivered: USD butane 99-simple-kmod.bu --files-dir . -o 99-simple-kmod.yaml If the cluster is not up yet, generate manifest files and add this file to the openshift directory. If the cluster is already running, apply the file as follows: USD oc create -f 99-simple-kmod.yaml Your nodes will start the [email protected] service and the kernel modules will be loaded. To confirm that the kernel modules are loaded, you can log in to a node (using oc debug node/<openshift-node> , then chroot /host ). To list the modules, use the lsmod command: USD lsmod | grep simple_ Example output simple_procfs_kmod 16384 0 simple_kmod 16384 0 1.4. Encrypting and mirroring disks during installation During an OpenShift Container Platform installation, you can enable boot disk encryption and mirroring on the cluster nodes. 1.4.1. About disk encryption You can enable encryption for the boot disks on the control plane and compute nodes at installation time. OpenShift Container Platform supports the Trusted Platform Module (TPM) v2 and Tang encryption modes. TPM v2 This is the preferred mode. 
TPM v2 stores passphrases in a secure cryptoprocessor on the server. You can use this mode to prevent decryption of the boot disk data on a cluster node if the disk is removed from the server. Tang Tang and Clevis are server and client components that enable network-bound disk encryption (NBDE). You can bind the boot disk data on your cluster nodes to one or more Tang servers. This prevents decryption of the data unless the nodes are on a secure network where the Tang servers are accessible. Clevis is an automated decryption framework used to implement decryption on the client side. Important The use of the Tang encryption mode to encrypt your disks is only supported for bare metal and vSphere installations on user-provisioned infrastructure. In earlier versions of Red Hat Enterprise Linux CoreOS (RHCOS), disk encryption was configured by specifying /etc/clevis.json in the Ignition config. That file is not supported in clusters created with OpenShift Container Platform 4.7 or later. Configure disk encryption by using the following procedure. When the TPM v2 or Tang encryption modes are enabled, the RHCOS boot disks are encrypted using the LUKS2 format. This feature: Is available for installer-provisioned infrastructure, user-provisioned infrastructure, and Assisted Installer deployments For Assisted installer deployments: Each cluster can only have a single encryption method, Tang or TPM Encryption can be enabled on some or all nodes There is no Tang threshold; all servers must be valid and operational Encryption applies to the installation disks only, not to the workload disks Is supported on Red Hat Enterprise Linux CoreOS (RHCOS) systems only Sets up disk encryption during the manifest installation phase, encrypting all data written to disk, from first boot forward Requires no user intervention for providing passphrases Uses AES-256-XTS encryption 1.4.1.1. Configuring an encryption threshold In OpenShift Container Platform, you can specify a requirement for more than one Tang server. You can also configure the TPM v2 and Tang encryption modes simultaneously. This enables boot disk data decryption only if the TPM secure cryptoprocessor is present and the Tang servers are accessible over a secure network. You can use the threshold attribute in your Butane configuration to define the minimum number of TPM v2 and Tang encryption conditions required for decryption to occur. The threshold is met when the stated value is reached through any combination of the declared conditions. For example, the threshold value of 2 in the following configuration can be reached by accessing the two Tang servers, or by accessing the TPM secure cryptoprocessor and one of the Tang servers: Example Butane configuration for disk encryption variant: openshift version: 4.13.0 metadata: name: worker-storage labels: machineconfiguration.openshift.io/role: worker boot_device: layout: x86_64 1 luks: tpm2: true 2 tang: 3 - url: http://tang1.example.com:7500 thumbprint: jwGN5tRFK-kF6pIX89ssF3khxxX - url: http://tang2.example.com:7500 thumbprint: VCJsvZFjBSIHSldw78rOrq7h2ZF threshold: 2 4 openshift: fips: true 5 1 Set this field to the instruction set architecture of the cluster nodes. Some examples include, x86_64 , aarch64 , or ppc64le . 2 Include this field if you want to use a Trusted Platform Module (TPM) to encrypt the root file system. 3 Include this section if you want to use one or more Tang servers. 4 Specify the minimum number of TPM v2 and Tang encryption conditions required for decryption to occur. 
5 OpenShift Container Platform 4.13 is based on Red Hat Enterprise Linux (RHEL) 9.2. RHEL 9.2 cryptographic modules have not yet been submitted for FIPS validation. For more information, see "About this release" in the 4.13 OpenShift Container Platform Release Notes . Important The default threshold value is 1 . If you include multiple encryption conditions in your configuration but do not specify a threshold, decryption can occur if any of the conditions are met. Note If you require TPM v2 and Tang for decryption, the value of the threshold attribute must equal the total number of stated Tang servers plus one. If the threshold value is lower, it is possible to reach the threshold value by using a single encryption mode. For example, if you set tpm2 to true and specify two Tang servers, a threshold of 2 can be met by accessing the two Tang servers, even if the TPM secure cryptoprocessor is not available. 1.4.2. About disk mirroring During OpenShift Container Platform installation on control plane and worker nodes, you can enable mirroring of the boot and other disks to two or more redundant storage devices. A node continues to function after storage device failure provided one device remains available. Mirroring does not support replacement of a failed disk. Reprovision the node to restore the mirror to a pristine, non-degraded state. Note For user-provisioned infrastructure deployments, mirroring is available only on RHCOS systems. Support for mirroring is available on x86_64 nodes booted with BIOS or UEFI and on ppc64le nodes. 1.4.3. Configuring disk encryption and mirroring You can enable and configure encryption and mirroring during an OpenShift Container Platform installation. Prerequisites You have downloaded the OpenShift Container Platform installation program on your installation node. You installed Butane on your installation node. Note Butane is a command-line utility that OpenShift Container Platform uses to offer convenient, short-hand syntax for writing and validating machine configs. For more information, see "Creating machine configs with Butane". You have access to a Red Hat Enterprise Linux (RHEL) 8 machine that can be used to generate a thumbprint of the Tang exchange key. Procedure If you want to use TPM v2 to encrypt your cluster, check to see if TPM v2 encryption needs to be enabled in the host firmware for each node. This is required on most Dell systems. Check the manual for your specific system. If you want to use Tang to encrypt your cluster, follow these preparatory steps: Set up a Tang server or access an existing one. See Network-bound disk encryption for instructions. Install the clevis package on a RHEL 8 machine, if it is not already installed: USD sudo yum install clevis On the RHEL 8 machine, run the following command to generate a thumbprint of the exchange key. Replace http://tang.example.com:7500 with the URL of your Tang server: USD clevis-encrypt-tang '{"url":"http://tang.example.com:7500"}' < /dev/null > /dev/null 1 1 In this example, tangd.socket is listening on port 7500 on the Tang server. Note The clevis-encrypt-tang command generates a thumbprint of the exchange key. No data passes to the encryption command during this step; /dev/null exists here as an input instead of plain text. The encrypted output is also sent to /dev/null , because it is not required for this procedure. Example output The advertisement contains the following signing keys: PLjNyRdGw03zlRoGjQYMahSZGu9 1 1 The thumbprint of the exchange key. 
When the Do you wish to trust these keys? [ynYN] prompt displays, type Y . If the nodes are configured with static IP addressing, run coreos-installer iso customize --dest-karg-append or use the coreos-installer --append-karg option when installing RHCOS nodes to set the IP address of the installed system. Append the ip= and other arguments needed for your network. Important Some methods for configuring static IPs do not affect the initramfs after the first boot and will not work with Tang encryption. These include the coreos-installer --copy-network option, the coreos-installer iso customize --network-keyfile option, and the coreos-installer pxe customize --network-keyfile option, as well as adding ip= arguments to the kernel command line of the live ISO or PXE image during installation. Incorrect static IP configuration causes the second boot of the node to fail. On your installation node, change to the directory that contains the installation program and generate the Kubernetes manifests for the cluster: USD ./openshift-install create manifests --dir <installation_directory> 1 1 Replace <installation_directory> with the path to the directory that you want to store the installation files in. Create a Butane config that configures disk encryption, mirroring, or both. For example, to configure storage for compute nodes, create a USDHOME/clusterconfig/worker-storage.bu file. Butane config example for a boot device variant: openshift version: 4.13.0 metadata: name: worker-storage 1 labels: machineconfiguration.openshift.io/role: worker 2 boot_device: layout: x86_64 3 luks: 4 tpm2: true 5 tang: 6 - url: http://tang.example.com:7500 7 thumbprint: PLjNyRdGw03zlRoGjQYMahSZGu9 8 threshold: 1 9 mirror: 10 devices: 11 - /dev/sda - /dev/sdb openshift: fips: true 12 1 2 For control plane configurations, replace worker with master in both of these locations. 3 Set this field to the instruction set architecture of the cluster nodes. Some examples include, x86_64 , aarch64 , or ppc64le . 4 Include this section if you want to encrypt the root file system. For more details, see "About disk encryption". 5 Include this field if you want to use a Trusted Platform Module (TPM) to encrypt the root file system. 6 Include this section if you want to use one or more Tang servers. 7 Specify the URL of a Tang server. In this example, tangd.socket is listening on port 7500 on the Tang server. 8 Specify the exchange key thumbprint, which was generated in a preceding step. 9 Specify the minimum number of TPM v2 and Tang encryption conditions that must be met for decryption to occur. The default value is 1 . For more information about this topic, see "Configuring an encryption threshold". 10 Include this section if you want to mirror the boot disk. For more details, see "About disk mirroring". 11 List all disk devices that should be included in the boot disk mirror, including the disk that RHCOS will be installed onto. 12 Include this directive to enable FIPS mode on your cluster. Important OpenShift Container Platform 4.13 is based on Red Hat Enterprise Linux (RHEL) 9.2. RHEL 9.2 cryptographic modules have not yet been submitted for FIPS validation. For more information, see "About this release" in the 4.13 OpenShift Container Platform Release Notes . Important If you are configuring nodes to use both disk encryption and mirroring, both features must be configured in the same Butane config. 
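Tip For the Tang condition to be satisfied, nodes must be able to reach the Tang servers named in the Butane config at boot time. As a quick reachability check from a machine on the same network, you can fetch the server advertisement; Tang normally serves it over plain HTTP at the /adv path. The host and port below match the earlier example and are placeholders for your own server:
USD curl http://tang.example.com:7500/adv
A successful request returns a JSON advertisement of the server's signing keys. The thumbprint used in the Butane config is still generated with the clevis-encrypt-tang command shown earlier.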
Create a control plane or compute node manifest from the corresponding Butane config and save it to the <installation_directory>/openshift directory. For example, to create a manifest for the compute nodes, run the following command: USD butane USDHOME/clusterconfig/worker-storage.bu -o <installation_directory>/openshift/99-worker-storage.yaml Repeat this step for each node type that requires disk encryption or mirroring. Save the Butane configs in case you need to update the manifests in the future. Continue with the remainder of the OpenShift Container Platform installation. Tip You can monitor the console log on the RHCOS nodes during installation for error messages relating to disk encryption or mirroring. Important If you configure additional data partitions, they will not be encrypted unless encryption is explicitly requested. Verification After installing OpenShift Container Platform, you can verify if boot disk encryption or mirroring is enabled on the cluster nodes. From the installation host, access a cluster node by using a debug pod: Start a debug pod for the node, for example: USD oc debug node/compute-1 Set /host as the root directory within the debug shell. The debug pod mounts the root file system of the node in /host within the pod. By changing the root directory to /host , you can run binaries contained in the executable paths on the node: # chroot /host Note OpenShift Container Platform cluster nodes running Red Hat Enterprise Linux CoreOS (RHCOS) are immutable and rely on Operators to apply cluster changes. Accessing cluster nodes using SSH is not recommended. However, if the OpenShift Container Platform API is not available, or kubelet is not properly functioning on the target node, oc operations will be impacted. In such situations, it is possible to access nodes using ssh core@<node>.<cluster_name>.<base_domain> instead. If you configured boot disk encryption, verify if it is enabled: From the debug shell, review the status of the root mapping on the node: # cryptsetup status root Example output /dev/mapper/root is active and is in use. type: LUKS2 1 cipher: aes-xts-plain64 2 keysize: 512 bits key location: keyring device: /dev/sda4 3 sector size: 512 offset: 32768 sectors size: 15683456 sectors mode: read/write 1 The encryption format. When the TPM v2 or Tang encryption modes are enabled, the RHCOS boot disks are encrypted using the LUKS2 format. 2 The encryption algorithm used to encrypt the LUKS2 volume. 3 The device that contains the encrypted LUKS2 volume. If mirroring is enabled, the value will represent a software mirror device, for example /dev/md126 . List the Clevis plugins that are bound to the encrypted device: # clevis luks list -d /dev/sda4 1 1 Specify the device that is listed in the device field in the output of the preceding step. Example output 1: sss '{"t":1,"pins":{"tang":[{"url":"http://tang.example.com:7500"}]}}' 1 1 In the example output, the Tang plugin is used by the Shamir's Secret Sharing (SSS) Clevis plugin for the /dev/sda4 device. If you configured mirroring, verify if it is enabled: From the debug shell, list the software RAID devices on the node: # cat /proc/mdstat Example output Personalities : [raid1] md126 : active raid1 sdb3[1] sda3[0] 1 393152 blocks super 1.0 [2/2] [UU] md127 : active raid1 sda4[0] sdb4[1] 2 51869632 blocks super 1.2 [2/2] [UU] unused devices: <none> 1 The /dev/md126 software RAID mirror device uses the /dev/sda3 and /dev/sdb3 disk devices on the cluster node. 
2 The /dev/md127 software RAID mirror device uses the /dev/sda4 and /dev/sdb4 disk devices on the cluster node. Review the details of each of the software RAID devices listed in the output of the preceding command. The following example lists the details of the /dev/md126 device: # mdadm --detail /dev/md126 Example output /dev/md126: Version : 1.0 Creation Time : Wed Jul 7 11:07:36 2021 Raid Level : raid1 1 Array Size : 393152 (383.94 MiB 402.59 MB) Used Dev Size : 393152 (383.94 MiB 402.59 MB) Raid Devices : 2 Total Devices : 2 Persistence : Superblock is persistent Update Time : Wed Jul 7 11:18:24 2021 State : clean 2 Active Devices : 2 3 Working Devices : 2 4 Failed Devices : 0 5 Spare Devices : 0 Consistency Policy : resync Name : any:md-boot 6 UUID : ccfa3801:c520e0b5:2bee2755:69043055 Events : 19 Number Major Minor RaidDevice State 0 252 3 0 active sync /dev/sda3 7 1 252 19 1 active sync /dev/sdb3 8 1 Specifies the RAID level of the device. raid1 indicates RAID 1 disk mirroring. 2 Specifies the state of the RAID device. 3 4 States the number of underlying disk devices that are active and working. 5 States the number of underlying disk devices that are in a failed state. 6 The name of the software RAID device. 7 8 Provides information about the underlying disk devices used by the software RAID device. List the file systems mounted on the software RAID devices: # mount | grep /dev/md Example output /dev/md127 on / type xfs (rw,relatime,seclabel,attr2,inode64,logbufs=8,logbsize=32k,prjquota) /dev/md127 on /etc type xfs (rw,relatime,seclabel,attr2,inode64,logbufs=8,logbsize=32k,prjquota) /dev/md127 on /usr type xfs (ro,relatime,seclabel,attr2,inode64,logbufs=8,logbsize=32k,prjquota) /dev/md127 on /sysroot type xfs (ro,relatime,seclabel,attr2,inode64,logbufs=8,logbsize=32k,prjquota) /dev/md127 on /var type xfs (rw,relatime,seclabel,attr2,inode64,logbufs=8,logbsize=32k,prjquota) /dev/md127 on /var/lib/containers/storage/overlay type xfs (rw,relatime,seclabel,attr2,inode64,logbufs=8,logbsize=32k,prjquota) /dev/md127 on /var/lib/kubelet/pods/e5054ed5-f882-4d14-b599-99c050d4e0c0/volume-subpaths/etc/tuned/1 type xfs (rw,relatime,seclabel,attr2,inode64,logbufs=8,logbsize=32k,prjquota) /dev/md127 on /var/lib/kubelet/pods/e5054ed5-f882-4d14-b599-99c050d4e0c0/volume-subpaths/etc/tuned/2 type xfs (rw,relatime,seclabel,attr2,inode64,logbufs=8,logbsize=32k,prjquota) /dev/md127 on /var/lib/kubelet/pods/e5054ed5-f882-4d14-b599-99c050d4e0c0/volume-subpaths/etc/tuned/3 type xfs (rw,relatime,seclabel,attr2,inode64,logbufs=8,logbsize=32k,prjquota) /dev/md127 on /var/lib/kubelet/pods/e5054ed5-f882-4d14-b599-99c050d4e0c0/volume-subpaths/etc/tuned/4 type xfs (rw,relatime,seclabel,attr2,inode64,logbufs=8,logbsize=32k,prjquota) /dev/md127 on /var/lib/kubelet/pods/e5054ed5-f882-4d14-b599-99c050d4e0c0/volume-subpaths/etc/tuned/5 type xfs (rw,relatime,seclabel,attr2,inode64,logbufs=8,logbsize=32k,prjquota) /dev/md126 on /boot type ext4 (rw,relatime,seclabel) In the example output, the /boot file system is mounted on the /dev/md126 software RAID device and the root file system is mounted on /dev/md127 . Repeat the verification steps for each OpenShift Container Platform node type. Additional resources For more information about the TPM v2 and Tang encryption modes, see Configuring automated unlocking of encrypted volumes using policy-based decryption . 1.4.4. Configuring a RAID-enabled data volume You can enable software RAID partitioning to provide an external data volume. 
OpenShift Container Platform supports RAID 0, RAID 1, RAID 4, RAID 5, RAID 6, and RAID 10 for data protection and fault tolerance. See "About disk mirroring" for more details. Prerequisites You have downloaded the OpenShift Container Platform installation program on your installation node. You have installed Butane on your installation node. Note Butane is a command-line utility that OpenShift Container Platform uses to provide convenient, short-hand syntax for writing machine configs, as well as for performing additional validation of machine configs. For more information, see the Creating machine configs with Butane section. Procedure Create a Butane config that configures a data volume by using software RAID. To configure a data volume with RAID 1 on the same disks that are used for a mirrored boot disk, create a USDHOME/clusterconfig/raid1-storage.bu file, for example: RAID 1 on mirrored boot disk variant: openshift version: 4.13.0 metadata: name: raid1-storage labels: machineconfiguration.openshift.io/role: worker boot_device: mirror: devices: - /dev/disk/by-id/scsi-3600508b400105e210000900000490000 - /dev/disk/by-id/scsi-SSEAGATE_ST373453LW_3HW1RHM6 storage: disks: - device: /dev/disk/by-id/scsi-3600508b400105e210000900000490000 partitions: - label: root-1 size_mib: 25000 1 - label: var-1 - device: /dev/disk/by-id/scsi-SSEAGATE_ST373453LW_3HW1RHM6 partitions: - label: root-2 size_mib: 25000 2 - label: var-2 raid: - name: md-var level: raid1 devices: - /dev/disk/by-partlabel/var-1 - /dev/disk/by-partlabel/var-2 filesystems: - device: /dev/md/md-var path: /var format: xfs wipe_filesystem: true with_mount_unit: true 1 2 When adding a data partition to the boot disk, a minimum value of 25000 mebibytes is recommended. If no value is specified, or if the specified value is smaller than the recommended minimum, the resulting root file system will be too small, and future reinstalls of RHCOS might overwrite the beginning of the data partition. To configure a data volume with RAID 1 on secondary disks, create a USDHOME/clusterconfig/raid1-alt-storage.bu file, for example: RAID 1 on secondary disks variant: openshift version: 4.13.0 metadata: name: raid1-alt-storage labels: machineconfiguration.openshift.io/role: worker storage: disks: - device: /dev/sdc wipe_table: true partitions: - label: data-1 - device: /dev/sdd wipe_table: true partitions: - label: data-2 raid: - name: md-var-lib-containers level: raid1 devices: - /dev/disk/by-partlabel/data-1 - /dev/disk/by-partlabel/data-2 filesystems: - device: /dev/md/md-var-lib-containers path: /var/lib/containers format: xfs wipe_filesystem: true with_mount_unit: true Create a RAID manifest from the Butane config you created in the step and save it to the <installation_directory>/openshift directory. For example, to create a manifest for the compute nodes, run the following command: USD butane USDHOME/clusterconfig/<butane_config>.bu -o <installation_directory>/openshift/<manifest_name>.yaml 1 1 Replace <butane_config> and <manifest_name> with the file names from the step. For example, raid1-alt-storage.bu and raid1-alt-storage.yaml for secondary disks. Save the Butane config in case you need to update the manifest in the future. Continue with the remainder of the OpenShift Container Platform installation. 1.5. Configuring chrony time service You can set the time server and related settings used by the chrony time service ( chronyd ) by modifying the contents of the chrony.conf file and passing those contents to your nodes as a machine config. 
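Note After you apply the machine config described in the following procedure, you can spot-check time synchronization from a debug shell on a node. This is a sketch that assumes the standard chrony client tools are available on the node, which is normally the case on RHCOS; <node_name> is a placeholder:
USD oc debug node/<node_name>
# chroot /host
# chronyc sources
# chronyc tracking
chronyc sources lists the configured time sources, and chronyc tracking reports the current offset from the selected source.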
Procedure Create a Butane config including the contents of the chrony.conf file. For example, to configure chrony on worker nodes, create a 99-worker-chrony.bu file. Note See "Creating machine configs with Butane" for information about Butane. variant: openshift version: 4.13.0 metadata: name: 99-worker-chrony 1 labels: machineconfiguration.openshift.io/role: worker 2 storage: files: - path: /etc/chrony.conf mode: 0644 3 overwrite: true contents: inline: | pool 0.rhel.pool.ntp.org iburst 4 driftfile /var/lib/chrony/drift makestep 1.0 3 rtcsync logdir /var/log/chrony 1 2 On control plane nodes, substitute master for worker in both of these locations. 3 Specify an octal value mode for the mode field in the machine config file. After creating the file and applying the changes, the mode is converted to a decimal value. You can check the YAML file with the command oc get mc <mc-name> -o yaml . 4 Specify any valid, reachable time source, such as the one provided by your DHCP server. Alternately, you can specify any of the following NTP servers: 1.rhel.pool.ntp.org , 2.rhel.pool.ntp.org , or 3.rhel.pool.ntp.org . Use Butane to generate a MachineConfig object file, 99-worker-chrony.yaml , containing the configuration to be delivered to the nodes: USD butane 99-worker-chrony.bu -o 99-worker-chrony.yaml Apply the configurations in one of two ways: If the cluster is not running yet, after you generate manifest files, add the MachineConfig object file to the <installation_directory>/openshift directory, and then continue to create the cluster. If the cluster is already running, apply the file: USD oc apply -f ./99-worker-chrony.yaml 1.6. Additional resources For information on Butane, see Creating machine configs with Butane . | [
"curl https://mirror.openshift.com/pub/openshift-v4/clients/butane/latest/butane --output butane",
"curl https://mirror.openshift.com/pub/openshift-v4/clients/butane/latest/butane-aarch64 --output butane",
"chmod +x butane",
"echo USDPATH",
"butane <butane_file>",
"variant: openshift version: 4.13.0 metadata: name: 99-worker-custom labels: machineconfiguration.openshift.io/role: worker openshift: kernel_arguments: - loglevel=7 storage: files: - path: /etc/chrony.conf mode: 0644 overwrite: true contents: inline: | pool 0.rhel.pool.ntp.org iburst driftfile /var/lib/chrony/drift makestep 1.0 3 rtcsync logdir /var/log/chrony",
"butane 99-worker-custom.bu -o ./99-worker-custom.yaml",
"oc create -f 99-worker-custom.yaml",
"./openshift-install create manifests --dir <installation_directory>",
"cat << EOF > 99-openshift-machineconfig-master-kargs.yaml apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: master name: 99-openshift-machineconfig-master-kargs spec: kernelArguments: - loglevel=7 EOF",
"subscription-manager register",
"subscription-manager attach --auto",
"yum install podman make git -y",
"mkdir kmods; cd kmods",
"git clone https://github.com/kmods-via-containers/kmods-via-containers",
"cd kmods-via-containers/",
"sudo make install",
"sudo systemctl daemon-reload",
"cd .. ; git clone https://github.com/kmods-via-containers/kvc-simple-kmod",
"cd kvc-simple-kmod",
"cat simple-kmod.conf",
"KMOD_CONTAINER_BUILD_CONTEXT=\"https://github.com/kmods-via-containers/kvc-simple-kmod.git\" KMOD_CONTAINER_BUILD_FILE=Dockerfile.rhel KMOD_SOFTWARE_VERSION=dd1a7d4 KMOD_NAMES=\"simple-kmod simple-procfs-kmod\"",
"sudo make install",
"sudo kmods-via-containers build simple-kmod USD(uname -r)",
"sudo systemctl enable [email protected] --now",
"sudo systemctl status [email protected]",
"● [email protected] - Kmods Via Containers - simple-kmod Loaded: loaded (/etc/systemd/system/[email protected]; enabled; vendor preset: disabled) Active: active (exited) since Sun 2020-01-12 23:49:49 EST; 5s ago",
"lsmod | grep simple_",
"simple_procfs_kmod 16384 0 simple_kmod 16384 0",
"dmesg | grep 'Hello world'",
"[ 6420.761332] Hello world from simple_kmod.",
"sudo cat /proc/simple-procfs-kmod",
"simple-procfs-kmod number = 0",
"sudo spkut 44",
"KVC: wrapper simple-kmod for 4.18.0-147.3.1.el8_1.x86_64 Running userspace wrapper using the kernel module container + podman run -i --rm --privileged simple-kmod-dd1a7d4:4.18.0-147.3.1.el8_1.x86_64 spkut 44 simple-procfs-kmod number = 0 simple-procfs-kmod number = 44",
"subscription-manager register",
"subscription-manager attach --auto",
"yum install podman make git -y",
"mkdir kmods; cd kmods",
"git clone https://github.com/kmods-via-containers/kmods-via-containers",
"git clone https://github.com/kmods-via-containers/kvc-simple-kmod",
"FAKEROOT=USD(mktemp -d)",
"cd kmods-via-containers",
"make install DESTDIR=USD{FAKEROOT}/usr/local CONFDIR=USD{FAKEROOT}/etc/",
"cd ../kvc-simple-kmod",
"make install DESTDIR=USD{FAKEROOT}/usr/local CONFDIR=USD{FAKEROOT}/etc/",
"cd .. && rm -rf kmod-tree && cp -Lpr USD{FAKEROOT} kmod-tree",
"variant: openshift version: 4.13.0 metadata: name: 99-simple-kmod labels: machineconfiguration.openshift.io/role: worker 1 storage: trees: - local: kmod-tree systemd: units: - name: [email protected] enabled: true",
"butane 99-simple-kmod.bu --files-dir . -o 99-simple-kmod.yaml",
"oc create -f 99-simple-kmod.yaml",
"lsmod | grep simple_",
"simple_procfs_kmod 16384 0 simple_kmod 16384 0",
"variant: openshift version: 4.13.0 metadata: name: worker-storage labels: machineconfiguration.openshift.io/role: worker boot_device: layout: x86_64 1 luks: tpm2: true 2 tang: 3 - url: http://tang1.example.com:7500 thumbprint: jwGN5tRFK-kF6pIX89ssF3khxxX - url: http://tang2.example.com:7500 thumbprint: VCJsvZFjBSIHSldw78rOrq7h2ZF threshold: 2 4 openshift: fips: true 5",
"sudo yum install clevis",
"clevis-encrypt-tang '{\"url\":\"http://tang.example.com:7500\"}' < /dev/null > /dev/null 1",
"The advertisement contains the following signing keys: PLjNyRdGw03zlRoGjQYMahSZGu9 1",
"./openshift-install create manifests --dir <installation_directory> 1",
"variant: openshift version: 4.13.0 metadata: name: worker-storage 1 labels: machineconfiguration.openshift.io/role: worker 2 boot_device: layout: x86_64 3 luks: 4 tpm2: true 5 tang: 6 - url: http://tang.example.com:7500 7 thumbprint: PLjNyRdGw03zlRoGjQYMahSZGu9 8 threshold: 1 9 mirror: 10 devices: 11 - /dev/sda - /dev/sdb openshift: fips: true 12",
"butane USDHOME/clusterconfig/worker-storage.bu -o <installation_directory>/openshift/99-worker-storage.yaml",
"oc debug node/compute-1",
"chroot /host",
"cryptsetup status root",
"/dev/mapper/root is active and is in use. type: LUKS2 1 cipher: aes-xts-plain64 2 keysize: 512 bits key location: keyring device: /dev/sda4 3 sector size: 512 offset: 32768 sectors size: 15683456 sectors mode: read/write",
"clevis luks list -d /dev/sda4 1",
"1: sss '{\"t\":1,\"pins\":{\"tang\":[{\"url\":\"http://tang.example.com:7500\"}]}}' 1",
"cat /proc/mdstat",
"Personalities : [raid1] md126 : active raid1 sdb3[1] sda3[0] 1 393152 blocks super 1.0 [2/2] [UU] md127 : active raid1 sda4[0] sdb4[1] 2 51869632 blocks super 1.2 [2/2] [UU] unused devices: <none>",
"mdadm --detail /dev/md126",
"/dev/md126: Version : 1.0 Creation Time : Wed Jul 7 11:07:36 2021 Raid Level : raid1 1 Array Size : 393152 (383.94 MiB 402.59 MB) Used Dev Size : 393152 (383.94 MiB 402.59 MB) Raid Devices : 2 Total Devices : 2 Persistence : Superblock is persistent Update Time : Wed Jul 7 11:18:24 2021 State : clean 2 Active Devices : 2 3 Working Devices : 2 4 Failed Devices : 0 5 Spare Devices : 0 Consistency Policy : resync Name : any:md-boot 6 UUID : ccfa3801:c520e0b5:2bee2755:69043055 Events : 19 Number Major Minor RaidDevice State 0 252 3 0 active sync /dev/sda3 7 1 252 19 1 active sync /dev/sdb3 8",
"mount | grep /dev/md",
"/dev/md127 on / type xfs (rw,relatime,seclabel,attr2,inode64,logbufs=8,logbsize=32k,prjquota) /dev/md127 on /etc type xfs (rw,relatime,seclabel,attr2,inode64,logbufs=8,logbsize=32k,prjquota) /dev/md127 on /usr type xfs (ro,relatime,seclabel,attr2,inode64,logbufs=8,logbsize=32k,prjquota) /dev/md127 on /sysroot type xfs (ro,relatime,seclabel,attr2,inode64,logbufs=8,logbsize=32k,prjquota) /dev/md127 on /var type xfs (rw,relatime,seclabel,attr2,inode64,logbufs=8,logbsize=32k,prjquota) /dev/md127 on /var/lib/containers/storage/overlay type xfs (rw,relatime,seclabel,attr2,inode64,logbufs=8,logbsize=32k,prjquota) /dev/md127 on /var/lib/kubelet/pods/e5054ed5-f882-4d14-b599-99c050d4e0c0/volume-subpaths/etc/tuned/1 type xfs (rw,relatime,seclabel,attr2,inode64,logbufs=8,logbsize=32k,prjquota) /dev/md127 on /var/lib/kubelet/pods/e5054ed5-f882-4d14-b599-99c050d4e0c0/volume-subpaths/etc/tuned/2 type xfs (rw,relatime,seclabel,attr2,inode64,logbufs=8,logbsize=32k,prjquota) /dev/md127 on /var/lib/kubelet/pods/e5054ed5-f882-4d14-b599-99c050d4e0c0/volume-subpaths/etc/tuned/3 type xfs (rw,relatime,seclabel,attr2,inode64,logbufs=8,logbsize=32k,prjquota) /dev/md127 on /var/lib/kubelet/pods/e5054ed5-f882-4d14-b599-99c050d4e0c0/volume-subpaths/etc/tuned/4 type xfs (rw,relatime,seclabel,attr2,inode64,logbufs=8,logbsize=32k,prjquota) /dev/md127 on /var/lib/kubelet/pods/e5054ed5-f882-4d14-b599-99c050d4e0c0/volume-subpaths/etc/tuned/5 type xfs (rw,relatime,seclabel,attr2,inode64,logbufs=8,logbsize=32k,prjquota) /dev/md126 on /boot type ext4 (rw,relatime,seclabel)",
"variant: openshift version: 4.13.0 metadata: name: raid1-storage labels: machineconfiguration.openshift.io/role: worker boot_device: mirror: devices: - /dev/disk/by-id/scsi-3600508b400105e210000900000490000 - /dev/disk/by-id/scsi-SSEAGATE_ST373453LW_3HW1RHM6 storage: disks: - device: /dev/disk/by-id/scsi-3600508b400105e210000900000490000 partitions: - label: root-1 size_mib: 25000 1 - label: var-1 - device: /dev/disk/by-id/scsi-SSEAGATE_ST373453LW_3HW1RHM6 partitions: - label: root-2 size_mib: 25000 2 - label: var-2 raid: - name: md-var level: raid1 devices: - /dev/disk/by-partlabel/var-1 - /dev/disk/by-partlabel/var-2 filesystems: - device: /dev/md/md-var path: /var format: xfs wipe_filesystem: true with_mount_unit: true",
"variant: openshift version: 4.13.0 metadata: name: raid1-alt-storage labels: machineconfiguration.openshift.io/role: worker storage: disks: - device: /dev/sdc wipe_table: true partitions: - label: data-1 - device: /dev/sdd wipe_table: true partitions: - label: data-2 raid: - name: md-var-lib-containers level: raid1 devices: - /dev/disk/by-partlabel/data-1 - /dev/disk/by-partlabel/data-2 filesystems: - device: /dev/md/md-var-lib-containers path: /var/lib/containers format: xfs wipe_filesystem: true with_mount_unit: true",
"butane USDHOME/clusterconfig/<butane_config>.bu -o <installation_directory>/openshift/<manifest_name>.yaml 1",
"variant: openshift version: 4.13.0 metadata: name: 99-worker-chrony 1 labels: machineconfiguration.openshift.io/role: worker 2 storage: files: - path: /etc/chrony.conf mode: 0644 3 overwrite: true contents: inline: | pool 0.rhel.pool.ntp.org iburst 4 driftfile /var/lib/chrony/drift makestep 1.0 3 rtcsync logdir /var/log/chrony",
"butane 99-worker-chrony.bu -o 99-worker-chrony.yaml",
"oc apply -f ./99-worker-chrony.yaml"
]
| https://docs.redhat.com/en/documentation/openshift_container_platform/4.13/html/installation_configuration/installing-customizing |
probe::nfs.fop.write | probe::nfs.fop.write Name probe::nfs.fop.write - NFS client write operation Synopsis nfs.fop.write Values devname block device name Description SystemTap uses the vfs.do_sync_write probe to implement this probe, so it can also capture write operations that are not NFS client writes. | null | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/systemtap_tapset_reference/api-nfs-fop-write
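A minimal SystemTap sketch that uses this probe to report the device each observed write lands on; the output format is illustrative, and the script must be run with the usual SystemTap privileges:
# stap -e 'probe nfs.fop.write { printf("write on %s\n", devname) }'
Because the probe is implemented on top of vfs.do_sync_write, some of the reported writes can come from file systems other than NFS.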
Chapter 5. LimitRange [v1] | Chapter 5. LimitRange [v1] Description LimitRange sets resource usage limits for each kind of resource in a Namespace. Type object 5.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec object LimitRangeSpec defines a min/max usage limit for resources that match on kind. 5.1.1. .spec Description LimitRangeSpec defines a min/max usage limit for resources that match on kind. Type object Required limits Property Type Description limits array Limits is the list of LimitRangeItem objects that are enforced. limits[] object LimitRangeItem defines a min/max usage limit for any resource that matches on kind. 5.1.2. .spec.limits Description Limits is the list of LimitRangeItem objects that are enforced. Type array 5.1.3. .spec.limits[] Description LimitRangeItem defines a min/max usage limit for any resource that matches on kind. Type object Required type Property Type Description default object (Quantity) Default resource requirement limit value by resource name if resource limit is omitted. defaultRequest object (Quantity) DefaultRequest is the default resource requirement request value by resource name if resource request is omitted. max object (Quantity) Max usage constraints on this kind by resource name. maxLimitRequestRatio object (Quantity) MaxLimitRequestRatio if specified, the named resource must have a request and limit that are both non-zero where limit divided by request is less than or equal to the enumerated value; this represents the max burst for the named resource. min object (Quantity) Min usage constraints on this kind by resource name. type string Type of resource that this limit applies to. 5.2. API endpoints The following API endpoints are available: /api/v1/limitranges GET : list or watch objects of kind LimitRange /api/v1/watch/limitranges GET : watch individual changes to a list of LimitRange. deprecated: use the 'watch' parameter with a list operation instead. /api/v1/namespaces/{namespace}/limitranges DELETE : delete collection of LimitRange GET : list or watch objects of kind LimitRange POST : create a LimitRange /api/v1/watch/namespaces/{namespace}/limitranges GET : watch individual changes to a list of LimitRange. deprecated: use the 'watch' parameter with a list operation instead. /api/v1/namespaces/{namespace}/limitranges/{name} DELETE : delete a LimitRange GET : read the specified LimitRange PATCH : partially update the specified LimitRange PUT : replace the specified LimitRange /api/v1/watch/namespaces/{namespace}/limitranges/{name} GET : watch changes to an object of kind LimitRange. deprecated: use the 'watch' parameter with a list operation instead, filtered to a single item with the 'fieldSelector' parameter. 5.2.1. /api/v1/limitranges Table 5.1. 
Global query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. pretty string If 'true', then the output is pretty printed. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. 
Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. HTTP method GET Description list or watch objects of kind LimitRange Table 5.2. HTTP responses HTTP code Reponse body 200 - OK LimitRangeList schema 401 - Unauthorized Empty 5.2.2. /api/v1/watch/limitranges Table 5.3. Global query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. 
The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. pretty string If 'true', then the output is pretty printed. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. HTTP method GET Description watch individual changes to a list of LimitRange. deprecated: use the 'watch' parameter with a list operation instead. Table 5.4. HTTP responses HTTP code Reponse body 200 - OK WatchEvent schema 401 - Unauthorized Empty 5.2.3. /api/v1/namespaces/{namespace}/limitranges Table 5.5. Global path parameters Parameter Type Description namespace string object name and auth scope, such as for teams and projects Table 5.6. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method DELETE Description delete collection of LimitRange Table 5.7. Query parameters Parameter Type Description continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. dryRun string When present, indicates that modifications should not be persisted. 
An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. gracePeriodSeconds integer The duration in seconds before the object should be deleted. Value must be non-negative integer. The value zero indicates delete immediately. If this value is nil, the default grace period for the specified type will be used. Defaults to a per object value if not specified. zero means delete immediately. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. orphanDependents boolean Deprecated: please use the PropagationPolicy, this field will be deprecated in 1.7. Should the dependent objects be orphaned. If true/false, the "orphan" finalizer will be added to/removed from the object's finalizers list. Either this field or PropagationPolicy may be set, but not both. propagationPolicy string Whether and how garbage collection will be performed. Either this field or OrphanDependents may be set, but not both. The default policy is decided by the existing finalizer set in the metadata.finalizers and the resource-specific default policy. Acceptable values are: 'Orphan' - orphan the dependents; 'Background' - allow the garbage collector to delete the dependents in the background; 'Foreground' - a cascading policy that deletes all dependents in the foreground. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. 
This limits the duration of the call, regardless of any activity or inactivity. Table 5.8. Body parameters Parameter Type Description body DeleteOptions schema Table 5.9. HTTP responses HTTP code Reponse body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list or watch objects of kind LimitRange Table 5.10. Query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. 
If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. Table 5.11. HTTP responses HTTP code Reponse body 200 - OK LimitRangeList schema 401 - Unauthorized Empty HTTP method POST Description create a LimitRange Table 5.12. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 5.13. Body parameters Parameter Type Description body LimitRange schema Table 5.14. HTTP responses HTTP code Reponse body 200 - OK LimitRange schema 201 - Created LimitRange schema 202 - Accepted LimitRange schema 401 - Unauthorized Empty 5.2.4. /api/v1/watch/namespaces/{namespace}/limitranges Table 5.15. Global path parameters Parameter Type Description namespace string object name and auth scope, such as for teams and projects Table 5.16. Global query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". 
Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. pretty string If 'true', then the output is pretty printed. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. 
It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. HTTP method GET Description watch individual changes to a list of LimitRange. deprecated: use the 'watch' parameter with a list operation instead. Table 5.17. HTTP responses HTTP code Reponse body 200 - OK WatchEvent schema 401 - Unauthorized Empty 5.2.5. /api/v1/namespaces/{namespace}/limitranges/{name} Table 5.18. Global path parameters Parameter Type Description name string name of the LimitRange namespace string object name and auth scope, such as for teams and projects Table 5.19. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method DELETE Description delete a LimitRange Table 5.20. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed gracePeriodSeconds integer The duration in seconds before the object should be deleted. Value must be non-negative integer. The value zero indicates delete immediately. If this value is nil, the default grace period for the specified type will be used. Defaults to a per object value if not specified. zero means delete immediately. orphanDependents boolean Deprecated: please use the PropagationPolicy, this field will be deprecated in 1.7. Should the dependent objects be orphaned. If true/false, the "orphan" finalizer will be added to/removed from the object's finalizers list. Either this field or PropagationPolicy may be set, but not both. propagationPolicy string Whether and how garbage collection will be performed. Either this field or OrphanDependents may be set, but not both. The default policy is decided by the existing finalizer set in the metadata.finalizers and the resource-specific default policy. Acceptable values are: 'Orphan' - orphan the dependents; 'Background' - allow the garbage collector to delete the dependents in the background; 'Foreground' - a cascading policy that deletes all dependents in the foreground. Table 5.21. Body parameters Parameter Type Description body DeleteOptions schema Table 5.22. HTTP responses HTTP code Reponse body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified LimitRange Table 5.23. HTTP responses HTTP code Reponse body 200 - OK LimitRange schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified LimitRange Table 5.24. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. 
The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . This field is required for apply requests (application/apply-patch) but optional for non-apply patch types (JsonPatch, MergePatch, StrategicMergePatch). fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. force boolean Force is going to "force" Apply requests. It means user will re-acquire conflicting fields owned by other people. Force flag must be unset for non-apply patch requests. Table 5.25. Body parameters Parameter Type Description body Patch schema Table 5.26. HTTP responses HTTP code Reponse body 200 - OK LimitRange schema 201 - Created LimitRange schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified LimitRange Table 5.27. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. 
- Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 5.28. Body parameters Parameter Type Description body LimitRange schema Table 5.29. HTTP responses HTTP code Reponse body 200 - OK LimitRange schema 201 - Created LimitRange schema 401 - Unauthorized Empty 5.2.6. /api/v1/watch/namespaces/{namespace}/limitranges/{name} Table 5.30. Global path parameters Parameter Type Description name string name of the LimitRange namespace string object name and auth scope, such as for teams and projects Table 5.31. Global query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. 
The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. pretty string If 'true', then the output is pretty printed. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. HTTP method GET Description watch changes to an object of kind LimitRange. deprecated: use the 'watch' parameter with a list operation instead, filtered to a single item with the 'fieldSelector' parameter. Table 5.32. HTTP responses HTTP code Reponse body 200 - OK WatchEvent schema 401 - Unauthorized Empty | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.12/html/schedule_and_quota_apis/limitrange-v1 |
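For illustration, the list endpoint described in this chapter can be exercised with any HTTP client; the sketch below uses the JDK HTTP client. This is a minimal example, not part of the API reference: the API server URL, namespace, and bearer token are placeholders, and TLS trust configuration for the cluster certificate is omitted.

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class ListLimitRanges {
    public static void main(String[] args) throws Exception {
        // Placeholder values: replace with your API server, namespace, and token.
        String apiServer = "https://api.example.com:6443";
        String namespace = "my-project";
        String token = System.getenv("K8S_TOKEN");

        // GET /api/v1/namespaces/{namespace}/limitranges lists LimitRange objects;
        // the limit query parameter restricts the page size as described above.
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create(apiServer + "/api/v1/namespaces/" + namespace + "/limitranges?limit=50"))
                .header("Authorization", "Bearer " + token)
                .header("Accept", "application/json")
                .GET()
                .build();

        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());

        // 200 - OK returns a LimitRangeList; 401 - Unauthorized returns an empty body.
        System.out.println(response.statusCode());
        System.out.println(response.body());
    }
}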
Chapter 111. Salesforce | Chapter 111. Salesforce Both producer and consumer are supported This component supports producer and consumer endpoints to communicate with Salesforce using Java DTOs. There is a companion maven plugin Camel Salesforce Plugin that generates these DTOs (see further below). 111.1. Dependencies When using salesforce with Red Hat build of Camel Spring Boot make sure to use the following Maven dependency to have support for auto configuration: <dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-salesforce-starter</artifactId> </dependency> By default, camel-salesforce-maven-plugin uses TLSv1.3 to interact with salesforce. TLS version is configurable on the plugin. FIPS users can configure the property sslContextParameters.secureSocketProtocol. To use the maven-plugin you must add the following dependency to the pom.xml file. <plugin> <groupId>org.apache.camel.maven</groupId> <artifactId>camel-salesforce-maven-plugin</artifactId> <version>USD{camel-community.version}</version> <executions> <execution> <goals> <goal>generate</goal> </goals> <configuration> <clientId>USD{camelSalesforce.clientId}</clientId> <clientSecret>USD{camelSalesforce.clientSecret}</clientSecret> <userName>USD{camelSalesforce.userName}</userName> <password>USD{camelSalesforce.password}</password> <sslContextParameters> <secureSocketProtocol>TLSv1.2</secureSocketProtocol> </sslContextParameters> <includes> <include>Contact</include> </includes> </configuration> </execution> </executions> </plugin> Where camel-community.version refers to the corresponding Camel community version that you use when working with camel-salesforce-maven-plugin . For example, for Red Hat build of Camel Spring Boot version 4.4.0 you can use '4.4.0' version of Apache Camel. 111.2. Configuring Options Camel components are configured on two levels: Component level Endpoint level 111.2.1. Component Level Options The component level is the highest level. The configurations you define at this level are inherited by all the endpoints. For example, a component can have security settings, credentials for authentication, urls for network connection, and so on. Since components typically have pre-configured defaults for the most common cases, you may need to only configure a few component options, or maybe none at all. You can configure components with Component DSL in a configuration file (application.properties|yaml), or directly with Java code. 111.2.2. Endpoint Level Options At the Endpoint level you have many options, which you can use to configure what you want the endpoint to do. The options are categorized according to whether the endpoint is used as a consumer (from) or as a producer (to) or used for both. You can configure endpoints directly in the endpoint URI as path and query parameters. You can also use Endpoint DSL and DataFormat DSL as type safe ways of configuring endpoints and data formats in Java. When configuring options, use Property Placeholders for urls, port numbers, sensitive information, and other settings. Placeholders allows you to externalize the configuration from your code, giving you more flexible and reusable code. 111.3. Component Options The Salesforce component supports 90 options, which are listed below. Name Description Default Type apexMethod (common) APEX method name. String apexQueryParams (common) Query params for APEX method. Map apiVersion (common) Salesforce API version. 
53.0 String backoffIncrement (common) Backoff interval increment for Streaming connection restart attempts for failures beyond CometD auto-reconnect. 1000 long batchId (common) Bulk API Batch ID. String contentType (common) Bulk API content type, one of XML, CSV, ZIP_XML, ZIP_CSV. Enum values: XML CSV JSON ZIP_XML ZIP_CSV ZIP_JSON ContentType defaultReplayId (common) Default replayId setting if no value is found in initialReplayIdMap. -1 Long fallBackReplayId (common) ReplayId to fall back to after an Invalid Replay Id response. -1 Long format (common) Payload format to use for Salesforce API calls, either JSON or XML, defaults to JSON. As of Camel 3.12, this option only applies to the Raw operation. Enum values: JSON XML PayloadFormat httpClient (common) Custom Jetty Http Client to use to connect to Salesforce. SalesforceHttpClient httpClientConnectionTimeout (common) Connection timeout used by the HttpClient when connecting to the Salesforce server. 60000 long httpClientIdleTimeout (common) Timeout used by the HttpClient when waiting for response from the Salesforce server. 10000 long httpMaxContentLength (common) Max content length of an HTTP response. Integer httpRequestBufferSize (common) HTTP request buffer size. May need to be increased for large SOQL queries. 8192 Integer includeDetails (common) Include details in Salesforce1 Analytics report, defaults to false. Boolean initialReplayIdMap (common) Replay IDs to start from per channel name. Map instanceId (common) Salesforce1 Analytics report execution instance ID. String jobId (common) Bulk API Job ID. String limit (common) Limit on number of returned records. Applicable to some of the API, check the Salesforce documentation. Integer locator (common) Locator provided by salesforce Bulk 2.0 API for use in getting results for a Query job. String maxBackoff (common) Maximum backoff interval for Streaming connection restart attempts for failures beyond CometD auto-reconnect. 30000 long maxRecords (common) The maximum number of records to retrieve per set of results for a Bulk 2.0 Query. The request is still subject to the size limits. If you are working with a very large number of query results, you may experience a timeout before receiving all the data from Salesforce. To prevent a timeout, specify the maximum number of records your client is expecting to receive in the maxRecords parameter. This splits the results into smaller sets with this value as the maximum size. Integer notFoundBehaviour (common) Sets the behaviour of 404 not found status received from Salesforce API. Should the body be set to NULL NotFoundBehaviour#NULL or should a exception be signaled on the exchange NotFoundBehaviour#EXCEPTION - the default. Enum values: EXCEPTION NULL EXCEPTION NotFoundBehaviour notifyForFields (common) Notify for fields, options are ALL, REFERENCED, SELECT, WHERE. Enum values: ALL REFERENCED SELECT WHERE NotifyForFieldsEnum notifyForOperationCreate (common) Notify for create operation, defaults to false (API version = 29.0). Boolean notifyForOperationDelete (common) Notify for delete operation, defaults to false (API version = 29.0). Boolean notifyForOperations (common) Notify for operations, options are ALL, CREATE, EXTENDED, UPDATE (API version 29.0). Enum values: ALL CREATE EXTENDED UPDATE NotifyForOperationsEnum notifyForOperationUndelete (common) Notify for un-delete operation, defaults to false (API version = 29.0). Boolean notifyForOperationUpdate (common) Notify for update operation, defaults to false (API version = 29.0). 
Boolean objectMapper (common) Custom Jackson ObjectMapper to use when serializing/deserializing Salesforce objects. ObjectMapper packages (common) In what packages are the generated DTO classes. Typically the classes would be generated using camel-salesforce-maven-plugin. Set it if using the generated DTOs to gain the benefit of using short SObject names in parameters/header values. Multiple packages can be separated by comma. String pkChunking (common) Use PK Chunking. Only for use in original Bulk API. Bulk 2.0 API performs PK chunking automatically, if necessary. Boolean pkChunkingChunkSize (common) Chunk size for use with PK Chunking. If unspecified, salesforce default is 100,000. Maximum size is 250,000. Integer pkChunkingParent (common) Specifies the parent object when you're enabling PK chunking for queries on sharing objects. The chunks are based on the parent object's records rather than the sharing object's records. For example, when querying on AccountShare, specify Account as the parent object. PK chunking is supported for sharing objects as long as the parent object is supported. String pkChunkingStartRow (common) Specifies the 15-character or 18-character record ID to be used as the lower boundary for the first chunk. Use this parameter to specify a starting ID when restarting a job that failed between batches. String queryLocator (common) Query Locator provided by salesforce for use when a query results in more records than can be retrieved in a single call. Use this value in a subsequent call to retrieve additional records. String rawPayload (common) Use raw payload String for request and response (either JSON or XML depending on format), instead of DTOs, false by default. false boolean reportId (common) Salesforce1 Analytics report Id. String reportMetadata (common) Salesforce1 Analytics report metadata for filtering. ReportMetadata resultId (common) Bulk API Result ID. String sObjectBlobFieldName (common) SObject blob field name. String sObjectClass (common) Fully qualified SObject class name, usually generated using camel-salesforce-maven-plugin. String sObjectFields (common) SObject fields to retrieve. String sObjectId (common) SObject ID if required by API. String sObjectIdName (common) SObject external ID field name. String sObjectIdValue (common) SObject external ID field value. String sObjectName (common) SObject name if required or supported by API. String sObjectQuery (common) Salesforce SOQL query string. String sObjectSearch (common) Salesforce SOSL search string. String updateTopic (common) Whether to update an existing Push Topic when using the Streaming API, defaults to false. false boolean config (common (advanced)) Global endpoint configuration - use to set values that are common to all endpoints. SalesforceEndpointConfig httpClientProperties (common (advanced)) Used to set any properties that can be configured on the underlying HTTP client. Have a look at properties of SalesforceHttpClient and the Jetty HttpClient for all available options. Map longPollingTransportProperties (common (advanced)) Used to set any properties that can be configured on the LongPollingTransport used by the BayeuxClient (CometD) used by the streaming api. Map workerPoolMaxSize (common (advanced)) Maximum size of the thread pool used to handle HTTP responses. 20 int workerPoolSize (common (advanced)) Size of the thread pool used to handle HTTP responses. 
10 int bridgeErrorHandler (consumer) Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false boolean allOrNone (producer) Composite API option to indicate to rollback all records if any are not successful. false boolean apexUrl (producer) APEX method URL. String compositeMethod (producer) Composite (raw) method. String lazyStartProducer (producer) Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false boolean rawHttpHeaders (producer) Comma separated list of message headers to include as HTTP parameters for Raw operation. String rawMethod (producer) HTTP method to use for the Raw operation. String rawPath (producer) The portion of the endpoint URL after the domain name. E.g., '/services/data/v52.0/sobjects/Account/'. String rawQueryParameters (producer) Comma separated list of message headers to include as query parameters for Raw operation. Do not url-encode values as this will be done automatically. String autowiredEnabled (advanced) Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true boolean httpProxyExcludedAddresses (proxy) A list of addresses for which HTTP proxy server should not be used. Set httpProxyHost (proxy) Hostname of the HTTP proxy server to use. String httpProxyIncludedAddresses (proxy) A list of addresses for which HTTP proxy server should be used. Set httpProxyPort (proxy) Port number of the HTTP proxy server to use. Integer httpProxySocks4 (proxy) If set to true the configures the HTTP proxy to use as a SOCKS4 proxy. false boolean authenticationType (security) Explicit authentication method to be used, one of USERNAME_PASSWORD, REFRESH_TOKEN or JWT. Salesforce component can auto-determine the authentication method to use from the properties set, set this property to eliminate any ambiguity. Enum values: USERNAME_PASSWORD REFRESH_TOKEN JWT AuthenticationType clientId (security) Required OAuth Consumer Key of the connected app configured in the Salesforce instance setup. Typically a connected app needs to be configured but one can be provided by installing a package. String clientSecret (security) OAuth Consumer Secret of the connected app configured in the Salesforce instance setup. String httpProxyAuthUri (security) Used in authentication against the HTTP proxy server, needs to match the URI of the proxy server in order for the httpProxyUsername and httpProxyPassword to be used for authentication. 
String httpProxyPassword (security) Password to use to authenticate against the HTTP proxy server. String httpProxyRealm (security) Realm of the proxy server, used in preemptive Basic/Digest authentication methods against the HTTP proxy server. String httpProxySecure (security) If set to false disables the use of TLS when accessing the HTTP proxy. true boolean httpProxyUseDigestAuth (security) If set to true Digest authentication will be used when authenticating to the HTTP proxy, otherwise Basic authorization method will be used. false boolean httpProxyUsername (security) Username to use to authenticate against the HTTP proxy server. String instanceUrl (security) URL of the Salesforce instance used after authentication, by default received from Salesforce on successful authentication. String jwtAudience (security) Value to use for the Audience claim (aud) when using OAuth JWT flow. If not set, the login URL will be used, which is appropriate in most cases. String keystore (security) KeyStore parameters to use in OAuth JWT flow. The KeyStore should contain only one entry with private key and certificate. Salesforce does not verify the certificate chain, so this can easily be a selfsigned certificate. Make sure that you upload the certificate to the corresponding connected app. KeyStoreParameters lazyLogin (security) If set to true prevents the component from authenticating to Salesforce with the start of the component. You would generally set this to the (default) false and authenticate early and be immediately aware of any authentication issues. false boolean loginConfig (security) All authentication configuration in one nested bean, all properties set there can be set directly on the component as well. SalesforceLoginConfig loginUrl (security) Required URL of the Salesforce instance used for authentication, by default set to https://login.salesforce.com . https://login.salesforce.com String password (security) Password used in OAuth flow to gain access to access token. It's easy to get started with password OAuth flow, but in general one should avoid it as it is deemed less secure than other flows. Make sure that you append security token to the end of the password if using one. String refreshToken (security) Refresh token already obtained in the refresh token OAuth flow. One needs to setup a web application and configure a callback URL to receive the refresh token, or configure using the builtin callback at https://login.salesforce.com/services/oauth2/success or https://test.salesforce.com/services/oauth2/success and then retrive the refresh_token from the URL at the end of the flow. Note that in development organizations Salesforce allows hosting the callback web application at localhost. String sslContextParameters (security) SSL parameters to use, see SSLContextParameters class for all available options. SSLContextParameters useGlobalSslContextParameters (security) Enable usage of global SSL context parameters. false boolean userName (security) Username used in OAuth flow to gain access to access token. It's easy to get started with password OAuth flow, but in general one should avoid it as it is deemed less secure than other flows. String 111.4. Endpoint Options The Salesforce endpoint is configured using URI syntax: with the following path and query parameters: 111.4.1. Path Parameters (2 parameters) Name Description Default Type operationName (producer) The operation to use. 
Enum values: getVersions getResources getGlobalObjects getBasicInfo getDescription getSObject createSObject updateSObject deleteSObject getSObjectWithId upsertSObject deleteSObjectWithId getBlobField query queryMore queryAll search apexCall recent createJob getJob closeJob abortJob createBatch getBatch getAllBatches getRequest getResults createBatchQuery getQueryResultIds getQueryResult getRecentReports getReportDescription executeSyncReport executeAsyncReport getReportInstances getReportResults limits approval approvals composite-tree composite-batch composite compositeRetrieveSObjectCollections compositeCreateSObjectCollections compositeUpdateSObjectCollections compositeUpsertSObjectCollections compositeDeleteSObjectCollections bulk2GetAllJobs bulk2CreateJob bulk2GetJob bulk2CreateBatch bulk2CloseJob bulk2AbortJob bulk2DeleteJob bulk2GetSuccessfulResults bulk2GetFailedResults bulk2GetUnprocessedRecords bulk2CreateQueryJob bulk2GetQueryJob bulk2GetAllQueryJobs bulk2GetQueryJobResults bulk2AbortQueryJob bulk2DeleteQueryJob raw OperationName topicName (consumer) The name of the topic/channel to use. String 111.4.2. Query Parameters (57 parameters) Name Description Default Type apexMethod (common) APEX method name. String apexQueryParams (common) Query params for APEX method. Map apiVersion (common) Salesforce API version. 53.0 String backoffIncrement (common) Backoff interval increment for Streaming connection restart attempts for failures beyond CometD auto-reconnect. 1000 long batchId (common) Bulk API Batch ID. String contentType (common) Bulk API content type, one of XML, CSV, ZIP_XML, ZIP_CSV. Enum values: XML CSV JSON ZIP_XML ZIP_CSV ZIP_JSON ContentType defaultReplayId (common) Default replayId setting if no value is found in initialReplayIdMap. -1 Long fallBackReplayId (common) ReplayId to fall back to after an Invalid Replay Id response. -1 Long format (common) Payload format to use for Salesforce API calls, either JSON or XML, defaults to JSON. As of Camel 3.12, this option only applies to the Raw operation. Enum values: JSON XML PayloadFormat httpClient (common) Custom Jetty Http Client to use to connect to Salesforce. SalesforceHttpClient includeDetails (common) Include details in Salesforce1 Analytics report, defaults to false. Boolean initialReplayIdMap (common) Replay IDs to start from per channel name. Map instanceId (common) Salesforce1 Analytics report execution instance ID. String jobId (common) Bulk API Job ID. String limit (common) Limit on number of returned records. Applicable to some of the API, check the Salesforce documentation. Integer locator (common) Locator provided by salesforce Bulk 2.0 API for use in getting results for a Query job. String maxBackoff (common) Maximum backoff interval for Streaming connection restart attempts for failures beyond CometD auto-reconnect. 30000 long maxRecords (common) The maximum number of records to retrieve per set of results for a Bulk 2.0 Query. The request is still subject to the size limits. If you are working with a very large number of query results, you may experience a timeout before receiving all the data from Salesforce. To prevent a timeout, specify the maximum number of records your client is expecting to receive in the maxRecords parameter. This splits the results into smaller sets with this value as the maximum size. Integer notFoundBehaviour (common) Sets the behaviour of 404 not found status received from Salesforce API. 
Should the body be set to NULL NotFoundBehaviour#NULL or should a exception be signaled on the exchange NotFoundBehaviour#EXCEPTION - the default. Enum values: EXCEPTION NULL EXCEPTION NotFoundBehaviour notifyForFields (common) Notify for fields, options are ALL, REFERENCED, SELECT, WHERE. Enum values: ALL REFERENCED SELECT WHERE NotifyForFieldsEnum notifyForOperationCreate (common) Notify for create operation, defaults to false (API version = 29.0). Boolean notifyForOperationDelete (common) Notify for delete operation, defaults to false (API version = 29.0). Boolean notifyForOperations (common) Notify for operations, options are ALL, CREATE, EXTENDED, UPDATE (API version 29.0). Enum values: ALL CREATE EXTENDED UPDATE NotifyForOperationsEnum notifyForOperationUndelete (common) Notify for un-delete operation, defaults to false (API version = 29.0). Boolean notifyForOperationUpdate (common) Notify for update operation, defaults to false (API version = 29.0). Boolean objectMapper (common) Custom Jackson ObjectMapper to use when serializing/deserializing Salesforce objects. ObjectMapper pkChunking (common) Use PK Chunking. Only for use in original Bulk API. Bulk 2.0 API performs PK chunking automatically, if necessary. Boolean pkChunkingChunkSize (common) Chunk size for use with PK Chunking. If unspecified, salesforce default is 100,000. Maximum size is 250,000. Integer pkChunkingParent (common) Specifies the parent object when you're enabling PK chunking for queries on sharing objects. The chunks are based on the parent object's records rather than the sharing object's records. For example, when querying on AccountShare, specify Account as the parent object. PK chunking is supported for sharing objects as long as the parent object is supported. String pkChunkingStartRow (common) Specifies the 15-character or 18-character record ID to be used as the lower boundary for the first chunk. Use this parameter to specify a starting ID when restarting a job that failed between batches. String queryLocator (common) Query Locator provided by salesforce for use when a query results in more records than can be retrieved in a single call. Use this value in a subsequent call to retrieve additional records. String rawPayload (common) Use raw payload String for request and response (either JSON or XML depending on format), instead of DTOs, false by default. false boolean reportId (common) Salesforce1 Analytics report Id. String reportMetadata (common) Salesforce1 Analytics report metadata for filtering. ReportMetadata resultId (common) Bulk API Result ID. String sObjectBlobFieldName (common) SObject blob field name. String sObjectClass (common) Fully qualified SObject class name, usually generated using camel-salesforce-maven-plugin. String sObjectFields (common) SObject fields to retrieve. String sObjectId (common) SObject ID if required by API. String sObjectIdName (common) SObject external ID field name. String sObjectIdValue (common) SObject external ID field value. String sObjectName (common) SObject name if required or supported by API. String sObjectQuery (common) Salesforce SOQL query string. String sObjectSearch (common) Salesforce SOSL search string. String updateTopic (common) Whether to update an existing Push Topic when using the Streaming API, defaults to false. 
false boolean bridgeErrorHandler (consumer) Allows for bridging the consumer to the Camel routing Error Handler, which means any exceptions that occur while the consumer is trying to pick up incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false boolean replayId (consumer) The replayId value to use when subscribing. Long exceptionHandler (consumer (advanced)) To let the consumer use a custom ExceptionHandler. Notice if the option bridgeErrorHandler is enabled then this option is not in use. By default the consumer will deal with exceptions, that will be logged at WARN or ERROR level and ignored. ExceptionHandler exchangePattern (consumer (advanced)) Sets the exchange pattern when the consumer creates an exchange. Enum values: InOnly InOut InOptionalOut ExchangePattern allOrNone (producer) Composite API option to indicate to rollback all records if any are not successful. false boolean apexUrl (producer) APEX method URL. String compositeMethod (producer) Composite (raw) method. String lazyStartProducer (producer) Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false boolean rawHttpHeaders (producer) Comma separated list of message headers to include as HTTP parameters for Raw operation. String rawMethod (producer) HTTP method to use for the Raw operation. String rawPath (producer) The portion of the endpoint URL after the domain name. E.g., '/services/data/v52.0/sobjects/Account/'. String rawQueryParameters (producer) Comma separated list of message headers to include as query parameters for Raw operation. Do not url-encode values as this will be done automatically. String 111.5. Authenticating to Salesforce The component supports three OAuth authentication flows: OAuth 2.0 Username-Password Flow OAuth 2.0 Refresh Token Flow OAuth 2.0 JWT Bearer Token Flow For each of the flows a different set of properties needs to be set: Table 111.1. Properties to set for each authentication flow Property Where to find it on Salesforce Flow clientId Connected App, Consumer Key All flows clientSecret Connected App, Consumer Secret Username-Password, Refresh Token userName Salesforce user username Username-Password, JWT Bearer Token password Salesforce user password Username-Password refreshToken From OAuth flow callback Refresh Token keystore Connected App, Digital Certificate JWT Bearer Token The component auto-determines which flow you're trying to configure; to remove ambiguity, set the authenticationType property. Note Using Username-Password Flow in production is not encouraged. Note The certificate used in JWT Bearer Token Flow can be a self-signed certificate. The KeyStore holding the certificate and the private key must contain only a single certificate-private key entry.
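As an illustration of the Refresh Token flow, the following sketch configures the component programmatically using the clientId, clientSecret, refreshToken and loginUrl options listed above. The credential values are placeholders; Spring Boot users would typically set the corresponding camel.component.salesforce.* properties in application.properties instead.

import org.apache.camel.CamelContext;
import org.apache.camel.component.salesforce.SalesforceComponent;
import org.apache.camel.impl.DefaultCamelContext;

public class SalesforceAuthSetup {
    public static void main(String[] args) throws Exception {
        CamelContext context = new DefaultCamelContext();

        // Configure the OAuth 2.0 Refresh Token flow; the values below are placeholders.
        SalesforceComponent salesforce = new SalesforceComponent();
        salesforce.setLoginUrl("https://login.salesforce.com");
        salesforce.setClientId("<connected-app-consumer-key>");
        salesforce.setClientSecret("<connected-app-consumer-secret>");
        salesforce.setRefreshToken("<refresh-token-from-oauth-callback>");

        context.addComponent("salesforce", salesforce);
        context.start();
        // ... add and run routes that use the salesforce component ...
        context.stop();
    }
}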
111.6. URI format When used as a consumer, receiving streaming events, the URI scheme is:

salesforce:topic?options

When used as a producer, invoking the Salesforce REST APIs, the URI scheme is:

salesforce:operationName?options

111.7. Passing in Salesforce headers and fetching Salesforce response headers There is support to pass Salesforce headers via inbound message headers: header names that start with Sforce or x-sfdc on the Camel message will be passed on in the request, and response headers that start with Sforce will be present in the outbound message headers. For example, to fetch API limits you can specify:

// in your Camel route set the header before Salesforce endpoint
//...
  .setHeader("Sforce-Limit-Info", constant("api-usage"))
  .to("salesforce:getGlobalObjects")
  .to(myProcessor);

// myProcessor will receive `Sforce-Limit-Info` header on the outbound
// message
class MyProcessor implements Processor {
    public void process(Exchange exchange) throws Exception {
        Message in = exchange.getIn();
        String apiLimits = in.getHeader("Sforce-Limit-Info", String.class);
    }
}

In addition, HTTP response status code and text are available as headers Exchange.HTTP_RESPONSE_CODE and Exchange.HTTP_RESPONSE_TEXT. 111.8. Supported Salesforce APIs The component supports the following Salesforce APIs. Producer endpoints can use the following APIs. Most of the APIs process one record at a time; the Query API can retrieve multiple records. 111.8.1. Rest API You can use the following for operationName:
getVersions - Gets supported Salesforce REST API versions
getResources - Gets available Salesforce REST Resource endpoints
getGlobalObjects - Gets metadata for all available SObject types
getBasicInfo - Gets basic metadata for a specific SObject type
getDescription - Gets comprehensive metadata for a specific SObject type
getSObject - Gets an SObject using its Salesforce Id
createSObject - Creates an SObject
updateSObject - Updates an SObject using Id
deleteSObject - Deletes an SObject using Id
getSObjectWithId - Gets an SObject using an external (user defined) id field
upsertSObject - Updates or inserts an SObject using an external id
deleteSObjectWithId - Deletes an SObject using an external id
query - Runs a Salesforce SOQL query
queryMore - Retrieves more results (in case of large number of results) using result link returned from the 'query' API
search - Runs a Salesforce SOSL query
limits - fetching organization API usage limits
recent - fetching recent items
approval - submit a record or records (batch) for approval process
approvals - fetch a list of all approval processes
composite - submit up to 25 possibly related REST requests and receive individual responses. It's also possible to use "raw" composite without limitation.
composite-tree - create up to 200 records with parent-child relationships (up to 5 levels) in one go
composite-batch - submit a composition of requests in batch
compositeRetrieveSObjectCollections - Retrieve one or more records of the same object type.
compositeCreateSObjectCollections - Add up to 200 records, returning a list of SaveSObjectResult objects.
compositeUpdateSObjectCollections - Update up to 200 records, returning a list of SaveSObjectResult objects.
compositeUpsertSObjectCollections - Create or update (upsert) up to 200 records based on an external ID field. Returns a list of UpsertSObjectResult objects.
compositeDeleteSObjectCollections - Delete up to 200 records, returning a list of SaveSObjectResult objects.
queryAll - Runs a SOQL query. It returns the results that are deleted because of a merge (merges up to three records into one of the records, deletes the others, and reparents any related records) or delete. Also returns the information about archived Task and Event records.
getBlobField - Retrieves the specified blob field from an individual record.
apexCall - Executes a user defined APEX REST API call.
raw - Send requests to salesforce and have full, raw control over endpoint, parameters, body, etc.
For example, the following producer endpoint uses the upsertSObject API, with the sObjectIdName parameter specifying 'Name' as the external id field. The request message body should be an SObject DTO generated using the maven plugin. The response message will either be null if an existing record was updated, or CreateSObjectResult with an id of the new record, or a list of errors while creating the new object.

...to("salesforce:upsertSObject?sObjectIdName=Name")...
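Similarly, a route can run a SOQL query with the query operation. The sketch below is illustrative only: the timer trigger and the SOQL text are placeholders, and rawPayload=true returns the raw JSON response instead of DTOs.

import org.apache.camel.builder.RouteBuilder;

public class AccountQueryRoute extends RouteBuilder {
    @Override
    public void configure() {
        // Poll every minute and run a SOQL query; rawPayload=true returns the
        // raw JSON response body instead of generated DTO classes.
        from("timer:accounts?period=60000")
            .to("salesforce:query?sObjectQuery=SELECT Id, Name FROM Account&rawPayload=true")
            .log("Accounts: ${body}");
    }
}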
It returns the results that are deleted because of a merge (merges up to three records into one of the records, deletes the others, and reparents any related records) or delete. Also returns the information about archived Task and Event records. getBlobField - Retrieves the specified blob field from an individual record. apexCall - Executes a user defined APEX REST API call. raw - Send requests to salesforce and have full, raw control over endpoint, parameters, body, etc. For example, the following producer endpoint uses the upsertSObject API, with the sObjectIdName parameter specifying 'Name' as the external id field. The request message body should be an SObject DTO generated using the maven plugin. The response message will either be null if an existing record was updated, or CreateSObjectResult with an id of the new record, or a list of errors while creating the new object. ...to("salesforce:upsertSObject?sObjectIdName=Name")... 111.8.2. Bulk 2.0 API The Bulk 2.0 API has a simplified model over the original Bulk API. Use it to quickly load a large amount of data into salesforce, or query a large amount of data out of salesforce. Data must be provided in CSV format. The minimum API version for Bulk 2.0 is v41.0. The minimum API version for Bulk Queries is v47.0. DTO classes mentioned below are from the org.apache.camel.component.salesforce.api.dto.bulkv2 package. The following operations are supported: bulk2CreateJob - Create a bulk job. Supply an instance of Job in the message body. bulk2GetJob - Get an existing Job. jobId parameter is required. bulk2CreateBatch - Add a Batch of CSV records to a job. Supply CSV data in the message body. The first row must contain headers. jobId parameter is required. bulk2CloseJob - Close a job. You must close the job in order for it to be processed or aborted/deleted. jobId parameter is required. bulk2AbortJob - Abort a job. jobId parameter is required. bulk2DeleteJob - Delete a job. jobId parameter is required. bulk2GetSuccessfulResults - Get successful results for a job. Returned message body will contain an InputStream of CSV data. jobId parameter is required. bulk2GetFailedResults - Get failed results for a job. Returned message body will contain an InputStream of CSV data. jobId parameter is required. bulk2GetUnprocessedRecords - Get unprocessed records for a job. Returned message body will contain an InputStream of CSV data. jobId parameter is required. bulk2GetAllJobs - Get all jobs. Response body is an instance of Jobs . If the done property is false, there are additional pages to fetch, and the nextRecordsUrl property contains the value to be set in the queryLocator parameter on subsequent calls. bulk2CreateQueryJob - Create a bulk query job. Supply an instance of QueryJob in the message body. bulk2GetQueryJob - Get a bulk query job. jobId parameter is required. bulk2GetQueryJobResults - Get bulk query job results. jobId parameter is required. Accepts maxRecords and locator parameters. Response message headers include Sforce-NumberOfRecords and Sforce-Locator headers. The value of Sforce-Locator can be passed into subsequent calls via the locator parameter. bulk2AbortQueryJob - Abort a bulk query job. jobId parameter is required. bulk2DeleteQueryJob - Delete a bulk query job. jobId parameter is required. bulk2GetAllQueryJobs - Get all jobs. Response body is an instance of QueryJobs . 
If the done property is false, there are additional pages to fetch, and the nextRecordsUrl property contains the value to be set in the queryLocator parameter on subsequent calls. 111.8.3. Rest Bulk (original) API Producer endpoints can use the following APIs. All Job data formats, i.e. xml, csv, zip/xml, and zip/csv, are supported. The request and response have to be marshalled/unmarshalled by the route. Usually the request will be some stream source like a CSV file, and the response may also be saved to a file to be correlated with the request. You can use the following for operationName : createJob - Creates a Salesforce Bulk Job. Must supply a JobInfo instance in the body. PK Chunking is supported via the pkChunking* options. See an explanation here . getJob - Gets a Job using its Salesforce Id closeJob - Closes a Job abortJob - Aborts a Job createBatch - Submits a Batch within a Bulk Job getBatch - Gets a Batch using Id getAllBatches - Gets all Batches for a Bulk Job Id getRequest - Gets Request data (XML/CSV) for a Batch getResults - Gets the results of the Batch when it's complete createBatchQuery - Creates a Batch from an SOQL query getQueryResultIds - Gets a list of Result Ids for a Batch Query getQueryResult - Gets results for a Result Id getRecentReports - Gets up to 200 of the reports you most recently viewed by sending a GET request to the Report List resource. getReportDescription - Retrieves the report, report type, and related metadata for a report, either in a tabular or summary or matrix format. executeSyncReport - Runs a report synchronously with or without changing filters and returns the latest summary data. executeAsyncReport - Runs an instance of a report asynchronously with or without filters and returns the summary data with or without details. getReportInstances - Returns a list of instances for a report that you requested to be run asynchronously. Each item in the list is treated as a separate instance of the report. getReportResults - Contains the results of running a report. For example, the following producer endpoint uses the createBatch API to create a Job Batch. The in message must contain a body that can be converted into an InputStream (usually UTF-8 CSV or XML content from a file, etc.) and header fields 'jobId' for the Job and 'contentType' for the Job content type, which can be XML, CSV, ZIP_XML or ZIP_CSV. The output message body will contain BatchInfo on success, or a SalesforceException will be thrown on error. ...to("salesforce:createBatch").. 111.8.4. Rest Streaming API Consumer endpoints can use the following syntax for streaming endpoints to receive Salesforce notifications on create/update. To create and subscribe to a topic: from("salesforce:CamelTestTopic?notifyForFields=ALL&notifyForOperations=ALL&sObjectName=Merchandise__c&updateTopic=true&sObjectQuery=SELECT Id, Name FROM Merchandise__c")... To subscribe to an existing topic: from("salesforce:CamelTestTopic?sObjectName=Merchandise__c")... 111.8.5. Platform events To emit a platform event, use the createSObject operation. The message body can be a JSON string or an InputStream with key-value data - in that case, sObjectName needs to be set to the API name of the event - or a class that extends from AbstractDTOBase with the appropriate class name for the event. For example, using a DTO: class Order_Event__e extends AbstractDTOBase { @JsonProperty("OrderNumber") private String orderNumber; // ...
other properties and getters/setters } from("timer:tick") .process(exchange -> { final Message in = exchange.getIn(); String orderNumber = "ORD" + exchange.getProperty(Exchange.TIMER_COUNTER); Order_Event__e event = new Order_Event__e(); event.setOrderNumber(orderNumber); in.setBody(event); }) .to("salesforce:createSObject"); Or using JSON event data: from("timer:tick") .process(exchange -> { final Message in = exchange.getIn(); String orderNumber = "ORD" + exchange.getProperty(Exchange.TIMER_COUNTER); in.setBody("{\"OrderNumber\":\"" + orderNumber + "\"}"); }) .to("salesforce:createSObject?sObjectName=Order_Event__e"); To receive platform events use the consumer endpoint with the API name of the platform event prefixed with event/ (or /event/ ), e.g.: salesforce:events/Order_Event__e . Processor consuming from that endpoint will receive either org.apache.camel.component.salesforce.api.dto.PlatformEvent object or org.cometd.bayeux.Message in the body depending on the rawPayload being false or true respectively. For example, in the simplest form to consume one event: PlatformEvent event = consumer.receiveBody("salesforce:event/Order_Event__e", PlatformEvent.class); 111.8.6. Change data capture events On the one hand, Salesforce could be configured to emit notifications for record changes of select objects. On the other hand, the Camel Salesforce component could react to such notifications, allowing for instance to synchronize those changes into an external system . The notifications of interest could be specified in the from("salesforce:XXX") clause of a Camel route via the subscription channel, e.g: from("salesforce:data/ChangeEvents?replayId=-1").log("being notified of all change events") from("salesforce:data/AccountChangeEvent?replayId=-1").log("being notified of change events for Account records") from("salesforce:data/Employee__ChangeEvent?replayId=-1").log("being notified of change events for Employee__c custom object") The received message contains either java.util.Map<String,Object> or org.cometd.bayeux.Message in the body depending on the rawPayload being false or true respectively. The CamelSalesforceChangeType header could be valued to one of CREATE , UPDATE , DELETE or UNDELETE . More details about how to use the Camel Salesforce component change data capture capabilities could be found in the ChangeEventsConsumerIntegrationTest . The Salesforce developer guide is a good fit to better know the subtleties of implementing a change data capture integration application. The dynamic nature of change event body fields, high level replication steps as well as security considerations could be of interest. 111.9. Examples 111.9.1. 
Uploading a document to a ContentWorkspace Create the ContentVersion in Java, using a Processor instance: public class ContentProcessor implements Processor { public void process(Exchange exchange) throws Exception { Message message = exchange.getIn(); ContentVersion cv = new ContentVersion(); ContentWorkspace cw = getWorkspace(exchange); cv.setFirstPublishLocationId(cw.getId()); cv.setTitle("test document"); cv.setPathOnClient("test_doc.html"); byte[] document = message.getBody(byte[].class); ObjectMapper mapper = new ObjectMapper(); String enc = mapper.convertValue(document, String.class); cv.setVersionDataUrl(enc); message.setBody(cv); } protected ContentWorkspace getWorkspace(Exchange exchange) { // Look up the content workspace somehow, maybe use enrich() to add it to a // header that can be extracted here } } Give the output from the processor to the Salesforce component: from("file:///home/camel/library") .process(new ContentProcessor()) // convert bytes from the file into a ContentVersion SObject // for the salesforce component .to("salesforce:createSObject"); 111.10. Using Salesforce Limits API With the salesforce:limits operation you can fetch API limits from Salesforce and then act upon the data received. The result of the salesforce:limits operation is mapped to the org.apache.camel.component.salesforce.api.dto.Limits class and can be used in custom processors or expressions. For instance, consider that you need to limit the API usage of Salesforce so that 10% of daily API requests is left for other routes. The body of the output message contains an instance of the org.apache.camel.component.salesforce.api.dto.Limits object that can be used in conjunction with the Content Based Router and Spring Expression Language (SpEL) to choose when to perform queries. Notice how multiplying 1.0 by the integer value held in body.dailyApiRequests.remaining makes the expression evaluate using floating point arithmetic; without it, integer division would result in either 0 (some API limits consumed) or 1 (no API limits consumed). from("direct:querySalesforce") .to("salesforce:limits") .choice() .when(spel("#{1.0 * body.dailyApiRequests.remaining / body.dailyApiRequests.max < 0.1}")) .to("salesforce:query?...") .otherwise() .setBody(constant("Used up Salesforce API limits, leaving 10% for critical routes")) .endChoice() 111.11. Working with approvals All the properties are named exactly the same as in the Salesforce REST API, prefixed with approval. . You can set approval properties by setting approval.PropertyName on the Endpoint; these will be used as a template, meaning that any property not present in either the body or a header will be taken from the Endpoint configuration. Alternatively, you can set the approval template on the Endpoint by assigning the approval property to a reference to a bean in the Registry. You can also provide header values using the same approval.PropertyName in the incoming message headers. Finally, the body can contain one ApprovalRequest or an Iterable of ApprovalRequest objects to process as a batch, as sketched below.
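For instance, the following minimal sketch (not taken verbatim from the component documentation) builds the request in the message body instead of in headers: it assumes the ApprovalRequest and ApprovalResult DTOs from the org.apache.camel.component.salesforce.api.dto.approval package, setter names that mirror the approval.* property names, a ProducerTemplate named template as in the other examples, and placeholder values for the record id and process definition.

// Sketch: submit a single record for approval with the request built in the body.
// Class and setter names are assumed from the approval.* property names described above.
final ApprovalRequest approvalRequest = new ApprovalRequest();
approvalRequest.setContextId(accountId);          // placeholder: Id of the record to submit
approvalRequest.setComments("this is a test");

// Properties not set on the body or in headers fall back to the endpoint template.
final ApprovalResult result = template.requestBody(
    "salesforce:approval?"
        + "approval.actionType=Submit"
        + "&approval.processDefinitionNameOrId=Test_Account_Process"
        + "&approval.skipEntryCriteria=true",
    approvalRequest, ApprovalResult.class);

The same pattern applies when the body is an Iterable of ApprovalRequest objects processed as a batch.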
The important thing to remember is the priority of the values specified in these three mechanisms: a value in the body takes precedence over any other, a value in a message header takes precedence over the template value, and a value in the template is used only if no other value was given in a header or the body. For example, to send one record for approval using values in headers, given the route: from("direct:example1")// .setHeader("approval.ContextId", simple("USD{body['contextId']}")) .setHeader("approval.NextApproverIds", simple("USD{body['nextApproverIds']}")) .to("salesforce:approval?"// + "approval.actionType=Submit"// + "&approval.comments=this is a test"// + "&approval.processDefinitionNameOrId=Test_Account_Process"// + "&approval.skipEntryCriteria=true"); You could then send a record for approval using: final Map<String, String> body = new HashMap<>(); body.put("contextId", accountIds.iterator().next()); body.put("nextApproverIds", userId); final ApprovalResult result = template.requestBody("direct:example1", body, ApprovalResult.class); 111.12. Using Salesforce Recent Items API To fetch recent items, use the salesforce:recent operation. This operation returns a java.util.List of org.apache.camel.component.salesforce.api.dto.RecentItem objects ( List<RecentItem> ) that in turn contain the Id , Name and Attributes (with type and url properties). You can limit the number of returned items by specifying the limit parameter set to the maximum number of records to return. For example: from("direct:fetchRecentItems") .to("salesforce:recent") .split().body() .log("USD{body.name} at USD{body.attributes.url}"); 111.13. Using Salesforce Composite API to submit SObject tree To create up to 200 records including parent-child relationships, use the salesforce:composite-tree operation. This requires an instance of org.apache.camel.component.salesforce.api.dto.composite.SObjectTree in the input message and returns the same tree of objects in the output message. The org.apache.camel.component.salesforce.api.dto.AbstractSObjectBase instances within the tree get updated with the identifier values ( Id property) or their corresponding org.apache.camel.component.salesforce.api.dto.composite.SObjectNode is populated with errors on failure. Note that the operation can succeed for some records and fail for others - so you need to manually check for errors. The easiest way to use this functionality is to use the DTOs generated by the camel-salesforce-maven-plugin , but you also have the option of customizing the references that identify each object in the tree, for instance primary keys from your database. Let's look at an example: Account account = ... Contact president = ... Contact marketing = ... Account anotherAccount = ... Contact sales = ... Asset someAsset = ... // build the tree SObjectTree request = new SObjectTree(); request.addObject(account).addChildren(president, marketing); request.addObject(anotherAccount).addChild(sales).addChild(someAsset); final SObjectTree response = template.requestBody("salesforce:composite-tree", request, SObjectTree.class); final Map<Boolean, List<SObjectNode>> result = response.allNodes() .collect(Collectors.groupingBy(SObjectNode::hasErrors)); final List<SObjectNode> withErrors = result.get(true); final List<SObjectNode> succeeded = result.get(false); final String firstId = succeeded.get(0).getId(); 111.14.
Using Salesforce Composite API to submit multiple requests in a batch The Composite API batch operation ( composite-batch ) allows you to accumulate multiple requests in a batch and then submit them in one go, saving the round trip cost of multiple individual requests. Each response is then received in a list of responses with the order preserved, so that the n-th requests response is in the n-th place of the response. Note The results can vary from API to API so the result of the request is given as a java.lang.Object . In most cases the result will be a java.util.Map with string keys and values or other java.util.Map as value. Requests are made in JSON format and hold some type information (i.e. it is known what values are strings and what values are numbers). Lets look at an example: final String acountId = ... final SObjectBatch batch = new SObjectBatch("38.0"); final Account updates = new Account(); updates.setName("NewName"); batch.addUpdate("Account", accountId, updates); final Account newAccount = new Account(); newAccount.setName("Account created from Composite batch API"); batch.addCreate(newAccount); batch.addGet("Account", accountId, "Name", "BillingPostalCode"); batch.addDelete("Account", accountId); final SObjectBatchResponse response = template.requestBody("salesforce:composite-batch", batch, SObjectBatchResponse.class); boolean hasErrors = response.hasErrors(); // if any of the requests has resulted in either 4xx or 5xx HTTP status final List<SObjectBatchResult> results = response.getResults(); // results of three operations sent in batch final SObjectBatchResult updateResult = results.get(0); // update result final int updateStatus = updateResult.getStatusCode(); // probably 204 final Object updateResultData = updateResult.getResult(); // probably null final SObjectBatchResult createResult = results.get(1); // create result @SuppressWarnings("unchecked") final Map<String, Object> createData = (Map<String, Object>) createResult.getResult(); final String newAccountId = createData.get("id"); // id of the new account, this is for JSON, for XML it would be createData.get("Result").get("id") final SObjectBatchResult retrieveResult = results.get(2); // retrieve result @SuppressWarnings("unchecked") final Map<String, Object> retrieveData = (Map<String, Object>) retrieveResult.getResult(); final String accountName = retrieveData.get("Name"); // Name of the retrieved account, this is for JSON, for XML it would be createData.get("Account").get("Name") final String accountBillingPostalCode = retrieveData.get("BillingPostalCode"); // Name of the retrieved account, this is for JSON, for XML it would be createData.get("Account").get("BillingPostalCode") final SObjectBatchResult deleteResult = results.get(3); // delete result final int updateStatus = deleteResult.getStatusCode(); // probably 204 final Object updateResultData = deleteResult.getResult(); // probably null 111.15. Using Salesforce Composite API to submit multiple chained requests The composite operation allows submitting up to 25 requests that can be chained together, for instance identifier generated in request can be used in subsequent request. Individual requests and responses are linked with the provided reference . Note Composite API supports only JSON payloads. Note As with the batch API the results can vary from API to API so the result of the request is given as a java.lang.Object . In most cases the result will be a java.util.Map with string keys and values or other java.util.Map as value. 
Requests are made in JSON format hold some type information (i.e. it is known what values are strings and what values are numbers). Lets look at an example: SObjectComposite composite = new SObjectComposite("38.0", true); // first insert operation via an external id final Account updateAccount = new TestAccount(); updateAccount.setName("Salesforce"); updateAccount.setBillingStreet("Landmark @ 1 Market Street"); updateAccount.setBillingCity("San Francisco"); updateAccount.setBillingState("California"); updateAccount.setIndustry(Account_IndustryEnum.TECHNOLOGY); composite.addUpdate("Account", "001xx000003DIpcAAG", updateAccount, "UpdatedAccount"); final Contact newContact = new TestContact(); newContact.setLastName("John Doe"); newContact.setPhone("1234567890"); composite.addCreate(newContact, "NewContact"); final AccountContactJunction__c junction = new AccountContactJunction__c(); junction.setAccount__c("001xx000003DIpcAAG"); junction.setContactId__c("@{NewContact.id}"); composite.addCreate(junction, "JunctionRecord"); final SObjectCompositeResponse response = template.requestBody("salesforce:composite", composite, SObjectCompositeResponse.class); final List<SObjectCompositeResult> results = response.getCompositeResponse(); final SObjectCompositeResult accountUpdateResult = results.stream().filter(r -> "UpdatedAccount".equals(r.getReferenceId())).findFirst().get() final int statusCode = accountUpdateResult.getHttpStatusCode(); // should be 200 final Map<String, ?> accountUpdateBody = accountUpdateResult.getBody(); final SObjectCompositeResult contactCreationResult = results.stream().filter(r -> "JunctionRecord".equals(r.getReferenceId())).findFirst().get() 111.16. Using "raw" Salesforce composite It's possible to directly call Salesforce composite by preparing the Salesforce JSON request in the route thanks to the rawPayload option. For instance, you can have the following route: The route directly creates the body as JSON and directly submit to salesforce endpoint using rawPayload=true option. With this approach, you have the complete control on the Salesforce request. POST is the default HTTP method used to send raw Composite requests to salesforce. Use the compositeMethod option to override to the other supported value, GET , which returns a list of other available composite resources. 111.17. Using Raw Operation Send HTTP requests to salesforce with full, raw control of all aspects of the call. Any serialization or deserialization of request and response bodies must be performed in the route. The Content-Type HTTP header will be automatically set based on the format option, but this can be overridden with the rawHttpHeaders option. Parameter Type Description Default Required request body String or InputStream Body of the HTTP request rawPath String The portion of the endpoint URL after the domain name, e.g., '/services/data/v51.0/sobjects/Account/' x rawMethod String The HTTP method x rawQueryParameters String Comma separated list of message headers to include as query parameters. Do not url-encode values as this will be done automatically. rawHttpHeaders String Comma separated list of message headers to include as HTTP headers 111.17.1. Query example In this example we'll send a query to the REST API. The query must be passed in a URL parameter called "q", so we'll create a message header called q and tell the raw operation to include that message header as a URL parameter: 111.17.2. SObject example In this example, we'll pass a Contact the REST API in a create operation. 
Since the raw operation does not perform any serialization, we make sure to pass XML in the message body The response is: 111.18. Using Composite SObject Collections The SObject Collections API executes actions on multiple records in one request. Use sObject Collections to reduce the number of round-trips between the client and server. The entire request counts as a single call toward your API limits. This resource is available in API version 42.0 and later. SObject records (aka DTOs) supplied to these operations must be instances of subclasses of AbstractDescribedSObjectBase . See the Maven Plugin section for information on generating these DTO classes. These operations serialize supplied DTOs to JSON. 111.18.1. compositeRetrieveSObjectCollections Retrieve one or more records of the same object type. Parameter Type Description Default Required ids List of String or comma-separated string A list of one or more IDs of the objects to return. All IDs must belong to the same object type. x fields List of String or comma-separated string A list of fields to include in the response. The field names you specify must be valid, and you must have read-level permissions to each field. x sObjectName String Type of SObject, e.g. Account x sObjectClass String Fully-qualified class name of DTO class to use for deserializing the response. Required if sObjectName parameter does not resolve to a class that exists in the package specified by the package option. 111.18.2. compositeCreateSObjectCollections Add up to 200 records, returning a list of SaveSObjectResult objects. Mixed SObject types is supported. Parameter Type Description Default Required request body List of SObject A list of SObjects to create x allOrNone boolean Indicates whether to roll back the entire request when the creation of any object fails (true) or to continue with the independent creation of other objects in the request. false 111.18.3. compositeUpdateSObjectCollections Update up to 200 records, returning a list of SaveSObjectResult objects. Mixed SObject types is supported. Parameter Type Description Default Required request body List of SObject A list of SObjects to update x allOrNone boolean Indicates whether to roll back the entire request when the update of any object fails (true) or to continue with the independent update of other objects in the request. false 111.18.4. compositeUpsertSObjectCollections Create or update (upsert) up to 200 records based on an external ID field, returning a list of UpsertSObjectResult objects. Mixed SObject types is not supported. Parameter Type Description Default Required request body List of SObject A list of SObjects to upsert x allOrNone boolean Indicates whether to roll back the entire request when the upsert of any object fails (true) or to continue with the independent upsert of other objects in the request. false sObjectName String Type of SObject, e.g. Account x sObjectIdName String Name of External ID field x 111.18.5. compositeDeleteSObjectCollections Delete up to 200 records, returning a list of DeleteSObjectResult objects. Mixed SObject types is supported. Parameter Type Description Default Required sObjectIds or request body List of String or comma-separated string A list of up to 200 IDs of objects to be deleted. x allOrNone boolean Indicates whether to roll back the entire request when the deletion of any object fails (true) or to continue with the independent deletion of other objects in the request. false 111.19. 
Sending null values to salesforce By default, SObject fields with null values are not sent to salesforce. In order to send null values to salesforce, use the fieldsToNull property, as follows: accountSObject.getFieldsToNull().add("Site"); 111.20. Generating SOQL query strings org.apache.camel.component.salesforce.api.utils.QueryHelper contains helper methods to generate SOQL queries. For instance to fetch all custom fields from Account SObject you can simply generate the SOQL SELECT by invoking: String allCustomFieldsQuery = QueryHelper.queryToFetchFilteredFieldsOf(new Account(), SObjectField::isCustom); 111.21. Camel Salesforce Maven Plugin This Maven plugin generates DTOs for the Camel. For obvious security reasons it is recommended that the clientId, clientSecret, userName and password fields be not set in the pom.xml. The plugin should be configured for the rest of the properties, and can be executed using the following command: The generated DTOs use Jackson annotations. All Salesforce field types are supported. Date and time fields are mapped to java.time.ZonedDateTime by default, and picklist fields are mapped to generated Java Enumerations. Please refer to README.md for details on how to generate the DTO. 111.22. Spring Boot Auto-Configuration The component supports 91 options, which are listed below. Name Description Default Type camel.component.salesforce.all-or-none Composite API option to indicate to rollback all records if any are not successful. false Boolean camel.component.salesforce.apex-method APEX method name. String camel.component.salesforce.apex-query-params Query params for APEX method. Map camel.component.salesforce.apex-url APEX method URL. String camel.component.salesforce.api-version Salesforce API version. 53.0 String camel.component.salesforce.authentication-type Explicit authentication method to be used, one of USERNAME_PASSWORD, REFRESH_TOKEN or JWT. Salesforce component can auto-determine the authentication method to use from the properties set, set this property to eliminate any ambiguity. AuthenticationType camel.component.salesforce.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.salesforce.backoff-increment Backoff interval increment for Streaming connection restart attempts for failures beyond CometD auto-reconnect. The option is a long type. 1000 Long camel.component.salesforce.batch-id Bulk API Batch ID. String camel.component.salesforce.bridge-error-handler Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false Boolean camel.component.salesforce.client-id OAuth Consumer Key of the connected app configured in the Salesforce instance setup. Typically a connected app needs to be configured but one can be provided by installing a package. 
String camel.component.salesforce.client-secret OAuth Consumer Secret of the connected app configured in the Salesforce instance setup. String camel.component.salesforce.composite-method Composite (raw) method. String camel.component.salesforce.config Global endpoint configuration - use to set values that are common to all endpoints. The option is a org.apache.camel.component.salesforce.SalesforceEndpointConfig type. SalesforceEndpointConfig camel.component.salesforce.content-type Bulk API content type, one of XML, CSV, ZIP_XML, ZIP_CSV. ContentType camel.component.salesforce.default-replay-id Default replayId setting if no value is found in initialReplayIdMap. -1 Long camel.component.salesforce.enabled Whether to enable auto configuration of the salesforce component. This is enabled by default. Boolean camel.component.salesforce.fall-back-replay-id ReplayId to fall back to after an Invalid Replay Id response. -1 Long camel.component.salesforce.format Payload format to use for Salesforce API calls, either JSON or XML, defaults to JSON. As of Camel 3.12, this option only applies to the Raw operation. PayloadFormat camel.component.salesforce.http-client Custom Jetty Http Client to use to connect to Salesforce. The option is a org.apache.camel.component.salesforce.SalesforceHttpClient type. SalesforceHttpClient camel.component.salesforce.http-client-connection-timeout Connection timeout used by the HttpClient when connecting to the Salesforce server. 60000 Long camel.component.salesforce.http-client-idle-timeout Timeout used by the HttpClient when waiting for response from the Salesforce server. 10000 Long camel.component.salesforce.http-client-properties Used to set any properties that can be configured on the underlying HTTP client. Have a look at properties of SalesforceHttpClient and the Jetty HttpClient for all available options. Map camel.component.salesforce.http-max-content-length Max content length of an HTTP response. Integer camel.component.salesforce.http-proxy-auth-uri Used in authentication against the HTTP proxy server, needs to match the URI of the proxy server in order for the httpProxyUsername and httpProxyPassword to be used for authentication. String camel.component.salesforce.http-proxy-excluded-addresses A list of addresses for which HTTP proxy server should not be used. Set camel.component.salesforce.http-proxy-host Hostname of the HTTP proxy server to use. String camel.component.salesforce.http-proxy-included-addresses A list of addresses for which HTTP proxy server should be used. Set camel.component.salesforce.http-proxy-password Password to use to authenticate against the HTTP proxy server. String camel.component.salesforce.http-proxy-port Port number of the HTTP proxy server to use. Integer camel.component.salesforce.http-proxy-realm Realm of the proxy server, used in preemptive Basic/Digest authentication methods against the HTTP proxy server. String camel.component.salesforce.http-proxy-secure If set to false disables the use of TLS when accessing the HTTP proxy. true Boolean camel.component.salesforce.http-proxy-socks4 If set to true the configures the HTTP proxy to use as a SOCKS4 proxy. false Boolean camel.component.salesforce.http-proxy-use-digest-auth If set to true Digest authentication will be used when authenticating to the HTTP proxy, otherwise Basic authorization method will be used. false Boolean camel.component.salesforce.http-proxy-username Username to use to authenticate against the HTTP proxy server. 
String camel.component.salesforce.http-request-buffer-size HTTP request buffer size. May need to be increased for large SOQL queries. 8192 Integer camel.component.salesforce.include-details Include details in Salesforce1 Analytics report, defaults to false. Boolean camel.component.salesforce.initial-replay-id-map Replay IDs to start from per channel name. Map camel.component.salesforce.instance-id Salesforce1 Analytics report execution instance ID. String camel.component.salesforce.instance-url URL of the Salesforce instance used after authentication, by default received from Salesforce on successful authentication. String camel.component.salesforce.job-id Bulk API Job ID. String camel.component.salesforce.jwt-audience Value to use for the Audience claim (aud) when using OAuth JWT flow. If not set, the login URL will be used, which is appropriate in most cases. String camel.component.salesforce.keystore KeyStore parameters to use in OAuth JWT flow. The KeyStore should contain only one entry with private key and certificate. Salesforce does not verify the certificate chain, so this can easily be a selfsigned certificate. Make sure that you upload the certificate to the corresponding connected app. The option is a org.apache.camel.support.jsse.KeyStoreParameters type. KeyStoreParameters camel.component.salesforce.lazy-login If set to true prevents the component from authenticating to Salesforce with the start of the component. You would generally set this to the (default) false and authenticate early and be immediately aware of any authentication issues. false Boolean camel.component.salesforce.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.salesforce.limit Limit on number of returned records. Applicable to some of the API, check the Salesforce documentation. Integer camel.component.salesforce.locator Locator provided by salesforce Bulk 2.0 API for use in getting results for a Query job. String camel.component.salesforce.login-config All authentication configuration in one nested bean, all properties set there can be set directly on the component as well. The option is a org.apache.camel.component.salesforce.SalesforceLoginConfig type. SalesforceLoginConfig camel.component.salesforce.login-url URL of the Salesforce instance used for authentication, by default set to . String camel.component.salesforce.long-polling-transport-properties Used to set any properties that can be configured on the LongPollingTransport used by the BayeuxClient (CometD) used by the streaming api. Map camel.component.salesforce.max-backoff Maximum backoff interval for Streaming connection restart attempts for failures beyond CometD auto-reconnect. The option is a long type. 30000 Long camel.component.salesforce.max-records The maximum number of records to retrieve per set of results for a Bulk 2.0 Query. The request is still subject to the size limits. 
If you are working with a very large number of query results, you may experience a timeout before receiving all the data from Salesforce. To prevent a timeout, specify the maximum number of records your client is expecting to receive in the maxRecords parameter. This splits the results into smaller sets with this value as the maximum size. Integer camel.component.salesforce.not-found-behaviour Sets the behaviour of 404 not found status received from Salesforce API. Should the body be set to NULL NotFoundBehaviour#NULL or should a exception be signaled on the exchange NotFoundBehaviour#EXCEPTION - the default. NotFoundBehaviour camel.component.salesforce.notify-for-fields Notify for fields, options are ALL, REFERENCED, SELECT, WHERE. NotifyForFieldsEnum camel.component.salesforce.notify-for-operation-create Notify for create operation, defaults to false (API version = 29.0). Boolean camel.component.salesforce.notify-for-operation-delete Notify for delete operation, defaults to false (API version = 29.0). Boolean camel.component.salesforce.notify-for-operation-undelete Notify for un-delete operation, defaults to false (API version = 29.0). Boolean camel.component.salesforce.notify-for-operation-update Notify for update operation, defaults to false (API version = 29.0). Boolean camel.component.salesforce.notify-for-operations Notify for operations, options are ALL, CREATE, EXTENDED, UPDATE (API version 29.0). NotifyForOperationsEnum camel.component.salesforce.object-mapper Custom Jackson ObjectMapper to use when serializing/deserializing Salesforce objects. The option is a com.fasterxml.jackson.databind.ObjectMapper type. ObjectMapper camel.component.salesforce.packages In what packages are the generated DTO classes. Typically the classes would be generated using camel-salesforce-maven-plugin. Set it if using the generated DTOs to gain the benefit of using short SObject names in parameters/header values. Multiple packages can be separated by comma. String camel.component.salesforce.password Password used in OAuth flow to gain access to access token. It's easy to get started with password OAuth flow, but in general one should avoid it as it is deemed less secure than other flows. Make sure that you append security token to the end of the password if using one. String camel.component.salesforce.pk-chunking Use PK Chunking. Only for use in original Bulk API. Bulk 2.0 API performs PK chunking automatically, if necessary. Boolean camel.component.salesforce.pk-chunking-chunk-size Chunk size for use with PK Chunking. If unspecified, salesforce default is 100,000. Maximum size is 250,000. Integer camel.component.salesforce.pk-chunking-parent Specifies the parent object when you're enabling PK chunking for queries on sharing objects. The chunks are based on the parent object's records rather than the sharing object's records. For example, when querying on AccountShare, specify Account as the parent object. PK chunking is supported for sharing objects as long as the parent object is supported. String camel.component.salesforce.pk-chunking-start-row Specifies the 15-character or 18-character record ID to be used as the lower boundary for the first chunk. Use this parameter to specify a starting ID when restarting a job that failed between batches. String camel.component.salesforce.query-locator Query Locator provided by salesforce for use when a query results in more records than can be retrieved in a single call. Use this value in a subsequent call to retrieve additional records. 
String camel.component.salesforce.raw-http-headers Comma separated list of message headers to include as HTTP parameters for Raw operation. String camel.component.salesforce.raw-method HTTP method to use for the Raw operation. String camel.component.salesforce.raw-path The portion of the endpoint URL after the domain name. E.g., '/services/data/v52.0/sobjects/Account/'. String camel.component.salesforce.raw-payload Use raw payload String for request and response (either JSON or XML depending on format), instead of DTOs, false by default. false Boolean camel.component.salesforce.raw-query-parameters Comma separated list of message headers to include as query parameters for Raw operation. Do not url-encode values as this will be done automatically. String camel.component.salesforce.refresh-token Refresh token already obtained in the refresh token OAuth flow. One needs to setup a web application and configure a callback URL to receive the refresh token, or configure using the builtin callback at and then retrive the refresh_token from the URL at the end of the flow. Note that in development organizations Salesforce allows hosting the callback web application at localhost. String camel.component.salesforce.report-id Salesforce1 Analytics report Id. String camel.component.salesforce.report-metadata Salesforce1 Analytics report metadata for filtering. The option is a org.apache.camel.component.salesforce.api.dto.analytics.reports.ReportMetadata type. ReportMetadata camel.component.salesforce.result-id Bulk API Result ID. String camel.component.salesforce.s-object-blob-field-name SObject blob field name. String camel.component.salesforce.s-object-class Fully qualified SObject class name, usually generated using camel-salesforce-maven-plugin. String camel.component.salesforce.s-object-fields SObject fields to retrieve. String camel.component.salesforce.s-object-id SObject ID if required by API. String camel.component.salesforce.s-object-id-name SObject external ID field name. String camel.component.salesforce.s-object-id-value SObject external ID field value. String camel.component.salesforce.s-object-name SObject name if required or supported by API. String camel.component.salesforce.s-object-query Salesforce SOQL query string. String camel.component.salesforce.s-object-search Salesforce SOSL search string. String camel.component.salesforce.ssl-context-parameters SSL parameters to use, see SSLContextParameters class for all available options. The option is a org.apache.camel.support.jsse.SSLContextParameters type. SSLContextParameters camel.component.salesforce.update-topic Whether to update an existing Push Topic when using the Streaming API, defaults to false. false Boolean camel.component.salesforce.use-global-ssl-context-parameters Enable usage of global SSL context parameters. false Boolean camel.component.salesforce.user-name Username used in OAuth flow to gain access to access token. It's easy to get started with password OAuth flow, but in general one should avoid it as it is deemed less secure than other flows. String camel.component.salesforce.worker-pool-max-size Maximum size of the thread pool used to handle HTTP responses. 20 Integer camel.component.salesforce.worker-pool-size Size of the thread pool used to handle HTTP responses. 10 Integer | [
"<dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-salesforce-starter</artifactId> </dependency>",
"<plugin> <groupId>org.apache.camel.maven</groupId> <artifactId>camel-salesforce-maven-plugin</artifactId> <version>USD{camel-community.version}</version> <executions> <execution> <goals> <goal>generate</goal> </goals> <configuration> <clientId>USD{camelSalesforce.clientId}</clientId> <clientSecret>USD{camelSalesforce.clientSecret}</clientSecret> <userName>USD{camelSalesforce.userName}</userName> <password>USD{camelSalesforce.password}</password> <sslContextParameters> <secureSocketProtocol>TLSv1.2</secureSocketProtocol> </sslContextParameters> <includes> <include>Contact</include> </includes> </configuration> </execution> </executions> </plugin>",
"salesforce:operationName:topicName",
"salesforce:topic?options",
"salesforce:operationName?options",
"// in your Camel route set the header before Salesforce endpoint // .setHeader(\"Sforce-Limit-Info\", constant(\"api-usage\")) .to(\"salesforce:getGlobalObjects\") .to(myProcessor); // myProcessor will receive `Sforce-Limit-Info` header on the outbound // message class MyProcessor implements Processor { public void process(Exchange exchange) throws Exception { Message in = exchange.getIn(); String apiLimits = in.getHeader(\"Sforce-Limit-Info\", String.class); } }",
"...to(\"salesforce:upsertSObject?sObjectIdName=Name\")",
"...to(\"salesforce:createBatch\")..",
"from(\"salesforce:CamelTestTopic?notifyForFields=ALL¬ifyForOperations=ALL&sObjectName=Merchandise__c&updateTopic=true&sObjectQuery=SELECT Id, Name FROM Merchandise__c\")",
"from(\"salesforce:CamelTestTopic&sObjectName=Merchandise__c\")",
"class Order_Event__e extends AbstractDTOBase { @JsonProperty(\"OrderNumber\") private String orderNumber; // ... other properties and getters/setters } from(\"timer:tick\") .process(exchange -> { final Message in = exchange.getIn(); String orderNumber = \"ORD\" + exchange.getProperty(Exchange.TIMER_COUNTER); Order_Event__e event = new Order_Event__e(); event.setOrderNumber(orderNumber); in.setBody(event); }) .to(\"salesforce:createSObject\");",
"from(\"timer:tick\") .process(exchange -> { final Message in = exchange.getIn(); String orderNumber = \"ORD\" + exchange.getProperty(Exchange.TIMER_COUNTER); in.setBody(\"{\\\"OrderNumber\\\":\\\"\" + orderNumber + \"\\\"}\"); }) .to(\"salesforce:createSObject?sObjectName=Order_Event__e\");",
"PlatformEvent event = consumer.receiveBody(\"salesforce:event/Order_Event__e\", PlatformEvent.class);",
"from(\"salesforce:data/ChangeEvents?replayId=-1\").log(\"being notified of all change events\") from(\"salesforce:data/AccountChangeEvent?replayId=-1\").log(\"being notified of change events for Account records\") from(\"salesforce:data/Employee__ChangeEvent?replayId=-1\").log(\"being notified of change events for Employee__c custom object\")",
"public class ContentProcessor implements Processor { public void process(Exchange exchange) throws Exception { Message message = exchange.getIn(); ContentVersion cv = new ContentVersion(); ContentWorkspace cw = getWorkspace(exchange); cv.setFirstPublishLocationId(cw.getId()); cv.setTitle(\"test document\"); cv.setPathOnClient(\"test_doc.html\"); byte[] document = message.getBody(byte[].class); ObjectMapper mapper = new ObjectMapper(); String enc = mapper.convertValue(document, String.class); cv.setVersionDataUrl(enc); message.setBody(cv); } protected ContentWorkspace getWorkSpace(Exchange exchange) { // Look up the content workspace somehow, maybe use enrich() to add it to a // header that can be extracted here ---- } }",
"from(\"file:///home/camel/library\") .to(new ContentProcessor()) // convert bytes from the file into a ContentVersion SObject // for the salesforce component .to(\"salesforce:createSObject\");",
"from(\"direct:querySalesforce\") .to(\"salesforce:limits\") .choice() .when(spel(\"#{1.0 * body.dailyApiRequests.remaining / body.dailyApiRequests.max < 0.1}\")) .to(\"salesforce:query?...\") .otherwise() .setBody(constant(\"Used up Salesforce API limits, leaving 10% for critical routes\")) .endChoice()",
"from(\"direct:example1\")// .setHeader(\"approval.ContextId\", simple(\"USD{body['contextId']}\")) .setHeader(\"approval.NextApproverIds\", simple(\"USD{body['nextApproverIds']}\")) .to(\"salesforce:approval?\"// + \"approval.actionType=Submit\"// + \"&approval.comments=this is a test\"// + \"&approval.processDefinitionNameOrId=Test_Account_Process\"// + \"&approval.skipEntryCriteria=true\");",
"final Map<String, String> body = new HashMap<>(); body.put(\"contextId\", accountIds.iterator().next()); body.put(\"nextApproverIds\", userId); final ApprovalResult result = template.requestBody(\"direct:example1\", body, ApprovalResult.class);",
"from(\"direct:fetchRecentItems\") to(\"salesforce:recent\") .split().body() .log(\"USD{body.name} at USD{body.attributes.url}\");",
"Account account = Contact president = Contact marketing = Account anotherAccount = Contact sales = Asset someAsset = // build the tree SObjectTree request = new SObjectTree(); request.addObject(account).addChildren(president, marketing); request.addObject(anotherAccount).addChild(sales).addChild(someAsset); final SObjectTree response = template.requestBody(\"salesforce:composite-tree\", tree, SObjectTree.class); final Map<Boolean, List<SObjectNode>> result = response.allNodes() .collect(Collectors.groupingBy(SObjectNode::hasErrors)); final List<SObjectNode> withErrors = result.get(true); final List<SObjectNode> succeeded = result.get(false); final String firstId = succeeded.get(0).getId();",
"final String acountId = final SObjectBatch batch = new SObjectBatch(\"38.0\"); final Account updates = new Account(); updates.setName(\"NewName\"); batch.addUpdate(\"Account\", accountId, updates); final Account newAccount = new Account(); newAccount.setName(\"Account created from Composite batch API\"); batch.addCreate(newAccount); batch.addGet(\"Account\", accountId, \"Name\", \"BillingPostalCode\"); batch.addDelete(\"Account\", accountId); final SObjectBatchResponse response = template.requestBody(\"salesforce:composite-batch\", batch, SObjectBatchResponse.class); boolean hasErrors = response.hasErrors(); // if any of the requests has resulted in either 4xx or 5xx HTTP status final List<SObjectBatchResult> results = response.getResults(); // results of three operations sent in batch final SObjectBatchResult updateResult = results.get(0); // update result final int updateStatus = updateResult.getStatusCode(); // probably 204 final Object updateResultData = updateResult.getResult(); // probably null final SObjectBatchResult createResult = results.get(1); // create result @SuppressWarnings(\"unchecked\") final Map<String, Object> createData = (Map<String, Object>) createResult.getResult(); final String newAccountId = createData.get(\"id\"); // id of the new account, this is for JSON, for XML it would be createData.get(\"Result\").get(\"id\") final SObjectBatchResult retrieveResult = results.get(2); // retrieve result @SuppressWarnings(\"unchecked\") final Map<String, Object> retrieveData = (Map<String, Object>) retrieveResult.getResult(); final String accountName = retrieveData.get(\"Name\"); // Name of the retrieved account, this is for JSON, for XML it would be createData.get(\"Account\").get(\"Name\") final String accountBillingPostalCode = retrieveData.get(\"BillingPostalCode\"); // Name of the retrieved account, this is for JSON, for XML it would be createData.get(\"Account\").get(\"BillingPostalCode\") final SObjectBatchResult deleteResult = results.get(3); // delete result final int updateStatus = deleteResult.getStatusCode(); // probably 204 final Object updateResultData = deleteResult.getResult(); // probably null",
"SObjectComposite composite = new SObjectComposite(\"38.0\", true); // first insert operation via an external id final Account updateAccount = new TestAccount(); updateAccount.setName(\"Salesforce\"); updateAccount.setBillingStreet(\"Landmark @ 1 Market Street\"); updateAccount.setBillingCity(\"San Francisco\"); updateAccount.setBillingState(\"California\"); updateAccount.setIndustry(Account_IndustryEnum.TECHNOLOGY); composite.addUpdate(\"Account\", \"001xx000003DIpcAAG\", updateAccount, \"UpdatedAccount\"); final Contact newContact = new TestContact(); newContact.setLastName(\"John Doe\"); newContact.setPhone(\"1234567890\"); composite.addCreate(newContact, \"NewContact\"); final AccountContactJunction__c junction = new AccountContactJunction__c(); junction.setAccount__c(\"001xx000003DIpcAAG\"); junction.setContactId__c(\"@{NewContact.id}\"); composite.addCreate(junction, \"JunctionRecord\"); final SObjectCompositeResponse response = template.requestBody(\"salesforce:composite\", composite, SObjectCompositeResponse.class); final List<SObjectCompositeResult> results = response.getCompositeResponse(); final SObjectCompositeResult accountUpdateResult = results.stream().filter(r -> \"UpdatedAccount\".equals(r.getReferenceId())).findFirst().get() final int statusCode = accountUpdateResult.getHttpStatusCode(); // should be 200 final Map<String, ?> accountUpdateBody = accountUpdateResult.getBody(); final SObjectCompositeResult contactCreationResult = results.stream().filter(r -> \"JunctionRecord\".equals(r.getReferenceId())).findFirst().get()",
"from(\"timer:fire?period=2000\").setBody(constant(\"{\\n\" + \" \\\"allOrNone\\\" : true,\\n\" + \" \\\"records\\\" : [ { \\n\" + \" \\\"attributes\\\" : {\\\"type\\\" : \\\"FOO\\\"},\\n\" + \" \\\"Name\\\" : \\\"123456789\\\",\\n\" + \" \\\"FOO\\\" : \\\"XXXX\\\",\\n\" + \" \\\"ACCOUNT\\\" : 2100.0\\n\" + \" \\\"ExternalID\\\" : \\\"EXTERNAL\\\"\\n\" \" }]\\n\" + \"}\") .to(\"salesforce:composite?rawPayload=true\") .log(\"USD{body}\");",
"from(\"direct:queryExample\") .setHeader(\"q\", \"SELECT Id, LastName FROM Contact\") .to(\"salesforce:raw?format=JSON&rawMethod=GET&rawQueryParameters=q&rawPath=/services/data/v51.0/query\") // deserialize JSON results or handle in some other way",
"from(\"direct:createAContact\") .setBody(constant(\"<Contact><LastName>TestLast</LastName></Contact>\")) .to(\"salesforce:raw?format=XML&rawMethod=POST&rawPath=/services/data/v51.0/sobjects/Contact\")",
"<?xml version=\"1.0\" encoding=\"UTF-8\" standalone=\"yes\"?> <Result> <id>0034x00000RnV6zAAF</id> <success>true</success> </Result>",
"accountSObject.getFieldsToNull().add(\"Site\");",
"String allCustomFieldsQuery = QueryHelper.queryToFetchFilteredFieldsOf(new Account(), SObjectField::isCustom);",
"mvn camel-salesforce:generate -DcamelSalesforce.clientId=<clientid> -DcamelSalesforce.clientSecret=<clientsecret> -DcamelSalesforce.userName=<username> -DcamelSalesforce.password=<password>"
]
| https://docs.redhat.com/en/documentation/red_hat_build_of_apache_camel/4.4/html/red_hat_build_of_apache_camel_for_spring_boot_reference/csb-camel-salesforce-component-starter |
8.8. NIST SCAP 1.2 Certification | 8.8. NIST SCAP 1.2 Certification As of Red Hat Enterprise Linux 6.6, OpenSCAP ( openscap ) is certified by the National Institute of Standards and Technology's (NIST) Security Content Automation Protocol (SCAP) 1.2. SCAP provides a standardized approach to maintaining the security of enterprise systems, allowing you to automatically verify the presence of patches, check system security configuration settings, and examine systems for signs of compromise. Red Hat Enterprise Linux 6.6 also includes a new package, scap-security-guide , which provides more information on how to get the best out of OpenSCAP. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/migration_planning_guide/sect-security-scap-certification |
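For illustration only - the data-stream path and profile ID below are assumptions that depend on the scap-security-guide content installed on the system - a local compliance scan can be run with the oscap tool:

# Sketch: evaluate an SCAP Security Guide profile and write an HTML report.
# List the profiles actually shipped in the installed content first:
oscap info /usr/share/xml/scap/ssg/content/ssg-rhel6-ds.xml
# Then run the evaluation (the profile ID below is a placeholder):
oscap xccdf eval --profile xccdf_org.ssgproject.content_profile_common \
    --results /tmp/ssg-results.xml --report /tmp/ssg-report.html \
    /usr/share/xml/scap/ssg/content/ssg-rhel6-ds.xml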
Director Installation and Usage | Director Installation and Usage Red Hat OpenStack Platform 16.2 An end-to-end scenario on using Red Hat OpenStack Platform director to create an OpenStack cloud OpenStack Documentation Team [email protected] | null | https://docs.redhat.com/en/documentation/red_hat_openstack_platform/16.2/html/director_installation_and_usage/index |
4.5. Alternate Options for Creating a Replica | 4.5. Alternate Options for Creating a Replica Much of the core configuration of the replica is identical to that of the server from which it was created, such as the realm name and directory settings. However, while the settings need to match, it is not required that a replica manage the same services as the server. This is true for major services (DNS and CAs) and for minor services (NTP and OpenSSH). The differing settings can be defined in the ipa-replica-prepare command or in the ipa-replica-install command. 4.5.1. Different DNS Settings For DNS, the ipa-replica-prepare command can be used to configure DNS settings specific to the replica, meaning its IP address and reverse zone. For example: If the server does not host any DNS services, then the replica can be set up to host DNS services for the Identity Management domain. As with installing a server, this is done with the --setup-dns option, along with settings for the forward and reverse zones. For example, to configure DNS services for the replica with no forwarders and using an existing reverse zone: The DNS options are described in the ipa-replica-prepare and ipa-replica-install manpages. 4.5.2. Different CA Settings The CA configuration of the replica must echo the CA configuration of the server. If the server is configured with an integrated Dogtag Certificate System instance (regardless of whether it is a root CA or whether it is subordinate to an external CA), then the replica can either create its own integrated CA which is subordinate to the server CA or it can forgo having a CA at all, and forward all requests to the server's CA. If the replica will have its own CA, then it uses the --setup-ca option. The rest of the configuration is taken from the server's configuration. However, if the server was installed without any CA at all, then there is nowhere to forward certificate operations - including the ability to request certificates for the new replica instance. All of the certificates for the replica, as with the server, must be requested and retrieved before installing the replica and then submitted with the installation command. The only exception is the root CA certificate; this is retrieved from the server as part of the replica setup. 4.5.3. Different Services There are three support services that are installed on both servers and replicas by default: NTP, OpenSSH client, and OpenSSH server. Any or all of these can be disabled on a replica. For example: | [
"ipa-replica-prepare ipareplica.example.com --ip-address=192.68.0.0 --no-reverse",
"ipa-replica-install ipareplica.example.com --setup-dns --no-forwarders --no-reverse --no-host-dns",
"ipa-replica-install ipareplica.example.com --setup-ca",
"ipa-replica-install ipareplica.example.com --dirsrv_pkcs12=/tmp/dirsrv-cert.p12 --dirsrv_pin=secret1 --http_pkcs12=/tmp/http-cert.p12 --http_pin=secret2",
"ipa-replica-install ... --no-ntp --no-ssh --no-sshd"
]
| https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/identity_management_guide/alt-replica-install |
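The following sketch ties the options above into one end-to-end flow, assuming a master that already runs Identity Management and a replica named ipareplica.example.com; the replica-information file name follows the default pattern used by ipa-replica-prepare, and the IP address is a placeholder.
# On the master: prepare the replica information file
ipa-replica-prepare ipareplica.example.com --ip-address=192.0.2.10
# Copy the generated file to the future replica
scp /var/lib/ipa/replica-info-ipareplica.example.com.gpg root@ipareplica.example.com:/var/lib/ipa/
# On the replica: install from the file, omitting any services that are not wanted
ipa-replica-install --no-ntp /var/lib/ipa/replica-info-ipareplica.example.com.gpg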
F.4. SysV Init Runlevels | F.4. SysV Init Runlevels The SysV init runlevel system provides a standard process for controlling which programs init launches or halts when initializing a runlevel. SysV init was chosen because it is easier to use and more flexible than the traditional BSD-style init process. The configuration files for SysV init are located in the /etc/rc.d/ directory. Within this directory are the rc , rc.local , rc.sysinit , and, optionally, the rc.serial scripts, as well as the following directories: The init.d/ directory contains the scripts used by the /sbin/init command when controlling services. Each of the numbered directories represents one of the six runlevels configured by default under Red Hat Enterprise Linux. F.4.1. Runlevels SysV init runlevels are based on the idea that different systems can be used in different ways. For example, a server runs more efficiently without the drag on system resources created by the X Window System. Or there may be times when a system administrator needs to operate the system at a lower runlevel to perform diagnostic tasks, like fixing disk corruption in runlevel 1. The characteristics of a given runlevel determine which services are halted and started by init . For instance, runlevel 1 (single user mode) halts any network services, while runlevel 3 starts these services. By assigning specific services to be halted or started on a given runlevel, init can quickly change the mode of the machine without the user manually stopping and starting services. The following runlevels are defined by default under Red Hat Enterprise Linux: 0 - Halt 1 - Single-user text mode 2 - Not used (user-definable) 3 - Full multi-user text mode 4 - Not used (user-definable) 5 - Full multi-user graphical mode (with an X-based login screen) 6 - Reboot In general, users operate Red Hat Enterprise Linux at runlevel 3 or runlevel 5, both full multi-user modes. Users sometimes customize runlevels 2 and 4 to meet specific needs, since they are not used. The default runlevel for the system is listed in /etc/inittab . To find out the default runlevel for a system, look for a line similar to the following near the bottom of /etc/inittab : The default runlevel listed in this example is five, as the number after the first colon indicates. To change it, edit /etc/inittab as root. Warning Be very careful when editing /etc/inittab . Simple typos can cause the system to become unbootable. If this happens, either use a boot CD or DVD, enter single-user mode, or enter rescue mode to boot the computer and repair the file. For more information on single-user and rescue mode, refer to Chapter 36, Basic System Recovery . It is possible to change the default runlevel at boot time by modifying the arguments passed by the boot loader to the kernel. For information on changing the runlevel at boot time, refer to Section E.9, "Changing Runlevels at Boot Time" . | [
"init.d/ rc0.d/ rc1.d/ rc2.d/ rc3.d/ rc4.d/ rc5.d/ rc6.d/",
"id:5:initdefault:"
]
| https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/installation_guide/s1-boot-init-shutdown-sysv |
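A short sketch of inspecting and switching runlevels on a running Red Hat Enterprise Linux 6 system; the target runlevel (3) is only an example.
# Show the previous and current runlevel
runlevel
# Confirm the default runlevel configured in /etc/inittab
grep ':initdefault:' /etc/inittab
# Switch the running system to full multi-user text mode
telinit 3
# List the services that start in runlevel 3
chkconfig --list | grep '3:on'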
Chapter 16. Registering the Hypervisor and Virtual Machine | Chapter 16. Registering the Hypervisor and Virtual Machine Red Hat Enterprise Linux 6 and 7 require that every guest virtual machine is mapped to a specific hypervisor in order to ensure that every guest is allocated the same level of subscription service. To do this, you need to install a subscription agent that automatically detects all guest Virtual Machines (VMs) on each installed and registered KVM hypervisor, which in turn will create a mapping file that sits on the host. This mapping file ensures that all guest VMs receive the following benefits: Subscriptions specific to virtual systems are readily available and can be applied to all of the associated guest VMs. All subscription benefits that can be inherited from the hypervisor are readily available and can be applied to all of the associated guest VMs. Note The information provided in this chapter is specific to Red Hat Enterprise Linux subscriptions only. If you also have a Red Hat Virtualization subscription, or a Red Hat Satellite subscription, you should also consult the virt-who information provided with those subscriptions. More information on Red Hat Subscription Management can also be found in the Red Hat Subscription Management Guide found on the customer portal. 16.1. Installing virt-who on the Host Physical Machine Register the KVM hypervisor Register the KVM Hypervisor by running the subscription-manager register [options] command in a terminal as the root user on the host physical machine. More options are available using the # subscription-manager register --help command. In cases where you are using a user name and password, use the credentials that are known to the subscription manager. If this is your very first time subscribing and you do not have a user account, contact customer support. For example, to register the VM as 'admin' with 'secret' as a password, you would send the following command: Install the virt-who packages Install the virt-who packages by running the following command in a terminal as root on the host physical machine: Create a virt-who configuration file Add a configuration file in the /etc/virt-who.d/ directory. It does not matter what the name of the file is, but you should give it a name that makes sense, and the file must be located in the /etc/virt-who.d/ directory. Inside that file, add the following snippet and remember to save the file before closing it. Start the virt-who service Start the virt-who service by running the following command in a terminal as root on the host physical machine: Confirm virt-who service is receiving guest information At this point, the virt-who service will start collecting a list of domains from the host. Check the /var/log/rhsm/rhsm.log file on the host physical machine to confirm that the file contains a list of the guest VMs. For example: Procedure 16.1. Managing the subscription on the customer portal Subscribing the hypervisor As the virtual machines will be receiving the same subscription benefits as the hypervisor, it is important that the hypervisor has a valid subscription and that the subscription is available for the VMs to use. Log in to the customer portal Log in to the Red Hat customer portal https://access.redhat.com/ and click the Subscriptions button at the top of the page. Click the Systems link In the Subscriber Inventory section (towards the bottom of the page), click the Systems link. Select the hypervisor On the Systems page, there is a table of all subscribed systems. 
Click on the name of the hypervisor (localhost.localdomain for example). In the details page that opens, click Attach a subscription and select all the subscriptions listed. Click Attach Selected . This will attach the host's physical subscription to the hypervisor so that the guests can benefit from the subscription. Subscribing the guest virtual machines - first time use This step is for those who have a new subscription and have never subscribed a guest virtual machine before. If you are adding virtual machines, skip this step. To consume the subscription assigned to the hypervisor profile on the machine running the virt-who service, auto-subscribe by running the following command in a terminal on the guest virtual machine as root. Subscribing additional guest virtual machines If you just subscribed for the first time, skip this step. If you are adding additional virtual machines, it should be noted that running this command will not necessarily re-attach the same subscriptions to the guest virtual machine. This is because removing all subscriptions and then allowing auto-attach to resolve what is necessary for a given guest virtual machine may result in different subscriptions being consumed than before. This may not have any effect on your system, but it is something you should be aware of. If you used a manual attachment procedure to attach the virtual machine, which is not described below, you will need to re-attach those virtual machines manually as the auto-attach will not work. Use the following command as root in a terminal to first remove the subscriptions for the old guests and then use the auto-attach to attach subscriptions to all the guests. Run these commands on the guest virtual machine. Confirm subscriptions are attached Confirm that the subscription is attached to the hypervisor by running the following command as root in a terminal on the guest virtual machine: Output similar to the following will be displayed. Pay attention to the Subscription Details. It should say 'Subscription is current'. The ID for the subscription to attach to the system is displayed here. You will need this ID if you need to attach the subscription manually. Indicates if your subscription is current. If your subscription is not current, an error message appears. One example is Guest has not been reported on any host and is using a temporary unmapped guest subscription. In this case, the guest needs to be subscribed. In other cases, use the information as indicated in Section 16.5.2, "I have subscription status errors, what do I do?" . Register additional guests When you install new guest VMs on the hypervisor, you must register the new VM and use the subscription attached to the hypervisor by running the following commands in a terminal as root on the guest virtual machine: | [
"subscription-manager register --username= admin --password= secret --auto-attach --type=hypervisor",
"yum install virt-who",
"[libvirt] type=libvirt",
"service virt-who start chkconfig virt-who on",
"2015-05-28 12:33:31,424 DEBUG: Libvirt domains found: [{'guestId': '58d59128-cfbb-4f2c-93de-230307db2ce0', 'attributes': {'active': 0, 'virtWhoType': 'libvirt', 'hypervisorType': 'QEMU'}, 'state': 5}]",
"subscription-manager attach --auto",
"subscription-manager remove --all subscription-manager attach --auto",
"subscription-manager list --consumed",
"subscription-manager list --consumed +-------------------------------------------+ Consumed Subscriptions +-------------------------------------------+ Subscription Name: Awesome OS with unlimited virtual guests Provides: Awesome OS Server Bits SKU: awesomeos-virt-unlimited Contract: 0 Account: ######### Your account number ##### Serial: ######### Your serial number ###### Pool ID: XYZ123 Provides Management: No Active: True Quantity Used: 1 Service Level: Service Type: Status Details: Subscription is current Subscription Type: Starts: 01/01/2015 Ends: 12/31/2015 System Type: Virtual",
"subscription-manager register subscription-manager attach --auto subscription-manager list --consumed"
]
| https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/virtualization_host_configuration_and_guest_installation_guide/virt-machine-registration |
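A hedged verification sketch for the virt-who setup above; the configuration file name is an arbitrary example (any file in /etc/virt-who.d/ is read), and the log path matches the one referenced in the procedure.
# Review the configuration snippet dropped into /etc/virt-who.d/
cat /etc/virt-who.d/local-libvirt.conf
# Confirm the service is running and enabled at boot
service virt-who status
chkconfig --list virt-who
# Watch for the guest list being reported to Subscription Manager
grep 'Libvirt domains found' /var/log/rhsm/rhsm.log | tail -n 5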
21.5. Miscellaneous Parameters | 21.5. Miscellaneous Parameters The following parameters can be defined in a parameter file but do not work in a CMS configuration file. rd.live.check Turns on testing of an ISO-based installation source; for example, when booted from an FCP-attached DVD or using inst.repo= with an ISO on local hard disk or mounted with NFS. nompath Disables support for multipath devices. proxy=[ protocol ://][ username [: password ]@] host [: port ] Specify a proxy to use with installation over HTTP, HTTPS, or FTP. inst.rescue Boot into a rescue system running from a RAM disk that can be used to fix and restore an installed system. inst.stage2= URL Specifies a path to an install.img file instead of to an installation source. Otherwise, follows the same syntax as inst.repo= . If inst.stage2 is specified, it typically takes precedence over other methods of finding install.img . However, if Anaconda finds install.img on local media, the inst.stage2 URL will be ignored. If inst.stage2 is not specified and install.img cannot be found locally, Anaconda looks to the location given by inst.repo= or method= . If only inst.stage2= is given without inst.repo= or method= , Anaconda uses whatever repos the installed system would have enabled by default for installation. Use the option multiple times to specify multiple HTTP, HTTPS or FTP sources. The HTTP, HTTPS or FTP paths are then tried sequentially until one succeeds: inst.syslog= IP/hostname [: port ] Sends log messages to a remote syslog server. The boot parameters described here are the most useful for installations and troubleshooting on IBM Z, but only a subset of those that influence the installation program. See Chapter 23, Boot Options for a more complete list of available boot parameters. | [
"inst.stage2=host1/install.img inst.stage2=host2/install.img inst.stage3=host3/install.img"
]
| https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/installation_guide/sect-parameter-configuration-files-other-s390 |
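For illustration, the options above might be combined in a parameter file along these lines; every host name, URL, and credential is a placeholder assumption, and the exact set of entries depends on the chosen installation method.
inst.repo=http://install.example.com/rhel7/
inst.stage2=http://install.example.com/rhel7/images/install.img
proxy=http://user:[email protected]:3128
inst.syslog=loghost.example.com:514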
1.3. Certificates and Authentication | 1.3. Certificates and Authentication 1.3.1. A Certificate Identifies Someone or Something A certificate is an electronic document used to identify an individual, a server, a company, or other entity and to associate that identity with a public key. Like a driver's license or passport, a certificate provides generally recognized proof of a person's identity. Public-key cryptography uses certificates to address the problem of impersonation. To get personal ID such as a driver's license, a person has to present some other form of identification which confirms that the person is who he claims to be. Certificates work much the same way. Certificate authorities (CAs) validate identities and issue certificates. CAs can be either independent third parties or organizations running their own certificate-issuing server software, such as Certificate System. The methods used to validate an identity vary depending on the policies of a given CA for the type of certificate being requested. Before issuing a certificate, a CA must confirm the user's identity with its standard verification procedures. The certificate issued by the CA binds a particular public key to the name of the entity the certificate identifies, such as the name of an employee or a server. Certificates help prevent the use of fake public keys for impersonation. Only the public key certified by the certificate will work with the corresponding private key possessed by the entity identified by the certificate. In addition to a public key, a certificate always includes the name of the entity it identifies, an expiration date, the name of the CA that issued the certificate, and a serial number. Most importantly, a certificate always includes the digital signature of the issuing CA. The CA's digital signature allows the certificate to serve as a valid credential for users who know and trust the CA but do not know the entity identified by the certificate. For more information about the role of CAs, see Section 1.3.6, "How CA Certificates Establish Trust" . 1.3.2. Authentication Confirms an Identity Authentication is the process of confirming an identity. For network interactions, authentication involves the identification of one party by another party. There are many ways to use authentication over networks. Certificates are one of those ways. Network interactions typically take place between a client, such as a web browser, and a server. Client authentication refers to the identification of a client (the person assumed to be using the software) by a server. Server authentication refers to the identification of a server (the organization assumed to be running the server at the network address) by a client. Client and server authentication are not the only forms of authentication that certificates support. For example, the digital signature on an email message, combined with the certificate that identifies the sender, can authenticate the sender of the message. Similarly, a digital signature on an HTML form, combined with a certificate that identifies the signer, can provide evidence that the person identified by that certificate agreed to the contents of the form. In addition to authentication, the digital signature in both cases ensures a degree of nonrepudiation; a digital signature makes it difficult for the signer to claim later not to have sent the email or the form. Client authentication is an essential element of network security within most intranets or extranets. 
There are two main forms of client authentication: Password-based authentication Almost all server software permits client authentication by requiring a recognized name and password before granting access to the server. Certificate-based authentication Client authentication based on certificates is part of the SSL/TLS protocol. The client digitally signs a randomly generated piece of data and sends both the certificate and the signed data across the network. The server validates the signature and confirms the validity of the certificate. 1.3.2.1. Password-Based Authentication Figure 1.4, "Using a Password to Authenticate a Client to a Server" shows the process of authenticating a user using a user name and password. This example assumes the following: The user has already trusted the server, either without authentication or on the basis of server authentication over SSL/TLS. The user has requested a resource controlled by the server. The server requires client authentication before permitting access to the requested resource. Figure 1.4. Using a Password to Authenticate a Client to a Server These are the steps in this authentication process: When the server requests authentication from the client, the client displays a dialog box requesting the user name and password for that server. The client sends the name and password across the network, either in plain text or over an encrypted SSL/TLS connection. The server looks up the name and password in its local password database and, if they match, accepts them as evidence authenticating the user's identity. The server determines whether the identified user is permitted to access the requested resource and, if so, allows the client to access it. With this arrangement, the user must supply a new password for each server accessed, and the administrator must keep track of the name and password for each user. 1.3.2.2. Certificate-Based Authentication One of the advantages of certificate-based authentication is that it can be used to replace the first three steps in authentication with a mechanism that allows the user to supply one password, which is not sent across the network, and allows the administrator to control user authentication centrally. This is called single sign-on . Figure 1.5, "Using a Certificate to Authenticate a Client to a Server" shows how client authentication works using certificates and SSL/TLS. To authenticate a user to a server, a client digitally signs a randomly generated piece of data and sends both the certificate and the signed data across the network. The server authenticates the user's identity based on the data in the certificate and signed data. Like Figure 1.4, "Using a Password to Authenticate a Client to a Server" , Figure 1.5, "Using a Certificate to Authenticate a Client to a Server" assumes that the user has already trusted the server and requested a resource and that the server has requested client authentication before granting access to the requested resource. Figure 1.5. Using a Certificate to Authenticate a Client to a Server Unlike the authentication process in Figure 1.4, "Using a Password to Authenticate a Client to a Server" , the authentication process in Figure 1.5, "Using a Certificate to Authenticate a Client to a Server" requires SSL/TLS. Figure 1.5, "Using a Certificate to Authenticate a Client to a Server" also assumes that the client has a valid certificate that can be used to identify the client to the server. 
Certificate-based authentication is preferred to password-based authentication because it is based on the user both possessing the private key and knowing the password. However, these two assumptions are true only if unauthorized personnel have not gained access to the user's machine or password, the password for the client software's private key database has been set, and the software is set up to request the password at reasonably frequent intervals. Note Neither password-based authentication nor certificate-based authentication address security issues related to physical access to individual machines or passwords. Public-key cryptography can only verify that a private key used to sign some data corresponds to the public key in a certificate. It is the user's responsibility to protect a machine's physical security and to keep the private-key password secret. These are the authentication steps shown in Figure 1.5, "Using a Certificate to Authenticate a Client to a Server" : The client software maintains a database of the private keys that correspond to the public keys published in any certificates issued for that client. The client asks for the password to this database the first time the client needs to access it during a given session, such as the first time the user attempts to access an SSL/TLS-enabled server that requires certificate-based client authentication. After entering this password once, the user does not need to enter it again for the rest of the session, even when accessing other SSL/TLS-enabled servers. The client unlocks the private-key database, retrieves the private key for the user's certificate, and uses that private key to sign data randomly-generated from input from both the client and the server. This data and the digital signature are evidence of the private key's validity. The digital signature can be created only with that private key and can be validated with the corresponding public key against the signed data, which is unique to the SSL/TLS session. The client sends both the user's certificate and the randomly-generated data across the network. The server uses the certificate and the signed data to authenticate the user's identity. The server may perform other authentication tasks, such as checking that the certificate presented by the client is stored in the user's entry in an LDAP directory. The server then evaluates whether the identified user is permitted to access the requested resource. This evaluation process can employ a variety of standard authorization mechanisms, potentially using additional information in an LDAP directory or company databases. If the result of the evaluation is positive, the server allows the client to access the requested resource. Certificates replace the authentication portion of the interaction between the client and the server. Instead of requiring a user to send passwords across the network continually, single sign-on requires the user to enter the private-key database password once, without sending it across the network. For the rest of the session, the client presents the user's certificate to authenticate the user to each new server it encounters. Existing authorization mechanisms based on the authenticated user identity are not affected. 1.3.3. Uses for Certificates The purpose of certificates is to establish trust. Their usage varies depending on the kind of trust they are used to ensure. 
Some kinds of certificates are used to verify the identity of the presenter; others are used to verify that an object or item has not been tampered with. 1.3.3.1. SSL/TLS The Transport Layer Security/Secure Sockets Layer (SSL/TLS) protocol governs server authentication, client authentication, and encrypted communication between servers and clients. SSL/TLS is widely used on the Internet, especially for interactions that involve exchanging confidential information such as credit card numbers. SSL/TLS requires an SSL/TLS server certificate. As part of the initial SSL/TLS handshake, the server presents its certificate to the client to authenticate the server's identity. The authentication uses public-key encryption and digital signatures to confirm that the server is the server it claims to be. Once the server has been authenticated, the client and server use symmetric-key encryption, which is very fast, to encrypt all the information exchanged for the remainder of the session and to detect any tampering. Servers may be configured to require client authentication as well as server authentication. In this case, after server authentication is successfully completed, the client must also present its certificate to the server to authenticate the client's identity before the encrypted SSL/TLS session can be established. For an overview of client authentication over SSL/TLS and how it differs from password-based authentication, see Section 1.3.2, "Authentication Confirms an Identity" . 1.3.3.2. Signed and Encrypted Email Some email programs support digitally signed and encrypted email using a widely accepted protocol known as Secure Multipurpose Internet Mail Extension (S/MIME). Using S/MIME to sign or encrypt email messages requires the sender of the message to have an S/MIME certificate. An email message that includes a digital signature provides some assurance that it was sent by the person whose name appears in the message header, thus authenticating the sender. If the digital signature cannot be validated by the email software, the user is alerted. The digital signature is unique to the message it accompanies. If the message received differs in any way from the message that was sent, even by adding or deleting a single character, the digital signature cannot be validated. Therefore, signed email also provides assurance that the email has not been tampered with. This kind of assurance is known as nonrepudiation, which makes it difficult for the sender to deny having sent the message. This is important for business communication. For information about the way digital signatures work, see Section 1.2, "Digital Signatures" . S/MIME also makes it possible to encrypt email messages, which is important for some business users. However, using encryption for email requires careful planning. If the recipient of encrypted email messages loses the private key and does not have access to a backup copy of the key, the encrypted messages can never be decrypted. 1.3.3.3. Single Sign-on Network users are frequently required to remember multiple passwords for the various services they use. For example, a user might have to type a different password to log into the network, collect email, use directory services, use the corporate calendar program, and access various servers. Multiple passwords are an ongoing headache for both users and system administrators. Users have difficulty keeping track of different passwords, tend to choose poor ones, and tend to write them down in obvious places. 
Administrators must keep track of a separate password database on each server and deal with potential security problems related to the fact that passwords are sent over the network routinely and frequently. Solving this problem requires some way for a user to log in once, using a single password, and get authenticated access to all network resources that user is authorized to use, without sending any passwords over the network. This capability is known as single sign-on. Both client SSL/TLS certificates and S/MIME certificates can play a significant role in a comprehensive single sign-on solution. For example, one form of single sign-on supported by Red Hat products relies on SSL/TLS client authentication. A user can log in once, using a single password to the local client's private-key database, and get authenticated access to all SSL/TLS-enabled servers that user is authorized to use, without sending any passwords over the network. This approach simplifies access for users, because they do not need to enter passwords for each new server. It also simplifies network management, since administrators can control access by controlling lists of certificate authorities (CAs) rather than much longer lists of users and passwords. In addition to using certificates, a complete single sign-on solution must address the need to interoperate with enterprise systems, such as the underlying operating system, that rely on passwords or other forms of authentication. 1.3.3.4. Object Signing Many software technologies support a set of tools called object signing . Object signing uses standard techniques of public-key cryptography to let users get reliable information about code they download in much the same way they can get reliable information about shrink-wrapped software. Most important, object signing helps users and network administrators implement decisions about software distributed over intranets or the Internet; for example, whether to allow Java applets signed by a given entity to use specific computer capabilities on specific users' machines. The objects signed with object signing technology can be applets or other Java code, JavaScript scripts, plug-ins, or any kind of file. The signature is a digital signature. Signed objects and their signatures are typically stored in a special file called a JAR file. Software developers and others who wish to sign files using object-signing technology must first obtain an object-signing certificate. 1.3.4. Types of Certificates The Certificate System is capable of generating different types of certificates for different uses and in different formats. Planning which certificates are required and planning how to manage them, including determining what formats are needed and how to plan for renewal, are important to manage both the PKI and the Certificate System instances. This list is not exhaustive; there are certificate enrollment forms for dual-use certificates for LDAP directories, file-signing certificates, and other subsystem certificates. These forms are available through the Certificate Manager's end-entities page, at https://server.example.com:8443/ca/ee/ca . When the different Certificate System subsystems are installed, the basic required certificates and keys are generated; for example, configuring the Certificate Manager generates the CA signing certificate for the self-signed root CA and the internal OCSP signing, audit signing, SSL/TLS server, and agent user certificates. 
During the KRA configuration, the Certificate Manager generates the storage, transport, audit signing, and agent certificates. Additional certificates can be created and installed separately. Table 1.1. Common Certificates Certificate Type Use Example Client SSL/TLS certificates Used for client authentication to servers over SSL/TLS. Typically, the identity of the client is assumed to be the same as the identity of a person, such as an employee. See Section 1.3.2.2, "Certificate-Based Authentication" for a description of the way SSL/TLS client certificates are used for client authentication. Client SSL/TLS certificates can also be used as part of single sign-on. A bank gives a customer an SSL/TLS client certificate that allows the bank's servers to identify that customer and authorize access to the customer's accounts. A company gives a new employee an SSL/TLS client certificate that allows the company's servers to identify that employee and authorize access to the company's servers. Server SSL/TLS certificates Used for server authentication to clients over SSL/TLS. Server authentication may be used without client authentication. Server authentication is required for an encrypted SSL/TLS session. For more information, see Section 1.3.3.1, "SSL/TLS" . Internet sites that engage in electronic commerce usually support certificate-based server authentication to establish an encrypted SSL/TLS session and to assure customers that they are dealing with the web site identified with the company. The encrypted SSL/TLS session ensures that personal information sent over the network, such as credit card numbers, cannot easily be intercepted. S/MIME certificates Used for signed and encrypted email. As with SSL/TLS client certificates, the identity of the client is assumed to be the same as the identity of a person, such as an employee. A single certificate may be used as both an S/MIME certificate and an SSL/TLS certificate; see Section 1.3.3.2, "Signed and Encrypted Email" . S/MIME certificates can also be used as part of single sign-on. A company deploys combined S/MIME and SSL/TLS certificates solely to authenticate employee identities, thus permitting signed email and SSL/TLS client authentication but not encrypted email. Another company issues S/MIME certificates solely to sign and encrypt email that deals with sensitive financial or legal matters. CA certificates Used to identify CAs. Client and server software use CA certificates to determine what other certificates can be trusted. For more information, see Section 1.3.6, "How CA Certificates Establish Trust" . The CA certificates stored in Mozilla Firefox determine what other certificates can be authenticated. An administrator can implement corporate security policies by controlling the CA certificates stored in each user's copy of Firefox. Object-signing certificates Used to identify signers of Java code, JavaScript scripts, or other signed files. Software companies frequently sign software distributed over the Internet to provide users with some assurance that the software is a legitimate product of that company. Using certificates and digital signatures can also make it possible for users to identify and control the kind of access downloaded software has to their computers. 1.3.4.1. CA Signing Certificates Every Certificate Manager has a CA signing certificate with a public/private key pair it uses to sign the certificates and certificate revocation lists (CRLs) it issues. 
This certificate is created and installed when the Certificate Manager is installed. Note For more information about CRLs, see Section 2.4.4, "Revoking Certificates and Checking Status" . The Certificate Manager's status as a root or subordinate CA is determined by whether its CA signing certificate is self-signed or is signed by another CA. Self-signed root CAs set the policies they use to issue certificates, such as the subject names, types of certificates that can be issued, and to whom certificates can be issued. A subordinate CA has a CA signing certificate signed by another CA, usually the one that is a level above in the CA hierarchy (which may or may not be a root CA). If the Certificate Manager is a subordinate CA in a CA hierarchy, the root CA's signing certificate must be imported into individual clients and servers before the Certificate Manager can be used to issue certificates to them. The CA certificate must be installed in a client if a server or user certificate issued by that CA is installed on that client. The CA certificate confirms that the server certificate can be trusted. Ideally, the certificate chain is installed. 1.3.4.2. Other Signing Certificates Other services, such as the Online Certificate Status Protocol (OCSP) responder service and CRL publishing, can use signing certificates other than the CA certificate. For example, a separate CRL signing certificate can be used to sign the revocation lists that are published by a CA instead of using the CA signing certificate. Note For more information about OCSP, see Section 2.4.4, "Revoking Certificates and Checking Status" . 1.3.4.3. SSL/TLS Server and Client Certificates Server certificates are used for secure communications, such as SSL/TLS, and other secure functions. Server certificates are used to authenticate themselves during operations and to encrypt data; client certificates authenticate the client to the server. Note CAs which have a signing certificate issued by a third-party may not be able to issue server certificates. The third-party CA may have rules in place which prohibit its subordinates from issuing server certificates. 1.3.4.4. User Certificates End user certificates are a subset of client certificates that are used to identify users to a server or system. Users can be assigned certificates to use for secure communications, such as SSL/TLS, and other functions such as encrypting email or for single sign-on. Special users, such as Certificate System agents, can be given client certificates to access special services. 1.3.4.5. Dual-Key Pairs Dual-key pairs are a set of two private and public keys, where one set is used for signing and one for encryption. These dual keys are used to create dual certificates. The dual certificate enrollment form is one of the standard forms listed in the end-entities page of the Certificate Manager. When generating dual-key pairs, set the certificate profiles to work correctly when generating separate certificates for signing and encryption. 1.3.4.6. Cross-Pair Certificates The Certificate System can issue, import, and publish cross-pair CA certificates. With cross-pair certificates, one CA signs and issues a cross-pair certificate to a second CA, and the second CA signs and issues a cross-pair certificate to the first CA. Both CAs then store or publish both certificates as a crossCertificatePair entry. Bridging certificates can be done to honor certificates issued by a CA that is not chained to the root CA. 
By establishing a trust between the Certificate System CA and another CA through a cross-pair CA certificate, the cross-pair certificate can be downloaded and used to trust the certificates issued by the other CA. 1.3.5. Contents of a Certificate The contents of certificates are organized according to the X.509 v3 certificate specification, which has been recommended by the International Telecommunications Union (ITU), an international standards body. Users do not usually need to be concerned about the exact contents of a certificate. However, system administrators working with certificates may need some familiarity with the information contained in them. 1.3.5.1. Certificate Data Formats Certificate requests and certificates can be created, stored, and installed in several different formats. All of these formats conform to X.509 standards. 1.3.5.1.1. Binary The following binary formats are recognized: DER-encoded certificate . This is a single binary DER-encoded certificate. PKCS #7 certificate chain . This is a PKCS #7 SignedData object. The only significant field in the SignedData object is the certificates; the signature and the contents, for example, are ignored. The PKCS #7 format allows multiple certificates to be downloaded at a single time. Netscape Certificate Sequence . This is a simpler format for downloading certificate chains in a PKCS #7 ContentInfo structure, wrapping a sequence of certificates. The value of the contentType field should be netscape-cert-sequence , while the content field has the following structure: This format allows multiple certificates to be downloaded at the same time. 1.3.5.1.2. Text Any of the binary formats can be imported in text form. The text form begins with the following line: Following this line is the certificate data, which can be in any of the binary formats described. This data should be base-64 encoded, as described by RFC 1113. The certificate information is followed by this line: 1.3.5.2. Distinguished Names An X.509 v3 certificate binds a distinguished name (DN) to a public key. A DN is a series of name-value pairs, such as uid=doe , that uniquely identify an entity. This is also called the certificate subject name . This is an example DN of an employee for Example Corp.: In this DN, uid is the user name, cn is the user's common name, o is the organization or company name, and c is the country. DNs may include a variety of other name-value pairs. They are used to identify both certificate subjects and entries in directories that support the Lightweight Directory Access Protocol (LDAP). The rules governing the construction of DNs can be complex; for comprehensive information about DNs, see A String Representation of Distinguished Names at http://www.ietf.org/rfc/rfc4514.txt . 1.3.5.3. A Typical Certificate Every X.509 certificate consists of two sections: The data section This section includes the following information: The version number of the X.509 standard supported by the certificate. The certificate's serial number. Every certificate issued by a CA has a serial number that is unique among the certificates issued by that CA. Information about the user's public key, including the algorithm used and a representation of the key itself. The DN of the CA that issued the certificate. The period during which the certificate is valid; for example, between 1:00 p.m. on November 15, 2004, and 1:00 p.m. November 15, 2022. 
The DN of the certificate subject, which is also called the subject name; for example, in an SSL/TLS client certificate, this is the user's DN. Optional certificate extensions , which may provide additional data used by the client or server. For example: the Netscape Certificate Type extension indicates the type of certificate, such as an SSL/TLS client certificate, an SSL/TLS server certificate, or a certificate for signing email; the Subject Alternative Name (SAN) extension links a certificate to one or more host names. Certificate extensions can also be used for other purposes. The signature section This section includes the following information: The cryptographic algorithm, or cipher, used by the issuing CA to create its own digital signature. The CA's digital signature, obtained by hashing all of the data in the certificate together and encrypting it with the CA's private key. Here are the data and signature sections of a certificate shown in the readable pretty-print format: Here is the same certificate in the base-64 encoded format: 1.3.6. How CA Certificates Establish Trust CAs validate identities and issue certificates. They can be either independent third parties or organizations running their own certificate-issuing server software, such as the Certificate System. Any client or server software that supports certificates maintains a collection of trusted CA certificates. These CA certificates determine which issuers of certificates the software can trust, or validate. In the simplest case, the software can validate only certificates issued by one of the CAs for which it has a certificate. It is also possible for a trusted CA certificate to be part of a chain of CA certificates, each issued by the CA above it in a certificate hierarchy. The sections that follow explain how certificate hierarchies and certificate chains determine what certificates software can trust. 1.3.6.1. CA Hierarchies In large organizations, responsibility for issuing certificates can be delegated to several different CAs. For example, the number of certificates required may be too large for a single CA to maintain; different organizational units may have different policy requirements; or a CA may need to be physically located in the same geographic area as the people to whom it is issuing certificates. These certificate-issuing responsibilities can be divided among subordinate CAs. The X.509 standard includes a model for setting up a hierarchy of CAs, shown in Figure 1.6, "Example of a Hierarchy of Certificate Authorities" . Figure 1.6. Example of a Hierarchy of Certificate Authorities The root CA is at the top of the hierarchy. The root CA's certificate is a self-signed certificate ; that is, the certificate is digitally signed by the same entity that the certificate identifies. The CAs that are directly subordinate to the root CA have CA certificates signed by the root CA. CAs under the subordinate CAs in the hierarchy have their CA certificates signed by the higher-level subordinate CAs. Organizations have a great deal of flexibility in how CA hierarchies are set up; Figure 1.6, "Example of a Hierarchy of Certificate Authorities" shows just one example. 1.3.6.2. Certificate Chains CA hierarchies are reflected in certificate chains. A certificate chain is a series of certificates issued by successive CAs. 
Figure 1.7, "Example of a Certificate Chain" shows a certificate chain leading from a certificate that identifies an entity through two subordinate CA certificates to the CA certificate for the root CA, based on the CA hierarchy shown in Figure 1.6, "Example of a Hierarchy of Certificate Authorities" . Figure 1.7. Example of a Certificate Chain A certificate chain traces a path of certificates from a branch in the hierarchy to the root of the hierarchy. In a certificate chain, the following occur: Each certificate is followed by the certificate of its issuer. Each certificate contains the name (DN) of that certificate's issuer, which is the same as the subject name of the certificate in the chain. In Figure 1.7, "Example of a Certificate Chain" , the Engineering CA certificate contains the DN of the CA, USA CA , that issued that certificate. USA CA 's DN is also the subject name of the certificate in the chain. Each certificate is signed with the private key of its issuer. The signature can be verified with the public key in the issuer's certificate, which is the certificate in the chain. In Figure 1.7, "Example of a Certificate Chain" , the public key in the certificate for the USA CA can be used to verify the USA CA 's digital signature on the certificate for the Engineering CA . 1.3.6.3. Verifying a Certificate Chain Certificate chain verification makes sure a given certificate chain is well-formed, valid, properly signed, and trustworthy. The following description of the process covers the most important steps of forming and verifying a certificate chain, starting with the certificate being presented for authentication: The certificate validity period is checked against the current time provided by the verifier's system clock. The issuer's certificate is located. The source can be either the verifier's local certificate database on that client or server or the certificate chain provided by the subject, as with an SSL/TLS connection. The certificate signature is verified using the public key in the issuer's certificate. The host name of the service is compared against the Subject Alternative Name (SAN) extension. If the certificate has no such extension, the host name is compared against the subject's CN. The system verifies the Basic Constraint requirements for the certificate, that is, whether the certificate is a CA and how many subsidiaries it is allowed to sign. If the issuer's certificate is trusted by the verifier in the verifier's certificate database, verification stops successfully here. Otherwise, the issuer's certificate is checked to make sure it contains the appropriate subordinate CA indication in the certificate type extension, and chain verification starts over with this new certificate. Figure 1.8, "Verifying a Certificate Chain to the Root CA" presents an example of this process. Figure 1.8. Verifying a Certificate Chain to the Root CA Figure 1.8, "Verifying a Certificate Chain to the Root CA" illustrates what happens when only the root CA is included in the verifier's local database. If a certificate for one of the intermediate CAs, such as Engineering CA , is found in the verifier's local database, verification stops with that certificate, as shown in Figure 1.9, "Verifying a Certificate Chain to an Intermediate CA" . Figure 1.9. Verifying a Certificate Chain to an Intermediate CA Expired validity dates, an invalid signature, or the absence of a certificate for the issuing CA at any point in the certificate chain causes authentication to fail. 
Figure 1.10, "A Certificate Chain That Cannot Be Verified" shows how verification fails if neither the root CA certificate nor any of the intermediate CA certificates are included in the verifier's local database. Figure 1.10. A Certificate Chain That Cannot Be Verified 1.3.7. Certificate Status For more information on Certificate Revocation List (CRL), see Section 2.4.4.2.1, "CRLs" For more information on Online Certificate Status Protocol (OCSP), see Section 2.4.4.2.2, "OCSP Services" | [
"CertificateSequence ::= SEQUENCE OF Certificate",
"-----BEGIN CERTIFICATE-----",
"-----END CERTIFICATE-----",
"uid=doe, cn=John Doe,o=Example Corp.,c=US",
"Certificate: Data: Version: v3 (0x2) Serial Number: 3 (0x3) Signature Algorithm: PKCS #1 MD5 With RSA Encryption Issuer: OU=Example Certificate Authority, O=Example Corp, C=US Validity: Not Before: Fri Oct 17 18:36:25 1997 Not After: Sun Oct 17 18:36:25 1999 Subject: CN=Jane Doe, OU=Finance, O=Example Corp, C=US Subject Public Key Info: Algorithm: PKCS #1 RSA Encryption Public Key: Modulus: 00:ca:fa:79:98:8f:19:f8:d7:de:e4:49:80:48:e6:2a:2a:86: ed:27:40:4d:86:b3:05:c0:01:bb:50:15:c9:de:dc:85:19:22: 43:7d:45:6d:71:4e:17:3d:f0:36:4b:5b:7f:a8:51:a3:a1:00: 98:ce:7f:47:50:2c:93:36:7c:01:6e:cb:89:06:41:72:b5:e9: 73:49:38:76:ef:b6:8f:ac:49:bb:63:0f:9b:ff:16:2a:e3:0e: 9d:3b:af:ce:9a:3e:48:65:de:96:61:d5:0a:11:2a:a2:80:b0: 7d:d8:99:cb:0c:99:34:c9:ab:25:06:a8:31:ad:8c:4b:aa:54: 91:f4:15 Public Exponent: 65537 (0x10001) Extensions: Identifier: Certificate Type Critical: no Certified Usage: TLS Client Identifier: Authority Key Identifier Critical: no Key Identifier: f2:f2:06:59:90:18:47:51:f5:89:33:5a:31:7a:e6:5c:fb:36: 26:c9 Signature: Algorithm: PKCS #1 MD5 With RSA Encryption Signature: 6d:23:af:f3:d3:b6:7a:df:90:df:cd:7e:18:6c:01:69:8e:54:65:fc:06: 30:43:34:d1:63:1f:06:7d:c3:40:a8:2a:82:c1:a4:83:2a:fb:2e:8f:fb: f0:6d:ff:75:a3:78:f7:52:47:46:62:97:1d:d9:c6:11:0a:02:a2:e0:cc: 2a:75:6c:8b:b6:9b:87:00:7d:7c:84:76:79:ba:f8:b4:d2:62:58:c3:c5: b6:c1:43:ac:63:44:42:fd:af:c8:0f:2f:38:85:6d:d6:59:e8:41:42:a5: 4a:e5:26:38:ff:32:78:a1:38:f1:ed:dc:0d:31:d1:b0:6d:67:e9:46:a8: d:c4",
"-----BEGIN CERTIFICATE----- MIICKzCCAZSgAwIBAgIBAzANBgkqhkiG9w0BAQQFADA3MQswCQYDVQQGEwJVUzER MA8GA1UEChMITmV0c2NhcGUxFTATBgNVBAsTDFN1cHJpeWEncyBDQTAeFw05NzEw MTgwMTM2MjVaFw05OTEwMTgwMTM2MjVaMEgxCzAJBgNVBAYTAlVTMREwDwYDVQQK EwhOZXRzY2FwZTENMAsGA1UECxMEUHViczEXMBUGA1UEAxMOU3Vwcml5YSBTaGV0 dHkwgZ8wDQYJKoZIhvcNAQEFBQADgY0AMIGJAoGBAMr6eZiPGfjX3uRJgEjmKiqG 7SdATYazBcABu1AVyd7chRkiQ31FbXFOGD3wNktbf6hRo6EAmM5/R1AskzZ8AW7L iQZBcrXpc0k4du+2Q6xJu2MPm/8WKuMOnTuvzpo+SGXelmHVChEqooCwfdiZywyZ NMmrJgaoMa2MS6pUkfQVAgMBAAGjNjA0MBEGCWCGSAGG+EIBAQQEAwIAgDAfBgNV HSMEGDAWgBTy8gZZkBhHUfWJM1oxeuZc+zYmyTANBgkqhkiG9w0BAQQFAAOBgQBt I6/z07Z635DfzX4XbAFpjlRl/AYwQzTSYx8GfcNAqCqCwaSDKvsuj/vwbf91o3j3 UkdGYpcd2cYRCgKi4MwqdWyLtpuHAH18hHZ5uvi00mJYw8W2wUOsY0RC/a/IDy84 hW3WWehBUqVK5SY4/zJ4oTjx7dwNMdGwbWfpRqjd1A== -----END CERTIFICATE-----"
]
| https://docs.redhat.com/en/documentation/red_hat_certificate_system/10/html/planning_installation_and_deployment_guide/introduction_to_public_key_cryptography-certificates_and_authentication |
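To connect the concepts above to something runnable, here is a hedged OpenSSL sketch for inspecting a certificate's data and signature sections and verifying a chain; the file names are placeholders for a user certificate and a bundle containing the root (and any intermediate) CA certificates.
# Print the certificate in a readable pretty-print form (data and signature sections)
openssl x509 -in user-cert.pem -noout -text
# Show just the subject DN, issuer DN, and validity period
openssl x509 -in user-cert.pem -noout -subject -issuer -dates
# Verify the certificate chain against the trusted CA bundle
openssl verify -CAfile ca-chain.pem user-cert.pem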
1.8. Driver Connection URL Format | 1.8. Driver Connection URL Format URLs used when establishing a connection using the driver class have the following format: Given this format, the following table describes the variable parts of the URL: Table 1.1. URL Entities Variable Name Description VDB-NAME The name of the virtual database (VDB) to which the application is connected. Important VDB names can contain version information; for example, myvdb.2 . If such a name is used in the URL, this has the same effect as supplying a version=2 connection property. Note that if the VDB name contains version information, you cannot also use the version property in the same request. mm[s] The JBoss Data Virtualization JDBC protocol. mm is the default for normal connections. mms uses SSL for encryption and is the default for the AdminAPI tools. HOSTNAME The server where JBoss Data Virtualization is installed. PORT The port on which JBoss Data Virtualization is listening for incoming JDBC connections. [prop-name=prop-value] Any number of additional name-value pairs can be supplied in the URL, separated by semi-colons. Property values must be URL encoded if they contain reserved characters, for example, ? , = , and ; . | [
"jdbc:teiid: VDB-NAME @ mm[s] :// HOSTNAME : PORT ; [prop-name=prop-value;] *"
]
| https://docs.redhat.com/en/documentation/red_hat_jboss_data_virtualization/6.4/html/development_guide_volume_1_client_development/Driver_Connection_URL_Format1 |
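Two illustrative URLs that follow the format above; the VDB name, host, and ports are placeholders (31000 and 31443 are shown as typical plain and SSL JDBC ports, to be confirmed against the server configuration), and prop is a stand-in for any additional connection property whose value needs URL encoding.
# Plain connection to version 2 of a VDB named Portfolio
jdbc:teiid:[email protected]:31000;version=2
# SSL-encrypted connection with a property value that URL-encodes the reserved character '='
jdbc:teiid:[email protected]:31443;prop=a%3Db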
Chapter 10. Removing Windows nodes | Chapter 10. Removing Windows nodes You can remove a Windows node by deleting its host Windows machine. 10.1. Deleting a specific machine You can delete a specific machine. Important Do not delete a control plane machine unless your cluster uses a control plane machine set. Prerequisites Install an OpenShift Container Platform cluster. Install the OpenShift CLI ( oc ). Log in to oc as a user with cluster-admin permission. Procedure View the machines that are in the cluster by running the following command: $ oc get machine -n openshift-machine-api The command output contains a list of machines in the <clusterid>-<role>-<cloud_region> format. Identify the machine that you want to delete. Delete the machine by running the following command: $ oc delete machine <machine> -n openshift-machine-api Important By default, the machine controller tries to drain the node that is backed by the machine until it succeeds. In some situations, such as with a misconfigured pod disruption budget, the drain operation might not be able to succeed. If the drain operation fails, the machine controller cannot proceed removing the machine. You can skip draining the node by annotating machine.openshift.io/exclude-node-draining in a specific machine. If the machine that you delete belongs to a machine set, a new machine is immediately created to satisfy the specified number of replicas. | [
"oc get machine -n openshift-machine-api",
"oc delete machine <machine> -n openshift-machine-api"
]
| https://docs.redhat.com/en/documentation/openshift_container_platform/4.16/html/windows_container_support_for_openshift/removing-windows-nodes |
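A hedged walk-through of the deletion flow; the machine name is a placeholder, and the annotation is applied with an empty value on the assumption that only the presence of the key is checked, which is worth verifying for your cluster version.
# List machines and identify the Windows machine to remove
oc get machine -n openshift-machine-api
# Optional: skip the node drain if it cannot complete (for example, a misconfigured pod disruption budget)
oc annotate machine <machine> machine.openshift.io/exclude-node-draining="" -n openshift-machine-api
# Delete the machine; a replacement is created automatically if it belongs to a machine set
oc delete machine <machine> -n openshift-machine-api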
Chapter 3. Installing and configuring automation controller on Red Hat OpenShift Container Platform web console | Chapter 3. Installing and configuring automation controller on Red Hat OpenShift Container Platform web console You can use these instructions to install the automation controller operator on Red Hat OpenShift Container Platform, specify custom resources, and deploy Ansible Automation Platform with an external database. Automation controller configuration can be done through the automation controller extra_settings or directly in the user interface after deployment. However, it is important to note that configurations made in extra_settings take precedence over settings made in the user interface. Note When an instance of automation controller is removed, the associated PVCs are not automatically deleted. This can cause issues during migration if the new deployment has the same name as the previous one. Therefore, it is recommended that you manually remove old PVCs before deploying a new automation controller instance in the same namespace. See Finding and deleting PVCs for more information. 3.1. Prerequisites You have installed the Red Hat Ansible Automation Platform catalog in Operator Hub. For Controller, a default StorageClass must be configured on the cluster for the operator to dynamically create needed PVCs. This is not necessary if an external PostgreSQL database is configured. For Hub, a StorageClass that supports ReadWriteMany must be available on the cluster to dynamically create the PVCs needed for the content, redis, and api pods. If it is not the default StorageClass on the cluster, you can specify it when creating your AutomationHub object. 3.2. Installing the automation controller operator Use this procedure to install the automation controller operator. Procedure Navigate to Operators Installed Operators , then click on the Ansible Automation Platform operator. Locate the Automation controller tab, then click Create instance . You can proceed with configuring the instance using either the Form View or YAML view. 3.2.1. Creating your automation controller form-view Use this procedure to create your automation controller using the form-view. Procedure Ensure Form view is selected. It should be selected by default. Enter the name of the new controller. Optional: Add any labels necessary. Click Advanced configuration . Enter the Hostname of the instance. The hostname is optional. The default hostname will be generated based upon the deployment name you have selected. Enter the Admin account username . Enter the Admin email address . From the Admin password secret drop-down menu, select the secret. From the Database configuration secret drop-down menu, select the secret. From the Old Database configuration secret drop-down menu, select the secret. From the Secret key secret drop-down menu, select the secret. From the Broadcast Websocket Secret drop-down menu, select the secret. Enter any Service Account Annotations necessary. From the PostgreSQL Container Storage Requirements drop-down menu, select requests and enter "100Gi" in the storage field. Click Create . 3.2.2. Configuring your controller image pull policy Use this procedure to configure the image pull policy on your automation controller. Procedure Log in to Red Hat OpenShift Container Platform. Go to Operators Installed Operators . Select your Ansible Automation Platform Operator deployment. Select the Automation Controller tab. For new instances, click Create AutomationController . 
For existing instances, you can edit the YAML view by clicking the ... icon and then Edit AutomationController . Click advanced Configuration . Under Image Pull Policy , click on the radio button to select Always Never IfNotPresent To display the option under Image Pull Secrets , click the arrow. Click + beside Add Image Pull Secret and enter a value. To display fields under the Web container resource requirements drop-down list, click the arrow. Under Limits , and Requests , enter values for CPU cores , Memory , and Storage . To display fields under the Task container resource requirements drop-down list, click the arrow. Under Limits , and Requests , enter values for CPU cores , Memory , and Storage . To display fields under the EE Control Plane container resource requirements drop-down list, click the arrow. Under Limits , and Requests , enter values for CPU cores , Memory , and Storage . To display fields under the PostgreSQL init container resource requirements (when using a managed service) drop-down list, click the arrow. Under Limits , and Requests , enter values for CPU cores , Memory , and Storage . To display fields under the Redis container resource requirements drop-down list, click the arrow. Under Limits , and Requests , enter values for CPU cores , Memory , and Storage . To display fields under the PostgreSQL container resource requirements (when using a managed instance) * drop-down list, click the arrow. Under Limits , and Requests , enter values for CPU cores , Memory , and Storage . To display the PostgreSQL container storage requirements (when using a managed instance) drop-down list, click the arrow. Under Limits , and Requests , enter values for CPU cores , Memory , and Storage . Note Red Hat recommends using 100Gi for your storage requirements to prevent undersized databases in your production deployments. Under Replicas, enter the number of instance replicas. Under Remove used secrets on instance removal , select true or false . The default is false. Under Preload instance with data upon creation , select true or false . The default is true. 3.2.3. Configuring your controller LDAP security Use this procedure to configure LDAP security for your automation controller. Procedure If you do not have a ldap_cacert_secret , you can create one with the following command: USD oc create secret generic <resourcename>-custom-certs \ --from-file=ldap-ca.crt=<PATH/TO/YOUR/CA/PEM/FILE> \ 1 1 Modify this to point to where your CA cert is stored. This will create a secret that looks like this: USD oc get secret/mycerts -o yaml apiVersion: v1 data: ldap-ca.crt: <mysecret> 1 kind: Secret metadata: name: mycerts namespace: awx type: Opaque 1 Automation controller looks for the data field ldap-ca.crt in the specified secret when using the ldap_cacert_secret . Under LDAP Certificate Authority Trust Bundle click the drop-down menu and select your ldap_cacert_secret . Under LDAP Password Secret , click the drop-down menu and select a secret. Under EE Images Pull Credentials Secret , click the drop-down menu and select a secret. Under Bundle Cacert Secret , click the drop-down menu and select a secret. Under Service Type , click the drop-down menu and select ClusterIP LoadBalancer NodePort 3.2.4. Configuring your automation controller operator route options The Red Hat Ansible Automation Platform operator installation form allows you to further configure your automation controller operator route options under Advanced configuration . 
Procedure Log in to Red Hat OpenShift Container Platform. Navigate to Operators Installed Operators . Select your Ansible Automation Platform Operator deployment. Select the Automation Controller tab. For new instances, click Create AutomationController . For existing instances, you can edit the YAML view by clicking the ... icon and then Edit AutomationController . Click Advanced configuration . Under Ingress type , click the drop-down menu and select Route . Under Route DNS host , enter a common host name that the route answers to. Under Route TLS termination mechanism , click the drop-down menu and select Edge or Passthrough . For most instances Edge should be selected. Under Route TLS credential secret , click the drop-down menu and select a secret from the list. Under Enable persistence for /var/lib/projects directory select either true or false by moving the slider. 3.2.5. Configuring the Ingress type for your automation controller operator The Ansible Automation Platform Operator installation form allows you to further configure your automation controller operator ingress under Advanced configuration . Procedure Log in to Red Hat OpenShift Container Platform. Navigate to Operators Installed Operators . Select your Ansible Automation Platform Operator deployment. Select the Automation Controller tab. For new instances, click Create AutomationController . For existing instances, you can edit the YAML view by clicking the ... icon and then Edit AutomationController . Click Advanced configuration . Under Ingress type , click the drop-down menu and select Ingress . Under Ingress annotations , enter any annotations to add to the ingress. Under Ingress TLS secret , click the drop-down menu and select a secret from the list. After you have configured your automation controller operator, click Create at the bottom of the form view. Red Hat OpenShift Container Platform will now create the pods. This may take a few minutes. You can view the progress by navigating to Workloads Pods and locating the newly created instance. Verification Verify that the following operator pods provided by the Ansible Automation Platform Operator installation from automation controller are running: Operator manager controllers automation controller automation hub The operator manager controllers for each of the 3 operators, include the following: automation-controller-operator-controller-manager automation-hub-operator-controller-manager resource-operator-controller-manager After deploying automation controller, you will see the addition of these pods: controller controller-postgres After deploying automation hub, you will see the addition of these pods: hub-api hub-content hub-postgres hub-redis hub-worker Note A missing pod can indicate the need for a pull secret. Pull secrets are required for protected or private image registries. See Using image pull secrets for more information. You can diagnose this issue further by running oc describe pod <pod-name> to see if there is an ImagePullBackOff error on that pod. 3.3. Configuring an external database for automation controller on Red Hat Ansible Automation Platform Operator For users who prefer to deploy Ansible Automation Platform with an external database, they can do so by configuring a secret with instance credentials and connection information, then applying it to their cluster using the oc create command. 
By default, the Ansible Automation Platform Operator automatically creates and configures a managed PostgreSQL pod in the same namespace as your Ansible Automation Platform deployment. You can deploy Ansible Automation Platform with an external database instead of the managed PostgreSQL pod that the Ansible Automation Platform Operator automatically creates. Using an external database lets you share and reuse resources and manually manage backups, upgrades, and performance optimizations. Note The same external database (PostgreSQL instance) can be used for both automation hub and automation controller as long as the database names are different. In other words, you can have multiple databases with different names inside a single PostgreSQL instance. The following section outlines the steps to configure an external database for your automation controller on a Ansible Automation Platform Operator. Prerequisite The external database must be a PostgreSQL database that is the version supported by the current release of Ansible Automation Platform. Note Ansible Automation Platform 2.4 supports PostgreSQL 13. Procedure The external postgres instance credentials and connection information must be stored in a secret, which is then set on the automation controller spec. Create a postgres_configuration_secret .yaml file, following the template below: apiVersion: v1 kind: Secret metadata: name: external-postgres-configuration namespace: <target_namespace> 1 stringData: host: "<external_ip_or_url_resolvable_by_the_cluster>" 2 port: "<external_port>" 3 database: "<desired_database_name>" username: "<username_to_connect_as>" password: "<password_to_connect_with>" 4 sslmode: "prefer" 5 type: "unmanaged" type: Opaque 1 Namespace to create the secret in. This should be the same namespace you want to deploy to. 2 The resolvable hostname for your database node. 3 External port defaults to 5432 . 4 Value for variable password should not contain single or double quotes (', ") or backslashes (\) to avoid any issues during deployment, backup or restoration. 5 The variable sslmode is valid for external databases only. The allowed values are: prefer , disable , allow , require , verify-ca , and verify-full . Apply external-postgres-configuration-secret.yml to your cluster using the oc create command. USD oc create -f external-postgres-configuration-secret.yml When creating your AutomationController custom resource object, specify the secret on your spec, following the example below: apiVersion: automationcontroller.ansible.com/v1beta1 kind: AutomationController metadata: name: controller-dev spec: postgres_configuration_secret: external-postgres-configuration 3.4. Finding and deleting PVCs A persistent volume claim (PVC) is a storage volume used to store data that automation hub and automation controller applications use. These PVCs are independent from the applications and remain even when the application is deleted. If you are confident that you no longer need a PVC, or have backed it up elsewhere, you can manually delete them. Procedure List the existing PVCs in your deployment namespace: oc get pvc -n <namespace> Identify the PVC associated with your deployment by comparing the old deployment name and the PVC name. Delete the old PVC: oc delete pvc -n <namespace> <pvc-name> 3.5. 
Additional resources For more information on running operators on OpenShift Container Platform, navigate to the OpenShift Container Platform product documentation and click the Operators - Working with Operators in OpenShift Container Platform guide. | [
"oc create secret generic <resourcename>-custom-certs --from-file=ldap-ca.crt=<PATH/TO/YOUR/CA/PEM/FILE> \\ 1",
"oc get secret/mycerts -o yaml apiVersion: v1 data: ldap-ca.crt: <mysecret> 1 kind: Secret metadata: name: mycerts namespace: awx type: Opaque",
"apiVersion: v1 kind: Secret metadata: name: external-postgres-configuration namespace: <target_namespace> 1 stringData: host: \"<external_ip_or_url_resolvable_by_the_cluster>\" 2 port: \"<external_port>\" 3 database: \"<desired_database_name>\" username: \"<username_to_connect_as>\" password: \"<password_to_connect_with>\" 4 sslmode: \"prefer\" 5 type: \"unmanaged\" type: Opaque",
"oc create -f external-postgres-configuration-secret.yml",
"apiVersion: automationcontroller.ansible.com/v1beta1 kind: AutomationController metadata: name: controller-dev spec: postgres_configuration_secret: external-postgres-configuration",
"get pvc -n <namespace>",
"delete pvc -n <namespace> <pvc-name>"
]
| https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.4/html/deploying_the_red_hat_ansible_automation_platform_operator_on_openshift_container_platform/installing-controller-operator |
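The form-view options, route settings, and external database secret described above all end up as fields on a single AutomationController custom resource, so the same deployment can be expressed declaratively. The following is a minimal sketch only: apart from postgres_configuration_secret, which appears in the record above, the spec field names (replicas, ingress_type, route_tls_termination_mechanism, image_pull_policy) are assumptions based on common releases of the operator and should be checked against the CRD shipped with your installed operator version.
apiVersion: automationcontroller.ansible.com/v1beta1
kind: AutomationController
metadata:
  name: controller-dev
  namespace: <target_namespace>            # assumption: the namespace watched by the operator
spec:
  replicas: 1                              # assumed field for "Replicas" in the form view
  ingress_type: Route                      # assumed field for "Ingress type"
  route_tls_termination_mechanism: Edge    # assumed field for "Route TLS termination mechanism"
  image_pull_policy: IfNotPresent          # assumed field for "Image Pull Policy"
  postgres_configuration_secret: external-postgres-configuration   # secret shown above for an external database
Applying a file with this content using oc create -f (the file name is up to you) has the same effect as filling in the corresponding values in the form view: the operator reconciles the resource and creates the controller and database pods listed in the verification steps.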
Chapter 4. Deploy OpenShift Data Foundation using IBM FlashSystem | Chapter 4. Deploy OpenShift Data Foundation using IBM FlashSystem OpenShift Data Foundation can use IBM FlashSystem storage available for consumption through OpenShift Container Platform clusters. You need to install the OpenShift Data Foundation operator and then create an OpenShift Data Foundation cluster for IBM FlashSystem storage. 4.1. Installing Red Hat OpenShift Data Foundation Operator You can install Red Hat OpenShift Data Foundation Operator using the Red Hat OpenShift Container Platform Operator Hub. Prerequisites Access to an OpenShift Container Platform cluster using an account with cluster-admin and Operator installation permissions. For additional resource requirements, see the Planning your deployment guide. Important When you need to override the cluster-wide default node selector for OpenShift Data Foundation, you can use the following command in the command line interface to specify a blank node selector for the openshift-storage namespace (create openshift-storage namespace in this case): Procedure Log in to the OpenShift Web Console. Click Operators OperatorHub . Scroll or type OpenShift Data Foundation into the Filter by keyword box to find the OpenShift Data Foundation Operator. Click Install . Set the following options on the Install Operator page: Update Channel as stable-4.9 . Installation Mode as A specific namespace on the cluster . Installed Namespace as Operator recommended namespace openshift-storage . If Namespace openshift-storage does not exist, it is created during the operator installation. Select Approval Strategy as Automatic or Manual . If you select Automatic updates, then the Operator Lifecycle Manager (OLM) automatically upgrades the running instance of your Operator without any intervention. If you select Manual updates, then the OLM creates an update request. As a cluster administrator, you must then manually approve that update request to update the Operator to a newer version. Ensure that the Enable option is selected for the Console plugin . Click Install . Note We recommend using all default settings. Changing it may result in unexpected behavior. Alter only if you are aware of its result. Verification steps Verify that the OpenShift Data Foundation Operator shows a green tick indicating successful installation. After the operator is successfully installed, a pop-up with a message, Web console update is available appears on the user interface. Click Refresh web console from this pop-up for the console changes to reflect. In the Web Console, navigate to Operators and verify if OpenShift Data Foundation is available. Important In case the console plugin option was not automatically enabled after you installed the OpenShift Data Foundation Operator, you need to enable it. For more information on how to enable the console plugin, see Enabling the Red Hat OpenShift Data Foundation console plugin . 4.2. Creating an OpenShift Data Foundation Cluster for external IBM FlashSystem storage You need to create a new OpenShift Data Foundation cluster after you install the OpenShift Data Foundation operator on the OpenShift Container Platform. Prerequisites For Red Hat Enterprise Linux(R) operating system, ensure that there is iSCSI connectivity and then configure Linux multipath devices on the host. For Red Hat Enterprise Linux CoreOS or when the packages are already installed, configure Linux multipath devices on the host. 
Ensure that you configure each worker with storage connectivity according to your storage system instructions. For the latest supported FlashSystem products and versions, see the Installing section within your Spectrum Virtualize family product documentation in IBM Documentation . Procedure In the OpenShift Web Console, click Operators Installed Operators to view all the installed operators. Ensure that the Project selected is openshift-storage . Click OpenShift Data Foundation and then click Create StorageSystem . In the Backing storage page, select the following options: Select Connect an external storage platform from the available options. Select IBM FlashSystem Storage from the Storage platform list. Click Next . In the Create storage class page, provide the following information: Enter a name for the storage class. When creating block storage persistent volumes, select the storage class <storage_class_name> for best performance. The storage class allows direct I/O path to the FlashSystem. Enter the following details of the IBM FlashSystem connection: IP address User name Password Pool name Select thick or thin for the Volume mode . Click Next . In the Capacity and nodes page, provide the necessary details: Select a value for Requested capacity. The available options are 0.5 TiB , 2 TiB , and 4 TiB . The requested capacity is dynamically allocated on the infrastructure storage class. Select at least three nodes in three different zones. It is recommended to start with at least 14 CPUs and 34 GiB of RAM per node. If the nodes selected do not match the OpenShift Data Foundation cluster requirement of an aggregated 30 CPUs and 72 GiB of RAM, a minimal cluster will be deployed. For minimum starting node requirements, see the Resource requirements section in the Planning guide. Click Next . Optional: In the Security and network page, provide the necessary details: To enable encryption, select Enable data encryption for block and file storage . Choose any one or both Encryption levels: Cluster-wide encryption to encrypt the entire cluster (block and file). StorageClass encryption to create encrypted persistent volume (block only) using encryption enabled storage class. Select the Connect to an external key management service checkbox. This is optional for cluster-wide encryption. Key Management Service Provider is set to Vault by default. Enter Vault Service Name, host Address of Vault server ('https://<hostname or ip>'), Port number, and Token. Expand Advanced Settings to enter additional settings and certificate details based on your Vault configuration: Enter the Key Value secret path in the Backend Path that is dedicated and unique to OpenShift Data Foundation. Optional: Enter TLS Server Name and Vault Enterprise Namespace. Provide CA Certificate, Client Certificate, and Client Private Key by uploading the respective PEM encoded certificate file. Click Save . Select Default (SDN) if you are using a single network or Custom (Multus) if you are using multiple network interfaces. Select a Public Network Interface from the dropdown. Select a Cluster Network Interface from the dropdown. NOTE: If you are using only one additional network interface, select the single NetworkAttachmentDefinition , that is, ocs-public-cluster for the Public Network Interface, and leave the Cluster Network Interface blank. Click Next . In the Review and create page, review if all the details are correct: To modify any configuration settings, click Back to go back to the configuration page. Click Create StorageSystem .
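While the wizard shows progress in the web console, the same rollout can be followed from the command line. This is a small sketch that only reuses commands already shown in the verification steps below, together with the standard -w watch flag; it assumes the default openshift-storage namespace used throughout this procedure.
# Watch the FlashSystem-backed StorageSystem objects appear
oc get storagesystems.odf.openshift.io -n openshift-storage
# Watch the openshift-storage pods come up while the StorageSystem is being created
oc get pods -n openshift-storage -w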
Verification steps Verifying the state of the pods Click Workloads Pods from the left pane of the OpenShift Web Console. Select openshift-storage from the Project drop-down list. Note If the Show default projects option is disabled, use the toggle button to list all the default projects. Table 4.1. Pods corresponding to OpenShift Data Foundation components Component Corresponding pods OpenShift Data Foundation Operator ocs-operator-* (1 pod on any worker node) ocs-metrics-exporter-* (1 pod on any worker node) odf-operator-controller-manager-* (1 pod on any worker node) odf-console-* (1 pod on any worker node) ibm-storage-odf-operator ibm-storage-odf-operator-* (2 pods on any worker nodes) ibm-odf-console-* ibm-flashsystem-storage ibm-flashsystem-storage-* (1 pod on any worker node) rook-ceph Operator rook-ceph-operator-* (1 pod on any worker node) Multicloud Object Gateway noobaa-operator-* (1 pod on any worker node) noobaa-core-* (1 pod on any worker node) noobaa-db-pg-* (1 pod on any worker node) noobaa-endpoint-* (1 pod on any worker node) CSI ibm-block-csi-* (1 pod on any worker node) Verifying that the OpenShift Data Foundation cluster is healthy In the Web Console, click Storage OpenShift Data Foundation . In the Status card of the Overview tab, verify that Storage System has a green tick mark. In the Details card, verify that the cluster information is displayed. For more information on the health of OpenShift Data Foundation cluster using the Block and File dashboard, see Monitoring OpenShift Data Foundation . Verifying that the Multicloud Object Gateway is healthy In the Web Console, click Storage OpenShift Data Foundation . In the Status card of the Overview tab, click Storage System and then click the storage system link from the pop-up. In the Status card of the Object tab, verify that both Object Service and Data Resiliency have a green tick. In the Details card, verify that the MCG information is displayed. For more information on the health of OpenShift Data Foundation cluster using the object dashboard, see Monitoring OpenShift Data Foundation . Verifying that IBM FlashSystem is connected and the storage cluster is ready Run the following command to verify that the OpenShift Data Foundation cluster is connected to the external IBM FlashSystem. Verifying the StorageSystem of the storage cluster Run the following command to verify the StorageSystem of the IBM FlashSystem storage cluster. Verifying the subscription of the IBM operator Run the following command to verify the subscription: Verifying the CSVs Run the following command to verify that the CSVs are in the succeeded state. Verifying the IBM operator and CSI pods Run the following command to verify the IBM operator and CSI pods: | [
"oc annotate namespace openshift-storage openshift.io/node-selector=",
"oc get flashsystemclusters.odf.ibm.com NAME AGE PHASE CREATED AT ibm-flashsystemcluster 35s 2021-09-23T07:44:52Z",
"oc get storagesystems.odf.openshift.io NAME STORAGE-SYSTEM-KIND STORAGE-SYSTEM-NAME ibm-flashsystemcluster-storagesystem flashsystemcluster.odf.ibm.com/v1alpha1 ibm-flashsystemcluster ocs-storagecluster-storagesystem storagecluster.ocs.openshift.io/v1 ocs-storagecluster",
"oc get subscriptions.operators.coreos.com NAME PACKAGE SOURCE CHANNEL ibm-block-csi-operator-stable-certified-operators-openshift-marketplace ibm-block-csi-operator certified-operators stable ibm-storage-odf-operator ibm-storage-odf-operator odf-catalogsource stable-v1 noobaa-operator-alpha-odf-catalogsource-openshift-storage noobaa-operator odf-catalogsource alpha ocs-operator-alpha-odf-catalogsource-openshift-storage ocs-operator odf-catalogsource alpha odf-operator odf-operator odf-catalogsource alpha",
"oc get csv NAME DISPLAY VERSION REPLACES PHASE ibm-block-csi-operator.v1.6.0 Operator for IBM block storage CSI driver 1.6.0 ibm-block-csi-operator.v1.5.0 Succeeded ibm-storage-odf-operator.v0.2.1 IBM Storage ODF operator 0.2.1 Installing noobaa-operator.v5.9.0 NooBaa Operator 5.9.0 Succeeded ocs-operator.v4.9.0 OpenShift Container Storage 4.9.0 Succeeded odf-operator.v4.9.0 OpenShift Data Foundation 4.9.0 Succeeded",
"oc get pods NAME READY STATUS RESTARTS AGE 5cb2b16ec2b11bf63dbe691d44a63535dc026bb5315d5075dc6c398b3c58l94 0/1 Completed 0 10m 7c806f6568f85cf10d72508261a2535c220429b54dbcf87349b9b4b9838fctg 0/1 Completed 0 8m47s c4b05566c04876677a22d39fc9c02512401d0962109610e85c8fb900d3jd7k2 0/1 Completed 0 10m c5d1376974666727b02bf25b3a4828241612186744ef417a668b4bc1759rzts 0/1 Completed 0 10m ibm-block-csi-operator-7b656d6cc8-bqnwp 1/1 Running 0 8m3s ibm-odf-console-97cb7c84c-r52dq 0/1 ContainerCreating 0 8m4s ibm-storage-odf-operator-57b8bc47df-mgkc7 1/2 ImagePullBackOff 0 94s noobaa-operator-7698579d56-x2zqs 1/1 Running 0 9m37s ocs-metrics-exporter-94b57d764-zq2g2 1/1 Running 0 9m32s ocs-operator-5d96d778f6-vxlq5 1/1 Running 0 9m33s odf-catalogsource-j7q72 1/1 Running 0 10m odf-console-8987868cd-m7v29 1/1 Running 0 9m35s odf-operator-controller-manager-5dbf785564-rwsgq 2/2 Running 0 9m35s rook-ceph-operator-68b4b976d8-dlc6w 1/1 Running 0 9m32s"
]
| https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.9/html/deploying_openshift_data_foundation_in_external_mode/deploy-openshift-data-foundation-using-ibm-flashsystem |
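A quick way to confirm that the storage class created in the procedure above provisions volumes from the FlashSystem pool is to request a small test claim against it. This is a generic Kubernetes sketch rather than part of the documented procedure: the claim name and size are hypothetical, and <storage_class_name> is the name you entered on the Create storage class page.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: flashsystem-test-pvc          # hypothetical name for a throwaway test claim
  namespace: openshift-storage
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: <storage_class_name>   # storage class created in the procedure above
  resources:
    requests:
      storage: 1Gi                     # hypothetical size; any small value is enough for a smoke test
After applying the file with oc create -f, oc get pvc -n openshift-storage should show the claim reach the Bound phase; delete the claim afterwards so it does not keep a volume allocated on the array.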
10.2. Installing the Drivers on an Installed Windows Guest Virtual Machine | 10.2. Installing the Drivers on an Installed Windows Guest Virtual Machine This procedure covers installing the virtio drivers with a virtualized CD-ROM after Windows is installed. Follow this procedure to add a CD-ROM image with virt-manager and then install the drivers. Procedure 10.1. Installing from the driver CD-ROM image with virt-manager Open virt-manager and the guest virtual machine Open virt-manager , then open the guest virtual machine from the list by double-clicking the guest name. Open the hardware window Click the lightbulb icon on the toolbar at the top of the window to view virtual hardware details. Figure 10.1. The virtual hardware details button Then click the Add Hardware button at the bottom of the new view that appears. This opens a wizard for adding the new device. Select the device type - for Red Hat Enterprise Linux 6 versions prior to 6.2 Skip this step if you are using Red Hat Enterprise Linux 6.2 or later. On Red Hat Enterprise Linux 6 versions prior to version 6.2, you must select the type of device you wish to add. In this case, select Storage from the drop-down menu. Figure 10.2. The Add new virtual hardware wizard in Red Hat Enterprise Linux 6.1 Click the Finish button to proceed. Select the ISO file Ensure that the Select managed or other existing storage radio button is selected, and browse to the virtio driver's .iso image file. The default location for the latest version of the drivers is /usr/share/virtio-win/virtio-win.iso . Change the Device type to IDE cdrom and click the Forward button to proceed. Figure 10.3. The Add new virtual hardware wizard Finish adding virtual hardware - for Red Hat Enterprise Linux 6 versions prior to 6.2 If you are using Red Hat Enterprise Linux 6.2 or later, skip this step. On Red Hat Enterprise Linux 6 versions prior to version 6.2, click on the Finish button to finish adding the virtual hardware and close the wizard. Figure 10.4. The Add new virtual hardware wizard in Red Hat Enterprise Linux 6.1 Reboot Reboot or start the virtual machine to begin using the driver disc. Virtualized IDE devices require a restart for the virtual machine to recognize the new device. Once the CD-ROM with the drivers is attached and the virtual machine has started, proceed with Procedure 10.2, "Windows installation on a Windows 7 virtual machine" . Procedure 10.2. Windows installation on a Windows 7 virtual machine This procedure installs the drivers on a Windows 7 virtual machine as an example. Adapt the Windows installation instructions to your guest's version of Windows. Open the Computer Management window On the desktop of the Windows virtual machine, click the Windows icon at the bottom corner of the screen to open the Start menu. Right-click on Computer and select Manage from the pop-up menu. Figure 10.5. The Computer Management window Open the Device Manager Select the Device Manager from the left-most pane. This can be found under Computer Management > System Tools . Figure 10.6. The Computer Management window Start the driver update wizard View available system devices Expand System devices by clicking on the arrow to its left. Figure 10.7. Viewing available system devices in the Computer Management window Locate the appropriate device There are up to four drivers available: the balloon driver, the serial driver, the network driver, and the block driver. Balloon , the balloon driver, affects the PCI standard RAM Controller in the System devices group.
vioserial , the serial driver, affects the PCI Simple Communication Controller in the System devices group. NetKVM , the network driver, affects the Network adapters group. This driver is only available if a virtio NIC is configured. Configurable parameters for this driver are documented in Appendix A, NetKVM Driver Parameters . viostor , the block driver, affects the Disk drives group. This driver is only available if a virtio disk is configured. Right-click on the device whose driver you wish to update, and select Update Driver... from the pop-up menu. This example installs the balloon driver, so right-click on PCI standard RAM Controller . Figure 10.8. The Computer Management window Open the driver update wizard From the drop-down menu, select Update Driver Software... to access the driver update wizard. Figure 10.9. Opening the driver update wizard Specify how to find the driver The first page of the driver update wizard asks how you want to search for driver software. Click on the second option, Browse my computer for driver software . Figure 10.10. The driver update wizard Select the driver to install Open a file browser Click on Browse... Figure 10.11. The driver update wizard Browse to the location of the driver A separate driver is provided for each of the various combinations of operating system and architecture. The drivers are arranged hierarchically according to their driver type, the operating system, and the architecture on which they will be installed: driver_type / os / arch / . For example, the Balloon driver for a Windows 7 operating system with an x86 (32-bit) architecture resides in the Balloon/w7/x86 directory. Figure 10.12. The Browse for driver software pop-up window Once you have navigated to the correct location, click OK . Click Next to continue Figure 10.13. The Update Driver Software wizard The following screen is displayed while the driver installs: Figure 10.14. The Update Driver Software wizard Close the installer The following screen is displayed when installation is complete: Figure 10.15. The Update Driver Software wizard Click Close to close the installer. Reboot Reboot the virtual machine to complete the driver installation. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/virtualization_host_configuration_and_guest_installation_guide/form-Virtualization_Host_Configuration_and_Guest_Installation_Guide-Para_virtualized_drivers-Mounting_the_image_with_virt_manager
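The procedure above attaches the driver ISO through the virt-manager interface; the attachment can also be scripted with virsh. The following is a sketch under stated assumptions: the guest is defined in libvirt as <guest_name>, the IDE target hdb is free, and the ISO is in the default location given above. Confirm the exact options with virsh help attach-disk on your host before relying on them.
# Attach the virtio-win ISO to the guest as a read-only IDE CD-ROM
virsh attach-disk <guest_name> /usr/share/virtio-win/virtio-win.iso hdb --type cdrom --mode readonly
# Restart the guest so the virtualized IDE device is recognized
virsh reboot <guest_name>
As with the virt-manager steps, the guest must be restarted before Windows can see the new CD-ROM device.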
Chapter 15. Installation configuration parameters for Azure | Chapter 15. Installation configuration parameters for Azure Before you deploy an OpenShift Container Platform cluster on Microsoft Azure, you provide parameters to customize your cluster and the platform that hosts it. When you create the install-config.yaml file, you provide values for the required parameters through the command line. You can then modify the install-config.yaml file to customize your cluster further. 15.1. Available installation configuration parameters for Azure The following tables specify the required, optional, and Azure-specific installation configuration parameters that you can set as part of the installation process. Note After installation, you cannot modify these parameters in the install-config.yaml file. 15.1.1. Required configuration parameters Required installation configuration parameters are described in the following table: Table 15.1. Required parameters Parameter Description Values The API version for the install-config.yaml content. The current version is v1 . The installation program may also support older API versions. String The base domain of your cloud provider. The base domain is used to create routes to your OpenShift Container Platform cluster components. The full DNS name for your cluster is a combination of the baseDomain and metadata.name parameter values that uses the <metadata.name>.<baseDomain> format. A fully-qualified domain or subdomain name, such as example.com . Kubernetes resource ObjectMeta , from which only the name parameter is consumed. Object The name of the cluster. DNS records for the cluster are all subdomains of {{.metadata.name}}.{{.baseDomain}} . String of lowercase letters, hyphens ( - ), and periods ( . ), such as dev . The configuration for the specific platform upon which to perform the installation: alibabacloud , aws , baremetal , azure , gcp , ibmcloud , nutanix , openstack , powervs , vsphere , or {} . For additional information about platform.<platform> parameters, consult the table for your specific platform that follows. Object Get a pull secret from Red Hat OpenShift Cluster Manager to authenticate downloading container images for OpenShift Container Platform components from services such as Quay.io. { "auths":{ "cloud.openshift.com":{ "auth":"b3Blb=", "email":"[email protected]" }, "quay.io":{ "auth":"b3Blb=", "email":"[email protected]" } } } 15.1.2. Network configuration parameters You can customize your installation configuration based on the requirements of your existing network infrastructure. For example, you can expand the IP address block for the cluster network or provide different IP address blocks than the defaults. Only IPv4 addresses are supported. Note Globalnet is not supported with Red Hat OpenShift Data Foundation disaster recovery solutions. For regional disaster recovery scenarios, ensure that you use a nonoverlapping range of private IP addresses for the cluster and service networks in each cluster. Table 15.2. Network parameters Parameter Description Values The configuration for the cluster network. Object Note You cannot modify parameters specified by the networking object after installation. The Red Hat OpenShift Networking network plugin to install. OVNKubernetes . OVNKubernetes is a CNI plugin for Linux networks and hybrid networks that contain both Linux and Windows servers. The default value is OVNKubernetes . The IP address blocks for pods. The default value is 10.128.0.0/14 with a host prefix of /23 . 
If you specify multiple IP address blocks, the blocks must not overlap. An array of objects. For example: networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 Required if you use networking.clusterNetwork . An IP address block. An IPv4 network. An IP address block in Classless Inter-Domain Routing (CIDR) notation. The prefix length for an IPv4 block is between 0 and 32 . The subnet prefix length to assign to each individual node. For example, if hostPrefix is set to 23 then each node is assigned a /23 subnet out of the given cidr . A hostPrefix value of 23 provides 510 (2^(32 - 23) - 2) pod IP addresses. A subnet prefix. The default value is 23 . The IP address block for services. The default value is 172.30.0.0/16 . The OVN-Kubernetes network plugins supports only a single IP address block for the service network. An array with an IP address block in CIDR format. For example: networking: serviceNetwork: - 172.30.0.0/16 The IP address blocks for machines. If you specify multiple IP address blocks, the blocks must not overlap. An array of objects. For example: networking: machineNetwork: - cidr: 10.0.0.0/16 Required if you use networking.machineNetwork . An IP address block. The default value is 10.0.0.0/16 for all platforms other than libvirt and IBM Power(R) Virtual Server. For libvirt, the default value is 192.168.126.0/24 . For IBM Power(R) Virtual Server, the default value is 192.168.0.0/24 . An IP network block in CIDR notation. For example, 10.0.0.0/16 . Note Set the networking.machineNetwork to match the CIDR that the preferred NIC resides in. 15.1.3. Optional configuration parameters Optional installation configuration parameters are described in the following table: Table 15.3. Optional parameters Parameter Description Values A PEM-encoded X.509 certificate bundle that is added to the nodes' trusted certificate store. This trust bundle may also be used when a proxy has been configured. String Controls the installation of optional core cluster components. You can reduce the footprint of your OpenShift Container Platform cluster by disabling optional components. For more information, see the "Cluster capabilities" page in Installing . String array Selects an initial set of optional capabilities to enable. Valid values are None , v4.11 , v4.12 and vCurrent . The default value is vCurrent . String Extends the set of optional capabilities beyond what you specify in baselineCapabilitySet . You may specify multiple capabilities in this parameter. String array Enables workload partitioning, which isolates OpenShift Container Platform services, cluster management workloads, and infrastructure pods to run on a reserved set of CPUs. Workload partitioning can only be enabled during installation and cannot be disabled after installation. While this field enables workload partitioning, it does not configure workloads to use specific CPUs. For more information, see the Workload partitioning page in the Scalability and Performance section. None or AllNodes . None is the default value. The configuration for the machines that comprise the compute nodes. Array of MachinePool objects. Determines the instruction set architecture of the machines in the pool. Currently, clusters with varied architectures are not supported. All pools must specify the same architecture. Valid values are amd64 and arm64 . Not all installation options support the 64-bit ARM architecture. 
To verify if your installation option is supported on your platform, see Supported installation methods for different platforms in Selecting a cluster installation method and preparing it for users . String Whether to enable or disable simultaneous multithreading, or hyperthreading , on compute machines. By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. Important If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. Enabled or Disabled Required if you use compute . The name of the machine pool. worker Required if you use compute . Use this parameter to specify the cloud provider to host the worker machines. This parameter value must match the controlPlane.platform parameter value. alibabacloud , aws , azure , gcp , ibmcloud , nutanix , openstack , powervs , vsphere , or {} The number of compute machines, which are also known as worker machines, to provision. A positive integer greater than or equal to 2 . The default value is 3 . Enables the cluster for a feature set. A feature set is a collection of OpenShift Container Platform features that are not enabled by default. For more information about enabling a feature set during installation, see "Enabling features using feature gates". String. The name of the feature set to enable, such as TechPreviewNoUpgrade . The configuration for the machines that comprise the control plane. Array of MachinePool objects. Determines the instruction set architecture of the machines in the pool. Currently, clusters with varied architectures are not supported. All pools must specify the same architecture. Valid values are amd64 and arm64 . Not all installation options support the 64-bit ARM architecture. To verify if your installation option is supported on your platform, see Supported installation methods for different platforms in Selecting a cluster installation method and preparing it for users . String Whether to enable or disable simultaneous multithreading, or hyperthreading , on control plane machines. By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. Important If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. Enabled or Disabled Required if you use controlPlane . The name of the machine pool. master Required if you use controlPlane . Use this parameter to specify the cloud provider that hosts the control plane machines. This parameter value must match the compute.platform parameter value. alibabacloud , aws , azure , gcp , ibmcloud , nutanix , openstack , powervs , vsphere , or {} The number of control plane machines to provision. Supported values are 3 , or 1 when deploying single-node OpenShift. The Cloud Credential Operator (CCO) mode. If no mode is specified, the CCO dynamically tries to determine the capabilities of the provided credentials, with a preference for mint mode on the platforms where multiple modes are supported. Mint , Passthrough , Manual or an empty string ( "" ). [1] Enable or disable FIPS mode. The default is false (disabled). If FIPS mode is enabled, the Red Hat Enterprise Linux CoreOS (RHCOS) machines that OpenShift Container Platform runs on bypass the default Kubernetes cryptography suite and use the cryptography modules that are provided with RHCOS instead. 
Important To enable FIPS mode for your cluster, you must run the installation program from a Red Hat Enterprise Linux (RHEL) computer configured to operate in FIPS mode. For more information about configuring FIPS mode on RHEL, see Installing the system in FIPS mode . When running Red Hat Enterprise Linux (RHEL) or Red Hat Enterprise Linux CoreOS (RHCOS) booted in FIPS mode, OpenShift Container Platform core components use the RHEL cryptographic libraries that have been submitted to NIST for FIPS 140-2/140-3 Validation on only the x86_64, ppc64le, and s390x architectures. Note If you are using Azure File storage, you cannot enable FIPS mode. false or true Sources and repositories for the release-image content. Array of objects. Includes a source and, optionally, mirrors , as described in the following rows of this table. Required if you use imageContentSources . Specify the repository that users refer to, for example, in image pull specifications. String Specify one or more repositories that may also contain the same images. Array of strings How to publish or expose the user-facing endpoints of your cluster, such as the Kubernetes API, OpenShift routes. Internal , External , or Mixed . To deploy a private cluster, which cannot be accessed from the internet, set publish to Internal . The default value is External . To deploy a cluster where the API and the ingress server have different publishing strategies, set publish to Mixed and use the operatorPublishingStrategy parameter. The SSH key to authenticate access to your cluster machines. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. For example, sshKey: ssh-ed25519 AAAA.. . Not all CCO modes are supported for all cloud providers. For more information about CCO modes, see the "Managing cloud provider credentials" entry in the Authentication and authorization content. Important Setting this parameter to Manual enables alternatives to storing administrator-level secrets in the kube-system project, which require additional configuration steps. For more information, see "Alternatives to storing administrator-level secrets in the kube-system project". 15.1.4. Additional Azure configuration parameters Additional Azure configuration parameters are described in the following table. Note By default, if you specify availability zones in the install-config.yaml file, the installation program distributes the control plane machines and the compute machines across these availability zones within a region . To ensure high availability for your cluster, select a region with at least three availability zones. If your region contains fewer than three availability zones, the installation program places more than one control plane machine in the available zones. Table 15.4. Additional Azure parameters Parameter Description Values Enables host-level encryption for compute machines. You can enable this encryption alongside user-managed server-side encryption. This feature encrypts temporary, ephemeral, cached and un-managed disks on the VM host. This is not a prerequisite for user-managed server-side encryption. true or false . The default is false . The Azure disk size for the VM. Integer that represents the size of the disk in GB. The default is 128 . Defines the type of disk. standard_LRS , premium_LRS , or standardSSD_LRS . The default is premium_LRS . 
Enables the use of Azure ultra disks for persistent storage on compute nodes. This requires that your Azure region and zone have ultra disks available. Enabled , Disabled . The default is Disabled . The name of the Azure resource group that contains the disk encryption set from the installation prerequisites. This resource group should be different from the resource group where you install the cluster to avoid deleting your Azure encryption key when the cluster is destroyed. This value is only necessary if you intend to install the cluster with user-managed disk encryption. String, for example production_encryption_resource_group . The name of the disk encryption set that contains the encryption key from the installation prerequisites. String, for example production_disk_encryption_set . Defines the Azure subscription of the disk encryption set where the disk encryption set resides. This secondary disk encryption set is used to encrypt compute machines. String, in the format 00000000-0000-0000-0000-000000000000 . Optional. By default, the installation program downloads and installs the Red Hat Enterprise Linux CoreOS (RHCOS) image that is used to boot compute machines. You can override the default behavior by using a custom RHCOS image that is available from the Azure Marketplace. The installation program uses this image for compute machines only. String. The name of the image publisher. The name of Azure Marketplace offer that is associated with the custom RHCOS image. If you use compute.platform.azure.osImage.publisher , this field is required. String. The name of the image offer. An instance of the Azure Marketplace offer. If you use compute.platform.azure.osImage.publisher , this field is required. String. The SKU of the image offer. The version number of the image SKU. If you use compute.platform.azure.osImage.publisher , this field is required. String. The version of the image to use. Enables accelerated networking. Accelerated networking enables single root I/O virtualization (SR-IOV) to a VM, improving its networking performance. If instance type of compute machines support Accelerated networking, by default, the installer enables Accelerated networking, otherwise the default networking type is Basic . Accelerated or Basic . Defines the Azure instance type for compute machines. String The availability zones where the installation program creates compute machines. String list Enables confidential VMs or trusted launch for compute nodes. This option is not enabled by default. ConfidentialVM or TrustedLaunch . Enables secure boot on compute nodes if you are using confidential VMs. Enabled or Disabled . The default is Disabled . Enables the virtualized Trusted Platform Module (vTPM) feature on compute nodes if you are using confidential VMs. Enabled or Disabled . The default is Disabled . Enables secure boot on compute nodes if you are using trusted launch. Enabled or Disabled . The default is Disabled . Enables the vTPM feature on compute nodes if you are using trusted launch. Enabled or Disabled . The default is Disabled . Enables the encryption of the virtual machine guest state for compute nodes. This parameter can only be used if you use Confidential VMs. VMGuestStateOnly is the only supported value. Enables confidential VMs or trusted launch for control plane nodes. This option is not enabled by default. ConfidentialVM or TrustedLaunch . Enables secure boot on control plane nodes if you are using confidential VMs. Enabled or Disabled . The default is Disabled . 
Enables the vTPM feature on control plane nodes if you are using confidential VMs. Enabled or Disabled . The default is Disabled . Enables secure boot on control plane nodes if you are using trusted launch. Enabled or Disabled . The default is Disabled . Enables the vTPM feature on control plane nodes if you are using trusted launch. Enabled or Disabled . The default is Disabled . Enables the encryption of the virtual machine guest state for control plane nodes. This parameter can only be used if you use Confidential VMs. VMGuestStateOnly is the only supported value. Defines the Azure instance type for control plane machines. String The availability zones where the installation program creates control plane machines. String list Enables confidential VMs or trusted launch for all nodes. This option is not enabled by default. ConfidentialVM or TrustedLaunch . Enables secure boot on all nodes if you are using confidential VMs. Enabled or Disabled . The default is Disabled . Enables the virtualized Trusted Platform Module (vTPM) feature on all nodes if you are using confidential VMs. Enabled or Disabled . The default is Disabled . Enables secure boot on all nodes if you are using trusted launch. Enabled or Disabled . The default is Disabled . Enables the vTPM feature on all nodes if you are using trusted launch. Enabled or Disabled . The default is Disabled . Enables the encryption of the virtual machine guest state for all nodes. This parameter can only be used if you use Confidential VMs. VMGuestStateOnly is the only supported value. Enables host-level encryption for compute machines. You can enable this encryption alongside user-managed server-side encryption. This feature encrypts temporary, ephemeral, cached, and un-managed disks on the VM host. This parameter is not a prerequisite for user-managed server-side encryption. true or false . The default is false . The name of the disk encryption set that contains the encryption key from the installation prerequisites. String, for example, production_disk_encryption_set . The name of the Azure resource group that contains the disk encryption set from the installation prerequisites. To avoid deleting your Azure encryption key when the cluster is destroyed, this resource group must be different from the resource group where you install the cluster. This value is necessary only if you intend to install the cluster with user-managed disk encryption. String, for example, production_encryption_resource_group . Defines the Azure subscription of the disk encryption set where the disk encryption set resides. This secondary disk encryption set is used to encrypt compute machines. String, in the format 00000000-0000-0000-0000-000000000000 . The Azure disk size for the VM. Integer that represents the size of the disk in GB. The default is 128 . Defines the type of disk. premium_LRS or standardSSD_LRS . The default is premium_LRS . Optional. By default, the installation program downloads and installs the Red Hat Enterprise Linux CoreOS (RHCOS) image that is used to boot control plane and compute machines. You can override the default behavior by using a custom RHCOS image that is available from the Azure Marketplace. The installation program uses this image for both types of machines. String. The name of the image publisher. The name of Azure Marketplace offer that is associated with the custom RHCOS image. If you use platform.azure.defaultMachinePlatform.osImage.publisher , this field is required. String. The name of the image offer. 
An instance of the Azure Marketplace offer. If you use platform.azure.defaultMachinePlatform.osImage.publisher , this field is required. String. The SKU of the image offer. The version number of the image SKU. If you use platform.azure.defaultMachinePlatform.osImage.publisher , this field is required. String. The version of the image to use. The Azure instance type for control plane and compute machines. The Azure instance type. The availability zones where the installation program creates compute and control plane machines. String list. Enables host-level encryption for control plane machines. You can enable this encryption alongside user-managed server-side encryption. This feature encrypts temporary, ephemeral, cached and un-managed disks on the VM host. This is not a prerequisite for user-managed server-side encryption. true or false . The default is false . The name of the Azure resource group that contains the disk encryption set from the installation prerequisites. This resource group should be different from the resource group where you install the cluster to avoid deleting your Azure encryption key when the cluster is destroyed. This value is only necessary if you intend to install the cluster with user-managed disk encryption. String, for example production_encryption_resource_group . The name of the disk encryption set that contains the encryption key from the installation prerequisites. String, for example production_disk_encryption_set . Defines the Azure subscription of the disk encryption set where the disk encryption set resides. This secondary disk encryption set is used to encrypt control plane machines. String, in the format 00000000-0000-0000-0000-000000000000 . The Azure disk size for the VM. Integer that represents the size of the disk in GB. The default is 1024 . Defines the type of disk. premium_LRS or standardSSD_LRS . The default is premium_LRS . Optional. By default, the installation program downloads and installs the Red Hat Enterprise Linux CoreOS (RHCOS) image that is used to boot control plane machines. You can override the default behavior by using a custom RHCOS image that is available from the Azure Marketplace. The installation program uses this image for control plane machines only. String. The name of the image publisher. The name of Azure Marketplace offer that is associated with the custom RHCOS image. If you use controlPlane.platform.azure.osImage.publisher , this field is required. String. The name of the image offer. An instance of the Azure Marketplace offer. If you use controlPlane.platform.azure.osImage.publisher , this field is required. String. The SKU of the image offer. The version number of the image SKU. If you use controlPlane.platform.azure.osImage.publisher , this field is required. String. The version of the image to use. Enables the use of Azure ultra disks for persistent storage on control plane machines. This requires that your Azure region and zone have ultra disks available. Enabled , Disabled . The default is Disabled . Enables accelerated networking. Accelerated networking enables single root I/O virtualization (SR-IOV) to a VM, improving its networking performance. If instance type of control plane machines support Accelerated networking, by default, the installer enables Accelerated networking, otherwise the default networking type is Basic . Accelerated or Basic . The name of the resource group that contains the DNS zone for your base domain. String, for example production_cluster . 
The name of an already existing resource group to install your cluster to. This resource group must be empty and only used for this specific cluster; the cluster components assume ownership of all resources in the resource group. If you limit the service principal scope of the installation program to this resource group, you must ensure all other resources used by the installation program in your environment have the necessary permissions, such as the public DNS zone and virtual network. Destroying the cluster by using the installation program deletes this resource group. String, for example existing_resource_group . The outbound routing strategy used to connect your cluster to the internet. If you are using user-defined routing, you must have pre-existing networking available where the outbound routing has already been configured prior to installing a cluster. The installation program is not responsible for configuring user-defined routing. If you specify the NatGateway routing strategy, the installation program will only create one NAT gateway. If you specify the NatGateway routing strategy, your account must have the Microsoft.Network/natGateways/read and Microsoft.Network/natGateways/write permissions. Important NatGateway is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . LoadBalancer , UserDefinedRouting , or NatGateway . The default is LoadBalancer . The name of the Azure region that hosts your cluster. Any valid region name, such as centralus . List of availability zones to place machines in. For high availability, specify at least two zones. List of zones, for example ["1", "2", "3"] . Specifies the name of the key vault that contains the encryption key that is used to encrypt Azure storage. String. Specifies the name of the user-managed encryption key that is used to encrypt Azure storage. String. Specifies the name of the resource group that contains the key vault and managed identity. String. Specifies the subscription ID that is associated with the key vault. String, in the format 00000000-0000-0000-0000-000000000000 . Specifies the name of the user-assigned managed identity that resides in the resource group with the key vault and has access to the user-managed key. String. Enables the use of Azure ultra disks for persistent storage on control plane and compute machines. This requires that your Azure region and zone have ultra disks available. Enabled , Disabled . The default is Disabled . The name of the resource group that contains the existing VNet that you want to deploy your cluster to. This name cannot be the same as the platform.azure.baseDomainResourceGroupName . String. The name of the existing VNet that you want to deploy your cluster to. String. The name of the existing subnet in your VNet that you want to deploy your control plane machines to. Valid CIDR, for example 10.0.0.0/16 . The name of the existing subnet in your VNet that you want to deploy your compute machines to. Valid CIDR, for example 10.0.0.0/16 . 
The name of the Azure cloud environment that is used to configure the Azure SDK with the appropriate Azure API endpoints. If empty, the default value AzurePublicCloud is used. Any valid cloud environment, such as AzurePublicCloud or AzureUSGovernmentCloud . Enables accelerated networking. Accelerated networking enables single root I/O virtualization (SR-IOV) to a VM, improving its networking performance. Accelerated or Basic . If the instance type of the control plane and compute machines supports Accelerated networking, the installer enables Accelerated networking by default; otherwise, the default networking type is Basic . Determines whether the load balancers that service the API are public or private. Set this parameter to Internal to prevent the API server from being accessible outside of your VNet. Set this parameter to External to make the API server accessible outside of your VNet. If you set this parameter, you must set the publish parameter to Mixed . External or Internal . The default value is External . Determines whether the DNS resources that the cluster creates for ingress traffic are publicly visible. Set this parameter to Internal to prevent the ingress VIP from being publicly accessible. Set this parameter to External to make the ingress VIP publicly accessible. If you set this parameter, you must set the publish parameter to Mixed . External or Internal . The default value is External . Note You cannot customize Azure Availability Zones or Use tags to organize your Azure resources with an Azure cluster. | [
"apiVersion:",
"baseDomain:",
"metadata:",
"metadata: name:",
"platform:",
"pullSecret:",
"{ \"auths\":{ \"cloud.openshift.com\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" }, \"quay.io\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" } } }",
"networking:",
"networking: networkType:",
"networking: clusterNetwork:",
"networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23",
"networking: clusterNetwork: cidr:",
"networking: clusterNetwork: hostPrefix:",
"networking: serviceNetwork:",
"networking: serviceNetwork: - 172.30.0.0/16",
"networking: machineNetwork:",
"networking: machineNetwork: - cidr: 10.0.0.0/16",
"networking: machineNetwork: cidr:",
"additionalTrustBundle:",
"capabilities:",
"capabilities: baselineCapabilitySet:",
"capabilities: additionalEnabledCapabilities:",
"cpuPartitioningMode:",
"compute:",
"compute: architecture:",
"compute: hyperthreading:",
"compute: name:",
"compute: platform:",
"compute: replicas:",
"featureSet:",
"controlPlane:",
"controlPlane: architecture:",
"controlPlane: hyperthreading:",
"controlPlane: name:",
"controlPlane: platform:",
"controlPlane: replicas:",
"credentialsMode:",
"fips:",
"imageContentSources:",
"imageContentSources: source:",
"imageContentSources: mirrors:",
"publish:",
"sshKey:",
"compute: platform: azure: encryptionAtHost:",
"compute: platform: azure: osDisk: diskSizeGB:",
"compute: platform: azure: osDisk: diskType:",
"compute: platform: azure: ultraSSDCapability:",
"compute: platform: azure: osDisk: diskEncryptionSet: resourceGroup:",
"compute: platform: azure: osDisk: diskEncryptionSet: name:",
"compute: platform: azure: osDisk: diskEncryptionSet: subscriptionId:",
"compute: platform: azure: osImage: publisher:",
"compute: platform: azure: osImage: offer:",
"compute: platform: azure: osImage: sku:",
"compute: platform: azure: osImage: version:",
"compute: platform: azure: vmNetworkingType:",
"compute: platform: azure: type:",
"compute: platform: azure: zones:",
"compute: platform: azure: settings: securityType:",
"compute: platform: azure: settings: confidentialVM: uefiSettings: secureBoot:",
"compute: platform: azure: settings: confidentialVM: uefiSettings: virtualizedTrustedPlatformModule:",
"compute: platform: azure: settings: trustedLaunch: uefiSettings: secureBoot:",
"compute: platform: azure: settings: trustedLaunch: uefiSettings: virtualizedTrustedPlatformModule:",
"compute: platform: azure: osDisk: securityProfile: securityEncryptionType:",
"controlPlane: platform: azure: settings: securityType:",
"controlPlane: platform: azure: settings: confidentialVM: uefiSettings: secureBoot:",
"controlPlane: platform: azure: settings: confidentialVM: uefiSettings: virtualizedTrustedPlatformModule:",
"controlPlane: platform: azure: settings: trustedLaunch: uefiSettings: secureBoot:",
"controlPlane: platform: azure: settings: trustedLaunch: uefiSettings: virtualizedTrustedPlatformModule:",
"controlPlane: platform: azure: osDisk: securityProfile: securityEncryptionType:",
"controlPlane: platform: azure: type:",
"controlPlane: platform: azure: zones:",
"platform: azure: defaultMachinePlatform: settings: securityType:",
"platform: azure: defaultMachinePlatform: settings: confidentialVM: uefiSettings: secureBoot:",
"platform: azure: defaultMachinePlatform: settings: confidentialVM: uefiSettings: virtualizedTrustedPlatformModule:",
"platform: azure: defaultMachinePlatform: settings: trustedLaunch: uefiSettings: secureBoot:",
"platform: azure: defaultMachinePlatform: settings: trustedLaunch: uefiSettings: virtualizedTrustedPlatformModule:",
"platform: azure: defaultMachinePlatform: osDisk: securityProfile: securityEncryptionType:",
"platform: azure: defaultMachinePlatform: encryptionAtHost:",
"platform: azure: defaultMachinePlatform: osDisk: diskEncryptionSet: name:",
"platform: azure: defaultMachinePlatform: osDisk: diskEncryptionSet: resourceGroup:",
"platform: azure: defaultMachinePlatform: osDisk: diskEncryptionSet: subscriptionId:",
"platform: azure: defaultMachinePlatform: osDisk: diskSizeGB:",
"platform: azure: defaultMachinePlatform: osDisk: diskType:",
"platform: azure: defaultMachinePlatform: osImage: publisher:",
"platform: azure: defaultMachinePlatform: osImage: offer:",
"platform: azure: defaultMachinePlatform: osImage: sku:",
"platform: azure: defaultMachinePlatform: osImage: version:",
"platform: azure: defaultMachinePlatform: type:",
"platform: azure: defaultMachinePlatform: zones:",
"controlPlane: platform: azure: encryptionAtHost:",
"controlPlane: platform: azure: osDisk: diskEncryptionSet: resourceGroup:",
"controlPlane: platform: azure: osDisk: diskEncryptionSet: name:",
"controlPlane: platform: azure: osDisk: diskEncryptionSet: subscriptionId:",
"controlPlane: platform: azure: osDisk: diskSizeGB:",
"controlPlane: platform: azure: osDisk: diskType:",
"controlPlane: platform: azure: osImage: publisher:",
"controlPlane: platform: azure: osImage: offer:",
"controlPlane: platform: azure: osImage: sku:",
"controlPlane: platform: azure: osImage: version:",
"controlPlane: platform: azure: ultraSSDCapability:",
"controlPlane: platform: azure: vmNetworkingType:",
"platform: azure: baseDomainResourceGroupName:",
"platform: azure: resourceGroupName:",
"platform: azure: outboundType:",
"platform: azure: region:",
"platform: azure: zone:",
"platform: azure: customerManagedKey: keyVault: name:",
"platform: azure: customerManagedKey: keyVault: keyName:",
"platform: azure: customerManagedKey: keyVault: resourceGroup:",
"platform: azure: customerManagedKey: keyVault: subscriptionId:",
"platform: azure: customerManagedKey: userAssignedIdentityKey:",
"platform: azure: defaultMachinePlatform: ultraSSDCapability:",
"platform: azure: networkResourceGroupName:",
"platform: azure: virtualNetwork:",
"platform: azure: controlPlaneSubnet:",
"platform: azure: computeSubnet:",
"platform: azure: cloudName:",
"platform: azure: defaultMachinePlatform: vmNetworkingType:",
"operatorPublishingStrategy: apiserver:",
"operatorPublishingStrategy: ingress:"
]
| https://docs.redhat.com/en/documentation/openshift_container_platform_installation/4.15/html/installing_on_azure/installation-config-parameters-azure |
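To make the Azure parameter reference in the record above easier to apply, the following is a minimal, hedged sketch of how a few of those fields could be combined into an install-config.yaml fragment, written here from the shell. Every concrete value (instance type, region, resource groups, disk encryption set, and Marketplace image coordinates) is a placeholder assumption rather than something taken from the record; merge the fragment into a complete install-config.yaml and adjust it before running openshift-install.

# Sketch only: writes an example Azure platform fragment; all values are placeholders.
mkdir -p ./ocp-install
cat > ./ocp-install/azure-platform-fragment.yaml <<'EOF'
platform:
  azure:
    region: centralus
    baseDomainResourceGroupName: production_cluster
    outboundType: LoadBalancer
    defaultMachinePlatform:
      type: Standard_D4s_v3              # hypothetical instance type
      zones: ["1", "2", "3"]
      encryptionAtHost: true
      osDisk:
        diskSizeGB: 1024
        diskType: premium_LRS
        diskEncryptionSet:
          resourceGroup: production_encryption_resource_group
          name: production_disk_encryption_set
          subscriptionId: 00000000-0000-0000-0000-000000000000
      osImage:                           # custom RHCOS image from the Azure Marketplace
        publisher: example-publisher
        offer: example-offer
        sku: example-sku
        version: 1.0.0
EOF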
8.116. libproxy | 8.116. libproxy 8.116.1. RHBA-2014:1556 - libproxy bug fix update Updated libproxy packages that fix several bugs and add various enhancements are now available for Red Hat Enterprise Linux 6. The libproxy library handles all the details of proxy configuration. It provides a stable external API, dynamic adjustment to changing network topology, small core footprint, without external dependencies within libproxy core (though libproxy plug-ins may have dependencies). Bug Fixes BZ# 802765 Previously, the libproxy utility attempted to locate the /etc/proxy.conf file from the current working directory. Consequently, the configuration file was not always found. This bug has been fixed and libproxy now locates /etc/proxy.conf as expected. BZ# 874492 A flaw was found in the way libproxy handled the downloading of proxy auto-configuration (PAC) files. Consequently, programs using libproxy terminated unexpectedly with a segmentation fault when processing PAC files that contained syntax errors. With this update, the handling of PAC files has been fixed in libproxy, thus preventing the segmentation fault. BZ# 979356 Due to a bug in the libproxy packages, the "reporter-upload" command used by Automatic Bug Reporting Tool terminated unexpectedly if given an "scp" URL that did not contain a password. This bug has been fixed, and reporter-upload no longer crashes in the aforementioned scenario. Users of libproxy are advised to upgrade to these updated packages, which fix these bugs. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.6_technical_notes/libproxy |
5.348. virtio-win | 5.348. virtio-win 5.348.1. RHBA-2012:1083 - virtio-win bug fix update An updated virtio-win package that fixes two bugs is now available for Red Hat Enterprise Linux 6. The virtio-win package provides paravirtualized network drivers for most Microsoft Windows operating systems. Paravirtualized drivers are virtualization-aware drivers used by fully virtualized guests running on Red Hat Enterprise Linux. Fully virtualized guests using the paravirtualized drivers gain significantly better I/O performance than fully virtualized guests running without the drivers. Bug Fixes BZ# 838523 A bug in the virtio serial driver could cause a Stop Error (also known as Blue Screen of Death, or BSoD) which occurred on a guest machine when transferring data from the host. This update fixes the bug in the driver so that the guest machine no longer crashes with Blue Screen of Death in this scenario. BZ# 838655 The QXL driver included in the previous version of the virtio-win package was not digitally signed. The QXL driver provided in this update is digitally signed. All users of virtio-win are advised to upgrade to this updated package, which fixes these bugs. 5.348.2. RHBA-2012:0751 - virtio-win bug fix and enhancement update An updated virtio-win package that fixes multiple bugs and adds various enhancements is now available for Red Hat Enterprise Linux 6. The virtio-win package provides paravirtualized network drivers for most Microsoft Windows operating systems. Paravirtualized drivers are virtualization-aware drivers used by fully virtualized guests running on Red Hat Enterprise Linux. Fully virtualized guests using the paravirtualized drivers gain significantly better I/O performance than fully virtualized guests running without the drivers. Bug Fixes BZ# 492777 Previously, if a Microsoft Windows guest machine sent more tx fragments than 256 (the ring size), the NetKVM driver dropped packets. To prevent this problem, indirect ring support has been implemented in the NetKVM driver. BZ# 759361 Previously, users were not able to update the rx and tx parameters in the Windows Registry by using the NetKVMConfig utility. Although the utility reported that the parameters had been changed, the change was not displayed in the Windows Device Manager. This was due to an incorrect parameter-change handler in NetKVMConfig, which has been fixed, so that NetKVMConfig now works as expected and users can update the rx and tx parameters. BZ# 753723 Previously, the block driver (viostor) did not provide support for obtaining serial numbers of virtio block devices from QEMU. The serial numbers were therefore not available on Windows guest machines. With this update, the serial number of a virtio block device is now retrieved from miniport during the find adapter phase. BZ# 752743 Prior to this update, the block driver (viostor) did not reject write requests to read-only volumes. Attempting to format a read-only volume caused the guest to stop with an EIO error. With this update, if the target volume has the read-only flag, the guest does not stop, and write requests are completed with an error. Attempts to format or write to a read-only volume are now rejected by the viostor driver. BZ# 751952 Previously, if the "Fix IP checksum on LSO" option in Microsoft Windows Device Manager was disabled, users were not able to transfer data from a guest machine to the host machine using the winscp utility. To prevent this problem, it is no longer possible to disable the "Fix IP checksum on LSO" option. 
BZ# 803950 A bug in the balloon driver could cause a stop error (also known as Blue Screen of Death, or BSoD) if a guest machine entered the S3 (suspend to RAM) or S4 (suspend to disk) state while performing memory ballooning on it. The bug in the balloon driver has been fixed, and the stop error no longer occurs under these circumstances. BZ# 810694 Previously, incorrect handling of flush requests could lead to a race condition in the block driver (viostor). Under heavy load, usually when using the "cache=writeback" option, the flush handler was executed asynchronously without proper synchronization with the rest of the request processing logic. With this update, execution of the flush request is synchronized with the virtio Interrupt Service Routine (ISR), and the race condition no longer occurs in this scenario. BZ# 771390 The viostor driver did not check the size of an incoming buffer. Applications could send buffers larger than the maximum transfer size to the viostor driver directly by bypassing the file system stack. The buffer size is now reduced if it is bigger than the maximum transfer size. The viostor driver can now properly handle requests with buffers of any size. Enhancements BZ# 677219 Previously, it was not possible to resize non-system disks online without a reboot. This update adds support for online resizing of VirtIO non-system disks. BZ# 713643 This update provides optimized RX IP checksum offload for the virtio_net driver. BZ# 808322 Offload parameters for the virtio-win network driver have been updated. Multiple parameters are now set to "enabled" by default. To edit the parameters of an installed driver, open Microsoft Windows Device Manager, choose "Red Hat VirtIO Ethernet Adapter" from the "Network Adapters" list, and click the "Advanced" tab. All users of virtio-win are advised to upgrade to this updated package, which fixes these bugs and adds these enhancements. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.3_technical_notes/virtio-win
14.6. Additional Resources | 14.6. Additional Resources For more information about OpenSSH and OpenSSL, see the resources listed below. 14.6.1. Installed Documentation sshd (8) - a manual page for the sshd daemon. ssh (1) - a manual page for the ssh client. scp (1) - a manual page for the scp utility. sftp (1) - a manual page for the sftp utility. ssh-keygen (1) - a manual page for the ssh-keygen utility. ssh_config (5) - a manual page with a full description of available SSH client configuration options. sshd_config (5) - a manual page with a full description of available SSH daemon configuration options. /usr/share/doc/openssh- version / Contains detailed information on the protocols supported by OpenSSH. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/deployment_guide/s1-openssh-additional-resources |
Chapter 2. Installing online or offline | Chapter 2. Installing online or offline Choose the Ansible Automation Platform installer you need to install private automation hub based on your Red Hat Enterprise Linux environment internet connectivity. Review the following scenarios and determine which Ansible Automation Platform installer meets your needs. Note You must have a valid Red Hat customer account to access Ansible Automation Platform installer downloads on the Red Hat Customer Portal. Installing with internet access Install private automation hub using the Ansible Automation Platform installer if your Red Hat Enterprise Linux environment is connected to the internet. Installing with internet access will retrieve the latest required repositories, packages, and dependencies. Navigate to Download Red Hat Ansible Automation Platform . Click Download Now for the Ansible Automation Platform <latest-version> Setup . Extract the files: USD tar xvzf ansible-automation-platform-setup-<latest-version>.tar.gz Installing without internet access Install private automation hub using the Ansible Automation Platform Bundle installer if you are unable to access the internet, or would prefer not to install separate components and dependencies from online repositories. Access to Red Hat Enterprise Linux repositories is still needed. All other dependencies are included in the tar archive. Navigate to Download Red Hat Ansible Automation Platform . Click Download Now for the Ansible Automation Platform <latest-version> Setup Bundle . Extract the files: USD tar xvzf ansible-automation-platform-setup-bundle-<latest-version>.tar.gz | [
"tar xvzf ansible-automation-platform-setup-<latest-version>.tar.gz",
"tar xvzf ansible-automation-platform-setup-bundle-<latest-version>.tar.gz"
]
| https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.3/html/installing_and_upgrading_private_automation_hub/installing_online_or_offline |
Chapter 3. ImageSignature [image.openshift.io/v1] | Chapter 3. ImageSignature [image.openshift.io/v1] Description ImageSignature holds a signature of an image. It allows to verify image identity and possibly other claims as long as the signature is trusted. Based on this information it is possible to restrict runnable images to those matching cluster-wide policy. Mandatory fields should be parsed by clients doing image verification. The others are parsed from signature's content by the server. They serve just an informative purpose. Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object Required type content 3.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources conditions array Conditions represent the latest available observations of a signature's current state. conditions[] object SignatureCondition describes an image signature condition of particular kind at particular probe time. content string Required: An opaque binary string which is an image's signature. created Time If specified, it is the time of signature's creation. imageIdentity string A human readable string representing image's identity. It could be a product name and version, or an image pull spec (e.g. "registry.access.redhat.com/rhel7/rhel:7.2"). issuedBy object SignatureIssuer holds information about an issuer of signing certificate or key. issuedTo object SignatureSubject holds information about a person or entity who created the signature. kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta_v2 metadata is the standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata signedClaims object (string) Contains claims from the signature. type string Required: Describes a type of stored blob. 3.1.1. .conditions Description Conditions represent the latest available observations of a signature's current state. Type array 3.1.2. .conditions[] Description SignatureCondition describes an image signature condition of particular kind at particular probe time. Type object Required type status Property Type Description lastProbeTime Time Last time the condition was checked. lastTransitionTime Time Last time the condition transit from one status to another. message string Human readable message indicating details about last transition. reason string (brief) reason for the condition's last transition. status string Status of the condition, one of True, False, Unknown. type string Type of signature condition, Complete or Failed. 3.1.3. .issuedBy Description SignatureIssuer holds information about an issuer of signing certificate or key. Type object Property Type Description commonName string Common name (e.g. openshift-signing-service). organization string Organization name. 3.1.4. .issuedTo Description SignatureSubject holds information about a person or entity who created the signature. 
Type object Required publicKeyID Property Type Description commonName string Common name (e.g. openshift-signing-service). organization string Organization name. publicKeyID string If present, it is a human readable key id of public key belonging to the subject used to verify image signature. It should contain at least 64 lowest bits of public key's fingerprint (e.g. 0x685ebe62bf278440). 3.2. API endpoints The following API endpoints are available: /apis/image.openshift.io/v1/imagesignatures POST : create an ImageSignature /apis/image.openshift.io/v1/imagesignatures/{name} DELETE : delete an ImageSignature 3.2.1. /apis/image.openshift.io/v1/imagesignatures Table 3.1. Global query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. HTTP method POST Description create an ImageSignature Table 3.2. Body parameters Parameter Type Description body ImageSignature schema Table 3.3. HTTP responses HTTP code Reponse body 200 - OK ImageSignature schema 201 - Created ImageSignature schema 202 - Accepted ImageSignature schema 401 - Unauthorized Empty 3.2.2. /apis/image.openshift.io/v1/imagesignatures/{name} Table 3.4. Global path parameters Parameter Type Description name string name of the ImageSignature Table 3.5. Global query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed HTTP method DELETE Description delete an ImageSignature Table 3.6. HTTP responses HTTP code Reponse body 200 - OK Status_v5 schema 202 - Accepted Status_v5 schema 401 - Unauthorized Empty | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.18/html/image_apis/imagesignature-image-openshift-io-v1 |
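The record above lists only the REST endpoints, so the following is a hedged sketch of creating an ImageSignature with the oc client. It assumes you already have a detached signature blob on disk and the sha256 digest of the image it signs; the digest, signature name, file path, and content below are placeholders, and the AtomicImageV1 type value is an assumption about your signing tooling rather than something stated in the record.

# Sketch only: every identifier below is a hypothetical placeholder.
SIGNATURE_CONTENT=$(base64 -w0 /tmp/image-signature.gpg)

cat <<EOF | oc create -f -
apiVersion: image.openshift.io/v1
kind: ImageSignature
metadata:
  # Image signature names are conventionally <image-digest>@<signature-name>.
  name: sha256:0000000000000000000000000000000000000000000000000000000000000000@gpg-signature-1
type: AtomicImageV1
content: ${SIGNATURE_CONTENT}
EOF

# The DELETE endpoint in the record removes it again:
# oc delete imagesignature sha256:<digest>@gpg-signature-1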
14.3. Setting Default ACLs | 14.3. Setting Default ACLs To set a default ACL, add d: before the rule and specify a directory instead of a file name. For example, to set the default ACL for the /share/ directory to read and execute for users not in the user group (an access ACL for an individual file can override it): | [
"setfacl -m d:o:rx /share"
]
| https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/system_administration_guide/access_control_lists-setting_default_acls |
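As a follow-up to the setfacl example in the record above, this is a small sketch showing how you might verify the default ACL and confirm that new files inherit it; the directory and file names are only illustrative.

# Set the default ACL as described above, then inspect it.
setfacl -m d:o:rx /share
getfacl /share            # prints both the access ACL and the default ACL

# Files created inside /share afterwards inherit the default ACL.
touch /share/report.txt
getfacl /share/report.txt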
Chapter 3. Manually scaling a compute machine set | Chapter 3. Manually scaling a compute machine set You can add or remove an instance of a machine in a compute machine set. Note If you need to modify aspects of a compute machine set outside of scaling, see Modifying a compute machine set . 3.1. Prerequisites If you enabled the cluster-wide proxy and scale up compute machines not included in networking.machineNetwork[].cidr from the installation configuration, you must add the compute machines to the Proxy object's noProxy field to prevent connection issues. Important You can use the advanced machine management and scaling capabilities only in clusters where the Machine API is operational. Clusters with user-provisioned infrastructure require additional validation and configuration to use the Machine API. Clusters with the infrastructure platform type none cannot use the Machine API. This limitation applies even if the compute machines that are attached to the cluster are installed on a platform that supports the feature. This parameter cannot be changed after installation. To view the platform type for your cluster, run the following command: USD oc get infrastructure cluster -o jsonpath='{.status.platform}' 3.2. Scaling a compute machine set manually To add or remove an instance of a machine in a compute machine set, you can manually scale the compute machine set. This guidance is relevant to fully automated, installer-provisioned infrastructure installations. Customized, user-provisioned infrastructure installations do not have compute machine sets. Prerequisites Install an OpenShift Container Platform cluster and the oc command line. Log in to oc as a user with cluster-admin permission. Procedure View the compute machine sets that are in the cluster by running the following command: USD oc get machinesets.machine.openshift.io -n openshift-machine-api The compute machine sets are listed in the form of <clusterid>-worker-<aws-region-az> . View the compute machines that are in the cluster by running the following command: USD oc get machines.machine.openshift.io -n openshift-machine-api Set the annotation on the compute machine that you want to delete by running the following command: USD oc annotate machines.machine.openshift.io/<machine_name> -n openshift-machine-api machine.openshift.io/delete-machine="true" Scale the compute machine set by running one of the following commands: USD oc scale --replicas=2 machinesets.machine.openshift.io <machineset> -n openshift-machine-api Or: USD oc edit machinesets.machine.openshift.io <machineset> -n openshift-machine-api Tip You can alternatively apply the following YAML to scale the compute machine set: apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: name: <machineset> namespace: openshift-machine-api spec: replicas: 2 You can scale the compute machine set up or down. It takes several minutes for the new machines to be available. Important By default, the machine controller tries to drain the node that is backed by the machine until it succeeds. In some situations, such as with a misconfigured pod disruption budget, the drain operation might not be able to succeed. If the drain operation fails, the machine controller cannot proceed removing the machine. You can skip draining the node by annotating machine.openshift.io/exclude-node-draining in a specific machine. Verification Verify the deletion of the intended machine by running the following command: USD oc get machines.machine.openshift.io 3.3. 
The compute machine set deletion policy Random , Newest , and Oldest are the three supported deletion options. The default is Random , meaning that random machines are chosen and deleted when scaling compute machine sets down. The deletion policy can be set according to the use case by modifying the particular compute machine set: spec: deletePolicy: <delete_policy> replicas: <desired_replica_count> Specific machines can also be prioritized for deletion by adding the annotation machine.openshift.io/delete-machine=true to the machine of interest, regardless of the deletion policy. Important By default, the OpenShift Container Platform router pods are deployed on workers. Because the router is required to access some cluster resources, including the web console, do not scale the worker compute machine set to 0 unless you first relocate the router pods. Note Custom compute machine sets can be used for use cases requiring that services run on specific nodes and that those services are ignored by the controller when the worker compute machine sets are scaling down. This prevents service disruption. 3.4. Additional resources Lifecycle hooks for the machine deletion phase | [
"oc get infrastructure cluster -o jsonpath='{.status.platform}'",
"oc get machinesets.machine.openshift.io -n openshift-machine-api",
"oc get machines.machine.openshift.io -n openshift-machine-api",
"oc annotate machines.machine.openshift.io/<machine_name> -n openshift-machine-api machine.openshift.io/delete-machine=\"true\"",
"oc scale --replicas=2 machinesets.machine.openshift.io <machineset> -n openshift-machine-api",
"oc edit machinesets.machine.openshift.io <machineset> -n openshift-machine-api",
"apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: name: <machineset> namespace: openshift-machine-api spec: replicas: 2",
"oc get machines.machine.openshift.io",
"spec: deletePolicy: <delete_policy> replicas: <desired_replica_count>"
]
| https://docs.redhat.com/en/documentation/openshift_container_platform/4.18/html/machine_management/manually-scaling-machineset |
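The record above gives the commands one at a time; the following hedged sketch strings them together for the common case of scaling a compute machine set down by one while steering the deletion toward a particular machine. The machine set and machine names are placeholders for objects in your own cluster.

# Sketch only: substitute your own machine set and machine names.
MACHINESET=mycluster-abc12-worker-us-east-1a
VICTIM=mycluster-abc12-worker-us-east-1a-xyz9k

# Mark the machine you prefer to have deleted, then scale the set down.
oc annotate machines.machine.openshift.io/${VICTIM} \
  -n openshift-machine-api machine.openshift.io/delete-machine="true"
oc scale --replicas=2 machinesets.machine.openshift.io ${MACHINESET} \
  -n openshift-machine-api

# Watch until the annotated machine is drained and removed.
oc get machines.machine.openshift.io -n openshift-machine-api -w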
Chapter 39. System and Subscription Management | Chapter 39. System and Subscription Management Non-working Back button in the Subscription Manager add-on for Initial Setup The Back button on the first panel of the Subscription Manager add-on for the Initial Setup utility does not work. To work around this problem, click Done at the top of Initial Setup to exit the registration workflow. virt-who fails to change host-to-guest association to the Candlepin server When adding, removing, or migrating a guest, the virt-who utility currently fails to send the host-to-guest mapping and prints a RateLimitExceededException error to the log file. To work around the problem, set the VIRTWHO_INTERVAL= parameter in the /etc/sysconfig/virt-who file to a large number, such as 600. This allows the mapping to be changed correctly, but causes changes in the host-to-guest mapping to take significantly longer to be processed. ReaR fails to create an ISO on IBM System z ReaR is unable to create an ISO image on IBM System z systems. To work around this problem, use a different type of rescue system than ISO. ReaR supports only grub during system recovery ReaR supports only the grub boot loader. Consequently, ReaR cannot automatically recover a system with a different boot loader. Notably, yaboot is not yet supported by ReaR on PowerPC machines. To work around this problem, edit the boot loader manually. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/7.2_release_notes/known-issues-system_and_subscription_management |
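The virt-who workaround in the record above is described only in prose, so here is a hedged sketch of applying it on a RHEL 7 host. It assumes virt-who runs as a systemd service and that /etc/sysconfig/virt-who already contains a VIRTWHO_INTERVAL line (possibly commented out); the sed expression is an assumption about that layout, and 600 is the example value from the record.

# Sketch only: raise the virt-who reporting interval to 600 seconds.
sed -i 's/^#\?VIRTWHO_INTERVAL=.*/VIRTWHO_INTERVAL=600/' /etc/sysconfig/virt-who

# Restart the service so the new interval takes effect.
systemctl restart virt-who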
16.19. Package Group Selection | 16.19. Package Group Selection Now that you have made most of the choices for your installation, you are ready to confirm the default package selection or customize packages for your system. The Package Installation Defaults screen appears and details the default package set for your Red Hat Enterprise Linux installation. This screen varies depending on the version of Red Hat Enterprise Linux you are installing. Important If you install Red Hat Enterprise Linux in text mode, you cannot make package selections. The installer automatically selects packages only from the base and core groups. These packages are sufficient to ensure that the system is operational at the end of the installation process, ready to install updates and new packages. To change the package selection, complete the installation, then use the Add/Remove Software application to make desired changes. Figure 16.47. Package Group Selection By default, the Red Hat Enterprise Linux installation process loads a selection of software that is suitable for a system deployed as a basic server. Note that this installation does not include a graphical environment. To include a selection of software suitable for other roles, click the radio button that corresponds to one of the following options: Basic Server This option provides a basic installation of Red Hat Enterprise Linux for use on a server. Database Server This option provides the MySQL and PostgreSQL databases. Web server This option provides the Apache web server. Enterprise Identity Server Base This option provides OpenLDAP and Enterprise Identity Management (IPA) to create an identity and authentication server. Virtual Host This option provides the KVM and Virtual Machine Manager tools to create a host for virtual machines. Desktop This option provides the OpenOffice.org productivity suite, graphical tools such as the GIMP , and multimedia applications. Software Development Workstation This option provides the necessary tools to compile software on your Red Hat Enterprise Linux system. Minimal This option provides only the packages essential to run Red Hat Enterprise Linux. A minimal installation provides the basis for a single-purpose server or desktop appliance and maximizes performance and security on such an installation. Warning Minimal installation currently does not configure the firewall ( iptables / ip6tables ) by default because the authconfig and system-config-firewall-base packages are missing from the selection. To work around this issue, you can use a Kickstart file to add these packages to your selection. See the Red Hat Customer Portal for details about the workaround, and Chapter 32, Kickstart Installations for information about Kickstart files. If you do not use the workaround, the installation will complete successfully, but no firewall will be configured, presenting a security risk. If you choose to accept the current package list, skip ahead to Section 16.20, "Installing Packages" . To select a component, click on the checkbox beside it (refer to Figure 16.47, "Package Group Selection" ). To customize your package set further, select the Customize now option on the screen. Clicking takes you to the Package Group Selection screen. 16.19.1. Installing from Additional Repositories You can define additional repositories to increase the software available to your system during installation. A repository is a network location that stores software packages along with metadata that describes them. 
Many of the software packages used in Red Hat Enterprise Linux require other software to be installed. The installer uses the metadata to ensure that these requirements are met for every piece of software you select for installation. The Red Hat Enterprise Linux repository is automatically selected for you. It contains the complete collection of software that was released as Red Hat Enterprise Linux 6.9, with the various pieces of software in their versions that were current at the time of release. Figure 16.48. Adding a software repository To include software from extra repositories , select Add additional software repositories and provide the location of the repository. To edit an existing software repository location, select the repository in the list and then select Modify repository . If you change the repository information during a non-network installation, such as from a Red Hat Enterprise Linux DVD, the installer prompts you for network configuration information. Figure 16.49. Select network interface Select an interface from the drop-down menu. Click OK . Anaconda then starts NetworkManager to allow you to configure the interface. Figure 16.50. Network Connections For details of how to use NetworkManager , refer to Section 16.9, "Setting the Hostname" If you select Add additional software repositories , the Edit repository dialog appears. Provide a Repository name and the Repository URL for its location. Once you have located a mirror, to determine the URL to use, find the directory on the mirror that contains a directory named repodata . Once you provide information for an additional repository, the installer reads the package metadata over the network. Software that is specially marked is then included in the package group selection system. Warning If you choose Back from the package selection screen, any extra repository data you may have entered is lost. This allows you to effectively cancel extra repositories. Currently there is no way to cancel only a single repository once entered. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/installation_guide/s1-pkgselection-ppc |
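The warning in the record above says the Minimal installation firewall issue can be worked around with a Kickstart file but does not show one, so the lines below are a hedged sketch of the relevant %packages fragment written from the shell. The file name is a placeholder, and the rest of a working Kickstart file (installation source, partitioning, and so on) is assumed to exist already.

# Sketch only: add the packages needed for firewall configuration to a
# Minimal-installation Kickstart file; ks-minimal.cfg is a placeholder name.
cat >> ks-minimal.cfg <<'EOF'
%packages
@core
authconfig
system-config-firewall-base
%end
EOF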
Chapter 1. Overview of Builds | Chapter 1. Overview of Builds Builds is an extensible build framework based on the Shipwright project , which you can use to build container images on an OpenShift Container Platform cluster. You can build container images from source code and Dockerfiles by using image build tools, such as Source-to-Image (S2I) and Buildah. You can create and apply build resources, view logs of build runs, and manage builds in your OpenShift Container Platform namespaces. Builds includes the following capabilities: Standard Kubernetes-native API for building container images from source code and Dockerfiles Support for Source-to-Image (S2I) and Buildah build strategies Extensibility with your own custom build strategies Execution of builds from source code in a local directory Shipwright CLI for creating and viewing logs, and managing builds on the cluster Integrated user experience with the Developer perspective of the OpenShift Container Platform web console Note Because Builds releases on a different cadence from OpenShift Container Platform, the Builds documentation is now available as a separate documentation set at builds for Red Hat OpenShift . | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.18/html/builds_using_shipwright/overview-openshift-builds |
18.3.4.4. Additional Match Option Modules | 18.3.4.4. Additional Match Option Modules Additional match options are also available through modules loaded by the iptables command. To use a match option module, load the module by name using the -m option, such as -m <module-name> (replacing <module-name> with the name of the module). A large number of modules are available by default. It is even possible to create modules that provide additional functionality. The following is a partial list of the most commonly used modules: limit module - Places limits on how many packets are matched to a particular rule. This is especially beneficial when used in conjunction with the LOG target as it can prevent a flood of matching packets from filling up the system log with repetitive messages or using up system resources. Refer to Section 18.3.5, "Target Options" for more information about the LOG target. The limit module enables the following options: --limit - Sets the number of matches for a particular range of time, specified with a number and time modifier arranged in a <number>/<time> format. For example, using --limit 5/hour only lets a rule match 5 times in a single hour. If a number and time modifier are not used, the default value of 3/hour is assumed. --limit-burst - Sets a limit on the number of packets able to match a rule at one time. This option should be used in conjunction with the --limit option, and it accepts a number to set the burst threshold. If no number is specified, only five packets are initially able to match the rule. state module - Enables state matching. The state module enables the following options: --state - match a packet with the following connection states: ESTABLISHED - The matching packet is associated with other packets in an established connection. INVALID - The matching packet cannot be tied to a known connection. NEW - The matching packet is either creating a new connection or is part of a two-way connection not previously seen. RELATED - The matching packet is starting a new connection related in some way to an existing connection. These connection states can be used in combination with one another by separating them with commas, such as -m state --state INVALID,NEW . mac module - Enables hardware MAC address matching. The mac module enables the following option: --mac-source - Matches a MAC address of the network interface card that sent the packet. To exclude a MAC address from a rule, place an exclamation point character ( ! ) after the --mac-source match option. To view other match options available through modules, refer to the iptables man page. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/reference_guide/s3-iptables-options-match-modules |
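Because the record above describes the limit, state, and mac modules without complete commands, here is a hedged set of example rules that exercises each option. The port, interface, MAC address, and log prefix are placeholders, and applying the rules to the INPUT chain of the filter table is an assumption about where you would use them.

# limit module with the LOG target: at most 5 log entries per hour for new SSH connections.
iptables -A INPUT -p tcp --dport 22 -m state --state NEW \
  -m limit --limit 5/hour --limit-burst 5 -j LOG --log-prefix "SSH attempt: "

# state module: accept established and related traffic, drop packets with no known connection.
iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
iptables -A INPUT -m state --state INVALID -j DROP

# mac module: accept traffic arriving on eth0 from a known hardware address (placeholder MAC).
iptables -A INPUT -i eth0 -m mac --mac-source 00:16:3e:aa:bb:cc -j ACCEPT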
Data Grid downloads | Data Grid downloads Access the Data Grid Software Downloads on the Red Hat customer portal. Note You must have a Red Hat account to access and download Data Grid software. | null | https://docs.redhat.com/en/documentation/red_hat_data_grid/8.5/html/data_grid_cross-site_replication/rhdg-downloads_datagrid |
Providing feedback on Red Hat build of OpenJDK documentation | Providing feedback on Red Hat build of OpenJDK documentation To report an error or to improve our documentation, log in to your Red Hat Jira account and submit an issue. If you do not have a Red Hat Jira account, then you will be prompted to create an account. Procedure Click the following link to create a ticket . Enter a brief description of the issue in the Summary . Provide a detailed description of the issue or enhancement in the Description . Include a URL to where the issue occurs in the documentation. Clicking Submit creates and routes the issue to the appropriate documentation team. | null | https://docs.redhat.com/en/documentation/red_hat_build_of_openjdk/11/html/release_notes_for_eclipse_temurin_11.0.22/proc-providing-feedback-on-redhat-documentation |
Chapter 1. The Ceph Object Gateway | Chapter 1. The Ceph Object Gateway Ceph Object Gateway, also known as RADOS Gateway (RGW), is an object storage interface built on top of the librados library to provide applications with a RESTful gateway to Ceph storage clusters. Ceph Object Gateway supports three interfaces: S3-compatibility: Provides object storage functionality with an interface that is compatible with a large subset of the Amazon S3 RESTful API. You can run S3 select to accelerate throughput. Users can run S3 select queries directly without a mediator. There are two S3 select workflows, one for CSV and one for Apache Parquet (Parquet), that provide S3 select operations with CSV and Parquet objects. For more details about these S3 select operations, see section S3 select operations in the Red Hat Ceph Storage Developer Guide . Swift-compatibility: Provides object storage functionality with an interface that is compatible with a large subset of the OpenStack Swift API. The Ceph Object Gateway is a service interacting with a Ceph storage cluster. Since it provides interfaces compatible with OpenStack Swift and Amazon S3, the Ceph Object Gateway has its own user management system. Ceph Object Gateway can store data in the same Ceph storage cluster used to store data from Ceph block device clients; however, it would involve separate pools and likely a different CRUSH hierarchy. The S3 and Swift APIs share a common namespace, so you can write data with one API and retrieve it with the other. Administrative API: Provides an administrative interface for managing the Ceph Object Gateways. Administrative API requests are done on a URI that starts with the admin resource end point. Authorization for the administrative API mimics the S3 authorization convention. Some operations require the user to have special administrative capabilities. The response type can be either XML or JSON by specifying the format option in the request, but defaults to the JSON format. Introduction to WORM Write-Once-Read-Many (WORM) is a secured data storage model that is used to guarantee data protection and data retrieval even in cases where objects and buckets are compromised in production zones. In Red Hat Ceph Storage, data security is achieved through the use of S3 Object Lock with read-only capability that is used to store objects and buckets using a Write-Once-Read-Many (WORM) model, preventing them from being deleted or overwritten. They cannot be deleted even by the Red Hat Ceph Storage administrator. S3 Object Lock provides two retention modes: GOVERNANCE COMPLIANCE These retention modes apply different levels of protection to your objects. You can apply either retention mode to any object version that is protected by Object Lock. In GOVERNANCE, users cannot overwrite or delete an object version or alter its lock settings unless they have special permissions. With GOVERNANCE mode, you can protect objects against deletion by most users, although you can still grant some users permission to alter the retention settings or delete the object if necessary. In COMPLIANCE mode, a protected object version cannot be overwritten or deleted by any user. When an object is locked in COMPLIANCE mode, its retention mode cannot be changed or shortened. Additional Resources See Enabling object lock for S3 in the Red Hat Ceph Storage Object Gateway Guide for more details. | null | https://docs.redhat.com/en/documentation/red_hat_ceph_storage/8/html/object_gateway_guide/the-ceph-object-gateway |
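To complement the Object Lock overview in the record above, the following is a hedged sketch that drives it through the S3-compatible API with the AWS CLI. The endpoint URL, bucket name, object key, and retention date are placeholders, and the sketch assumes the CLI is already configured with the access and secret keys of a Ceph Object Gateway user.

RGW_ENDPOINT=http://rgw.example.com:8080   # placeholder endpoint

# Object Lock must be enabled when the bucket is created.
aws --endpoint-url "$RGW_ENDPOINT" s3api create-bucket \
  --bucket worm-bucket --object-lock-enabled-for-bucket

# Upload an object, then protect that version in COMPLIANCE mode until a future date.
aws --endpoint-url "$RGW_ENDPOINT" s3api put-object \
  --bucket worm-bucket --key audit.log --body ./audit.log
aws --endpoint-url "$RGW_ENDPOINT" s3api put-object-retention \
  --bucket worm-bucket --key audit.log \
  --retention 'Mode=COMPLIANCE,RetainUntilDate=2030-01-01T00:00:00Z'

# While the retention is in effect, delete and overwrite attempts on that version are rejected.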
4.7. Synchronizing Configuration Files | 4.7. Synchronizing Configuration Files After you configure the primary LVS router, several configuration files must be copied to the backup LVS router before you start the Load Balancer Add-On. These files include: /etc/sysconfig/ha/lvs.cf - the configuration file for the LVS routers. /etc/sysctl.conf - the configuration file that, among other things, turns on packet forwarding in the kernel. /etc/sysconfig/iptables - If you are using firewall marks, you should synchronize one of these files based on which network packet filter you are using. Important The /etc/sysctl.conf and /etc/sysconfig/iptables files do not change when you configure the Load Balancer Add-On using the Piranha Configuration Tool. 4.7.1. Synchronizing lvs.cf Anytime the LVS configuration file, /etc/sysconfig/ha/lvs.cf , is created or updated, you must copy it to the backup LVS router node. Warning Both the active and backup LVS router nodes must have identical lvs.cf files. Mismatched LVS configuration files between the LVS router nodes can prevent failover. The best way to do this is to use the scp command. Important To use scp , the sshd service must be running on the backup router; see Section 2.1, "Configuring Services on the LVS Router" for details on how to properly configure the necessary services on the LVS routers. Issue the following command as the root user from the primary LVS router to sync the lvs.cf files between the router nodes: scp /etc/sysconfig/ha/lvs.cf n.n.n.n:/etc/sysconfig/ha/lvs.cf In the command, replace n.n.n.n with the real IP address of the backup LVS router. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/load_balancer_administration/s1-lvs-sync-vsa
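Extending the scp example in the record above, this hedged sketch copies all three files mentioned there to the backup router in one pass. The backup router address is a placeholder, and copying the sysctl and iptables files is only appropriate if you maintain them by hand, since the record notes that the Piranha Configuration Tool does not change them.

# Sketch only: replace 192.168.1.11 with the real IP address of the backup LVS router.
BACKUP=192.168.1.11

scp /etc/sysconfig/ha/lvs.cf root@${BACKUP}:/etc/sysconfig/ha/lvs.cf
scp /etc/sysctl.conf root@${BACKUP}:/etc/sysctl.conf
scp /etc/sysconfig/iptables root@${BACKUP}:/etc/sysconfig/iptables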
Installing Red Hat Trusted Application Pipeline | Installing Red Hat Trusted Application Pipeline Red Hat Trusted Application Pipeline 1.4 Learn how to install Red Hat Trusted Application Pipeline in your cluster. Red Hat Customer Content Services | null | https://docs.redhat.com/en/documentation/red_hat_trusted_application_pipeline/1.4/html/installing_red_hat_trusted_application_pipeline/index |
Chapter 2. Jenkins agent | Chapter 2. Jenkins agent OpenShift Container Platform provides a base image for use as a Jenkins agent. The Base image for Jenkins agents does the following: Pulls in both the required tools, headless Java, the Jenkins JNLP client, and the useful ones, including git , tar , zip , and nss , among others. Establishes the JNLP agent as the entry point. Includes the oc client tool for invoking command line operations from within Jenkins jobs. Provides Dockerfiles for both Red Hat Enterprise Linux (RHEL) and localdev images. Important Use a version of the agent image that is appropriate for your OpenShift Container Platform release version. Embedding an oc client version that is not compatible with the OpenShift Container Platform version can cause unexpected behavior. The OpenShift Container Platform Jenkins image also defines the following sample java-builder pod template to illustrate how you can use the agent image with the Jenkins Kubernetes plugin. The java-builder pod template employs two containers: * A jnlp container that uses the OpenShift Container Platform Base agent image and handles the JNLP contract for starting and stopping Jenkins agents. * A java container that uses the java OpenShift Container Platform Sample ImageStream, which contains the various Java binaries, including the Maven binary mvn , for building code. 2.1. Jenkins agent images The OpenShift Container Platform Jenkins agent images are available on Quay.io or registry.redhat.io . Jenkins images are available through the Red Hat Registry: USD docker pull registry.redhat.io/ocp-tools-4/jenkins-rhel8:<image_tag> USD docker pull registry.redhat.io/ocp-tools-4/jenkins-agent-base-rhel8:<image_tag> To use these images, you can either access them directly from Quay.io or registry.redhat.io or push them into your OpenShift Container Platform container image registry. 2.2. Jenkins agent environment variables Each Jenkins agent container can be configured with the following environment variables. Variable Definition Example values and settings JAVA_MAX_HEAP_PARAM , CONTAINER_HEAP_PERCENT , JENKINS_MAX_HEAP_UPPER_BOUND_MB These values control the maximum heap size of the Jenkins JVM. If JAVA_MAX_HEAP_PARAM is set, its value takes precedence. Otherwise, the maximum heap size is dynamically calculated as CONTAINER_HEAP_PERCENT of the container memory limit, optionally capped at JENKINS_MAX_HEAP_UPPER_BOUND_MB MiB. By default, the maximum heap size of the Jenkins JVM is set to 50% of the container memory limit with no cap. JAVA_MAX_HEAP_PARAM example setting: -Xmx512m CONTAINER_HEAP_PERCENT default: 0.5 , or 50% JENKINS_MAX_HEAP_UPPER_BOUND_MB example setting: 512 MiB JAVA_INITIAL_HEAP_PARAM , CONTAINER_INITIAL_PERCENT These values control the initial heap size of the Jenkins JVM. If JAVA_INITIAL_HEAP_PARAM is set, its value takes precedence. Otherwise, the initial heap size is dynamically calculated as CONTAINER_INITIAL_PERCENT of the dynamically calculated maximum heap size. By default, the JVM sets the initial heap size. JAVA_INITIAL_HEAP_PARAM example setting: -Xms32m CONTAINER_INITIAL_PERCENT example setting: 0.1 , or 10% CONTAINER_CORE_LIMIT If set, specifies an integer number of cores used for sizing numbers of internal JVM threads. Example setting: 2 JAVA_TOOL_OPTIONS Specifies options to apply to all JVMs running in this container. It is not recommended to override this value. 
Default: -XX:+UnlockExperimentalVMOptions -XX:+UseCGroupMemoryLimitForHeap -Dsun.zip.disableMemoryMapping=true JAVA_GC_OPTS Specifies Jenkins JVM garbage collection parameters. It is not recommended to override this value. Default: -XX:+UseParallelGC -XX:MinHeapFreeRatio=5 -XX:MaxHeapFreeRatio=10 -XX:GCTimeRatio=4 -XX:AdaptiveSizePolicyWeight=90 JENKINS_JAVA_OVERRIDES Specifies additional options for the Jenkins JVM. These options are appended to all other options, including the Java options above, and can be used to override any of them, if necessary. Separate each additional option with a space, and if any option contains space characters, escape them with a backslash. Example settings: -Dfoo -Dbar ; -Dfoo=first\ value -Dbar=second\ value USE_JAVA_VERSION Specifies the version of Java to use to run the agent in its container. The container base image has two versions of java installed: java-11 and java-1.8.0 . If you extend the container base image, you can specify any alternative version of java using its associated suffix. The default value is java-11 . Example setting: java-1.8.0 2.3. Jenkins agent memory requirements A JVM is used in all Jenkins agents to host the Jenkins JNLP agent as well as to run any Java applications such as javac , Maven, or Gradle. By default, the Jenkins JNLP agent JVM uses 50% of the container memory limit for its heap. This value can be modified by the CONTAINER_HEAP_PERCENT environment variable. It can also be capped at an upper limit or overridden entirely. By default, any other processes run in the Jenkins agent container, such as shell scripts or oc commands run from pipelines, cannot use more than the remaining 50% memory limit without provoking an OOM kill. By default, each further JVM process that runs in a Jenkins agent container uses up to 25% of the container memory limit for its heap. It might be necessary to tune this limit for many build workloads. 2.4. Jenkins agent Gradle builds Hosting Gradle builds in the Jenkins agent on OpenShift Container Platform presents additional complications because, in addition to the Jenkins JNLP agent and Gradle JVMs, Gradle spawns a third JVM to run tests if they are specified. The following settings are suggested as a starting point for running Gradle builds in a memory-constrained Jenkins agent on OpenShift Container Platform. You can modify these settings as required. Ensure the long-lived Gradle daemon is disabled by adding org.gradle.daemon=false to the gradle.properties file. Disable parallel build execution by ensuring org.gradle.parallel=true is not set in the gradle.properties file and that --parallel is not set as a command line argument. To prevent Java compilations running out-of-process, set java { options.fork = false } in the build.gradle file. Disable multiple additional test processes by ensuring test { maxParallelForks = 1 } is set in the build.gradle file. Override the Gradle JVM memory parameters by using the GRADLE_OPTS , JAVA_OPTS , or JAVA_TOOL_OPTIONS environment variables. Set the maximum heap size and JVM arguments for any Gradle test JVM by defining the maxHeapSize and jvmArgs settings in build.gradle , or through the -Dorg.gradle.jvmargs command line argument. 2.5. Jenkins agent pod retention Jenkins agent pods are deleted by default after the build completes or is stopped. This behavior can be changed by the Kubernetes plugin pod retention setting. Pod retention can be set for all Jenkins builds, with overrides for each pod template.
The following behaviors are supported: Always keeps the build pod regardless of build result. Default uses the plugin value, which is the pod template only. Never always deletes the pod. On Failure keeps the pod if it fails during the build. You can override pod retention in the pipeline Jenkinsfile: podTemplate(label: "mypod", cloud: "openshift", inheritFrom: "maven", podRetention: onFailure(), 1 containers: [ ... ]) { node("mypod") { ... } } 1 Allowed values for podRetention are never() , onFailure() , always() , and default() . Warning Pods that are kept might continue to run and count against resource quotas. | [
"docker pull registry.redhat.io/ocp-tools-4/jenkins-rhel8:<image_tag>",
"docker pull registry.redhat.io/ocp-tools-4/jenkins-agent-base-rhel8:<image_tag>",
"podTemplate(label: \"mypod\", cloud: \"openshift\", inheritFrom: \"maven\", podRetention: onFailure(), 1 containers: [ ]) { node(\"mypod\") { } }"
]
| https://docs.redhat.com/en/documentation/openshift_container_platform/4.12/html/jenkins/images-other-jenkins-agent |
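The Gradle recommendations in the record above are given in prose, so here is a hedged shell sketch of applying them from a Jenkins agent workspace. The property values mirror the record, while the heap size, file locations, and use of the Gradle wrapper are assumptions to adapt to your project and container memory limit; the build.gradle settings the record mentions (options.fork, maxParallelForks, maxHeapSize, jvmArgs) still have to be edited in the build script itself.

# Sketch only: run from the root of the Gradle project checked out on the agent.
# Disable the long-lived daemon and parallel execution, as recommended above.
cat >> gradle.properties <<'EOF'
org.gradle.daemon=false
org.gradle.parallel=false
EOF

# Cap the Gradle launcher JVM heap; 384m is an arbitrary example value.
export GRADLE_OPTS="-Xmx384m"

# The record also suggests -Dorg.gradle.jvmargs as an alternative to editing
# build.gradle; again, 384m is only an example, and ./gradlew assumes the
# project ships the Gradle wrapper.
./gradlew build -Dorg.gradle.jvmargs=-Xmx384m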
Chapter 4. CSIStorageCapacity [storage.k8s.io/v1] | Chapter 4. CSIStorageCapacity [storage.k8s.io/v1] Description CSIStorageCapacity stores the result of one CSI GetCapacity call. For a given StorageClass, this describes the available capacity in a particular topology segment. This can be used when considering where to instantiate new PersistentVolumes. For example this can express things like: - StorageClass "standard" has "1234 GiB" available in "topology.kubernetes.io/zone=us-east1" - StorageClass "localssd" has "10 GiB" available in "kubernetes.io/hostname=knode-abc123" The following three cases all imply that no capacity is available for a certain combination: - no object exists with suitable topology and storage class name - such an object exists, but the capacity is unset - such an object exists, but the capacity is zero The producer of these objects can decide which approach is more suitable. They are consumed by the kube-scheduler when a CSI driver opts into capacity-aware scheduling with CSIDriverSpec.StorageCapacity. The scheduler compares the MaximumVolumeSize against the requested size of pending volumes to filter out unsuitable nodes. If MaximumVolumeSize is unset, it falls back to a comparison against the less precise Capacity. If that is also unset, the scheduler assumes that capacity is insufficient and tries some other node. Type object Required storageClassName 4.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources capacity Quantity Capacity is the value reported by the CSI driver in its GetCapacityResponse for a GetCapacityRequest with topology and parameters that match the fields. The semantic is currently (CSI spec 1.2) defined as: The available capacity, in bytes, of the storage that can be used to provision volumes. If not set, that information is currently unavailable. kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds maximumVolumeSize Quantity MaximumVolumeSize is the value reported by the CSI driver in its GetCapacityResponse for a GetCapacityRequest with topology and parameters that match the fields. This is defined since CSI spec 1.4.0 as the largest size that may be used in a CreateVolumeRequest.capacity_range.required_bytes field to create a volume with the same parameters as those in GetCapacityRequest. The corresponding value in the Kubernetes API is ResourceRequirements.Requests in a volume claim. metadata ObjectMeta Standard object's metadata. The name has no particular meaning. It must be be a DNS subdomain (dots allowed, 253 characters). To ensure that there are no conflicts with other CSI drivers on the cluster, the recommendation is to use csisc-<uuid>, a generated name, or a reverse-domain name which ends with the unique CSI driver name. Objects are namespaced. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata nodeTopology LabelSelector NodeTopology defines which nodes have access to the storage for which capacity was reported. 
If not set, the storage is not accessible from any node in the cluster. If empty, the storage is accessible from all nodes. This field is immutable. storageClassName string The name of the StorageClass that the reported capacity applies to. It must meet the same requirements as the name of a StorageClass object (non-empty, DNS subdomain). If that object no longer exists, the CSIStorageCapacity object is obsolete and should be removed by its creator. This field is immutable. 4.2. API endpoints The following API endpoints are available: /apis/storage.k8s.io/v1/csistoragecapacities GET : list or watch objects of kind CSIStorageCapacity /apis/storage.k8s.io/v1/watch/csistoragecapacities GET : watch individual changes to a list of CSIStorageCapacity. deprecated: use the 'watch' parameter with a list operation instead. /apis/storage.k8s.io/v1/namespaces/{namespace}/csistoragecapacities DELETE : delete collection of CSIStorageCapacity GET : list or watch objects of kind CSIStorageCapacity POST : create a CSIStorageCapacity /apis/storage.k8s.io/v1/watch/namespaces/{namespace}/csistoragecapacities GET : watch individual changes to a list of CSIStorageCapacity. deprecated: use the 'watch' parameter with a list operation instead. /apis/storage.k8s.io/v1/namespaces/{namespace}/csistoragecapacities/{name} DELETE : delete a CSIStorageCapacity GET : read the specified CSIStorageCapacity PATCH : partially update the specified CSIStorageCapacity PUT : replace the specified CSIStorageCapacity /apis/storage.k8s.io/v1/watch/namespaces/{namespace}/csistoragecapacities/{name} GET : watch changes to an object of kind CSIStorageCapacity. deprecated: use the 'watch' parameter with a list operation instead, filtered to a single item with the 'fieldSelector' parameter. 4.2.1. /apis/storage.k8s.io/v1/csistoragecapacities Table 4.1. Global query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. 
fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. pretty string If 'true', then the output is pretty printed. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. HTTP method GET Description list or watch objects of kind CSIStorageCapacity Table 4.2. HTTP responses HTTP code Reponse body 200 - OK CSIStorageCapacityList schema 401 - Unauthorized Empty 4.2.2. /apis/storage.k8s.io/v1/watch/csistoragecapacities Table 4.3. Global query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. 
If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. pretty string If 'true', then the output is pretty printed. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. HTTP method GET Description watch individual changes to a list of CSIStorageCapacity. deprecated: use the 'watch' parameter with a list operation instead. Table 4.4. 
HTTP responses HTTP code Reponse body 200 - OK WatchEvent schema 401 - Unauthorized Empty 4.2.3. /apis/storage.k8s.io/v1/namespaces/{namespace}/csistoragecapacities Table 4.5. Global path parameters Parameter Type Description namespace string object name and auth scope, such as for teams and projects Table 4.6. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method DELETE Description delete collection of CSIStorageCapacity Table 4.7. Query parameters Parameter Type Description continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. gracePeriodSeconds integer The duration in seconds before the object should be deleted. Value must be non-negative integer. The value zero indicates delete immediately. If this value is nil, the default grace period for the specified type will be used. Defaults to a per object value if not specified. zero means delete immediately. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. 
The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. orphanDependents boolean Deprecated: please use the PropagationPolicy, this field will be deprecated in 1.7. Should the dependent objects be orphaned. If true/false, the "orphan" finalizer will be added to/removed from the object's finalizers list. Either this field or PropagationPolicy may be set, but not both. propagationPolicy string Whether and how garbage collection will be performed. Either this field or OrphanDependents may be set, but not both. The default policy is decided by the existing finalizer set in the metadata.finalizers and the resource-specific default policy. Acceptable values are: 'Orphan' - orphan the dependents; 'Background' - allow the garbage collector to delete the dependents in the background; 'Foreground' - a cascading policy that deletes all dependents in the foreground. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. Table 4.8. Body parameters Parameter Type Description body DeleteOptions schema Table 4.9. HTTP responses HTTP code Reponse body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list or watch objects of kind CSIStorageCapacity Table 4.10. Query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. 
Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. Table 4.11. HTTP responses HTTP code Reponse body 200 - OK CSIStorageCapacityList schema 401 - Unauthorized Empty HTTP method POST Description create a CSIStorageCapacity Table 4.12. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. 
Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 4.13. Body parameters Parameter Type Description body CSIStorageCapacity schema Table 4.14. HTTP responses HTTP code Reponse body 200 - OK CSIStorageCapacity schema 201 - Created CSIStorageCapacity schema 202 - Accepted CSIStorageCapacity schema 401 - Unauthorized Empty 4.2.4. /apis/storage.k8s.io/v1/watch/namespaces/{namespace}/csistoragecapacities Table 4.15. Global path parameters Parameter Type Description namespace string object name and auth scope, such as for teams and projects Table 4.16. Global query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. 
Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. pretty string If 'true', then the output is pretty printed. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. HTTP method GET Description watch individual changes to a list of CSIStorageCapacity. deprecated: use the 'watch' parameter with a list operation instead. Table 4.17. HTTP responses HTTP code Reponse body 200 - OK WatchEvent schema 401 - Unauthorized Empty 4.2.5. /apis/storage.k8s.io/v1/namespaces/{namespace}/csistoragecapacities/{name} Table 4.18. Global path parameters Parameter Type Description name string name of the CSIStorageCapacity namespace string object name and auth scope, such as for teams and projects Table 4.19. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method DELETE Description delete a CSIStorageCapacity Table 4.20. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. 
An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed gracePeriodSeconds integer The duration in seconds before the object should be deleted. Value must be non-negative integer. The value zero indicates delete immediately. If this value is nil, the default grace period for the specified type will be used. Defaults to a per object value if not specified. zero means delete immediately. orphanDependents boolean Deprecated: please use the PropagationPolicy, this field will be deprecated in 1.7. Should the dependent objects be orphaned. If true/false, the "orphan" finalizer will be added to/removed from the object's finalizers list. Either this field or PropagationPolicy may be set, but not both. propagationPolicy string Whether and how garbage collection will be performed. Either this field or OrphanDependents may be set, but not both. The default policy is decided by the existing finalizer set in the metadata.finalizers and the resource-specific default policy. Acceptable values are: 'Orphan' - orphan the dependents; 'Background' - allow the garbage collector to delete the dependents in the background; 'Foreground' - a cascading policy that deletes all dependents in the foreground. Table 4.21. Body parameters Parameter Type Description body DeleteOptions schema Table 4.22. HTTP responses HTTP code Reponse body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified CSIStorageCapacity Table 4.23. HTTP responses HTTP code Reponse body 200 - OK CSIStorageCapacity schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified CSIStorageCapacity Table 4.24. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . This field is required for apply requests (application/apply-patch) but optional for non-apply patch types (JsonPatch, MergePatch, StrategicMergePatch). fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. 
- Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. force boolean Force is going to "force" Apply requests. It means user will re-acquire conflicting fields owned by other people. Force flag must be unset for non-apply patch requests. Table 4.25. Body parameters Parameter Type Description body Patch schema Table 4.26. HTTP responses HTTP code Reponse body 200 - OK CSIStorageCapacity schema 201 - Created CSIStorageCapacity schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified CSIStorageCapacity Table 4.27. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 4.28. Body parameters Parameter Type Description body CSIStorageCapacity schema Table 4.29. HTTP responses HTTP code Reponse body 200 - OK CSIStorageCapacity schema 201 - Created CSIStorageCapacity schema 401 - Unauthorized Empty 4.2.6. /apis/storage.k8s.io/v1/watch/namespaces/{namespace}/csistoragecapacities/{name} Table 4.30. Global path parameters Parameter Type Description name string name of the CSIStorageCapacity namespace string object name and auth scope, such as for teams and projects Table 4.31. Global query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. 
Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. pretty string If 'true', then the output is pretty printed. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. 
watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. HTTP method GET Description watch changes to an object of kind CSIStorageCapacity. deprecated: use the 'watch' parameter with a list operation instead, filtered to a single item with the 'fieldSelector' parameter. Table 4.32. HTTP responses HTTP code Response body 200 - OK WatchEvent schema 401 - Unauthorized Empty | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.12/html/storage_apis/csistoragecapacity-storage-k8s-io-v1 |
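The following sketch illustrates how the fields described in section 4.1 fit together. It is not taken from the product documentation: the object name, namespace, storage class name, topology label value, and sizes are assumptions chosen for illustration, and in practice the CSI driver's provisioner, not an administrator, normally creates and updates these objects.

oc apply -f - <<EOF
apiVersion: storage.k8s.io/v1
kind: CSIStorageCapacity
metadata:
  # Hypothetical name and namespace; the name only needs to be a unique DNS subdomain.
  name: csisc-example-zone-us-east1
  namespace: openshift-cluster-csi-drivers
storageClassName: standard
# Quantity reported by the driver for this topology segment.
capacity: 1234Gi
# Largest single volume that can currently be provisioned here (CSI spec 1.4.0 or later).
maximumVolumeSize: 600Gi
nodeTopology:
  matchLabels:
    topology.kubernetes.io/zone: us-east1
EOF

To inspect the capacity objects a driver has published across all namespaces, a command such as the following can be used:

oc get csistoragecapacities -A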
Chapter 2. The Ceph File System Metadata Server | Chapter 2. The Ceph File System Metadata Server Additional Resources As a storage administrator, you can learn about the different states of the Ceph File System (CephFS) Metadata Server (MDS), along with learning about CephFS MDS ranking mechanics, configuring the MDS standby daemon, and cache size limits. Knowing these concepts can enable you to configure the MDS daemons for a storage environment. Prerequisites A running, and healthy Red Hat Ceph Storage cluster. Installation of the Ceph Metadata Server daemons ( ceph-mds ). See the Management of MDS service using the Ceph Orchestrator section in the Red Hat Ceph Storage File System Guide for details on configuring MDS daemons. 2.1. Metadata Server daemon states The Metadata Server (MDS) daemons operate in two states: Active - manages metadata for files and directories stores on the Ceph File System. Standby - serves as a backup, and becomes active when an active MDS daemon becomes unresponsive. By default, a Ceph File System uses only one active MDS daemon. However, systems with many clients benefit from multiple active MDS daemons. You can configure the file system to use multiple active MDS daemons so that you can scale metadata performance for larger workloads. The active MDS daemons dynamically share the metadata workload when metadata load patterns change. Note that systems with multiple active MDS daemons still require standby MDS daemons to remain highly available. What Happens When the Active MDS Daemon Fails When the active MDS becomes unresponsive, a Ceph Monitor daemon waits a number of seconds equal to the value specified in the mds_beacon_grace option. If the active MDS is still unresponsive after the specified time period has passed, the Ceph Monitor marks the MDS daemon as laggy . One of the standby daemons becomes active, depending on the configuration. Note To change the value of mds_beacon_grace , add this option to the Ceph configuration file and specify the new value. 2.2. Metadata Server ranks Each Ceph File System (CephFS) has a number of ranks, one by default, which starts at zero. Ranks define how the metadata workload is shared between multiple Metadata Server (MDS) daemons. The number of ranks is the maximum number of MDS daemons that can be active at one time. Each MDS daemon handles a subset of the CephFS metadata that is assigned to that rank. Each MDS daemon initially starts without a rank. The Ceph Monitor assigns a rank to the daemon. The MDS daemon can only hold one rank at a time. Daemons only lose ranks when they are stopped. The max_mds setting controls how many ranks will be created. The actual number of ranks in the CephFS is only increased if a spare daemon is available to accept the new rank. Rank States Ranks can be: Up - A rank that is assigned to the MDS daemon. Failed - A rank that is not associated with any MDS daemon. Damaged - A rank that is damaged; its metadata is corrupted or missing. Damaged ranks are not assigned to any MDS daemons until the operator fixes the problem, and uses the ceph mds repaired command on the damaged rank. 2.3. Metadata Server cache size limits You can limit the size of the Ceph File System (CephFS) Metadata Server (MDS) cache by: A memory limit : Use the mds_cache_memory_limit option. Red Hat recommends a value between 8 GB and 64 GB for mds_cache_memory_limit . Setting more cache can cause issues with recovery. This limit is approximately 66% of the desired maximum memory use of the MDS. 
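As a minimal sketch of applying this memory limit centrally (the 16 GiB value below is an assumption for illustration, not a sizing recommendation for any particular workload), run the following from a node with Ceph administrator access:

# Set the MDS cache memory limit to 16 GiB (the value is in bytes), within the 8 GB to 64 GB range noted above.
ceph config set mds mds_cache_memory_limit 17179869184
# Confirm the value that the MDS daemons will use.
ceph config get mds mds_cache_memory_limit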
Note The default value for mds_cache_memory_limit is 4 GB. Since the default value is outside the recommended range, Red Hat recommends setting the value within the mentioned range. Important Red Hat recommends using memory limits instead of inode count limits. Inode count : Use the mds_cache_size option. By default, limiting the MDS cache by inode count is disabled. In addition, you can specify a cache reservation by using the mds_cache_reservation option for MDS operations. The cache reservation is limited as a percentage of the memory or inode limit and is set to 5% by default. The intent of this parameter is to have the MDS maintain an extra reserve of memory for its cache for new metadata operations to use. As a consequence, the MDS should in general operate below its memory limit because it will recall old state from clients to drop unused metadata in its cache. The mds_cache_reservation option replaces the mds_health_cache_threshold option in all situations, except when MDS nodes send a health alert to the Ceph Monitors indicating the cache is too large. By default, mds_health_cache_threshold is 150% of the maximum cache size. Be aware that the cache limit is not a hard limit. Potential bugs in the CephFS client or MDS or misbehaving applications might cause the MDS to exceed its cache size. The mds_health_cache_threshold option configures the storage cluster health warning message, so that operators can investigate why the MDS cannot shrink its cache. Additional Resources See the Metadata Server daemon configuration reference section in the Red Hat Ceph Storage File System Guide for more information. 2.4. File system affinity You can configure a Ceph File System (CephFS) to prefer a particular Ceph Metadata Server (MDS) over another Ceph MDS. For example, you have MDS running on newer, faster hardware that you want to give preference to over a standby MDS running on older, maybe slower hardware. You can specify this preference by setting the mds_join_fs option, which enforces this file system affinity. Ceph Monitors give preference to MDS standby daemons with mds_join_fs equal to the file system name with the failed rank. The standby-replay daemons are selected before choosing another standby daemon. If no standby daemon exists with the mds_join_fs option, then the Ceph Monitors will choose an ordinary standby for replacement or any other available standby as a last resort. The Ceph Monitors will periodically examine Ceph File Systems to see if a standby with a stronger affinity is available to replace the Ceph MDS that has a lower affinity. Additional Resources See the Configuring file system affinity section in the Red Hat Ceph Storage File System Guide for details. 2.5. Management of MDS service using the Ceph Orchestrator As a storage administrator, you can use the Ceph Orchestrator with Cephadm in the backend to deploy the MDS service. By default, a Ceph File System (CephFS) uses only one active MDS daemon. However, systems with many clients benefit from multiple active MDS daemons. This section covers the following administrative tasks: Deploying the MDS service using the command line interface . Deploying the MDS service using the service specification . Removing the MDS service using the Ceph Orchestrator . Prerequisites A running Red Hat Ceph Storage cluster. Root-level access to all the nodes. Hosts are added to the cluster. All manager, monitor, and OSD daemons are deployed. 2.5.1. 
Deploying the MDS service using the command line interface Using the Ceph Orchestrator, you can deploy the Metadata Server (MDS) service using the placement specification in the command line interface. Ceph File System (CephFS) requires one or more MDS. Note Ensure you have at least two pools, one for Ceph file system (CephFS) data and one for CephFS metadata. Prerequisites A running Red Hat Ceph Storage cluster. Hosts are added to the cluster. All manager, monitor, and OSD daemons are deployed. Procedure Log into the Cephadm shell: Example There are two ways of deploying MDS daemons using placement specification: Method 1 Use ceph fs volume to create the MDS daemons. This creates the CephFS volume and pools associated with the CephFS, and also starts the MDS service on the hosts. Syntax Note By default, replicated pools are created for this command. Example Method 2 Create the pools, CephFS, and then deploy MDS service using placement specification: Create the pools for CephFS: Syntax Example Typically, the metadata pool can start with a conservative number of Placement Groups (PGs) as it generally has far fewer objects than the data pool. It is possible to increase the number of PGs if needed. The pool sizes range from 64 PGs to 512 PGs. Size the data pool is proportional to the number and sizes of files you expect in the file system. Important For the metadata pool, consider to use: A higher replication level because any data loss to this pool can make the whole file system inaccessible. Storage with lower latency such as Solid-State Drive (SSD) disks because this directly affects the observed latency of file system operations on clients. Create the file system for the data pools and metadata pools: Syntax Example Deploy MDS service using the ceph orch apply command: Syntax Example Verification List the service: Example Check the CephFS status: Example List the hosts, daemons, and processes: Syntax Example Additional Resources See the Red Hat Ceph Storage File System Guide for more information about creating the Ceph File System (CephFS). For information on setting the pool values, see Setting number of placement groups in a pool . 2.5.2. Deploying the MDS service using the service specification Using the Ceph Orchestrator, you can deploy the MDS service using the service specification. Note Ensure you have at least two pools, one for the Ceph File System (CephFS) data and one for the CephFS metadata. Prerequisites A running Red Hat Ceph Storage cluster. Hosts are added to the cluster. All manager, monitor, and OSD daemons are deployed. Procedure Create the mds.yaml file: Example Edit the mds.yaml file to include the following details: Syntax Example Mount the YAML file under a directory in the container: Example Navigate to the directory: Example Log into the Cephadm shell: Example Navigate to the following directory: Example Deploy MDS service using service specification: Syntax Example Once the MDS services is deployed and functional, create the CephFS: Syntax Example Verification List the service: Example List the hosts, daemons, and processes: Syntax Example Additional Resources See the Red Hat Ceph Storage File System Guide for more information about creating the Ceph File System (CephFS). 2.5.3. Removing the MDS service using the Ceph Orchestrator You can remove the service using the ceph orch rm command. Alternatively, you can remove the file system and the associated pools. Prerequisites A running Red Hat Ceph Storage cluster. Root-level access to all the nodes. 
Hosts are added to the cluster. At least one MDS daemon deployed on the hosts. Procedure There are two ways of removing MDS daemons from the cluster: Method 1 Remove the CephFS volume, associated pools, and the services: Log into the Cephadm shell: Example Set the configuration parameter mon_allow_pool_delete to true : Example Remove the file system: Syntax Example This command will remove the file system, its data, and metadata pools. It also tries to remove the MDS using the enabled ceph-mgr Orchestrator module. Method 2 Use the ceph orch rm command to remove the MDS service from the entire cluster: List the service: Example Remove the service Syntax Example Verification List the hosts, daemons, and processes: Syntax Example Additional Resources See Deploying the MDS service using the command line interface section in the Red Hat Ceph Storage Operations Guide for more information. See Deploying the MDS service using the service specification section in the Red Hat Ceph Storage Operations Guide for more information. 2.6. Configuring file system affinity Set the Ceph File System (CephFS) affinity for a particular Ceph Metadata Server (MDS). Prerequisites A healthy, and running Ceph File System. Root-level access to a Ceph Monitor node. Procedure Check the current state of a Ceph File System: Example Set the file system affinity: Syntax Example After a Ceph MDS failover event, the file system favors the standby daemon for which the affinity is set. Example 1 The mds.b daemon now has the join_fscid=27 in the file system dump output. Important If a file system is in a degraded or undersized state, then no failover will occur to enforce the file system affinity. Additional Resources See the File system affinity section in the Red Hat Ceph Storage File System Guide for more details. 2.7. Configuring multiple active Metadata Server daemons Configure multiple active Metadata Server (MDS) daemons to scale metadata performance for large systems. Important Do not convert all standby MDS daemons to active ones. A Ceph File System (CephFS) requires at least one standby MDS daemon to remain highly available. Prerequisites Ceph administration capabilities on the MDS node. Root-level access to a Ceph Monitor node. Procedure Set the max_mds parameter to the desired number of active MDS daemons: Syntax Example This example increases the number of active MDS daemons to two in the CephFS called cephfs Note Ceph only increases the actual number of ranks in the CephFS if a spare MDS daemon is available to take the new rank. Verify the number of active MDS daemons: Syntax Example Additional Resources See the Metadata Server daemons states section in the Red Hat Ceph Storage File System Guide for more details. See the Decreasing the number of active MDS Daemons section in the Red Hat Ceph Storage File System Guide for more details. See the Managing Ceph users section in the Red Hat Ceph Storage Administration Guide for more details. 2.8. Configuring the number of standby daemons Each Ceph File System (CephFS) can specify the required number of standby daemons to be considered healthy. This number also includes the standby-replay daemon waiting for a rank failure. Prerequisites Root-level access to a Ceph Monitor node. Procedure Set the expected number of standby daemons for a particular CephFS: Syntax Note Setting the NUMBER to zero disables the daemon health check. Example This example sets the expected standby daemon count to two. 2.9. 
Configuring the standby-replay Metadata Server Configure each Ceph File System (CephFS) by adding a standby-replay Metadata Server (MDS) daemon. Doing this reduces failover time if the active MDS becomes unavailable. This specific standby-replay daemon follows the active MDS's metadata journal. The standby-replay daemon is only used by the active MDS of the same rank, and is not available to other ranks. Important If using standby-replay, then every active MDS must have a standby-replay daemon. Prerequisites Root-level access to a Ceph Monitor node. Procedure Set the standby-replay for a particular CephFS: Syntax Example In this example, the Boolean value is 1 , which enables the standby-replay daemons to be assigned to the active Ceph MDS daemons. Additional Resources See the Using the ceph mds fail command section in the Red Hat Ceph Storage File System Guide for details. 2.10. Ephemeral pinning policies An ephemeral pin is a static partition of subtrees, and can be set with a policy using extended attributes. A policy can automatically set ephemeral pins to directories. When setting an ephemeral pin to a directory, it is automatically assigned to a particular rank, as to be uniformly distributed across all Ceph MDS ranks. Determining which rank gets assigned is done by a consistent hash and the directory's inode number. Ephemeral pins do not persist when the directory's inode is dropped from file system cache. When failing over a Ceph Metadata Server (MDS), the ephemeral pin is recorded in its journal so the Ceph MDS standby server does not lose this information. There are two types of policies for using ephemeral pins: Note The attr and jq packages must be installed as a prerequisite for the ephemeral pinning policies. Distributed This policy enforces that all of a directory's immediate children must be ephemerally pinned. For example, use a distributed policy to spread a user's home directory across the entire Ceph File System cluster. Enable this policy by setting the ceph.dir.pin.distributed extended attribute. Syntax Example Random This policy enforces a chance that any descendent subdirectory might be ephemerally pinned. You can customize the percent of directories that can be ephemerally pinned. Enable this policy by setting the ceph.dir.pin.random and setting a percentage. Red Hat recommends setting this percentage to a value smaller than 1% ( 0.01 ). Having too many subtree partitions can cause slow performance. You can set the maximum percentage by setting the mds_export_ephemeral_random_max Ceph MDS configuration option. The parameters mds_export_ephemeral_distributed and mds_export_ephemeral_random are already enabled. Syntax Example After enabling pinning, you can verify by running either of the following commands: Syntax Example Example If the directory is pinned, the value of export_pin is 0 if it is pinned to rank 0 , 1 if it is pinned to rank 1 , and so on. If the directory is not pinned, the value is -1 . To remove a partitioning policy, remove the extended attributes or set the value to 0 . Syntax Example You can verify by running either of the following commands .Syntax Example For export pins, remove the extended attribute or set the extended attribute to -1 . Syntax Example Additional Resources See the Manually pinning directory trees to a particular rank section in the Red Hat Ceph Storage File System Guide for details on manually setting pins. 2.11. 
Manually pinning directory trees to a particular rank Sometimes it might be desirable to override the dynamic balancer with explicit mappings of metadata to a particular Ceph Metadata Server (MDS) rank. You can do this manually to evenly spread the load of an application or to limit the impact of users' metadata requests on the Ceph File System cluster. Manually pinning directories is also known as an export pin by setting the ceph.dir.pin extended attribute. A directory's export pin is inherited from its closest parent directory, but can be overwritten by setting an export pin on that directory. Setting an export pin on a directory affects all of its sub-directories, for example: 1 Directories a/ and a/b both start without an export pin set. 2 Directories a/ and a/b are now pinned to rank 1 . 3 Directory a/b is now pinned to rank 0 and directory a/ and the rest of its sub-directories are still pinned to rank 1 . Prerequisites A running Red Hat Ceph Storage cluster. A running Ceph File System. Root-level access to the CephFS client. Installation of the attr package. Procedure Set the export pin on a directory: Syntax Example Additional Resources See the Ephemeral pinning policies section in the Red Hat Ceph Storage File System Guide for details on automatically setting pins. 2.12. Decreasing the number of active Metadata Server daemons How to decrease the number of active Ceph File System (CephFS) Metadata Server (MDS) daemons. Prerequisites The rank that you will remove must be active first, meaning that you must have the same number of MDS daemons as specified by the max_mds parameter. Root-level access to a Ceph Monitor node. Procedure Set the same number of MDS daemons as specified by the max_mds parameter: Syntax Example On a node with administration capabilities, change the max_mds parameter to the desired number of active MDS daemons: Syntax Example Wait for the storage cluster to stabilize to the new max_mds value by watching the Ceph File System status. Verify the number of active MDS daemons: Syntax Example Additional Resources See the Metadata Server daemons states section in the Red Hat Ceph Storage File System Guide . See the Configuring multiple active Metadata Server daemons section in the Red Hat Ceph Storage File System Guide . See the Red Hat Ceph Storage Installation Guide for details on installing a Red Hat Ceph Storage cluster. 2.13. Viewing metrics for Ceph metadata server clients You can use the command-line interface to view the metrics for the Ceph metadata server (MDS). CephFS uses Perf Counters to track metrics. You can view the metrics using the counter dump command. Prequisites A running IBM Storage Ceph cluster. Procedure Get the name of the mds service: Syntax Check the MDS per client metrics: Syntax Example Client metrics description CephFS exports client metrics as Labeled Perf Counters, which you can use to monitor the client performance. CephFS exports the below client metrics: NAME TYPE DESCRIPTION cap_hits Gauge Percentage of file capability hits over total number of caps. cap_miss Gauge Percentage of file capability misses over total number of caps. avg_read_latency Gauge Mean value of the read latencies. avg_write_latency Gauge Mean value of the write latencies. avg_metadata_latency Gauge Mean value of the metadata latencies dentry_lease_hits Gauge Percentage of dentry lease hits handed out over the total dentry lease request. dentry_lease_miss Gauge Percentage of dentry lease misses handed out over the total dentry lease requests. 
opened_files Gauge Number of opened files. opened_inodes Gauge Number of opened inodes. pinned_icaps Gauge Number of pinned inode capabilities (caps). total_inodes Gauge Total number of inodes. total_read_ops Gauge Total number of read operations generated by all processes. total_read_size Gauge Number of bytes read in input/output operations generated by all processes. total_write_ops Gauge Total number of write operations generated by all processes. total_write_size Gauge Number of bytes written in input/output operations generated by all processes. | [
"cephadm shell",
"ceph fs volume create FILESYSTEM_NAME --placement=\" NUMBER_OF_DAEMONS HOST_NAME_1 HOST_NAME_2 HOST_NAME_3 \"",
"ceph fs volume create test --placement=\"2 host01 host02\"",
"ceph osd pool create DATA_POOL [ PG_NUM ] ceph osd pool create METADATA_POOL [ PG_NUM ]",
"ceph osd pool create cephfs_data 64 ceph osd pool create cephfs_metadata 64",
"ceph fs new FILESYSTEM_NAME METADATA_POOL DATA_POOL",
"ceph fs new test cephfs_metadata cephfs_data",
"ceph orch apply mds FILESYSTEM_NAME --placement=\" NUMBER_OF_DAEMONS HOST_NAME_1 HOST_NAME_2 HOST_NAME_3 \"",
"ceph orch apply mds test --placement=\"2 host01 host02\"",
"ceph orch ls",
"ceph fs ls ceph fs status",
"ceph orch ps --daemon_type= DAEMON_NAME",
"ceph orch ps --daemon_type=mds",
"touch mds.yaml",
"service_type: mds service_id: FILESYSTEM_NAME placement: hosts: - HOST_NAME_1 - HOST_NAME_2 - HOST_NAME_3",
"service_type: mds service_id: fs_name placement: hosts: - host01 - host02",
"cephadm shell --mount mds.yaml:/var/lib/ceph/mds/mds.yaml",
"cd /var/lib/ceph/mds/",
"cephadm shell",
"cd /var/lib/ceph/mds/",
"ceph orch apply -i FILE_NAME .yaml",
"ceph orch apply -i mds.yaml",
"ceph fs new CEPHFS_NAME METADATA_POOL DATA_POOL",
"ceph fs new test metadata_pool data_pool",
"ceph orch ls",
"ceph orch ps --daemon_type= DAEMON_NAME",
"ceph orch ps --daemon_type=mds",
"cephadm shell",
"ceph config set mon mon_allow_pool_delete true",
"ceph fs volume rm FILESYSTEM_NAME --yes-i-really-mean-it",
"ceph fs volume rm cephfs-new --yes-i-really-mean-it",
"ceph orch ls",
"ceph orch rm SERVICE_NAME",
"ceph orch rm mds.test",
"ceph orch ps",
"ceph orch ps",
"ceph fs dump dumped fsmap epoch 399 Filesystem 'cephfs01' (27) e399 max_mds 1 in 0 up {0=20384} failed damaged stopped [mds.a{0:20384} state up:active seq 239 addr [v2:127.0.0.1:6854/966242805,v1:127.0.0.1:6855/966242805]] Standby daemons: [mds.b{-1:10420} state up:standby seq 2 addr [v2:127.0.0.1:6856/2745199145,v1:127.0.0.1:6857/2745199145]]",
"ceph config set STANDBY_DAEMON mds_join_fs FILE_SYSTEM_NAME",
"ceph config set mds.b mds_join_fs cephfs01",
"ceph fs dump dumped fsmap epoch 405 e405 Filesystem 'cephfs01' (27) max_mds 1 in 0 up {0=10420} failed damaged stopped [mds.b{0:10420} state up:active seq 274 join_fscid=27 addr [v2:127.0.0.1:6856/2745199145,v1:127.0.0.1:6857/2745199145]] 1 Standby daemons: [mds.a{-1:10720} state up:standby seq 2 addr [v2:127.0.0.1:6854/1340357658,v1:127.0.0.1:6855/1340357658]]",
"ceph fs set NAME max_mds NUMBER",
"ceph fs set cephfs max_mds 2",
"ceph fs status NAME",
"ceph fs status cephfs cephfs - 0 clients ====== +------+--------+-------+---------------+-------+-------+--------+--------+ | RANK | STATE | MDS | ACTIVITY | DNS | INOS | DIRS | CAPS | +------+--------+-------+---------------+-------+-------+--------+--------+ | 0 | active | node1 | Reqs: 0 /s | 10 | 12 | 12 | 0 | | 1 | active | node2 | Reqs: 0 /s | 10 | 12 | 12 | 0 | +------+--------+-------+---------------+-------+-------+--------+--------+ +-----------------+----------+-------+-------+ | POOL | TYPE | USED | AVAIL | +-----------------+----------+-------+-------+ | cephfs_metadata | metadata | 4638 | 26.7G | | cephfs_data | data | 0 | 26.7G | +-----------------+----------+-------+-------+ +-------------+ | STANDBY MDS | +-------------+ | node3 | +-------------+",
"ceph fs set FS_NAME standby_count_wanted NUMBER",
"ceph fs set cephfs standby_count_wanted 2",
"ceph fs set FS_NAME allow_standby_replay 1",
"ceph fs set cephfs allow_standby_replay 1",
"setfattr -n ceph.dir.pin.distributed -v 1 DIRECTORY_PATH",
"setfattr -n ceph.dir.pin.distributed -v 1 dir1/",
"setfattr -n ceph.dir.pin.random -v PERCENTAGE_IN_DECIMAL DIRECTORY_PATH",
"setfattr -n ceph.dir.pin.random -v 0.01 dir1/",
"getfattr -n ceph.dir.pin.random DIRECTORY_PATH getfattr -n ceph.dir.pin.distributed DIRECTORY_PATH",
"getfattr -n ceph.dir.pin.distributed dir1/ file: dir1/ ceph.dir.pin.distributed=\"1\" getfattr -n ceph.dir.pin.random dir1/ file: dir1/ ceph.dir.pin.random=\"0.01\"",
"ceph tell mds.a get subtrees | jq '.[] | [.dir.path, .auth_first, .export_pin]'",
"setfattr -n ceph.dir.pin.distributed -v 0 DIRECTORY_PATH",
"setfattr -n ceph.dir.pin.distributed -v 0 dir1/",
"getfattr -n ceph.dir.pin.distributed DIRECTORY_PATH",
"getfattr -n ceph.dir.pin.distributed dir1/",
"setfattr -n ceph.dir.pin -v -1 DIRECTORY_PATH",
"setfattr -n ceph.dir.pin -v -1 dir1/",
"mkdir -p a/b 1 setfattr -n ceph.dir.pin -v 1 a/ 2 setfattr -n ceph.dir.pin -v 0 a/b 3",
"setfattr -n ceph.dir.pin -v RANK PATH_TO_DIRECTORY",
"setfattr -n ceph.dir.pin -v 2 cephfs/home",
"ceph fs status NAME",
"ceph fs status cephfs cephfs - 0 clients +------+--------+-------+---------------+-------+-------+--------+--------+ | RANK | STATE | MDS | ACTIVITY | DNS | INOS | DIRS | CAPS | +------+--------+-------+---------------+-------+-------+--------+--------+ | 0 | active | node1 | Reqs: 0 /s | 10 | 12 | 12 | 0 | | 1 | active | node2 | Reqs: 0 /s | 10 | 12 | 12 | 0 | +------+--------+-------+---------------+-------+-------+--------+--------+ +-----------------+----------+-------+-------+ | POOL | TYPE | USED | AVAIL | +-----------------+----------+-------+-------+ | cephfs_metadata | metadata | 4638 | 26.7G | | cephfs_data | data | 0 | 26.7G | +-----------------+----------+-------+-------+ +-------------+ | Standby MDS | +-------------+ | node3 | +-------------+",
"ceph fs set NAME max_mds NUMBER",
"ceph fs set cephfs max_mds 1",
"ceph fs status NAME",
"ceph fs status cephfs cephfs - 0 clients +------+--------+-------+---------------+-------+-------+--------+--------+ | RANK | STATE | MDS | ACTIVITY | DNS | INOS | DIRS | CAPS | +------+--------+-------+---------------+-------+-------+--------+--------+ | 0 | active | node1 | Reqs: 0 /s | 10 | 12 | 12 | 0 | +------+--------+-------+---------------+-------+-------+--------|--------+ +-----------------+----------+-------+-------+ | POOl | TYPE | USED | AVAIL | +-----------------+----------+-------+-------+ | cephfs_metadata | metadata | 4638 | 26.7G | | cephfs_data | data | 0 | 26.7G | +-----------------+----------+-------+-------+ +-------------+ | Standby MDS | +-------------+ | node3 | | node2 | +-------------+",
"ceph orch ps | grep mds",
"ceph tell MDS_SERVICE_NAME counter dump",
"ceph tell mds.cephfs.ceph2-hk-n-0mfqao-node4.isztbk counter dump [ { \"key\": \"mds_client_metrics\", \"value\": [ { \"labels\": { \"fs_name\": \"cephfs\", \"id\": \"24379\" }, \"counters\": { \"num_clients\": 4 } } ] }, { \"key\": \"mds_client_metrics-cephfs\", \"value\": [ { \"labels\": { \"client\": \"client.24413\", \"rank\": \"0\" }, \"counters\": { \"cap_hits\": 56, \"cap_miss\": 9, \"avg_read_latency\": 0E-9, \"avg_write_latency\": 0E-9, \"avg_metadata_latency\": 0E-9, \"dentry_lease_hits\": 2, \"dentry_lease_miss\": 12, \"opened_files\": 0, \"opened_inodes\": 9, \"pinned_icaps\": 4, \"total_inodes\": 9, \"total_read_ops\": 0, \"total_read_size\": 0, \"total_write_ops\": 0, \"total_write_size\": 0 } }, { \"labels\": { \"client\": \"client.24502\", \"rank\": \"0\" }, \"counters\": { \"cap_hits\": 921403, \"cap_miss\": 102382, \"avg_read_latency\": 0E-9, \"avg_write_latency\": 0E-9, \"avg_metadata_latency\": 0E-9, \"dentry_lease_hits\": 17117, \"dentry_lease_miss\": 204710, \"opened_files\": 0, \"opened_inodes\": 9, \"pinned_icaps\": 7, \"total_inodes\": 9, \"total_read_ops\": 0, \"total_read_size\": 0, \"total_write_ops\": 1, \"total_write_size\": 132 } }, { \"labels\": { \"client\": \"client.24508\", \"rank\": \"0\" }, \"counters\": { \"cap_hits\": 928694, \"cap_miss\": 103183, \"avg_read_latency\": 0E-9, \"avg_write_latency\": 0E-9, \"avg_metadata_latency\": 0E-9, \"dentry_lease_hits\": 17217, \"dentry_lease_miss\": 206348, \"opened_files\": 0, \"opened_inodes\": 9, \"pinned_icaps\": 7, \"total_inodes\": 9, \"total_read_ops\": 0, \"total_read_size\": 0, \"total_write_ops\": 1, \"total_write_size\": 132 } }, { \"labels\": { \"client\": \"client.24520\", \"rank\": \"0\" }, \"counters\": { \"cap_hits\": 56, \"cap_miss\": 9, \"avg_read_latency\": 0E-9, \"avg_write_latency\": 0E-9, \"avg_metadata_latency\": 0E-9, \"dentry_lease_hits\": 2, \"dentry_lease_miss\": 12, \"opened_files\": 0, \"opened_inodes\": 9, \"pinned_icaps\": 4, \"total_inodes\": 9, \"total_read_ops\": 0, \"total_read_size\": 0, \"total_write_ops\": 0, \"total_write_size\": 0 } } ] } ]"
]
| https://docs.redhat.com/en/documentation/red_hat_ceph_storage/7/html/file_system_guide/the-ceph-file-system-metadata-server |
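The ephemeral pinning workflow described in the Ceph File System section above can be scripted end to end. The following bash sketch only strings together the setfattr, getfattr, and ceph tell commands already shown there; the client mount point /mnt/cephfs, the parent directory home, and the MDS name mds.a are assumptions to adjust for your environment, and the attr and jq packages must be installed as noted in the prerequisites.

#!/usr/bin/env bash
# Sketch: spread per-user home directories across all active MDS ranks
# with the distributed ephemeral pinning policy, then inspect the subtrees.
MOUNT=/mnt/cephfs              # assumed CephFS client mount point
PARENT="$MOUNT/home"           # assumed parent directory of the user subdirectories
MDS=mds.a                      # assumed name of an active MDS daemon

# Enable the distributed policy so each immediate child directory is
# ephemerally pinned to a rank.
setfattr -n ceph.dir.pin.distributed -v 1 "$PARENT"

# Confirm the extended attribute is set.
getfattr -n ceph.dir.pin.distributed "$PARENT"

# Show how subtrees are currently assigned across ranks.
ceph tell "$MDS" get subtrees | jq '.[] | [.dir.path, .auth_first, .export_pin]'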
2. New Packages | 2. New Packages 2.1. RHEA-2011:0533 - new package: 389-ds-base New 389-ds-base packages are now available for Red Hat Enterprise Linux 6. The 389 Directory Server is an LDAPv3 compliant server. The 389-ds-base package includes the LDAP server and command line utilities for server administration. This enhancement update adds the 389-ds-base package to Red Hat Enterprise Linux 6. (BZ# 642408 ) All users who require the 389 Directory Server are advised to install these new packages. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.1_technical_notes/ar01s02 |
Updating OpenShift Data Foundation | Updating OpenShift Data Foundation Red Hat OpenShift Data Foundation 4.18 Instructions for cluster and storage administrators regarding upgrading Red Hat Storage Documentation Team Abstract This document explains how to update versions of Red Hat OpenShift Data Foundation. | null | https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.18/html/updating_openshift_data_foundation/index |
3.3. Using Eviction | 3.3. Using Eviction In Red Hat JBoss Data Grid, eviction is disabled by default. If an empty < eviction /> element is used to enable eviction without any strategy or maximum entries settings, the following default values are automatically implemented: Strategy: If no eviction strategy is specified, EvictionStrategy.NONE is assumed as a default. max-entries/maxEntries: If no value is specified, the max-entries /maxEntries value is set to -1 , which allows unlimited entries. Report a bug 3.3.1. Initialize Eviction To initialize eviction, set the eviction element's max-entries attributes value to a number greater than zero. Adjust the value set for max-entries to discover the optimal value for your configuration. It is important to remember that if too large a value is set for max-entries , Red Hat JBoss Data Grid runs out of memory. The following procedure outlines the steps to initialize eviction in JBoss Data Grid: Procedure 3.1. Initialize Eviction Add the Eviction Tag Add the <eviction> tag to your project's <cache> tags as follows: Set the Eviction Strategy Set the strategy value to set the eviction strategy employed. Possible values are LRU , UNORDERED and LIRS (or NONE if no eviction is required). The following is an example of this step: Set the Maximum Entries Set the maximum number of entries allowed in memory. The default value is -1 for unlimited entries. In Library mode, set the maxEntries parameter as follows: In Remote Client Server mode, set the max-entries as follows: Result Eviction is configured for the target cache. Report a bug 3.3.2. Eviction Configuration Examples Configure eviction in Red Hat JBoss Data Grid using the configuration bean or the XML file. Eviction configuration is done on a per-cache basis. A sample XML configuration for Library mode is as follows: A sample XML configuration for Remote Client Server Mode is as follows: A sample programmatic configuration for Library Mode is as follows: Note JBoss Data Grid's Library mode uses the maxEntries parameter while Remote Client-Server mode uses the max-entries parameter to configure eviction. Report a bug 3.3.3. Changing the Maximum Entries Value at Runtime The max-entries value for eviction can be configured for a clustered cache without restarting the server. This configuration is performed on each node in the cluster. To change the max-entries value in the eviction configuration, perform these steps: In the cache JMX entry, invoke the setMaxEntries operation. Invoking the setMaxEntries operation sets maximum number of entries in the data container. If the data container does not support eviction, setting it will raise an exception. Defining a value less than 0 will throw an error. Report a bug 3.3.4. Eviction Configuration Troubleshooting In Red Hat JBoss Data Grid, the size of a cache can be larger than the value specified for the max-entries parameter of the eviction element. This is because although the max-entries value can be configured to a value that is not a power of two, the underlying algorithm will alter the value to V , where V is the closest power of two value that is larger than the max-entries value. Eviction algorithms are in place to ensure that the size of the cache container will never exceed the value V . Report a bug 3.3.5. Eviction and Passivation To ensure that a single copy of an entry remains, either in memory or in a cache store, use passivation in conjunction with eviction. 
The primary reason to use passivation instead of a normal cache store is that updating entries requires fewer resources when passivation is in use. This is because passivation does not require an update to the cache store. Report a bug | [
"<eviction />",
"<eviction strategy=\"LRU\" />",
"<eviction strategy=\"LRU\" maxEntries=\"200\" />",
"<eviction strategy=\"LRU\" max-entries=\"200\" />",
"<eviction strategy=\"LRU\" maxEntries=\"2000\"/>",
"<eviction strategy=\"LRU\" max-entries=\"20\"/>",
"Configuration c = new ConfigurationBuilder().eviction().strategy(EvictionStrategy.LRU) .maxEntries(2000) .build();"
]
| https://docs.redhat.com/en/documentation/red_hat_data_grid/6.6/html/administration_and_configuration_guide/sect-using_eviction |
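The power-of-two rounding described in the eviction troubleshooting section above can be checked with a few lines of shell arithmetic. The sketch below is illustrative only: it computes the effective capacity V for a configured max-entries value, assuming the rounding is simply to the next power of two as the text describes.

#!/usr/bin/env bash
# Sketch: compute the effective eviction capacity V for a configured
# max-entries value, per the power-of-two rounding described above.
max_entries=${1:-2000}   # example value taken from the configuration samples
v=1
while [ "$v" -lt "$max_entries" ]; do
  v=$(( v * 2 ))
done
echo "max-entries=$max_entries -> effective capacity V=$v"
# For example, max-entries=2000 yields V=2048; the cache size never exceeds V.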
Part II. Routing Expression and Predicate Languages | Part II. Routing Expression and Predicate Languages This guide describes the basic syntax used by the evaluative languages supported by Apache Camel. | null | https://docs.redhat.com/en/documentation/red_hat_fuse/7.13/html/apache_camel_development_guide/FuseMRExpLang |
Chapter 2. Configuring a private cluster | Chapter 2. Configuring a private cluster After you install an OpenShift Container Platform version 4.9 cluster, you can set some of its core components to be private. 2.1. About private clusters By default, OpenShift Container Platform is provisioned using publicly-accessible DNS and endpoints. You can set the DNS, Ingress Controller, and API server to private after you deploy your private cluster. Important If the cluster has any public subnets, load balancer services created by administrators might be publicly accessible. To ensure cluster security, verify that these services are explicitly annotated as private. DNS If you install OpenShift Container Platform on installer-provisioned infrastructure, the installation program creates records in a pre-existing public zone and, where possible, creates a private zone for the cluster's own DNS resolution. In both the public zone and the private zone, the installation program or cluster creates DNS entries for *.apps , for the Ingress object, and api , for the API server. The *.apps records in the public and private zone are identical, so when you delete the public zone, the private zone seamlessly provides all DNS resolution for the cluster. Ingress Controller Because the default Ingress object is created as public, the load balancer is internet-facing and in the public subnets. You can replace the default Ingress Controller with an internal one. API server By default, the installation program creates appropriate network load balancers for the API server to use for both internal and external traffic. On Amazon Web Services (AWS), separate public and private load balancers are created. The load balancers are identical except that an additional port is available on the internal one for use within the cluster. Although the installation program automatically creates or destroys the load balancer based on API server requirements, the cluster does not manage or maintain them. As long as you preserve the cluster's access to the API server, you can manually modify or move the load balancers. For the public load balancer, port 6443 is open and the health check is configured for HTTPS against the /readyz path. On Google Cloud Platform, a single load balancer is created to manage both internal and external API traffic, so you do not need to modify the load balancer. On Microsoft Azure, both public and private load balancers are created. However, because of limitations in current implementation, you just retain both load balancers in a private cluster. 2.2. Setting DNS to private After you deploy a cluster, you can modify its DNS to use only a private zone. Procedure Review the DNS custom resource for your cluster: USD oc get dnses.config.openshift.io/cluster -o yaml Example output apiVersion: config.openshift.io/v1 kind: DNS metadata: creationTimestamp: "2019-10-25T18:27:09Z" generation: 2 name: cluster resourceVersion: "37966" selfLink: /apis/config.openshift.io/v1/dnses/cluster uid: 0e714746-f755-11f9-9cb1-02ff55d8f976 spec: baseDomain: <base_domain> privateZone: tags: Name: <infrastructure_id>-int kubernetes.io/cluster/<infrastructure_id>: owned publicZone: id: Z2XXXXXXXXXXA4 status: {} Note that the spec section contains both a private and a public zone. 
Patch the DNS custom resource to remove the public zone: USD oc patch dnses.config.openshift.io/cluster --type=merge --patch='{"spec": {"publicZone": null}}' dns.config.openshift.io/cluster patched Because the Ingress Controller consults the DNS definition when it creates Ingress objects, when you create or modify Ingress objects, only private records are created. Important DNS records for the existing Ingress objects are not modified when you remove the public zone. Optional: Review the DNS custom resource for your cluster and confirm that the public zone was removed: USD oc get dnses.config.openshift.io/cluster -o yaml Example output apiVersion: config.openshift.io/v1 kind: DNS metadata: creationTimestamp: "2019-10-25T18:27:09Z" generation: 2 name: cluster resourceVersion: "37966" selfLink: /apis/config.openshift.io/v1/dnses/cluster uid: 0e714746-f755-11f9-9cb1-02ff55d8f976 spec: baseDomain: <base_domain> privateZone: tags: Name: <infrastructure_id>-int kubernetes.io/cluster/<infrastructure_id>-wfpg4: owned status: {} 2.3. Setting the Ingress Controller to private After you deploy a cluster, you can modify its Ingress Controller to use only a private zone. Procedure Modify the default Ingress Controller to use only an internal endpoint: USD oc replace --force --wait --filename - <<EOF apiVersion: operator.openshift.io/v1 kind: IngressController metadata: namespace: openshift-ingress-operator name: default spec: endpointPublishingStrategy: type: LoadBalancerService loadBalancer: scope: Internal EOF Example output ingresscontroller.operator.openshift.io "default" deleted ingresscontroller.operator.openshift.io/default replaced The public DNS entry is removed, and the private zone entry is updated. 2.4. Restricting the API server to private After you deploy a cluster to Amazon Web Services (AWS) or Microsoft Azure, you can reconfigure the API server to use only the private zone. Prerequisites Install the OpenShift CLI ( oc ). Have access to the web console as a user with admin privileges. Procedure In the web portal or console for AWS or Azure, take the following actions: Locate and delete appropriate load balancer component. For AWS, delete the external load balancer. The API DNS entry in the private zone already points to the internal load balancer, which uses an identical configuration, so you do not need to modify the internal load balancer. For Azure, delete the api-internal rule for the load balancer. Delete the api.USDclustername.USDyourdomain DNS entry in the public zone. Remove the external load balancers: Important You can run the following steps only for an installer-provisioned infrastructure (IPI) cluster. For a user-provisioned infrastructure (UPI) cluster, you must manually remove or disable the external load balancers. From your terminal, list the cluster machines: USD oc get machine -n openshift-machine-api Example output NAME STATE TYPE REGION ZONE AGE lk4pj-master-0 running m4.xlarge us-east-1 us-east-1a 17m lk4pj-master-1 running m4.xlarge us-east-1 us-east-1b 17m lk4pj-master-2 running m4.xlarge us-east-1 us-east-1a 17m lk4pj-worker-us-east-1a-5fzfj running m4.xlarge us-east-1 us-east-1a 15m lk4pj-worker-us-east-1a-vbghs running m4.xlarge us-east-1 us-east-1a 15m lk4pj-worker-us-east-1b-zgpzg running m4.xlarge us-east-1 us-east-1b 15m You modify the control plane machines, which contain master in the name, in the following step. Remove the external load balancer from each control plane machine. 
Edit a control plane Machine object to remove the reference to the external load balancer: USD oc edit machines -n openshift-machine-api <master_name> 1 1 Specify the name of the control plane, or master, Machine object to modify. Remove the lines that describe the external load balancer, which are marked in the following example, and save and exit the object specification: ... spec: providerSpec: value: ... loadBalancers: - name: lk4pj-ext 1 type: network 2 - name: lk4pj-int type: network 1 2 Delete this line. Repeat this process for each of the machines that contains master in the name. | [
"oc get dnses.config.openshift.io/cluster -o yaml",
"apiVersion: config.openshift.io/v1 kind: DNS metadata: creationTimestamp: \"2019-10-25T18:27:09Z\" generation: 2 name: cluster resourceVersion: \"37966\" selfLink: /apis/config.openshift.io/v1/dnses/cluster uid: 0e714746-f755-11f9-9cb1-02ff55d8f976 spec: baseDomain: <base_domain> privateZone: tags: Name: <infrastructure_id>-int kubernetes.io/cluster/<infrastructure_id>: owned publicZone: id: Z2XXXXXXXXXXA4 status: {}",
"oc patch dnses.config.openshift.io/cluster --type=merge --patch='{\"spec\": {\"publicZone\": null}}' dns.config.openshift.io/cluster patched",
"oc get dnses.config.openshift.io/cluster -o yaml",
"apiVersion: config.openshift.io/v1 kind: DNS metadata: creationTimestamp: \"2019-10-25T18:27:09Z\" generation: 2 name: cluster resourceVersion: \"37966\" selfLink: /apis/config.openshift.io/v1/dnses/cluster uid: 0e714746-f755-11f9-9cb1-02ff55d8f976 spec: baseDomain: <base_domain> privateZone: tags: Name: <infrastructure_id>-int kubernetes.io/cluster/<infrastructure_id>-wfpg4: owned status: {}",
"oc replace --force --wait --filename - <<EOF apiVersion: operator.openshift.io/v1 kind: IngressController metadata: namespace: openshift-ingress-operator name: default spec: endpointPublishingStrategy: type: LoadBalancerService loadBalancer: scope: Internal EOF",
"ingresscontroller.operator.openshift.io \"default\" deleted ingresscontroller.operator.openshift.io/default replaced",
"oc get machine -n openshift-machine-api",
"NAME STATE TYPE REGION ZONE AGE lk4pj-master-0 running m4.xlarge us-east-1 us-east-1a 17m lk4pj-master-1 running m4.xlarge us-east-1 us-east-1b 17m lk4pj-master-2 running m4.xlarge us-east-1 us-east-1a 17m lk4pj-worker-us-east-1a-5fzfj running m4.xlarge us-east-1 us-east-1a 15m lk4pj-worker-us-east-1a-vbghs running m4.xlarge us-east-1 us-east-1a 15m lk4pj-worker-us-east-1b-zgpzg running m4.xlarge us-east-1 us-east-1b 15m",
"oc edit machines -n openshift-machine-api <master_name> 1",
"spec: providerSpec: value: loadBalancers: - name: lk4pj-ext 1 type: network 2 - name: lk4pj-int type: network"
]
| https://docs.redhat.com/en/documentation/openshift_container_platform/4.9/html/post-installation_configuration/configuring-private-cluster |
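After making the DNS and Ingress Controller changes shown in this chapter, a scripted check can confirm that only private endpoints remain. The following sketch uses standard oc jsonpath queries against the resources referenced above; the external load balancer and the public DNS record still need to be removed or verified in the AWS or Azure console as described in the procedure.

#!/usr/bin/env bash
# Sketch: verify that the cluster DNS and the default Ingress Controller
# are configured for private access only.

# The publicZone stanza should be absent after the patch shown above.
public_zone=$(oc get dnses.config.openshift.io/cluster -o jsonpath='{.spec.publicZone}')
if [ -z "$public_zone" ]; then
  echo "DNS: public zone removed"
else
  echo "DNS: public zone still present: $public_zone"
fi

# The default Ingress Controller should publish through an internal load balancer.
scope=$(oc get ingresscontroller/default -n openshift-ingress-operator \
  -o jsonpath='{.spec.endpointPublishingStrategy.loadBalancer.scope}')
echo "Ingress Controller load balancer scope: ${scope:-<not set>}"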
Installing IBM Cloud Bare Metal (Classic) | Installing IBM Cloud Bare Metal (Classic) OpenShift Container Platform 4.16 Installing OpenShift Container Platform on IBM Cloud Bare Metal (Classic) Red Hat OpenShift Documentation Team | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.16/html/installing_ibm_cloud_bare_metal_classic/index |
Chapter 11. Reviewing cluster configuration | Chapter 11. Reviewing cluster configuration Learn how to use the Configuration Management view and understand the correlation between various entities in your cluster to manage your cluster configuration efficiently. Every OpenShift Container Platform cluster includes many different entities distributed throughout the cluster, which makes it more challenging to understand and act on the available information. Red Hat Advanced Cluster Security for Kubernetes (RHACS) provides efficient configuration management that combines all these distributed entities on a single page. It brings together information about all your clusters, namespaces, nodes, deployments, images, secrets, users, groups, service accounts, and roles in a single Configuration Management view, helping you visualize different entities and the connections between them. 11.1. Using the Configuration Management view To open the Configuration Management view, select Configuration Management from the navigation menu. Similar to the Dashboard , it displays some useful widgets. These widgets are interactive and show the following information: Security policy violations by severity The state of CIS (Center for Information Security) for Kubernetes benchmark controls Users with administrator rights in the most clusters Secrets used most widely in your clusters The header in the Configuration Management view shows you the number of policies and CIS controls in your cluster. Note Only policies in the Deploy life cycle phase are included in the policy count and policy list view. The header includes drop-down menus that allow you to switch between entities. For example, you can: Click Policies to view all policies and their severity, or select CIS Controls to view detailed information about all controls. Click Application and Infrastructure and select clusters, namespaces, nodes, deployments, images, and secrets to view detailed information. Click RBAC Visibility and Configuration and select users and groups, service accounts, and roles to view detailed information. 11.2. Identifying misconfigurations in Kubernetes roles You can use the Configuration Management view to identify potential misconfigurations, such as users, groups, or service accounts granted the cluster-admin role, or roles that are not granted to anyone. 11.2.1. Finding Kubernetes roles and their assignment Use the Configuration Management view to get information about the Kubernetes roles that are assigned to specific users and groups. Procedure Go to the RHACS portal and click Configuration Management . Select Role-Based Access Control Users and Groups from the header in the Configuration Management view. The Users and Groups view displays a list of Kubernetes users and groups, their assigned roles, and whether the cluster-admin role is enabled for each of them. Select a user or group to view more details about the associated cluster and namespace permissions. 11.2.2. Finding service accounts and their permissions Use the Configuration Management view to find out where service accounts are in use and their permissions. Procedure In the RHACS portal, go to Configuration Management . Select RBAC Visibility and Configuration Service Accounts from the header in the Configuration Management view. The Service Accounts view displays a list of Kubernetes service accounts across your clusters, their assigned roles, whether the cluster-admin role is enabled, and which deployments use them. 
Select a row or an underlined link to view more details, including which cluster and namespace permissions are granted to the selected service account. 11.2.3. Finding unused Kubernetes roles Use the Configuration Management view to get more information about your Kubernetes roles and find unused roles. Procedure In the RHACS portal, go to Configuration Management . Select RBAC Visibility and Configuration Roles from the header in the Configuration Management view. The Roles view displays a list of Kubernetes roles across your clusters, the permissions they grant, and where they are used. Select a row or an underlined link to view more details about the role. To find roles not granted to any users, groups, or service accounts, select the Users & Groups column header. Then select the Service Account column header while holding the Shift key. The list shows the roles that are not granted to any users, groups, or service accounts. 11.3. Viewing Kubernetes secrets View Kubernetes secrets in use in your environment and identify deployments using those secrets. Procedure In the RHACS portal, go to Configuration Management . On the Secrets Most Used Across Deployments widget, select View All . The Secrets view displays a list of Kubernetes secrets. Select a row to view more details. Use the available information to identify if the secrets are in use in deployments where they are not needed. 11.4. Finding policy violations The Policy Violations by Severity widget in the Configuration Management view displays policy violations in a sunburst chart. Each level of the chart is represented by one ring or circle. The innermost circle represents the total number of violations. The ring represents the Low , Medium , High , and Critical policy categories. The outermost ring represents individual policies in a particular category. The Configuration Management view only shows the information about policies that have the Lifecycle Stage set to Deploy . It does not include policies that address runtime behavior or those configured for assessment in the Build stage. Procedure In the RHACS portal, go to Configuration Management . On the Policy Violations by Severity widget, move your mouse over the sunburst chart to view details about policy violations. Select n rated as high , where n is a number, to view detailed information about high-priority policy violations. The Policies view displays a list of policy violations filtered on the selected category. Select a row to view more details, including policy description, remediation, deployments with violations, and more. The details are visible in a panel. The Policy Findings section in the information panel lists deployments where these violations occurred. Select a deployment under the Policy Findings section to view related details including Kubernetes labels, annotations, and service account. You can use the detailed information to plan a remediation for violations. 11.5. Finding failing CIS controls Similar to the Policy Violations sunburst chart in the Configuration Management view, the CIS Kubernetes v1.5 widget provides information about failing Center for Information Security (CIS) controls. Each level of the chart is represented by one ring or circle. The innermost circle represents the percentage of failing controls. The ring represents the control categories. The outermost ring represents individual controls in a particular category. Procedure To view details about failing controls, hover over the sunburst chart. 
To view detailed information about failing controls, select n Controls Failing , where n is a number. The Controls view displays a list of failing controls filtered based on the compliance state. Select a row to view more details, including control descriptions and nodes where the controls are failing. The Control Findings section in the information panel lists nodes where the controls are failing. Select a row to view more details, including Kubernetes labels, annotations, and other metadata. You can use the detailed information to focus on a subset of nodes, industry standards, or failing controls. You can also assess, check, and report on the compliance status of your containerized infrastructure. | null | https://docs.redhat.com/en/documentation/red_hat_advanced_cluster_security_for_kubernetes/4.5/html/operating/review-cluster-configuration |
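The Configuration Management view surfaces the users, groups, and service accounts that hold the cluster-admin role. If you want to cross-check those findings from the command line on a secured cluster, the following sketch lists the subjects of every ClusterRoleBinding that references cluster-admin. It relies only on standard oc and jq tooling and is not an RHACS feature; substitute kubectl for oc if preferred.

#!/usr/bin/env bash
# Sketch: list all subjects bound to the cluster-admin ClusterRole, as a
# CLI cross-check of what the Configuration Management view reports.
oc get clusterrolebindings -o json \
  | jq -r '.items[]
           | select(.roleRef.name == "cluster-admin")
           | .metadata.name as $binding
           | (.subjects // [])[]
           | "\($binding)\t\(.kind)\t\(.namespace // "-")\t\(.name)"'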
8.40. cups | 8.40. cups 8.40.1. RHSA-2014:1388 - Moderate: cups security and bug fix update Updated cups packages that fix multiple security issues and several bugs are now available for Red Hat Enterprise Linux 6. The Red Hat Security Response Team has rated this update as having Moderate security impact. Common Vulnerability Scoring System (CVSS) base scores, which give detailed severity ratings, are available for each vulnerability from the CVE links associated with each description below. CUPS provides a portable printing layer for Linux, UNIX, and similar operating systems. Security Fixes CVE-2014-2856 A cross-site scripting (XSS) flaw was found in the CUPS web interface. An attacker could use this flaw to perform a cross-site scripting attack against users of the CUPS web interface. CVE-2014-3537 CVE-2014-5029 CVE-2014-5030 CVE-2014-5031 It was discovered that CUPS allowed certain users to create symbolic links in certain directories under /var/cache/cups/ . A local user with the lp group privileges could use this flaw to read the contents of arbitrary files on the system or, potentially, escalate their privileges on the system. The CVE-2014-3537 issue was discovered by Francisco Alonso of Red Hat Product Security. Bug Fixes BZ# 769292 When the system was suspended during polling a configured BrowsePoll server, resuming the system left the cups-polld process awaiting a response even though the connection had been dropped causing discovered printers to disappear. Now, an HTTP timeout is used so the request can be retried. As a result, printers that use BrowsePoll now remain available in the described scenario. BZ# 852846 A problem with HTTP multipart handling in the CUPS scheduler caused some browsers to not work correctly when attempting to add a printer using the web interface. This has been fixed by applying a patch from a later version, and all browsers now work as expected when adding printers. BZ# 855431 When a discovered remote queue was determined to no longer be available, the local queue was deleted. A logic error in the CUPS scheduler caused problems in this situation when there was a job queued for such a destination. This bug has been fixed so that jobs are not started for removed queues. BZ# 884851 CUPS maintains a cache of frequently used string values. Previously, when a returned string value was modified, the cache lost its consistency, which led to increased memory usage. Instances where this happened have been corrected to treat the returned values as read-only. BZ# 971079 A missing check has been added, preventing the scheduler from terminating when logging a message about not being able to determine a job's file type. BZ# 978387 A fix for incorrect handling of collection attributes in the Internet Printing Protocol (IPP) version 2.0 replies has been applied. BZ# 984883 The CUPS scheduler did not use the fsync() function when modifying its state files, such as printers.conf , which could lead to truncated CUPS configuration files in the event of power loss. A new cupsd.conf directive, SyncOnClose , has been added to enable the use of fsync() on such files. The directive is enabled by default. BZ# 986495 The default environment variables for jobs were set before the CUPS configuration file was read, leading to the SetEnv directive in the cupsd.conf file having no effect. The variables are now set after reading the configuration, and SetEnv works correctly. 
BZ# 988598 Older versions of the RPM Package Manager (RPM) were unable to build the cups packages due to a newer syntax being used in the spec file. More portable syntax is now used, allowing older versions to build CUPS as expected. BZ# 1011076 A spelling typo in one of the example options for the cupsctl command has been fixed in the cupsctl(8) man page. BZ# 1012482 The cron script shipped with CUPS had incorrect permissions, allowing world-readability on the script. This file is now given permissions " 0700 " , removing group- and world-readability permissions. BZ# 1040293 The Generic Security Services (GSS) credentials were cached under certain circumstances. This behavior is incorrect because sending the cached copy could result in a denial due to an apparent " replay " attack. A patch has been applied to prevent replaying the GSS credentials. BZ# 1104483 A logic error in the code handling the web interface made it not possible to change the Make and Model field for a queue in the web interface. A patch has been applied to fix this bug and the field can now be changed as expected. BZ# 1110045 The CUPS scheduler did not check whether the client connection had data available to read before reading. This behavior led to a 10 second timeout in some instances. The scheduler now checks for data availability before reading, avoiding the timeout. BZ# 1120419 The Common Gateway Interface (CGI) scripts were not executed correctly by the CUPS scheduler, causing requests to such scripts to fail. Parameter handling for the CGI scripts has been fixed by applying a patch and the scripts can now be executed properly. All cups users are advised to upgrade to these updated packages, which contain backported patches to correct these issues. After installing this update, the cupsd daemon will be restarted automatically. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.6_technical_notes/cups |
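After the updated cups packages are installed, a short script can confirm the hardening changes called out in the bug fixes above, in particular the 0700 permissions on the CUPS cron script (BZ#1012482) and the new SyncOnClose directive (BZ#984883). This is only a sketch: the cron script path /etc/cron.daily/cups is an assumption and may differ on your system.

#!/usr/bin/env bash
# Sketch: post-update checks for the cups erratum described above.

CRON_SCRIPT=/etc/cron.daily/cups        # assumed path to the CUPS cron script
if [ -e "$CRON_SCRIPT" ]; then
  mode=$(stat -c '%a' "$CRON_SCRIPT")
  echo "cron script mode: $mode (expected 700)"
fi

# SyncOnClose is enabled by default; report it only if set explicitly.
grep -i '^SyncOnClose' /etc/cups/cupsd.conf \
  || echo "SyncOnClose not set explicitly in cupsd.conf (default applies)"

# The cupsd daemon is restarted automatically by the update; check its status.
service cups status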
Chapter 1. Clair security scanner | Chapter 1. Clair security scanner Clair v4 (Clair) is an open source application that leverages static code analyses for parsing image content and reporting vulnerabilities affecting the content. Clair is packaged with Red Hat Quay and can be used in both standalone and Operator deployments. It can be run in highly scalable configurations, where components can be scaled separately as appropriate for enterprise environments. 1.1. About Clair Clair uses Common Vulnerability Scoring System (CVSS) data from the National Vulnerability Database (NVD) to enrich vulnerability data, which is a United States government repository of security-related information, including known vulnerabilities and security issues in various software components and systems. Using scores from the NVD provides Clair the following benefits: Data synchronization . Clair can periodically synchronize its vulnerability database with the NVD. This ensures that it has the latest vulnerability data. Matching and enrichment . Clair compares the metadata and identifiers of vulnerabilities it discovers in container images with the data from the NVD. This process involves matching the unique identifiers, such as Common Vulnerabilities and Exposures (CVE) IDs, to the entries in the NVD. When a match is found, Clair can enrich its vulnerability information with additional details from NVD, such as severity scores, descriptions, and references. Severity Scores . The NVD assigns severity scores to vulnerabilities, such as the Common Vulnerability Scoring System (CVSS) score, to indicate the potential impact and risk associated with each vulnerability. By incorporating NVD's severity scores, Clair can provide more context on the seriousness of the vulnerabilities it detects. If Clair finds vulnerabilities from NVD, a detailed and standardized assessment of the severity and potential impact of vulnerabilities detected within container images is reported to users on the UI. CVSS enrichment data provides Clair the following benefits: Vulnerability prioritization . By utilizing CVSS scores, users can prioritize vulnerabilities based on their severity, helping them address the most critical issues first. Assess Risk . CVSS scores can help Clair users understand the potential risk a vulnerability poses to their containerized applications. Communicate Severity . CVSS scores provide Clair users a standardized way to communicate the severity of vulnerabilities across teams and organizations. Inform Remediation Strategies . CVSS enrichment data can guide Quay.io users in developing appropriate remediation strategies. Compliance and Reporting . Integrating CVSS data into reports generated by Clair can help organizations demonstrate their commitment to addressing security vulnerabilities and complying with industry standards and regulations. 1.1.1. Clair releases New versions of Clair are regularly released. The source code needed to build Clair is packaged as an archive and attached to each release. Clair releases can be found at Clair releases . Release artifacts also include the clairctl command line interface tool, which obtains updater data from the internet by using an open host. Clair 4.8 Clair 4.8 was released on 24-10-28. The following changes have been made: Clair on Red Hat Quay now requires that you update the Clair PostgreSQL database from version 13 to version 15. For more information about this procedure, see Upgrading the Clair PostgreSQL database . 
This release deprecates the updaters that rely on the Red Hat OVAL v2 security data in favor of the Red Hat VEX data. This change includes a database migration to delete all the vulnerabilities that originated from the OVAL v2 feeds. Because of this, there could be intermittent downtime in production environments before the VEX updaters complete for the first time, when no vulnerabilities exist. 1.1.1.1. Clair 4.8.0 known issues When pushing SUSE Enterprise Linux images with HIGH image vulnerabilities, Clair 4.8.0 does not report these vulnerabilities. This is a known issue and will be fixed in a future version of Red Hat Quay. Clair 4.7.4 Clair 4.7.4 was released on 2024-05-01. The following changes have been made: The default layer download location has changed. For more information, see Disk usage considerations . Clair 4.7.3 Clair 4.7.3 was released on 2024-02-26. The following changes have been made: The minimum TLS version for Clair is now 1.2. Previously, servers allowed for 1.1 connections. Clair 4.7.2 Clair 4.7.2 was released on 2023-10-09. The following changes have been made: CRDA support has been removed. Clair 4.7.1 Clair 4.7.1 was released as part of Red Hat Quay 3.9.1. The following changes have been made: With this release, you can view unpatched vulnerabilities from Red Hat Enterprise Linux (RHEL) sources. If you want to view unpatched vulnerabilities, you can set the ignore_unpatched parameter to false . For example: updaters: config: rhel: ignore_unpatched: false To disable this feature, you can set ignore_unpatched to true . Clair 4.7 Clair 4.7 was released as part of Red Hat Quay 3.9, and includes support for the following features: Native support for indexing Golang modules and RubyGems in container images. Change to OSV.dev as the vulnerability database source for any programming language package managers. This includes popular sources like GitHub Security Advisories or PyPA. This allows offline capability. Use of pyup.io for Python and CRDA for Java is suspended. Clair now supports Java, Golang, Python, and Ruby dependencies. 1.1.2. Clair vulnerability databases Clair uses the following vulnerability databases to report issues in your images: Ubuntu Oval database Debian Security Tracker Red Hat Enterprise Linux (RHEL) Oval database SUSE Oval database Oracle Oval database Alpine SecDB database VMware Photon OS database Amazon Web Services (AWS) UpdateInfo Open Source Vulnerability (OSV) Database 1.1.3. Clair supported dependencies Clair supports identifying and managing the following dependencies: Java Golang Python Ruby This means that it can analyze and report on the third-party libraries and packages that a project in these languages relies on to work correctly. When an image that contains packages from a language unsupported by Clair is pushed to your repository, a vulnerability scan cannot be performed on those packages. Users do not receive an analysis or security report for unsupported dependencies or packages. As a result, the following consequences should be considered: Security risks . Dependencies or packages that are not scanned for vulnerabilities might pose security risks to your organization. Compliance issues . If your organization has specific security or compliance requirements, unscanned, or partially scanned, container images might result in non-compliance with certain regulations. Note Scanned images are indexed, and a vulnerability report is created, but it might omit data from certain unsupported languages.
For example, if your container image contains a Lua application, feedback from Clair is not provided because Clair does not detect it. It can detect other languages used in the container image, and shows detected CVEs for those languages. As a result, Clair images are fully scanned based on what it supported by Clair. 1.1.4. Clair containers Official downstream Clair containers bundled with Red Hat Quay can be found on the Red Hat Ecosystem Catalog . Official upstream containers are packaged and released as a under the Clair project on Quay.io . The latest tag tracks the Git development branch. Version tags are built from the corresponding release. 1.2. Clair severity mapping Clair offers a comprehensive approach to vulnerability assessment and management. One of its essential features is the normalization of security databases' severity strings. This process streamlines the assessment of vulnerability severities by mapping them to a predefined set of values. Through this mapping, clients can efficiently react to vulnerability severities without the need to decipher the intricacies of each security database's unique severity strings. These mapped severity strings align with those found within the respective security databases, ensuring consistency and accuracy in vulnerability assessment. 1.2.1. Clair severity strings Clair alerts users with the following severity strings: Unknown Negligible Low Medium High Critical These severity strings are similar to the strings found within the relevant security database. Alpine mapping Alpine SecDB database does not provide severity information. All vulnerability severities will be Unknown. Alpine Severity Clair Severity * Unknown AWS mapping AWS UpdateInfo database provides severity information. AWS Severity Clair Severity low Low medium Medium important High critical Critical Debian mapping Debian Oval database provides severity information. Debian Severity Clair Severity * Unknown Unimportant Low Low Medium Medium High High Critical Oracle mapping Oracle Oval database provides severity information. Oracle Severity Clair Severity N/A Unknown LOW Low MODERATE Medium IMPORTANT High CRITICAL Critical RHEL mapping RHEL Oval database provides severity information. RHEL Severity Clair Severity None Unknown Low Low Moderate Medium Important High Critical Critical SUSE mapping SUSE Oval database provides severity information. Severity Clair Severity None Unknown Low Low Moderate Medium Important High Critical Critical Ubuntu mapping Ubuntu Oval database provides severity information. Severity Clair Severity Untriaged Unknown Negligible Negligible Low Low Medium Medium High High Critical Critical OSV mapping Table 1.1. CVSSv3 Base Score Clair Severity 0.0 Negligible 0.1-3.9 Low 4.0-6.9 Medium 7.0-8.9 High 9.0-10.0 Critical Table 1.2. CVSSv2 Base Score Clair Severity 0.0-3.9 Low 4.0-6.9 Medium 7.0-10 High | [
"updaters: config: rhel: ignore_unpatched: false"
]
| https://docs.redhat.com/en/documentation/red_hat_quay/3/html/vulnerability_reporting_with_clair_on_red_hat_quay/clair-vulnerability-scanner |
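The OSV severity mapping in Table 1.1 can be expressed as a small helper when post-processing vulnerability reports outside of Clair. The following sketch maps a CVSSv3 base score to the corresponding Clair severity string; it only reproduces the ranges listed above and is not part of the Clair code base.

#!/usr/bin/env bash
# Sketch: map a CVSSv3 base score to a Clair severity string (Table 1.1).
cvss3_to_clair_severity() {
  awk -v s="$1" 'BEGIN {
    if (s == 0.0)        print "Negligible";
    else if (s <= 3.9)   print "Low";
    else if (s <= 6.9)   print "Medium";
    else if (s <= 8.9)   print "High";
    else if (s <= 10.0)  print "Critical";
    else                 print "Unknown";
  }'
}

cvss3_to_clair_severity 9.8   # prints "Critical"
cvss3_to_clair_severity 5.0   # prints "Medium"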
Chapter 29. Insert Field Action | Chapter 29. Insert Field Action Adds a custom field with a constant value to the message in transit 29.1. Configuration Options The following table summarizes the configuration options available for the insert-field-action Kamelet: Property Name Description Type Default Example field * Field The name of the field to be added string value * Value The value of the field string Note Fields marked with an asterisk (*) are mandatory. 29.2. Dependencies At runtime, the insert-field-action Kamelet relies upon the presence of the following dependencies: github:openshift-integration.kamelet-catalog:camel-kamelets-utils:kamelet-catalog-1.6-SNAPSHOT camel:core camel:jackson camel:kamelet 29.3. Usage This section describes how you can use the insert-field-action . 29.3.1. Knative Action You can use the insert-field-action Kamelet as an intermediate step in a Knative binding. insert-field-action-binding.yaml apiVersion: camel.apache.org/v1alpha1 kind: KameletBinding metadata: name: insert-field-action-binding spec: source: ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: timer-source properties: message: '{"foo":"John"}' steps: - ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: json-deserialize-action - ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: insert-field-action properties: field: "The Field" value: "The Value" sink: ref: kind: Channel apiVersion: messaging.knative.dev/v1 name: mychannel 29.3.1.1. Prerequisite Make sure you have "Red Hat Integration - Camel K" installed into the OpenShift cluster you're connected to. 29.3.1.2. Procedure for using the cluster CLI Save the insert-field-action-binding.yaml file to your local drive, and then edit it as needed for your configuration. Run the action by using the following command: oc apply -f insert-field-action-binding.yaml 29.3.1.3. Procedure for using the Kamel CLI Configure and run the action by using the following command: kamel bind --name insert-field-action-binding timer-source?message='{"foo":"John"}' --step json-deserialize-action --step insert-field-action -p step-1.field='The Field' -p step-1.value='The Value' channel:mychannel This command creates the KameletBinding in the current namespace on the cluster. 29.3.2. Kafka Action You can use the insert-field-action Kamelet as an intermediate step in a Kafka binding. insert-field-action-binding.yaml apiVersion: camel.apache.org/v1alpha1 kind: KameletBinding metadata: name: insert-field-action-binding spec: source: ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: timer-source properties: message: '{"foo":"John"}' steps: - ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: json-deserialize-action - ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: insert-field-action properties: field: "The Field" value: "The Value" sink: ref: kind: KafkaTopic apiVersion: kafka.strimzi.io/v1beta1 name: my-topic 29.3.2.1. Prerequisites Ensure that you've installed the AMQ Streams operator in your OpenShift cluster and created a topic named my-topic in the current namespace. Make also sure you have "Red Hat Integration - Camel K" installed into the OpenShift cluster you're connected to. 29.3.2.2. Procedure for using the cluster CLI Save the insert-field-action-binding.yaml file to your local drive, and then edit it as needed for your configuration. Run the action by using the following command: oc apply -f insert-field-action-binding.yaml 29.3.2.3. 
Procedure for using the Kamel CLI Configure and run the action by using the following command: kamel bind --name insert-field-action-binding timer-source?message='{"foo":"John"}' --step json-deserialize-action --step insert-field-action -p step-1.field='The Field' -p step-1.value='The Value' kafka.strimzi.io/v1beta1:KafkaTopic:my-topic This command creates the KameletBinding in the current namespace on the cluster. 29.4. Kamelet source file https://github.com/openshift-integration/kamelet-catalog/insert-field-action.kamelet.yaml | [
"apiVersion: camel.apache.org/v1alpha1 kind: KameletBinding metadata: name: insert-field-action-binding spec: source: ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: timer-source properties: message: '{\"foo\":\"John\"}' steps: - ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: json-deserialize-action - ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: insert-field-action properties: field: \"The Field\" value: \"The Value\" sink: ref: kind: Channel apiVersion: messaging.knative.dev/v1 name: mychannel",
"apply -f insert-field-action-binding.yaml",
"kamel bind --name insert-field-action-binding timer-source?message='{\"foo\":\"John\"}' --step json-deserialize-action --step insert-field-action -p step-1.field='The Field' -p step-1.value='The Value' channel:mychannel",
"apiVersion: camel.apache.org/v1alpha1 kind: KameletBinding metadata: name: insert-field-action-binding spec: source: ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: timer-source properties: message: '{\"foo\":\"John\"}' steps: - ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: json-deserialize-action - ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: insert-field-action properties: field: \"The Field\" value: \"The Value\" sink: ref: kind: KafkaTopic apiVersion: kafka.strimzi.io/v1beta1 name: my-topic",
"apply -f insert-field-action-binding.yaml",
"kamel bind --name insert-field-action-binding timer-source?message='{\"foo\":\"John\"}' --step json-deserialize-action --step insert-field-action -p step-1.field='The Field' -p step-1.value='The Value' kafka.strimzi.io/v1beta1:KafkaTopic:my-topic"
]
| https://docs.redhat.com/en/documentation/red_hat_build_of_apache_camel_k/1.10.7/html/kamelets_reference/insert-field-action |
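To see what the insert-field-action step does to the payload produced by the timer-source in the examples above, the transformation can be approximated locally with jq before deploying the binding. This only illustrates the resulting message shape; it is not how the Kamelet is implemented.

# Sketch: approximate the effect of insert-field-action on the example payload.
echo '{"foo":"John"}' | jq -c '. + {"The Field": "The Value"}'
# => {"foo":"John","The Field":"The Value"}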
Chapter 8. Using CPU Manager and Topology Manager | Chapter 8. Using CPU Manager and Topology Manager CPU Manager manages groups of CPUs and constrains workloads to specific CPUs. CPU Manager is useful for workloads that have some of these attributes: Require as much CPU time as possible. Are sensitive to processor cache misses. Are low-latency network applications. Coordinate with other processes and benefit from sharing a single processor cache. Topology Manager collects hints from the CPU Manager, Device Manager, and other Hint Providers to align pod resources, such as CPU, SR-IOV VFs, and other device resources, for all Quality of Service (QoS) classes on the same non-uniform memory access (NUMA) node. Topology Manager uses topology information from the collected hints to decide if a pod can be accepted or rejected on a node, based on the configured Topology Manager policy and pod resources requested. Topology Manager is useful for workloads that use hardware accelerators to support latency-critical execution and high throughput parallel computation. To use Topology Manager you must configure CPU Manager with the static policy. 8.1. Setting up CPU Manager To configure CPU manager, create a KubeletConfig custom resource (CR) and apply it to the desired set of nodes. Procedure Label a node by running the following command: # oc label node perf-node.example.com cpumanager=true To enable CPU Manager for all compute nodes, edit the CR by running the following command: # oc edit machineconfigpool worker Add the custom-kubelet: cpumanager-enabled label to metadata.labels section. metadata: creationTimestamp: 2020-xx-xxx generation: 3 labels: custom-kubelet: cpumanager-enabled Create a KubeletConfig , cpumanager-kubeletconfig.yaml , custom resource (CR). Refer to the label created in the step to have the correct nodes updated with the new kubelet config. See the machineConfigPoolSelector section: apiVersion: machineconfiguration.openshift.io/v1 kind: KubeletConfig metadata: name: cpumanager-enabled spec: machineConfigPoolSelector: matchLabels: custom-kubelet: cpumanager-enabled kubeletConfig: cpuManagerPolicy: static 1 cpuManagerReconcilePeriod: 5s 2 1 Specify a policy: none . This policy explicitly enables the existing default CPU affinity scheme, providing no affinity beyond what the scheduler does automatically. This is the default policy. static . This policy allows containers in guaranteed pods with integer CPU requests. It also limits access to exclusive CPUs on the node. If static , you must use a lowercase s . 2 Optional. Specify the CPU Manager reconcile frequency. The default is 5s . Create the dynamic kubelet config by running the following command: # oc create -f cpumanager-kubeletconfig.yaml This adds the CPU Manager feature to the kubelet config and, if needed, the Machine Config Operator (MCO) reboots the node. To enable CPU Manager, a reboot is not needed. 
Check for the merged kubelet config by running the following command: # oc get machineconfig 99-worker-XXXXXX-XXXXX-XXXX-XXXXX-kubelet -o json | grep ownerReference -A7 Example output "ownerReferences": [ { "apiVersion": "machineconfiguration.openshift.io/v1", "kind": "KubeletConfig", "name": "cpumanager-enabled", "uid": "7ed5616d-6b72-11e9-aae1-021e1ce18878" } ] Check the compute node for the updated kubelet.conf file by running the following command: # oc debug node/perf-node.example.com sh-4.2# cat /host/etc/kubernetes/kubelet.conf | grep cpuManager Example output cpuManagerPolicy: static 1 cpuManagerReconcilePeriod: 5s 2 1 cpuManagerPolicy is defined when you create the KubeletConfig CR. 2 cpuManagerReconcilePeriod is defined when you create the KubeletConfig CR. Create a project by running the following command: USD oc new-project <project_name> Create a pod that requests a core or multiple cores. Both limits and requests must have their CPU value set to a whole integer. That is the number of cores that will be dedicated to this pod: # cat cpumanager-pod.yaml Example output apiVersion: v1 kind: Pod metadata: generateName: cpumanager- spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault containers: - name: cpumanager image: gcr.io/google_containers/pause:3.2 resources: requests: cpu: 1 memory: "1G" limits: cpu: 1 memory: "1G" securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL] nodeSelector: cpumanager: "true" Create the pod: # oc create -f cpumanager-pod.yaml Verification Verify that the pod is scheduled to the node that you labeled by running the following command: # oc describe pod cpumanager Example output Name: cpumanager-6cqz7 Namespace: default Priority: 0 PriorityClassName: <none> Node: perf-node.example.com/xxx.xx.xx.xxx ... Limits: cpu: 1 memory: 1G Requests: cpu: 1 memory: 1G ... QoS Class: Guaranteed Node-Selectors: cpumanager=true Verify that a CPU has been exclusively assigned to the pod by running the following command: # oc describe node --selector='cpumanager=true' | grep -i cpumanager- -B2 Example output NAMESPACE NAME CPU Requests CPU Limits Memory Requests Memory Limits Age cpuman cpumanager-mlrrz 1 (28%) 1 (28%) 1G (13%) 1G (13%) 27m Verify that the cgroups are set up correctly. Get the process ID (PID) of the pause process by running the following commands: # oc debug node/perf-node.example.com sh-4.2# systemctl status | grep -B5 pause Note If the output returns multiple pause process entries, you must identify the correct pause process. Example output # ├─init.scope │ └─1 /usr/lib/systemd/systemd --switched-root --system --deserialize 17 └─kubepods.slice ├─kubepods-pod69c01f8e_6b74_11e9_ac0f_0a2b62178a22.slice │ ├─crio-b5437308f1a574c542bdf08563b865c0345c8f8c0b0a655612c.scope │ └─32706 /pause Verify that pods of quality of service (QoS) tier Guaranteed are placed within the kubepods.slice subdirectory by running the following commands: # cd /sys/fs/cgroup/kubepods.slice/kubepods-pod69c01f8e_6b74_11e9_ac0f_0a2b62178a22.slice/crio-b5437308f1ad1a7db0574c542bdf08563b865c0345c86e9585f8c0b0a655612c.scope # for i in `ls cpuset.cpus cgroup.procs` ; do echo -n "USDi "; cat USDi ; done Note Pods of other QoS tiers end up in child cgroups of the parent kubepods . 
Example output cpuset.cpus 1 tasks 32706 Check the allowed CPU list for the task by running the following command: # grep ^Cpus_allowed_list /proc/32706/status Example output Cpus_allowed_list: 1 Verify that another pod on the system cannot run on the core allocated for the Guaranteed pod. For example, to verify the pod in the besteffort QoS tier, run the following commands: # cat /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc494a073_6b77_11e9_98c0_06bba5c387ea.slice/crio-c56982f57b75a2420947f0afc6cafe7534c5734efc34157525fa9abbf99e3849.scope/cpuset.cpus # oc describe node perf-node.example.com Example output ... Capacity: attachable-volumes-aws-ebs: 39 cpu: 2 ephemeral-storage: 124768236Ki hugepages-1Gi: 0 hugepages-2Mi: 0 memory: 8162900Ki pods: 250 Allocatable: attachable-volumes-aws-ebs: 39 cpu: 1500m ephemeral-storage: 124768236Ki hugepages-1Gi: 0 hugepages-2Mi: 0 memory: 7548500Ki pods: 250 ------- ---- ------------ ---------- --------------- ------------- --- default cpumanager-6cqz7 1 (66%) 1 (66%) 1G (12%) 1G (12%) 29m Allocated resources: (Total limits may be over 100 percent, i.e., overcommitted.) Resource Requests Limits -------- -------- ------ cpu 1440m (96%) 1 (66%) This VM has two CPU cores. The system-reserved setting reserves 500 millicores, meaning that half of one core is subtracted from the total capacity of the node to arrive at the Node Allocatable amount. You can see that Allocatable CPU is 1500 millicores. This means you can run one of the CPU Manager pods since each will take one whole core. A whole core is equivalent to 1000 millicores. If you try to schedule a second pod, the system will accept the pod, but it will never be scheduled: NAME READY STATUS RESTARTS AGE cpumanager-6cqz7 1/1 Running 0 33m cpumanager-7qc2t 0/1 Pending 0 11s 8.2. Topology Manager policies Topology Manager aligns Pod resources of all Quality of Service (QoS) classes by collecting topology hints from Hint Providers, such as CPU Manager and Device Manager, and using the collected hints to align the Pod resources. Topology Manager supports four allocation policies, which you assign in the KubeletConfig custom resource (CR) named cpumanager-enabled : none policy This is the default policy and does not perform any topology alignment. best-effort policy For each container in a pod with the best-effort topology management policy, kubelet calls each Hint Provider to discover their resource availability. Using this information, the Topology Manager stores the preferred NUMA Node affinity for that container. If the affinity is not preferred, Topology Manager stores this and admits the pod to the node. restricted policy For each container in a pod with the restricted topology management policy, kubelet calls each Hint Provider to discover their resource availability. Using this information, the Topology Manager stores the preferred NUMA Node affinity for that container. If the affinity is not preferred, Topology Manager rejects this pod from the node, resulting in a pod in a Terminated state with a pod admission failure. single-numa-node policy For each container in a pod with the single-numa-node topology management policy, kubelet calls each Hint Provider to discover their resource availability. Using this information, the Topology Manager determines if a single NUMA Node affinity is possible. If it is, the pod is admitted to the node. If a single NUMA Node affinity is not possible, the Topology Manager rejects the pod from the node. 
This results in a pod in a Terminated state with a pod admission failure. 8.3. Setting up Topology Manager To use Topology Manager, you must configure an allocation policy in the KubeletConfig custom resource (CR) named cpumanager-enabled . This CR might exist if you have set up CPU Manager. If the CR does not exist, you can create it. Prerequisites Configure the CPU Manager policy to be static . Procedure To activate Topology Manager: Configure the Topology Manager allocation policy in the custom resource. USD oc edit KubeletConfig cpumanager-enabled apiVersion: machineconfiguration.openshift.io/v1 kind: KubeletConfig metadata: name: cpumanager-enabled spec: machineConfigPoolSelector: matchLabels: custom-kubelet: cpumanager-enabled kubeletConfig: cpuManagerPolicy: static 1 cpuManagerReconcilePeriod: 5s topologyManagerPolicy: single-numa-node 2 1 This parameter must be static with a lowercase s . 2 Specify your selected Topology Manager allocation policy. Here, the policy is single-numa-node . Acceptable values are: default , best-effort , restricted , single-numa-node . 8.4. Pod interactions with Topology Manager policies The example Pod specs below help illustrate pod interactions with Topology Manager. The following pod runs in the BestEffort QoS class because no resource requests or limits are specified. spec: containers: - name: nginx image: nginx The next pod runs in the Burstable QoS class because requests are less than limits. spec: containers: - name: nginx image: nginx resources: limits: memory: "200Mi" requests: memory: "100Mi" If the selected policy is anything other than none , Topology Manager would not consider either of these Pod specifications. The last example pod below runs in the Guaranteed QoS class because requests are equal to limits. spec: containers: - name: nginx image: nginx resources: limits: memory: "200Mi" cpu: "2" example.com/device: "1" requests: memory: "200Mi" cpu: "2" example.com/device: "1" Topology Manager would consider this pod. The Topology Manager would consult the hint providers, which are CPU Manager and Device Manager, to get topology hints for the pod. Topology Manager will use this information to store the best topology for this container. In the case of this pod, CPU Manager and Device Manager will use this stored information at the resource allocation stage.
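For reference, the fragments above can be combined into a complete manifest. The following is an illustrative sketch of the Guaranteed example as a full Pod object; the pod name is arbitrary, and the example.com/device extended resource assumes that a device plugin advertising that resource is installed on the target node:
apiVersion: v1
kind: Pod
metadata:
  name: numa-aligned-pod
spec:
  containers:
  - name: nginx
    image: nginx
    resources:
      requests:
        memory: "200Mi"
        cpu: "2"
        example.com/device: "1"
      limits:
        memory: "200Mi"
        cpu: "2"
        example.com/device: "1"
Because requests equal limits for every resource, the pod runs in the Guaranteed QoS class; with the single-numa-node policy, the kubelet admits it only if CPU Manager and Device Manager can satisfy all of the requested resources from a single NUMA node. | [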
"oc label node perf-node.example.com cpumanager=true",
"oc edit machineconfigpool worker",
"metadata: creationTimestamp: 2020-xx-xxx generation: 3 labels: custom-kubelet: cpumanager-enabled",
"apiVersion: machineconfiguration.openshift.io/v1 kind: KubeletConfig metadata: name: cpumanager-enabled spec: machineConfigPoolSelector: matchLabels: custom-kubelet: cpumanager-enabled kubeletConfig: cpuManagerPolicy: static 1 cpuManagerReconcilePeriod: 5s 2",
"oc create -f cpumanager-kubeletconfig.yaml",
"oc get machineconfig 99-worker-XXXXXX-XXXXX-XXXX-XXXXX-kubelet -o json | grep ownerReference -A7",
"\"ownerReferences\": [ { \"apiVersion\": \"machineconfiguration.openshift.io/v1\", \"kind\": \"KubeletConfig\", \"name\": \"cpumanager-enabled\", \"uid\": \"7ed5616d-6b72-11e9-aae1-021e1ce18878\" } ]",
"oc debug node/perf-node.example.com sh-4.2# cat /host/etc/kubernetes/kubelet.conf | grep cpuManager",
"cpuManagerPolicy: static 1 cpuManagerReconcilePeriod: 5s 2",
"oc new-project <project_name>",
"cat cpumanager-pod.yaml",
"apiVersion: v1 kind: Pod metadata: generateName: cpumanager- spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault containers: - name: cpumanager image: gcr.io/google_containers/pause:3.2 resources: requests: cpu: 1 memory: \"1G\" limits: cpu: 1 memory: \"1G\" securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL] nodeSelector: cpumanager: \"true\"",
"oc create -f cpumanager-pod.yaml",
"oc describe pod cpumanager",
"Name: cpumanager-6cqz7 Namespace: default Priority: 0 PriorityClassName: <none> Node: perf-node.example.com/xxx.xx.xx.xxx Limits: cpu: 1 memory: 1G Requests: cpu: 1 memory: 1G QoS Class: Guaranteed Node-Selectors: cpumanager=true",
"oc describe node --selector='cpumanager=true' | grep -i cpumanager- -B2",
"NAMESPACE NAME CPU Requests CPU Limits Memory Requests Memory Limits Age cpuman cpumanager-mlrrz 1 (28%) 1 (28%) 1G (13%) 1G (13%) 27m",
"oc debug node/perf-node.example.com",
"sh-4.2# systemctl status | grep -B5 pause",
"├─init.scope │ └─1 /usr/lib/systemd/systemd --switched-root --system --deserialize 17 └─kubepods.slice ├─kubepods-pod69c01f8e_6b74_11e9_ac0f_0a2b62178a22.slice │ ├─crio-b5437308f1a574c542bdf08563b865c0345c8f8c0b0a655612c.scope │ └─32706 /pause",
"cd /sys/fs/cgroup/kubepods.slice/kubepods-pod69c01f8e_6b74_11e9_ac0f_0a2b62178a22.slice/crio-b5437308f1ad1a7db0574c542bdf08563b865c0345c86e9585f8c0b0a655612c.scope",
"for i in `ls cpuset.cpus cgroup.procs` ; do echo -n \"USDi \"; cat USDi ; done",
"cpuset.cpus 1 tasks 32706",
"grep ^Cpus_allowed_list /proc/32706/status",
"Cpus_allowed_list: 1",
"cat /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc494a073_6b77_11e9_98c0_06bba5c387ea.slice/crio-c56982f57b75a2420947f0afc6cafe7534c5734efc34157525fa9abbf99e3849.scope/cpuset.cpus",
"oc describe node perf-node.example.com",
"Capacity: attachable-volumes-aws-ebs: 39 cpu: 2 ephemeral-storage: 124768236Ki hugepages-1Gi: 0 hugepages-2Mi: 0 memory: 8162900Ki pods: 250 Allocatable: attachable-volumes-aws-ebs: 39 cpu: 1500m ephemeral-storage: 124768236Ki hugepages-1Gi: 0 hugepages-2Mi: 0 memory: 7548500Ki pods: 250 ------- ---- ------------ ---------- --------------- ------------- --- default cpumanager-6cqz7 1 (66%) 1 (66%) 1G (12%) 1G (12%) 29m Allocated resources: (Total limits may be over 100 percent, i.e., overcommitted.) Resource Requests Limits -------- -------- ------ cpu 1440m (96%) 1 (66%)",
"NAME READY STATUS RESTARTS AGE cpumanager-6cqz7 1/1 Running 0 33m cpumanager-7qc2t 0/1 Pending 0 11s",
"oc edit KubeletConfig cpumanager-enabled",
"apiVersion: machineconfiguration.openshift.io/v1 kind: KubeletConfig metadata: name: cpumanager-enabled spec: machineConfigPoolSelector: matchLabels: custom-kubelet: cpumanager-enabled kubeletConfig: cpuManagerPolicy: static 1 cpuManagerReconcilePeriod: 5s topologyManagerPolicy: single-numa-node 2",
"spec: containers: - name: nginx image: nginx",
"spec: containers: - name: nginx image: nginx resources: limits: memory: \"200Mi\" requests: memory: \"100Mi\"",
"spec: containers: - name: nginx image: nginx resources: limits: memory: \"200Mi\" cpu: \"2\" example.com/device: \"1\" requests: memory: \"200Mi\" cpu: \"2\" example.com/device: \"1\""
]
| https://docs.redhat.com/en/documentation/openshift_container_platform/4.16/html/scalability_and_performance/using-cpu-manager |
Chapter 1. High Availability Add-On Overview | Chapter 1. High Availability Add-On Overview The High Availability Add-On is a clustered system that provides reliability, scalability, and availability to critical production services. The following sections provide a high-level description of the components and functions of the High Availability Add-On: Section 1.1, "Cluster Basics" Section 1.2, "High Availability Add-On Introduction" Section 1.4, "Pacemaker Architecture Components" 1.1. Cluster Basics A cluster is two or more computers (called nodes or members ) that work together to perform a task. There are four major types of clusters: Storage High availability Load balancing High performance Storage clusters provide a consistent file system image across servers in a cluster, allowing the servers to simultaneously read and write to a single shared file system. A storage cluster simplifies storage administration by limiting the installation and patching of applications to one file system. Also, with a cluster-wide file system, a storage cluster eliminates the need for redundant copies of application data and simplifies backup and disaster recovery. The High Availability Add-On provides storage clustering in conjunction with Red Hat GFS2 (part of the Resilient Storage Add-On). High availability clusters provide highly available services by eliminating single points of failure and by failing over services from one cluster node to another in case a node becomes inoperative. Typically, services in a high availability cluster read and write data (by means of read-write mounted file systems). Therefore, a high availability cluster must maintain data integrity as one cluster node takes over control of a service from another cluster node. Node failures in a high availability cluster are not visible from clients outside the cluster. (High availability clusters are sometimes referred to as failover clusters.) The High Availability Add-On provides high availability clustering through its High Availability Service Management component, Pacemaker . Load-balancing clusters dispatch network service requests to multiple cluster nodes to balance the request load among the cluster nodes. Load balancing provides cost-effective scalability because you can match the number of nodes according to load requirements. If a node in a load-balancing cluster becomes inoperative, the load-balancing software detects the failure and redirects requests to other cluster nodes. Node failures in a load-balancing cluster are not visible from clients outside the cluster. Load balancing is available with the Load Balancer Add-On. High-performance clusters use cluster nodes to perform concurrent calculations. A high-performance cluster allows applications to work in parallel, therefore enhancing the performance of the applications. (High performance clusters are also referred to as computational clusters or grid computing.) Note The cluster types summarized in the preceding text reflect basic configurations; your needs might require a combination of the clusters described. Additionally, the Red Hat Enterprise Linux High Availability Add-On contains support for configuring and managing high availability servers only . It does not support high-performance clusters. | null | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/high_availability_add-on_overview/ch-introduction-haao |
Chapter 3. Build [build.openshift.io/v1] | Chapter 3. Build [build.openshift.io/v1] Description Build encapsulates the inputs needed to produce a new deployable image, as well as the status of the execution and a reference to the Pod which executed the build. Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object 3.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta metadata is the standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec object BuildSpec has the information to represent a build and also additional information about a build status object BuildStatus contains the status of a build 3.1.1. .spec Description BuildSpec has the information to represent a build and also additional information about a build Type object Required strategy Property Type Description completionDeadlineSeconds integer completionDeadlineSeconds is an optional duration in seconds, counted from the time when a build pod gets scheduled in the system, that the build may be active on a node before the system actively tries to terminate the build; value must be positive integer mountTrustedCA boolean mountTrustedCA bind mounts the cluster's trusted certificate authorities, as defined in the cluster's proxy configuration, into the build. This lets processes within a build trust components signed by custom PKI certificate authorities, such as private artifact repositories and HTTPS proxies. When this field is set to true, the contents of /etc/pki/ca-trust within the build are managed by the build container, and any changes to this directory or its subdirectories (for example - within a Dockerfile RUN instruction) are not persisted in the build's output image. nodeSelector object (string) nodeSelector is a selector which must be true for the build pod to fit on a node If nil, it can be overridden by default build nodeselector values for the cluster. If set to an empty map or a map with any values, default build nodeselector values are ignored. output object BuildOutput is input to a build strategy and describes the container image that the strategy should produce. postCommit object A BuildPostCommitSpec holds a build post commit hook specification. The hook executes a command in a temporary container running the build output image, immediately after the last layer of the image is committed and before the image is pushed to a registry. The command is executed with the current working directory (USDPWD) set to the image's WORKDIR. The build will be marked as failed if the hook execution fails. It will fail if the script or command return a non-zero exit code, or if there is any other error related to starting the temporary container. There are five different ways to configure the hook. 
As an example, all forms below are equivalent and will execute rake test --verbose . 1. Shell script: "postCommit": { "script": "rake test --verbose", } The above is a convenient form which is equivalent to: "postCommit": { "command": ["/bin/sh", "-ic"], "args": ["rake test --verbose"] } 2. A command as the image entrypoint: "postCommit": { "command": ["rake", "test", "--verbose"] } Command overrides the image entrypoint in the exec form, as documented in Docker: https://docs.docker.com/engine/reference/builder/#entrypoint . 3. Pass arguments to the default entrypoint: "postCommit": { "args": ["rake", "test", "--verbose"] } This form is only useful if the image entrypoint can handle arguments. 4. Shell script with arguments: "postCommit": { "script": "rake test USD1", "args": ["--verbose"] } This form is useful if you need to pass arguments that would otherwise be hard to quote properly in the shell script. In the script, USD0 will be "/bin/sh" and USD1, USD2, etc., are the positional arguments from Args. 5. Command with arguments: "postCommit": { "command": ["rake", "test"], "args": ["--verbose"] } This form is equivalent to appending the arguments to the Command slice. It is invalid to provide both Script and Command simultaneously. If none of the fields are specified, the hook is not executed. resources ResourceRequirements resources computes resource requirements to execute the build. revision object SourceRevision is the revision or commit information from the source for the build serviceAccount string serviceAccount is the name of the ServiceAccount to use to run the pod created by this build. The pod will be allowed to use secrets referenced by the ServiceAccount source object BuildSource is the SCM used for the build. strategy object BuildStrategy contains the details of how to perform a build. triggeredBy array triggeredBy describes which triggers started the most recent update to the build configuration and contains information about those triggers. triggeredBy[] object BuildTriggerCause holds information about a triggered build. It is used for displaying build trigger data for each build and build configuration in oc describe. It is also used to describe which triggers led to the most recent update in the build configuration. 3.1.2. .spec.output Description BuildOutput is input to a build strategy and describes the container image that the strategy should produce. Type object Property Type Description imageLabels array imageLabels define a list of labels that are applied to the resulting image. If there are multiple labels with the same name then the last one in the list is used. imageLabels[] object ImageLabel represents a label applied to the resulting image. pushSecret LocalObjectReference PushSecret is the name of a Secret that would be used for setting up the authentication for executing the Docker push to authentication enabled Docker Registry (or Docker Hub). to ObjectReference to defines an optional location to push the output of this build to. Kind must be one of 'ImageStreamTag' or 'DockerImage'. This value will be used to look up a container image repository to push to. In the case of an ImageStreamTag, the ImageStreamTag will be looked for in the namespace of the build unless Namespace is specified. 3.1.3. .spec.output.imageLabels Description imageLabels define a list of labels that are applied to the resulting image. If there are multiple labels with the same name then the last one in the list is used. Type array 3.1.4.
.spec.output.imageLabels[] Description ImageLabel represents a label applied to the resulting image. Type object Required name Property Type Description name string name defines the name of the label. It must have non-zero length. value string value defines the literal value of the label. 3.1.5. .spec.postCommit Description A BuildPostCommitSpec holds a build post commit hook specification. The hook executes a command in a temporary container running the build output image, immediately after the last layer of the image is committed and before the image is pushed to a registry. The command is executed with the current working directory (USDPWD) set to the image's WORKDIR. The build will be marked as failed if the hook execution fails. It will fail if the script or command return a non-zero exit code, or if there is any other error related to starting the temporary container. There are five different ways to configure the hook. As an example, all forms below are equivalent and will execute rake test --verbose . Shell script: A command as the image entrypoint: Pass arguments to the default entrypoint: Shell script with arguments: Command with arguments: It is invalid to provide both Script and Command simultaneously. If none of the fields are specified, the hook is not executed. Type object Property Type Description args array (string) args is a list of arguments that are provided to either Command, Script or the container image's default entrypoint. The arguments are placed immediately after the command to be run. command array (string) command is the command to run. It may not be specified with Script. This might be needed if the image doesn't have /bin/sh , or if you do not want to use a shell. In all other cases, using Script might be more convenient. script string script is a shell script to be run with /bin/sh -ic . It may not be specified with Command. Use Script when a shell script is appropriate to execute the post build hook, for example for running unit tests with rake test . If you need control over the image entrypoint, or if the image does not have /bin/sh , use Command and/or Args. The -i flag is needed to support CentOS and RHEL images that use Software Collections (SCL), in order to have the appropriate collections enabled in the shell. E.g., in the Ruby image, this is necessary to make ruby , bundle and other binaries available in the PATH. 3.1.6. .spec.revision Description SourceRevision is the revision or commit information from the source for the build Type object Required type Property Type Description git object GitSourceRevision is the commit information from a git source for a build type string type of the build source, may be one of 'Source', 'Dockerfile', 'Binary', or 'Images' 3.1.7. .spec.revision.git Description GitSourceRevision is the commit information from a git source for a build Type object Property Type Description author object SourceControlUser defines the identity of a user of source control commit string commit is the commit hash identifying a specific commit committer object SourceControlUser defines the identity of a user of source control message string message is the description of a specific commit 3.1.8. .spec.revision.git.author Description SourceControlUser defines the identity of a user of source control Type object Property Type Description email string email of the source control user name string name of the source control user 3.1.9. 
.spec.revision.git.committer Description SourceControlUser defines the identity of a user of source control Type object Property Type Description email string email of the source control user name string name of the source control user 3.1.10. .spec.source Description BuildSource is the SCM used for the build. Type object Property Type Description binary object BinaryBuildSource describes a binary file to be used for the Docker and Source build strategies, where the file will be extracted and used as the build source. configMaps array configMaps represents a list of configMaps and their destinations that will be used for the build. configMaps[] object ConfigMapBuildSource describes a configmap and its destination directory that will be used only at the build time. The content of the configmap referenced here will be copied into the destination directory instead of mounting. contextDir string contextDir specifies the sub-directory where the source code for the application exists. This allows to have buildable sources in directory other than root of repository. dockerfile string dockerfile is the raw contents of a Dockerfile which should be built. When this option is specified, the FROM may be modified based on your strategy base image and additional ENV stanzas from your strategy environment will be added after the FROM, but before the rest of your Dockerfile stanzas. The Dockerfile source type may be used with other options like git - in those cases the Git repo will have any innate Dockerfile replaced in the context dir. git object GitBuildSource defines the parameters of a Git SCM images array images describes a set of images to be used to provide source for the build images[] object ImageSource is used to describe build source that will be extracted from an image or used during a multi stage build. A reference of type ImageStreamTag, ImageStreamImage or DockerImage may be used. A pull secret can be specified to pull the image from an external registry or override the default service account secret if pulling from the internal registry. Image sources can either be used to extract content from an image and place it into the build context along with the repository source, or used directly during a multi-stage container image build to allow content to be copied without overwriting the contents of the repository source (see the 'paths' and 'as' fields). secrets array secrets represents a list of secrets and their destinations that will be used only for the build. secrets[] object SecretBuildSource describes a secret and its destination directory that will be used only at the build time. The content of the secret referenced here will be copied into the destination directory instead of mounting. sourceSecret LocalObjectReference sourceSecret is the name of a Secret that would be used for setting up the authentication for cloning private repository. The secret contains valid credentials for remote repository, where the data's key represent the authentication method to be used and value is the base64 encoded credentials. Supported auth methods are: ssh-privatekey. type string type of build input to accept 3.1.11. .spec.source.binary Description BinaryBuildSource describes a binary file to be used for the Docker and Source build strategies, where the file will be extracted and used as the build source. Type object Property Type Description asFile string asFile indicates that the provided binary input should be considered a single file within the build input. 
For example, specifying "webapp.war" would place the provided binary as /webapp.war for the builder. If left empty, the Docker and Source build strategies assume this file is a zip, tar, or tar.gz file and extract it as the source. The custom strategy receives this binary as standard input. This filename may not contain slashes or be '..' or '.'. 3.1.12. .spec.source.configMaps Description configMaps represents a list of configMaps and their destinations that will be used for the build. Type array 3.1.13. .spec.source.configMaps[] Description ConfigMapBuildSource describes a configmap and its destination directory that will be used only at the build time. The content of the configmap referenced here will be copied into the destination directory instead of mounting. Type object Required configMap Property Type Description configMap LocalObjectReference configMap is a reference to an existing configmap that you want to use in your build. destinationDir string destinationDir is the directory where the files from the configmap should be available for the build time. For the Source build strategy, these will be injected into a container where the assemble script runs. For the container image build strategy, these will be copied into the build directory, where the Dockerfile is located, so users can ADD or COPY them during container image build. 3.1.14. .spec.source.git Description GitBuildSource defines the parameters of a Git SCM Type object Required uri Property Type Description httpProxy string httpProxy is a proxy used to reach the git repository over http httpsProxy string httpsProxy is a proxy used to reach the git repository over https noProxy string noProxy is the list of domains for which the proxy should not be used ref string ref is the branch/tag/ref to build. uri string uri points to the source that will be built. The structure of the source will depend on the type of build to run 3.1.15. .spec.source.images Description images describes a set of images to be used to provide source for the build Type array 3.1.16. .spec.source.images[] Description ImageSource is used to describe build source that will be extracted from an image or used during a multi stage build. A reference of type ImageStreamTag, ImageStreamImage or DockerImage may be used. A pull secret can be specified to pull the image from an external registry or override the default service account secret if pulling from the internal registry. Image sources can either be used to extract content from an image and place it into the build context along with the repository source, or used directly during a multi-stage container image build to allow content to be copied without overwriting the contents of the repository source (see the 'paths' and 'as' fields). Type object Required from Property Type Description as array (string) A list of image names that this source will be used in place of during a multi-stage container image build. For instance, a Dockerfile that uses "COPY --from=nginx:latest" will first check for an image source that has "nginx:latest" in this field before attempting to pull directly. If the Dockerfile does not reference an image source it is ignored. This field and paths may both be set, in which case the contents will be used twice. from ObjectReference from is a reference to an ImageStreamTag, ImageStreamImage, or DockerImage to copy source from. paths array paths is a list of source and destination paths to copy from the image. 
This content will be copied into the build context prior to starting the build. If no paths are set, the build context will not be altered. paths[] object ImageSourcePath describes a path to be copied from a source image and its destination within the build directory. pullSecret LocalObjectReference pullSecret is a reference to a secret to be used to pull the image from a registry If the image is pulled from the OpenShift registry, this field does not need to be set. 3.1.17. .spec.source.images[].paths Description paths is a list of source and destination paths to copy from the image. This content will be copied into the build context prior to starting the build. If no paths are set, the build context will not be altered. Type array 3.1.18. .spec.source.images[].paths[] Description ImageSourcePath describes a path to be copied from a source image and its destination within the build directory. Type object Required sourcePath destinationDir Property Type Description destinationDir string destinationDir is the relative directory within the build directory where files copied from the image are placed. sourcePath string sourcePath is the absolute path of the file or directory inside the image to copy to the build directory. If the source path ends in /. then the content of the directory will be copied, but the directory itself will not be created at the destination. 3.1.19. .spec.source.secrets Description secrets represents a list of secrets and their destinations that will be used only for the build. Type array 3.1.20. .spec.source.secrets[] Description SecretBuildSource describes a secret and its destination directory that will be used only at the build time. The content of the secret referenced here will be copied into the destination directory instead of mounting. Type object Required secret Property Type Description destinationDir string destinationDir is the directory where the files from the secret should be available for the build time. For the Source build strategy, these will be injected into a container where the assemble script runs. Later, when the script finishes, all files injected will be truncated to zero length. For the container image build strategy, these will be copied into the build directory, where the Dockerfile is located, so users can ADD or COPY them during container image build. secret LocalObjectReference secret is a reference to an existing secret that you want to use in your build. 3.1.21. .spec.strategy Description BuildStrategy contains the details of how to perform a build. Type object Property Type Description customStrategy object CustomBuildStrategy defines input parameters specific to Custom build. dockerStrategy object DockerBuildStrategy defines input parameters specific to container image build. jenkinsPipelineStrategy object JenkinsPipelineBuildStrategy holds parameters specific to a Jenkins Pipeline build. Deprecated: use OpenShift Pipelines sourceStrategy object SourceBuildStrategy defines input parameters specific to an Source build. type string type is the kind of build strategy. 3.1.22. .spec.strategy.customStrategy Description CustomBuildStrategy defines input parameters specific to Custom build. Type object Required from Property Type Description buildAPIVersion string buildAPIVersion is the requested API version for the Build object serialized and passed to the custom builder env array (EnvVar) env contains additional environment variables you want to pass into a builder container. 
exposeDockerSocket boolean exposeDockerSocket will allow running Docker commands (and build container images) from inside the container. forcePull boolean forcePull describes if the controller should configure the build pod to always pull the images for the builder or only pull if it is not present locally from ObjectReference from is reference to an DockerImage, ImageStreamTag, or ImageStreamImage from which the container image should be pulled pullSecret LocalObjectReference pullSecret is the name of a Secret that would be used for setting up the authentication for pulling the container images from the private Docker registries secrets array secrets is a list of additional secrets that will be included in the build pod secrets[] object SecretSpec specifies a secret to be included in a build pod and its corresponding mount point 3.1.23. .spec.strategy.customStrategy.secrets Description secrets is a list of additional secrets that will be included in the build pod Type array 3.1.24. .spec.strategy.customStrategy.secrets[] Description SecretSpec specifies a secret to be included in a build pod and its corresponding mount point Type object Required secretSource mountPath Property Type Description mountPath string mountPath is the path at which to mount the secret secretSource LocalObjectReference secretSource is a reference to the secret 3.1.25. .spec.strategy.dockerStrategy Description DockerBuildStrategy defines input parameters specific to container image build. Type object Property Type Description buildArgs array (EnvVar) buildArgs contains build arguments that will be resolved in the Dockerfile. See https://docs.docker.com/engine/reference/builder/#/arg for more details. NOTE: Only the 'name' and 'value' fields are supported. Any settings on the 'valueFrom' field are ignored. dockerfilePath string dockerfilePath is the path of the Dockerfile that will be used to build the container image, relative to the root of the context (contextDir). Defaults to Dockerfile if unset. env array (EnvVar) env contains additional environment variables you want to pass into a builder container. forcePull boolean forcePull describes if the builder should pull the images from registry prior to building. from ObjectReference from is a reference to an DockerImage, ImageStreamTag, or ImageStreamImage which overrides the FROM image in the Dockerfile for the build. If the Dockerfile uses multi-stage builds, this will replace the image in the last FROM directive of the file. imageOptimizationPolicy string imageOptimizationPolicy describes what optimizations the system can use when building images to reduce the final size or time spent building the image. The default policy is 'None' which means the final build image will be equivalent to an image created by the container image build API. The experimental policy 'SkipLayers' will avoid commiting new layers in between each image step, and will fail if the Dockerfile cannot provide compatibility with the 'None' policy. An additional experimental policy 'SkipLayersAndWarn' is the same as 'SkipLayers' but simply warns if compatibility cannot be preserved. 
noCache boolean noCache if set to true indicates that the container image build must be executed with the --no-cache=true flag pullSecret LocalObjectReference pullSecret is the name of a Secret that would be used for setting up the authentication for pulling the container images from the private Docker registries volumes array volumes is a list of input volumes that can be mounted into the builds runtime environment. Only a subset of Kubernetes Volume sources are supported by builds. More info: https://kubernetes.io/docs/concepts/storage/volumes volumes[] object BuildVolume describes a volume that is made available to build pods, such that it can be mounted into buildah's runtime environment. Only a subset of Kubernetes Volume sources are supported. 3.1.26. .spec.strategy.dockerStrategy.volumes Description volumes is a list of input volumes that can be mounted into the builds runtime environment. Only a subset of Kubernetes Volume sources are supported by builds. More info: https://kubernetes.io/docs/concepts/storage/volumes Type array 3.1.27. .spec.strategy.dockerStrategy.volumes[] Description BuildVolume describes a volume that is made available to build pods, such that it can be mounted into buildah's runtime environment. Only a subset of Kubernetes Volume sources are supported. Type object Required name source mounts Property Type Description mounts array mounts represents the location of the volume in the image build container mounts[] object BuildVolumeMount describes the mounting of a Volume within buildah's runtime environment. name string name is a unique identifier for this BuildVolume. It must conform to the Kubernetes DNS label standard and be unique within the pod. Names that collide with those added by the build controller will result in a failed build with an error message detailing which name caused the error. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names source object BuildVolumeSource represents the source of a volume to mount Only one of its supported types may be specified at any given time. 3.1.28. .spec.strategy.dockerStrategy.volumes[].mounts Description mounts represents the location of the volume in the image build container Type array 3.1.29. .spec.strategy.dockerStrategy.volumes[].mounts[] Description BuildVolumeMount describes the mounting of a Volume within buildah's runtime environment. Type object Required destinationPath Property Type Description destinationPath string destinationPath is the path within the buildah runtime environment at which the volume should be mounted. The transient mount within the build image and the backing volume will both be mounted read only. Must be an absolute path, must not contain '..' or ':', and must not collide with a destination path generated by the builder process Paths that collide with those added by the build controller will result in a failed build with an error message detailing which path caused the error. 3.1.30. .spec.strategy.dockerStrategy.volumes[].source Description BuildVolumeSource represents the source of a volume to mount Only one of its supported types may be specified at any given time. Type object Required type Property Type Description configMap ConfigMapVolumeSource configMap represents a ConfigMap that should populate this volume csi CSIVolumeSource csi represents ephemeral storage provided by external CSI drivers which support this capability secret SecretVolumeSource secret represents a Secret that should populate this volume. 
More info: https://kubernetes.io/docs/concepts/storage/volumes#secret type string type is the BuildVolumeSourceType for the volume source. Type must match the populated volume source. Valid types are: Secret, ConfigMap 3.1.31. .spec.strategy.jenkinsPipelineStrategy Description JenkinsPipelineBuildStrategy holds parameters specific to a Jenkins Pipeline build. Deprecated: use OpenShift Pipelines Type object Property Type Description env array (EnvVar) env contains additional environment variables you want to pass into a build pipeline. jenkinsfile string Jenkinsfile defines the optional raw contents of a Jenkinsfile which defines a Jenkins pipeline build. jenkinsfilePath string JenkinsfilePath is the optional path of the Jenkinsfile that will be used to configure the pipeline relative to the root of the context (contextDir). If both JenkinsfilePath & Jenkinsfile are both not specified, this defaults to Jenkinsfile in the root of the specified contextDir. 3.1.32. .spec.strategy.sourceStrategy Description SourceBuildStrategy defines input parameters specific to an Source build. Type object Required from Property Type Description env array (EnvVar) env contains additional environment variables you want to pass into a builder container. forcePull boolean forcePull describes if the builder should pull the images from registry prior to building. from ObjectReference from is reference to an DockerImage, ImageStreamTag, or ImageStreamImage from which the container image should be pulled incremental boolean incremental flag forces the Source build to do incremental builds if true. pullSecret LocalObjectReference pullSecret is the name of a Secret that would be used for setting up the authentication for pulling the container images from the private Docker registries scripts string scripts is the location of Source scripts volumes array volumes is a list of input volumes that can be mounted into the builds runtime environment. Only a subset of Kubernetes Volume sources are supported by builds. More info: https://kubernetes.io/docs/concepts/storage/volumes volumes[] object BuildVolume describes a volume that is made available to build pods, such that it can be mounted into buildah's runtime environment. Only a subset of Kubernetes Volume sources are supported. 3.1.33. .spec.strategy.sourceStrategy.volumes Description volumes is a list of input volumes that can be mounted into the builds runtime environment. Only a subset of Kubernetes Volume sources are supported by builds. More info: https://kubernetes.io/docs/concepts/storage/volumes Type array 3.1.34. .spec.strategy.sourceStrategy.volumes[] Description BuildVolume describes a volume that is made available to build pods, such that it can be mounted into buildah's runtime environment. Only a subset of Kubernetes Volume sources are supported. Type object Required name source mounts Property Type Description mounts array mounts represents the location of the volume in the image build container mounts[] object BuildVolumeMount describes the mounting of a Volume within buildah's runtime environment. name string name is a unique identifier for this BuildVolume. It must conform to the Kubernetes DNS label standard and be unique within the pod. Names that collide with those added by the build controller will result in a failed build with an error message detailing which name caused the error. 
More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names source object BuildVolumeSource represents the source of a volume to mount Only one of its supported types may be specified at any given time. 3.1.35. .spec.strategy.sourceStrategy.volumes[].mounts Description mounts represents the location of the volume in the image build container Type array 3.1.36. .spec.strategy.sourceStrategy.volumes[].mounts[] Description BuildVolumeMount describes the mounting of a Volume within buildah's runtime environment. Type object Required destinationPath Property Type Description destinationPath string destinationPath is the path within the buildah runtime environment at which the volume should be mounted. The transient mount within the build image and the backing volume will both be mounted read only. Must be an absolute path, must not contain '..' or ':', and must not collide with a destination path generated by the builder process Paths that collide with those added by the build controller will result in a failed build with an error message detailing which path caused the error. 3.1.37. .spec.strategy.sourceStrategy.volumes[].source Description BuildVolumeSource represents the source of a volume to mount Only one of its supported types may be specified at any given time. Type object Required type Property Type Description configMap ConfigMapVolumeSource configMap represents a ConfigMap that should populate this volume csi CSIVolumeSource csi represents ephemeral storage provided by external CSI drivers which support this capability secret SecretVolumeSource secret represents a Secret that should populate this volume. More info: https://kubernetes.io/docs/concepts/storage/volumes#secret type string type is the BuildVolumeSourceType for the volume source. Type must match the populated volume source. Valid types are: Secret, ConfigMap 3.1.38. .spec.triggeredBy Description triggeredBy describes which triggers started the most recent update to the build configuration and contains information about those triggers. Type array 3.1.39. .spec.triggeredBy[] Description BuildTriggerCause holds information about a triggered build. It is used for displaying build trigger data for each build and build configuration in oc describe. It is also used to describe which triggers led to the most recent update in the build configuration. Type object Property Type Description bitbucketWebHook object BitbucketWebHookCause has information about a Bitbucket webhook that triggered a build. genericWebHook object GenericWebHookCause holds information about a generic WebHook that triggered a build. githubWebHook object GitHubWebHookCause has information about a GitHub webhook that triggered a build. gitlabWebHook object GitLabWebHookCause has information about a GitLab webhook that triggered a build. imageChangeBuild object ImageChangeCause contains information about the image that triggered a build message string message is used to store a human readable message for why the build was triggered. E.g.: "Manually triggered by user", "Configuration change",etc. 3.1.40. .spec.triggeredBy[].bitbucketWebHook Description BitbucketWebHookCause has information about a Bitbucket webhook that triggered a build. Type object Property Type Description revision object SourceRevision is the revision or commit information from the source for the build secret string Secret is the obfuscated webhook secret that triggered a build. 3.1.41. 
.spec.triggeredBy[].bitbucketWebHook.revision Description SourceRevision is the revision or commit information from the source for the build Type object Required type Property Type Description git object GitSourceRevision is the commit information from a git source for a build type string type of the build source, may be one of 'Source', 'Dockerfile', 'Binary', or 'Images' 3.1.42. .spec.triggeredBy[].bitbucketWebHook.revision.git Description GitSourceRevision is the commit information from a git source for a build Type object Property Type Description author object SourceControlUser defines the identity of a user of source control commit string commit is the commit hash identifying a specific commit committer object SourceControlUser defines the identity of a user of source control message string message is the description of a specific commit 3.1.43. .spec.triggeredBy[].bitbucketWebHook.revision.git.author Description SourceControlUser defines the identity of a user of source control Type object Property Type Description email string email of the source control user name string name of the source control user 3.1.44. .spec.triggeredBy[].bitbucketWebHook.revision.git.committer Description SourceControlUser defines the identity of a user of source control Type object Property Type Description email string email of the source control user name string name of the source control user 3.1.45. .spec.triggeredBy[].genericWebHook Description GenericWebHookCause holds information about a generic WebHook that triggered a build. Type object Property Type Description revision object SourceRevision is the revision or commit information from the source for the build secret string secret is the obfuscated webhook secret that triggered a build. 3.1.46. .spec.triggeredBy[].genericWebHook.revision Description SourceRevision is the revision or commit information from the source for the build Type object Required type Property Type Description git object GitSourceRevision is the commit information from a git source for a build type string type of the build source, may be one of 'Source', 'Dockerfile', 'Binary', or 'Images' 3.1.47. .spec.triggeredBy[].genericWebHook.revision.git Description GitSourceRevision is the commit information from a git source for a build Type object Property Type Description author object SourceControlUser defines the identity of a user of source control commit string commit is the commit hash identifying a specific commit committer object SourceControlUser defines the identity of a user of source control message string message is the description of a specific commit 3.1.48. .spec.triggeredBy[].genericWebHook.revision.git.author Description SourceControlUser defines the identity of a user of source control Type object Property Type Description email string email of the source control user name string name of the source control user 3.1.49. .spec.triggeredBy[].genericWebHook.revision.git.committer Description SourceControlUser defines the identity of a user of source control Type object Property Type Description email string email of the source control user name string name of the source control user 3.1.50. .spec.triggeredBy[].githubWebHook Description GitHubWebHookCause has information about a GitHub webhook that triggered a build. Type object Property Type Description revision object SourceRevision is the revision or commit information from the source for the build secret string secret is the obfuscated webhook secret that triggered a build. 3.1.51. 
.spec.triggeredBy[].githubWebHook.revision Description SourceRevision is the revision or commit information from the source for the build Type object Required type Property Type Description git object GitSourceRevision is the commit information from a git source for a build type string type of the build source, may be one of 'Source', 'Dockerfile', 'Binary', or 'Images' 3.1.52. .spec.triggeredBy[].githubWebHook.revision.git Description GitSourceRevision is the commit information from a git source for a build Type object Property Type Description author object SourceControlUser defines the identity of a user of source control commit string commit is the commit hash identifying a specific commit committer object SourceControlUser defines the identity of a user of source control message string message is the description of a specific commit 3.1.53. .spec.triggeredBy[].githubWebHook.revision.git.author Description SourceControlUser defines the identity of a user of source control Type object Property Type Description email string email of the source control user name string name of the source control user 3.1.54. .spec.triggeredBy[].githubWebHook.revision.git.committer Description SourceControlUser defines the identity of a user of source control Type object Property Type Description email string email of the source control user name string name of the source control user 3.1.55. .spec.triggeredBy[].gitlabWebHook Description GitLabWebHookCause has information about a GitLab webhook that triggered a build. Type object Property Type Description revision object SourceRevision is the revision or commit information from the source for the build secret string Secret is the obfuscated webhook secret that triggered a build. 3.1.56. .spec.triggeredBy[].gitlabWebHook.revision Description SourceRevision is the revision or commit information from the source for the build Type object Required type Property Type Description git object GitSourceRevision is the commit information from a git source for a build type string type of the build source, may be one of 'Source', 'Dockerfile', 'Binary', or 'Images' 3.1.57. .spec.triggeredBy[].gitlabWebHook.revision.git Description GitSourceRevision is the commit information from a git source for a build Type object Property Type Description author object SourceControlUser defines the identity of a user of source control commit string commit is the commit hash identifying a specific commit committer object SourceControlUser defines the identity of a user of source control message string message is the description of a specific commit 3.1.58. .spec.triggeredBy[].gitlabWebHook.revision.git.author Description SourceControlUser defines the identity of a user of source control Type object Property Type Description email string email of the source control user name string name of the source control user 3.1.59. .spec.triggeredBy[].gitlabWebHook.revision.git.committer Description SourceControlUser defines the identity of a user of source control Type object Property Type Description email string email of the source control user name string name of the source control user 3.1.60. .spec.triggeredBy[].imageChangeBuild Description ImageChangeCause contains information about the image that triggered a build Type object Property Type Description fromRef ObjectReference fromRef contains detailed information about an image that triggered a build. imageID string imageID is the ID of the image that triggered a new build. 3.1.61. 
.status Description BuildStatus contains the status of a build Type object Required phase Property Type Description cancelled boolean cancelled describes if a cancel event was triggered for the build. completionTimestamp Time completionTimestamp is a timestamp representing the server time when this Build was finished, whether that build failed or succeeded. It reflects the time at which the Pod running the Build terminated. It is represented in RFC3339 form and is in UTC. conditions array Conditions represents the latest available observations of a build's current state. conditions[] object BuildCondition describes the state of a build at a certain point. config ObjectReference config is an ObjectReference to the BuildConfig this Build is based on. duration integer duration contains time.Duration object describing build time. logSnippet string logSnippet is the last few lines of the build log. This value is only set for builds that failed. message string message is a human-readable message indicating details about why the build has this status. output object BuildStatusOutput contains the status of the built image. outputDockerImageReference string outputDockerImageReference contains a reference to the container image that will be built by this build. Its value is computed from Build.Spec.Output.To, and should include the registry address, so that it can be used to push and pull the image. phase string phase is the point in the build lifecycle. Possible values are "New", "Pending", "Running", "Complete", "Failed", "Error", and "Cancelled". reason string reason is a brief CamelCase string that describes any failure and is meant for machine parsing and tidy display in the CLI. stages array stages contains details about each stage that occurs during the build including start time, duration (in milliseconds), and the steps that occured within each stage. stages[] object StageInfo contains details about a build stage. startTimestamp Time startTimestamp is a timestamp representing the server time when this Build started running in a Pod. It is represented in RFC3339 form and is in UTC. 3.1.62. .status.conditions Description Conditions represents the latest available observations of a build's current state. Type array 3.1.63. .status.conditions[] Description BuildCondition describes the state of a build at a certain point. Type object Required type status Property Type Description lastTransitionTime Time The last time the condition transitioned from one status to another. lastUpdateTime Time The last time this condition was updated. message string A human readable message indicating details about the transition. reason string The reason for the condition's last transition. status string Status of the condition, one of True, False, Unknown. type string Type of build condition. 3.1.64. .status.output Description BuildStatusOutput contains the status of the built image. Type object Property Type Description to object BuildStatusOutputTo describes the status of the built image with regards to image registry to which it was supposed to be pushed. 3.1.65. .status.output.to Description BuildStatusOutputTo describes the status of the built image with regards to image registry to which it was supposed to be pushed. Type object Property Type Description imageDigest string imageDigest is the digest of the built container image. The digest uniquely identifies the image in the registry to which it was pushed. Please note that this field may not always be set even if the push completes successfully - e.g. 
when the registry returns no digest or returns it in a format that the builder doesn't understand. 3.1.66. .status.stages Description stages contains details about each stage that occurs during the build including start time, duration (in milliseconds), and the steps that occured within each stage. Type array 3.1.67. .status.stages[] Description StageInfo contains details about a build stage. Type object Property Type Description durationMilliseconds integer durationMilliseconds identifies how long the stage took to complete in milliseconds. Note: the duration of a stage can exceed the sum of the duration of the steps within the stage as not all actions are accounted for in explicit build steps. name string name is a unique identifier for each build stage that occurs. startTime Time startTime is a timestamp representing the server time when this Stage started. It is represented in RFC3339 form and is in UTC. steps array steps contains details about each step that occurs during a build stage including start time and duration in milliseconds. steps[] object StepInfo contains details about a build step. 3.1.68. .status.stages[].steps Description steps contains details about each step that occurs during a build stage including start time and duration in milliseconds. Type array 3.1.69. .status.stages[].steps[] Description StepInfo contains details about a build step. Type object Property Type Description durationMilliseconds integer durationMilliseconds identifies how long the step took to complete in milliseconds. name string name is a unique identifier for each build step. startTime Time startTime is a timestamp representing the server time when this Step started. it is represented in RFC3339 form and is in UTC. 3.2. API endpoints The following API endpoints are available: /apis/build.openshift.io/v1/builds GET : list or watch objects of kind Build /apis/build.openshift.io/v1/watch/builds GET : watch individual changes to a list of Build. deprecated: use the 'watch' parameter with a list operation instead. /apis/build.openshift.io/v1/namespaces/{namespace}/builds DELETE : delete collection of Build GET : list or watch objects of kind Build POST : create a Build /apis/build.openshift.io/v1/watch/namespaces/{namespace}/builds GET : watch individual changes to a list of Build. deprecated: use the 'watch' parameter with a list operation instead. /apis/build.openshift.io/v1/namespaces/{namespace}/builds/{name} DELETE : delete a Build GET : read the specified Build PATCH : partially update the specified Build PUT : replace the specified Build /apis/build.openshift.io/v1/watch/namespaces/{namespace}/builds/{name} GET : watch changes to an object of kind Build. deprecated: use the 'watch' parameter with a list operation instead, filtered to a single item with the 'fieldSelector' parameter. /apis/build.openshift.io/v1/namespaces/{namespace}/builds/{name}/details PUT : replace details of the specified Build /apis/build.openshift.io/v1/namespaces/{namespace}/buildconfigs/{name}/webhooks POST : connect POST requests to webhooks of BuildConfig /apis/build.openshift.io/v1/namespaces/{namespace}/buildconfigs/{name}/webhooks/{path} POST : connect POST requests to webhooks of BuildConfig 3.2.1. /apis/build.openshift.io/v1/builds Table 3.1. Global query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. 
Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. pretty string If 'true', then the output is pretty printed. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. 
It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset sendInitialEvents boolean sendInitialEvents=true may be set together with watch=true . In that case, the watch stream will begin with synthetic events to produce the current state of objects in the collection. Once all such events have been sent, a synthetic "Bookmark" event will be sent. The bookmark will report the ResourceVersion (RV) corresponding to the set of objects, and be marked with "k8s.io/initial-events-end": "true" annotation. Afterwards, the watch stream will proceed as usual, sending watch events corresponding to changes (subsequent to the RV) to objects watched. When sendInitialEvents option is set, we require resourceVersionMatch option to also be set. The semantic of the watch request is as following: - resourceVersionMatch = NotOlderThan is interpreted as "data at least as new as the provided resourceVersion`" and the bookmark event is send when the state is synced to a `resourceVersion at least as fresh as the one provided by the ListOptions. If resourceVersion is unset, this is interpreted as "consistent read" and the bookmark event is send when the state is synced at least to the moment when request started being processed. - resourceVersionMatch set to any other value or unset Invalid error is returned. Defaults to true if resourceVersion="" or resourceVersion="0" (for backward compatibility reasons) and to false otherwise. timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. HTTP method GET Description list or watch objects of kind Build Table 3.2. HTTP responses HTTP code Reponse body 200 - OK BuildList schema 401 - Unauthorized Empty 3.2.2. /apis/build.openshift.io/v1/watch/builds Table 3.3. Global query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. 
Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. pretty string If 'true', then the output is pretty printed. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset sendInitialEvents boolean sendInitialEvents=true may be set together with watch=true . In that case, the watch stream will begin with synthetic events to produce the current state of objects in the collection. Once all such events have been sent, a synthetic "Bookmark" event will be sent. The bookmark will report the ResourceVersion (RV) corresponding to the set of objects, and be marked with "k8s.io/initial-events-end": "true" annotation. Afterwards, the watch stream will proceed as usual, sending watch events corresponding to changes (subsequent to the RV) to objects watched. When sendInitialEvents option is set, we require resourceVersionMatch option to also be set. 
The semantic of the watch request is as following: - resourceVersionMatch = NotOlderThan is interpreted as "data at least as new as the provided resourceVersion`" and the bookmark event is send when the state is synced to a `resourceVersion at least as fresh as the one provided by the ListOptions. If resourceVersion is unset, this is interpreted as "consistent read" and the bookmark event is send when the state is synced at least to the moment when request started being processed. - resourceVersionMatch set to any other value or unset Invalid error is returned. Defaults to true if resourceVersion="" or resourceVersion="0" (for backward compatibility reasons) and to false otherwise. timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. HTTP method GET Description watch individual changes to a list of Build. deprecated: use the 'watch' parameter with a list operation instead. Table 3.4. HTTP responses HTTP code Reponse body 200 - OK WatchEvent schema 401 - Unauthorized Empty 3.2.3. /apis/build.openshift.io/v1/namespaces/{namespace}/builds Table 3.5. Global path parameters Parameter Type Description namespace string object name and auth scope, such as for teams and projects Table 3.6. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method DELETE Description delete collection of Build Table 3.7. Query parameters Parameter Type Description continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. gracePeriodSeconds integer The duration in seconds before the object should be deleted. Value must be non-negative integer. The value zero indicates delete immediately. If this value is nil, the default grace period for the specified type will be used. Defaults to a per object value if not specified. 
zero means delete immediately. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. orphanDependents boolean Deprecated: please use the PropagationPolicy, this field will be deprecated in 1.7. Should the dependent objects be orphaned. If true/false, the "orphan" finalizer will be added to/removed from the object's finalizers list. Either this field or PropagationPolicy may be set, but not both. propagationPolicy string Whether and how garbage collection will be performed. Either this field or OrphanDependents may be set, but not both. The default policy is decided by the existing finalizer set in the metadata.finalizers and the resource-specific default policy. Acceptable values are: 'Orphan' - orphan the dependents; 'Background' - allow the garbage collector to delete the dependents in the background; 'Foreground' - a cascading policy that deletes all dependents in the foreground. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset sendInitialEvents boolean sendInitialEvents=true may be set together with watch=true . In that case, the watch stream will begin with synthetic events to produce the current state of objects in the collection. Once all such events have been sent, a synthetic "Bookmark" event will be sent. The bookmark will report the ResourceVersion (RV) corresponding to the set of objects, and be marked with "k8s.io/initial-events-end": "true" annotation. Afterwards, the watch stream will proceed as usual, sending watch events corresponding to changes (subsequent to the RV) to objects watched. When sendInitialEvents option is set, we require resourceVersionMatch option to also be set. 
The semantic of the watch request is as following: - resourceVersionMatch = NotOlderThan is interpreted as "data at least as new as the provided resourceVersion`" and the bookmark event is send when the state is synced to a `resourceVersion at least as fresh as the one provided by the ListOptions. If resourceVersion is unset, this is interpreted as "consistent read" and the bookmark event is send when the state is synced at least to the moment when request started being processed. - resourceVersionMatch set to any other value or unset Invalid error is returned. Defaults to true if resourceVersion="" or resourceVersion="0" (for backward compatibility reasons) and to false otherwise. timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. Table 3.8. Body parameters Parameter Type Description body DeleteOptions schema Table 3.9. HTTP responses HTTP code Reponse body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list or watch objects of kind Build Table 3.10. Query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. 
Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset sendInitialEvents boolean sendInitialEvents=true may be set together with watch=true . In that case, the watch stream will begin with synthetic events to produce the current state of objects in the collection. Once all such events have been sent, a synthetic "Bookmark" event will be sent. The bookmark will report the ResourceVersion (RV) corresponding to the set of objects, and be marked with "k8s.io/initial-events-end": "true" annotation. Afterwards, the watch stream will proceed as usual, sending watch events corresponding to changes (subsequent to the RV) to objects watched. When sendInitialEvents option is set, we require resourceVersionMatch option to also be set. The semantic of the watch request is as following: - resourceVersionMatch = NotOlderThan is interpreted as "data at least as new as the provided resourceVersion`" and the bookmark event is send when the state is synced to a `resourceVersion at least as fresh as the one provided by the ListOptions. If resourceVersion is unset, this is interpreted as "consistent read" and the bookmark event is send when the state is synced at least to the moment when request started being processed. - resourceVersionMatch set to any other value or unset Invalid error is returned. Defaults to true if resourceVersion="" or resourceVersion="0" (for backward compatibility reasons) and to false otherwise. timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. Table 3.11. HTTP responses HTTP code Reponse body 200 - OK BuildList schema 401 - Unauthorized Empty HTTP method POST Description create a Build Table 3.12. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. 
Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 3.13. Body parameters Parameter Type Description body Build schema Table 3.14. HTTP responses HTTP code Reponse body 200 - OK Build schema 201 - Created Build schema 202 - Accepted Build schema 401 - Unauthorized Empty 3.2.4. /apis/build.openshift.io/v1/watch/namespaces/{namespace}/builds Table 3.15. Global path parameters Parameter Type Description namespace string object name and auth scope, such as for teams and projects Table 3.16. Global query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. 
labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. pretty string If 'true', then the output is pretty printed. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset sendInitialEvents boolean sendInitialEvents=true may be set together with watch=true . In that case, the watch stream will begin with synthetic events to produce the current state of objects in the collection. Once all such events have been sent, a synthetic "Bookmark" event will be sent. The bookmark will report the ResourceVersion (RV) corresponding to the set of objects, and be marked with "k8s.io/initial-events-end": "true" annotation. Afterwards, the watch stream will proceed as usual, sending watch events corresponding to changes (subsequent to the RV) to objects watched. When sendInitialEvents option is set, we require resourceVersionMatch option to also be set. The semantic of the watch request is as following: - resourceVersionMatch = NotOlderThan is interpreted as "data at least as new as the provided resourceVersion`" and the bookmark event is send when the state is synced to a `resourceVersion at least as fresh as the one provided by the ListOptions. If resourceVersion is unset, this is interpreted as "consistent read" and the bookmark event is send when the state is synced at least to the moment when request started being processed. - resourceVersionMatch set to any other value or unset Invalid error is returned. Defaults to true if resourceVersion="" or resourceVersion="0" (for backward compatibility reasons) and to false otherwise. timeoutSeconds integer Timeout for the list/watch call. 
This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. HTTP method GET Description watch individual changes to a list of Build. deprecated: use the 'watch' parameter with a list operation instead. Table 3.17. HTTP responses HTTP code Reponse body 200 - OK WatchEvent schema 401 - Unauthorized Empty 3.2.5. /apis/build.openshift.io/v1/namespaces/{namespace}/builds/{name} Table 3.18. Global path parameters Parameter Type Description name string name of the Build namespace string object name and auth scope, such as for teams and projects Table 3.19. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method DELETE Description delete a Build Table 3.20. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed gracePeriodSeconds integer The duration in seconds before the object should be deleted. Value must be non-negative integer. The value zero indicates delete immediately. If this value is nil, the default grace period for the specified type will be used. Defaults to a per object value if not specified. zero means delete immediately. orphanDependents boolean Deprecated: please use the PropagationPolicy, this field will be deprecated in 1.7. Should the dependent objects be orphaned. If true/false, the "orphan" finalizer will be added to/removed from the object's finalizers list. Either this field or PropagationPolicy may be set, but not both. propagationPolicy string Whether and how garbage collection will be performed. Either this field or OrphanDependents may be set, but not both. The default policy is decided by the existing finalizer set in the metadata.finalizers and the resource-specific default policy. Acceptable values are: 'Orphan' - orphan the dependents; 'Background' - allow the garbage collector to delete the dependents in the background; 'Foreground' - a cascading policy that deletes all dependents in the foreground. Table 3.21. Body parameters Parameter Type Description body DeleteOptions schema Table 3.22. HTTP responses HTTP code Reponse body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified Build Table 3.23. HTTP responses HTTP code Reponse body 200 - OK Build schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified Build Table 3.24. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . This field is required for apply requests (application/apply-patch) but optional for non-apply patch types (JsonPatch, MergePatch, StrategicMergePatch). 
fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. force boolean Force is going to "force" Apply requests. It means user will re-acquire conflicting fields owned by other people. Force flag must be unset for non-apply patch requests. Table 3.25. Body parameters Parameter Type Description body Patch schema Table 3.26. HTTP responses HTTP code Reponse body 200 - OK Build schema 201 - Created Build schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified Build Table 3.27. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 3.28. Body parameters Parameter Type Description body Build schema Table 3.29. HTTP responses HTTP code Reponse body 200 - OK Build schema 201 - Created Build schema 401 - Unauthorized Empty 3.2.6. /apis/build.openshift.io/v1/watch/namespaces/{namespace}/builds/{name} Table 3.30. Global path parameters Parameter Type Description name string name of the Build namespace string object name and auth scope, such as for teams and projects Table 3.31. Global query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". 
Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. pretty string If 'true', then the output is pretty printed. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. 
It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset sendInitialEvents boolean sendInitialEvents=true may be set together with watch=true . In that case, the watch stream will begin with synthetic events to produce the current state of objects in the collection. Once all such events have been sent, a synthetic "Bookmark" event will be sent. The bookmark will report the ResourceVersion (RV) corresponding to the set of objects, and be marked with "k8s.io/initial-events-end": "true" annotation. Afterwards, the watch stream will proceed as usual, sending watch events corresponding to changes (subsequent to the RV) to objects watched. When sendInitialEvents option is set, we require resourceVersionMatch option to also be set. The semantic of the watch request is as following: - resourceVersionMatch = NotOlderThan is interpreted as "data at least as new as the provided resourceVersion`" and the bookmark event is send when the state is synced to a `resourceVersion at least as fresh as the one provided by the ListOptions. If resourceVersion is unset, this is interpreted as "consistent read" and the bookmark event is send when the state is synced at least to the moment when request started being processed. - resourceVersionMatch set to any other value or unset Invalid error is returned. Defaults to true if resourceVersion="" or resourceVersion="0" (for backward compatibility reasons) and to false otherwise. timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. HTTP method GET Description watch changes to an object of kind Build. deprecated: use the 'watch' parameter with a list operation instead, filtered to a single item with the 'fieldSelector' parameter. Table 3.32. HTTP responses HTTP code Reponse body 200 - OK WatchEvent schema 401 - Unauthorized Empty 3.2.7. /apis/build.openshift.io/v1/namespaces/{namespace}/builds/{name}/details Table 3.33. Global path parameters Parameter Type Description name string name of the Build namespace string object name and auth scope, such as for teams and projects Table 3.34. Global query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. 
- Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. pretty string If 'true', then the output is pretty printed. HTTP method PUT Description replace details of the specified Build Table 3.35. Body parameters Parameter Type Description body Build schema Table 3.36. HTTP responses HTTP code Reponse body 200 - OK Build schema 201 - Created Build schema 401 - Unauthorized Empty 3.2.8. /apis/build.openshift.io/v1/namespaces/{namespace}/buildconfigs/{name}/webhooks Table 3.37. Global path parameters Parameter Type Description name string name of the Build namespace string object name and auth scope, such as for teams and projects Table 3.38. Global query parameters Parameter Type Description path string Path is the URL path to use for the current proxy request to pod. HTTP method POST Description connect POST requests to webhooks of BuildConfig Table 3.39. HTTP responses HTTP code Reponse body 200 - OK string 401 - Unauthorized Empty 3.2.9. /apis/build.openshift.io/v1/namespaces/{namespace}/buildconfigs/{name}/webhooks/{path} Table 3.40. Global path parameters Parameter Type Description name string name of the Build namespace string object name and auth scope, such as for teams and projects path string path to the resource Table 3.41. Global query parameters Parameter Type Description path string Path is the URL path to use for the current proxy request to pod. HTTP method POST Description connect POST requests to webhooks of BuildConfig Table 3.42. HTTP responses HTTP code Reponse body 200 - OK string 401 - Unauthorized Empty | [
"\"postCommit\": { \"script\": \"rake test --verbose\", }",
"The above is a convenient form which is equivalent to:",
"\"postCommit\": { \"command\": [\"/bin/sh\", \"-ic\"], \"args\": [\"rake test --verbose\"] }",
"\"postCommit\": { \"commit\": [\"rake\", \"test\", \"--verbose\"] }",
"Command overrides the image entrypoint in the exec form, as documented in Docker: https://docs.docker.com/engine/reference/builder/#entrypoint.",
"\"postCommit\": { \"args\": [\"rake\", \"test\", \"--verbose\"] }",
"This form is only useful if the image entrypoint can handle arguments.",
"\"postCommit\": { \"script\": \"rake test USD1\", \"args\": [\"--verbose\"] }",
"This form is useful if you need to pass arguments that would otherwise be hard to quote properly in the shell script. In the script, USD0 will be \"/bin/sh\" and USD1, USD2, etc, are the positional arguments from Args.",
"\"postCommit\": { \"command\": [\"rake\", \"test\"], \"args\": [\"--verbose\"] }",
"This form is equivalent to appending the arguments to the Command slice."
]
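The fragments above show the accepted shapes of the postCommit hook. As a minimal, hedged illustration of applying the first form to an existing BuildConfig — the name myapp is a placeholder, not taken from this document — a merge patch along these lines should work:

# Add a post-commit test hook to a hypothetical BuildConfig named "myapp"
oc patch bc/myapp --type=merge -p '{"spec":{"postCommit":{"script":"rake test --verbose"}}}'
# Inspect the resulting hook
oc get bc/myapp -o jsonpath='{.spec.postCommit}'

The hook runs in a transient container based on the newly built image, and a non-zero exit status marks the build as failed.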
| https://docs.redhat.com/en/documentation/openshift_container_platform/4.14/html/workloads_apis/build-build-openshift-io-v1 |
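As a hedged sketch of calling the endpoints listed in section 3.2 directly — the API server address, bearer token, namespace myproject, build name myapp-1, and webhook secret are all placeholders:

# List builds in a namespace (GET /apis/build.openshift.io/v1/namespaces/{namespace}/builds)
curl -k -H "Authorization: Bearer $TOKEN" \
  https://api.example.com:6443/apis/build.openshift.io/v1/namespaces/myproject/builds

# Read a single Build (GET .../namespaces/{namespace}/builds/{name})
curl -k -H "Authorization: Bearer $TOKEN" \
  https://api.example.com:6443/apis/build.openshift.io/v1/namespaces/myproject/builds/myapp-1

# Trigger a generic webhook on a BuildConfig (POST .../buildconfigs/{name}/webhooks/{path})
curl -k -X POST \
  https://api.example.com:6443/apis/build.openshift.io/v1/namespaces/myproject/buildconfigs/myapp/webhooks/<secret>/generic

The expected response bodies (BuildList, Build, and the webhook's string body) are the ones listed in the HTTP response tables above.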
Chapter 3. Context Functions | Chapter 3. Context Functions The context functions provide additional information about where an event occurred. These functions can provide information such as a backtrace to where the event occurred and the current register values for the processor. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/systemtap_tapset_reference/context_stp |
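As an illustrative example that is not taken from this chapter, the following SystemTap one-liner combines several context functions; the probed kernel function vfs_read is only a convenient target and may differ on your kernel:

# Print the current process name, PID, and CPU, then a kernel backtrace, on the first vfs_read() hit
stap -e 'probe kernel.function("vfs_read") { printf("%s (pid %d, cpu %d)\n", execname(), pid(), cpu()); print_backtrace(); exit() }'

Here execname(), pid(), cpu(), and print_backtrace() are among the context functions this chapter documents.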
Chapter 7. Technology Previews | Chapter 7. Technology Previews This part provides a list of all Technology Previews available in Red Hat Enterprise Linux 8.10. For information on Red Hat scope of support for Technology Preview features, see Technology Preview Features Support Scope . 7.1. Infrastructure services Socket API for TuneD available as a Technology Preview The socket API for controlling TuneD through a UNIX domain socket is now available as a Technology Preview. The socket API maps one-to-one with the D-Bus API and provides an alternative communication method for cases where D-Bus is not available. By using the socket API, you can control the TuneD daemon to optimize the performance, and change the values of various tuning parameters. The socket API is disabled by default, you can enable it in the tuned-main.conf file. Bugzilla:2113900 7.2. Networking AF_XDP available as a Technology Preview Address Family eXpress Data Path ( AF_XDP ) socket is designed for high-performance packet processing. It accompanies XDP and grants efficient redirection of programmatically selected packets to user space applications for further processing. Bugzilla:1633143 [1] XDP features that are available as Technology Preview Red Hat provides the usage of the following eXpress Data Path (XDP) features as unsupported Technology Preview: Loading XDP programs on architectures other than AMD and Intel 64-bit. Note that the libxdp library is not available for architectures other than AMD and Intel 64-bit. The XDP hardware offloading. Bugzilla:1889737 Multi-protocol Label Switching for TC available as a Technology Preview The Multi-protocol Label Switching (MPLS) is an in-kernel data-forwarding mechanism to route traffic flow across enterprise networks. In an MPLS network, the router that receives packets decides the further route of the packets based on the labels attached to the packet. With the usage of labels, the MPLS network has the ability to handle packets with particular characteristics. For example, you can add tc filters for managing packets received from specific ports or carrying specific types of traffic, in a consistent way. After packets enter the enterprise network, MPLS routers perform multiple operations on the packets, such as push to add a label, swap to update a label, and pop to remove a label. MPLS allows defining actions locally based on one or multiple labels in RHEL. You can configure routers and set traffic control ( tc ) filters to take appropriate actions on the packets based on the MPLS label stack entry ( lse ) elements, such as label , traffic class , bottom of stack , and time to live . For example, the following command adds a filter to the enp0s1 network interface to match incoming packets having the first label 12323 and the second label 45832 . On matching packets, the following actions are taken: the first MPLS TTL is decremented (packet is dropped if TTL reaches 0) the first MPLS label is changed to 549386 the resulting packet is transmitted over enp0s2 , with destination MAC address 00:00:5E:00:53:01 and source MAC address 00:00:5E:00:53:02 Bugzilla:1814836 [1] , Bugzilla:1856415 act_mpls module available as a Technology Preview The act_mpls module is now available in the kernel-modules-extra rpm as a Technology Preview. The module allows the application of Multiprotocol Label Switching (MPLS) actions with Traffic Control (TC) filters, for example, push and pop MPLS label stack entries with TC filters. 
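The example command referenced in the MPLS note above did not survive extraction. The following is a hedged reconstruction based on the description in the text; flower and act_mpls option syntax can vary between iproute2 versions, so treat it as a sketch rather than the original block:

# Match packets on enp0s1 carrying label 12323 in the first LSE and 45832 in the second,
# decrement the first TTL, rewrite the first label to 549386, set the MAC addresses,
# and redirect the result out of enp0s2
tc filter add dev enp0s1 ingress protocol mpls_uc flower \
    mpls lse depth 1 label 12323 lse depth 2 label 45832 \
    action mpls dec_ttl pipe \
    action mpls modify label 549386 pipe \
    action pedit ex munge eth dst set 00:00:5E:00:53:01 pipe \
    action pedit ex munge eth src set 00:00:5E:00:53:02 pipe \
    action mirred egress redirect dev enp0s2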
The module also allows the Label, Traffic Class, Bottom of Stack, and Time to Live fields to be set independently. Bugzilla:1839311 [1] The systemd-resolved service is now available as a Technology Preview The systemd-resolved service provides name resolution to local applications. The service implements a caching and validating DNS stub resolver, a Link-Local Multicast Name Resolution (LLMNR), and Multicast DNS resolver and responder. Note that, even if the systemd package provides systemd-resolved , this service is an unsupported Technology Preview. Bugzilla:1906489 7.3. Kernel Soft-RoCE available as a Technology Preview Remote Direct Memory Access (RDMA) over Converged Ethernet (RoCE) is a network protocol that implements RDMA over Ethernet. Soft-RoCE is the software implementation of RoCE which maintains two protocol versions, RoCE v1 and RoCE v2. The Soft-RoCE driver, rdma_rxe , is available as an unsupported Technology Preview in RHEL 8. Bugzilla:1605216 [1] eBPF available as a Technology Preview Extended Berkeley Packet Filter (eBPF) is an in-kernel virtual machine that allows code execution in the kernel space, in the restricted sandbox environment with access to a limited set of functions. The virtual machine includes a new system call bpf() , which enables creating various types of maps, and also allows to load programs in a special assembly-like code. The code is then loaded to the kernel and translated to the native machine code with just-in-time compilation. Note that the bpf() syscall can be successfully used only by a user with the CAP_SYS_ADMIN capability, such as the root user. See the bpf(2) manual page for more information. The loaded programs can be attached onto a variety of points (sockets, tracepoints, packet reception) to receive and process data. There are numerous components shipped by Red Hat that utilize the eBPF virtual machine. Each component is in a different development phase. All components are available as a Technology Preview, unless a specific component is indicated as supported. The following notable eBPF components are currently available as a Technology Preview: AF_XDP , a socket for connecting the eXpress Data Path (XDP) path to user space for applications that prioritize packet processing performance. Bugzilla:1559616 [1] The kexec fast reboot feature is available as a Technology Preview The kexec fast reboot feature continues to be available as a Technology Preview. The kexec fast reboot significantly speeds the boot process as you can boot directly into the second kernel without passing through the Basic Input/Output System (BIOS) or firmware first. To use this feature: Load the kexec kernel manually. Reboot for changes to take effect. Note that the kexec fast reboot capability is available with a limited scope of support on RHEL 9 and later releases. Bugzilla:1769727 The accel-config package available as a Technology Preview The accel-config package is now available on Intel EM64T and AMD64 architectures as a Technology Preview. This package helps in controlling and configuring data-streaming accelerator (DSA) sub-system in the Linux Kernel. Also, it configures devices through sysfs (pseudo-filesystem), saves and loads the configuration in the json format. Bugzilla:1843266 [1] 7.4. File systems and storage File system DAX is now available for ext4 and XFS as a Technology Preview In Red Hat Enterprise Linux 8, the file system DAX is available as a Technology Preview. 
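The file system DAX note that begins here continues below. As a minimal sketch of the setup it describes — the persistent-memory device /dev/pmem0 and the mount point /mnt/pmem are assumptions, and on RHEL 8 the XFS reflink feature generally has to be disabled for DAX:

# Create an XFS file system on a persistent-memory namespace and mount it with DAX enabled
mkfs.xfs -m reflink=0 /dev/pmem0
mkdir -p /mnt/pmem
mount -o dax /dev/pmem0 /mnt/pmem
# Confirm that the dax option is active for the mount
mount | grep /mnt/pmem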
DAX provides a means for an application to directly map persistent memory into its address space. To use DAX, a system must have some form of persistent memory available, usually in the form of one or more Non-Volatile Dual In-line Memory Modules (NVDIMMs), and a file system that provides the capability of DAX must be created on the NVDIMM(s). Also, the file system must be mounted with the dax mount option. Then, a mmap of a file on the dax-mounted file system results in a direct mapping of storage into the application's address space. Bugzilla:1627455 [1] OverlayFS OverlayFS is a type of union file system. It enables you to overlay one file system on top of another. Changes are recorded in the upper file system, while the lower file system remains unmodified. This allows multiple users to share a file-system image, such as a container or a DVD-ROM, where the base image is on read-only media. OverlayFS remains a Technology Preview under most circumstances. As such, the kernel logs warnings when this technology is activated. Full support is available for OverlayFS when used with supported container engines ( podman , cri-o , or buildah ) under the following restrictions: OverlayFS is supported for use only as a container engine graph driver or other specialized use cases, such as squashed kdump initramfs. Its use is supported primarily for container COW content, not for persistent storage. You must place any persistent storage on non-OverlayFS volumes. You can use only the default container engine configuration: one level of overlay, one lowerdir, and both lower and upper levels are on the same file system. Only XFS is currently supported for use as a lower layer file system. Additionally, the following rules and limitations apply to using OverlayFS: The OverlayFS kernel ABI and user-space behavior are not considered stable, and might change in future updates. OverlayFS provides a restricted set of the POSIX standards. Test your application thoroughly before deploying it with OverlayFS. The following cases are not POSIX-compliant: Lower files opened with O_RDONLY do not receive st_atime updates when the files are read. Lower files opened with O_RDONLY , then mapped with MAP_SHARED are inconsistent with subsequent modification. Fully compliant st_ino or d_ino values are not enabled by default on RHEL 8, but you can enable full POSIX compliance for them with a module option or mount option. To get consistent inode numbering, use the xino=on mount option. You can also use the redirect_dir=on and index=on options to improve POSIX compliance. These two options make the format of the upper layer incompatible with an overlay without these options. That is, you might get unexpected results or errors if you create an overlay with redirect_dir=on or index=on , unmount the overlay, then mount the overlay without these options. To determine whether an existing XFS file system is eligible for use as an overlay, use the following command and see if the ftype=1 option is enabled: SELinux security labels are enabled by default in all supported container engines with OverlayFS. Several known issues are associated with OverlayFS in this release. For details, see Non-standard behavior in the Linux kernel documentation . For more information about OverlayFS, see the Linux kernel documentation . Bugzilla:1690207 [1] Stratis is now available as a Technology Preview Stratis is a new local storage manager, which provides managed file systems on top of pools of storage with additional features. 
It is provided as a Technology Preview. With Stratis, you can perform the following storage tasks: Manage snapshots and thin provisioning Automatically grow file system sizes as needed Maintain file systems To administer Stratis storage, use the stratis utility, which communicates with the stratisd background service. For more information, see the Setting up Stratis file systems documentation. RHEL 8.5 updated Stratis to version 2.4.2. For more information, see the Stratis 2.4.2 Release Notes . Jira:RHELPLAN-1212 [1] NVMe/TCP host is available as a Technology Preview Accessing and sharing Nonvolatile Memory Express (NVMe) storage over TCP/IP networks (NVMe/TCP) and its corresponding nvme_tcp.ko kernel module has been added as a Technology Preview. The use of NVMe/TCP as a host is manageable with tools provided by the nvme-cli package. The NVMe/TCP host Technology Preview is included only for testing purposes and is not currently planned for full support. Bugzilla:1696451 [1] Setting up a Samba server on an IdM domain member is provided as a Technology Preview With this update, you can now set up a Samba server on an Identity Management (IdM) domain member. The new ipa-client-samba utility provided by the same-named package adds a Samba-specific Kerberos service principal to IdM and prepares the IdM client. For example, the utility creates the /etc/samba/smb.conf with the ID mapping configuration for the sss ID mapping back end. As a result, administrators can now set up Samba on an IdM domain member. Due to IdM Trust Controllers not supporting the Global Catalog Service, AD-enrolled Windows hosts cannot find IdM users and groups in Windows. Additionally, IdM Trust Controllers do not support resolving IdM groups using the Distributed Computing Environment / Remote Procedure Calls (DCE/RPC) protocols. As a consequence, AD users can only access the Samba shares and printers from IdM clients. For details, see Setting up Samba on an IdM domain member . Jira:RHELPLAN-13195 [1] 7.5. High availability and clusters Pacemaker podman bundles available as a Technology Preview Pacemaker container bundles now run on Podman, with the container bundle feature being available as a Technology Preview. There is one exception to this feature being Technology Preview: Red Hat fully supports the use of Pacemaker bundles for Red Hat OpenStack. Bugzilla:1619620 [1] Heuristics in corosync-qdevice available as a Technology Preview Heuristics are a set of commands executed locally on startup, cluster membership change, successful connect to corosync-qnetd , and, optionally, on a periodic basis. When all commands finish successfully on time (their return error code is zero), heuristics have passed; otherwise, they have failed. The heuristics result is sent to corosync-qnetd where it is used in calculations to determine which partition should be quorate. Bugzilla:1784200 New fence-agents-heuristics-ping fence agent As a Technology Preview, Pacemaker now provides the fence_heuristics_ping agent. This agent aims to open a class of experimental fence agents that do no actual fencing by themselves but instead exploit the behavior of fencing levels in a new way. If the heuristics agent is configured on the same fencing level as the fence agent that does the actual fencing but is configured before that agent in sequence, fencing issues an off action on the heuristics agent before it attempts to do so on the agent that does the fencing. 
If the heuristics agent gives a negative result for the off action, it is already clear that the fencing level is not going to succeed, causing Pacemaker fencing to skip the step of issuing the off action on the agent that does the fencing. A heuristics agent can exploit this behavior to prevent the agent that does the actual fencing from fencing a node under certain conditions. A user might want to use this agent, especially in a two-node cluster, when it would not make sense for a node to fence the peer if it can know beforehand that it would not be able to take over the services properly. For example, it might not make sense for a node to take over services if it has problems reaching the networking uplink, making the services unreachable to clients, a situation which a ping to a router might detect. Bugzilla:1775847 [1] 7.6. Identity Management Identity Management JSON-RPC API available as Technology Preview An API is available for Identity Management (IdM). To view the API, IdM also provides an API browser as a Technology Preview. Previously, the IdM API was enhanced to enable multiple versions of API commands. These enhancements could change the behavior of a command in an incompatible way. Users are now able to continue using existing tools and scripts even if the IdM API changes. This enables: Administrators to use earlier or later versions of IdM on the server than on the managing client. Developers to use a specific version of an IdM call, even if the IdM version changes on the server. In all cases, the communication with the server is possible, regardless of whether one side uses, for example, a newer version that introduces new options for a feature. For details on using the API, see Using the Identity Management API to Communicate with the IdM Server (TECHNOLOGY PREVIEW) . Bugzilla:1664719 DNSSEC available as Technology Preview in IdM Identity Management (IdM) servers with integrated DNS now implement DNS Security Extensions (DNSSEC), a set of extensions to DNS that enhance security of the DNS protocol. DNS zones hosted on IdM servers can be automatically signed using DNSSEC. The cryptographic keys are automatically generated and rotated. Users who decide to secure their DNS zones with DNSSEC are advised to read and follow these documents: DNSSEC Operational Practices, Version 2 Secure Domain Name System (DNS) Deployment Guide DNSSEC Key Rollover Timing Considerations Note that IdM servers with integrated DNS use DNSSEC to validate DNS answers obtained from other DNS servers. This might affect the availability of DNS zones that are not configured in accordance with recommended naming practices. Bugzilla:1664718 ACME available as a Technology Preview The Automated Certificate Management Environment (ACME) service is now available in Identity Management (IdM) as a Technology Preview. ACME is a protocol for automated identifier validation and certificate issuance. Its goal is to improve security by reducing certificate lifetimes and avoiding manual processes in certificate lifecycle management. In RHEL, the ACME service uses the Red Hat Certificate System (RHCS) PKI ACME responder. The RHCS ACME subsystem is automatically deployed on every certificate authority (CA) server in the IdM deployment, but it does not service requests until the administrator enables it. RHCS uses the acmeIPAServerCert profile when issuing ACME certificates. The validity period of issued certificates is 90 days. Enabling or disabling the ACME service affects the entire IdM deployment.
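Once the service has been enabled, as described below, any standard ACME client can request certificates from the IdM ACME responder. As a rough sketch only, using certbot, where the directory URL is an assumption based on the usual ipa-ca host name convention and the host and domain names are placeholders:
certbot certonly --standalone --server https://ipa-ca.example.com/acme/directory -d client.example.com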
Important It is recommended to enable ACME only in an IdM deployment where all servers are running RHEL 8.4 or later. Earlier RHEL versions do not include the ACME service, which can cause problems in mixed-version deployments. For example, a CA server without ACME can cause client connections to fail, because it uses a different DNS Subject Alternative Name (SAN). Warning Currently, RHCS does not remove expired certificates. Because ACME certificates expire after 90 days, the expired certificates can accumulate and this can affect performance. To enable ACME across the whole IdM deployment, use the ipa-acme-manage enable command: To disable ACME across the whole IdM deployment, use the ipa-acme-manage disable command: To check whether the ACME service is installed and if it is enabled or disabled, use the ipa-acme-manage status command: Bugzilla:1628987 [1] sssd-idp sub-package available as a Technology Preview The sssd-idp sub-package for SSSD contains the oidc_child and krb5 idp plugins, which are client-side components that perform OAuth2 authentication against Identity Management (IdM) servers. This feature is available only with IdM servers on RHEL 8.7 and later. Bugzilla:2065692 SSSD internal krb5 idp plugin available as a Technology Preview The SSSD krb5 idp plugin allows you to authenticate against an external identity provider (IdP) using the OAuth2 protocol. This feature is available only with IdM servers on RHEL 8.7 and later. Bugzilla:2056483 7.7. Desktop GNOME for the 64-bit ARM architecture available as a Technology Preview The GNOME desktop environment is available for the 64-bit ARM architecture as a Technology Preview. You can now connect to the desktop session on a 64-bit ARM server using VNC. As a result, you can manage the server using graphical applications. A limited set of graphical applications is available on 64-bit ARM. For example: The Firefox web browser Red Hat Subscription Manager ( subscription-manager-cockpit ) Firewall Configuration ( firewall-config ) Disk Usage Analyzer ( baobab ) Using Firefox, you can connect to the Cockpit service on the server. Certain applications, such as LibreOffice, only provide a command-line interface, and their graphical interface is disabled. Jira:RHELPLAN-27394 [1] , Bugzilla:1667516, Bugzilla:1724302 , Bugzilla:1667225 GNOME for the IBM Z architecture available as a Technology Preview The GNOME desktop environment is available for the IBM Z architecture as a Technology Preview. You can now connect to the desktop session on an IBM Z server using VNC. As a result, you can manage the server using graphical applications. A limited set of graphical applications is available on IBM Z. For example: The Firefox web browser Red Hat Subscription Manager ( subscription-manager-cockpit ) Firewall Configuration ( firewall-config ) Disk Usage Analyzer ( baobab ) Using Firefox, you can connect to the Cockpit service on the server. Certain applications, such as LibreOffice, only provide a command-line interface, and their graphical interface is disabled. Jira:RHELPLAN-27737 [1] 7.8. Graphics infrastructures VNC remote console available as a Technology Preview for the 64-bit ARM architecture On the 64-bit ARM architecture, the Virtual Network Computing (VNC) remote console is available as a Technology Preview. Note that the rest of the graphics stack is currently unverified for the 64-bit ARM architecture. Bugzilla:1698565 [1] 7.9. 
Virtualization KVM virtualization is usable in RHEL 8 Hyper-V virtual machines As a Technology Preview, nested KVM virtualization can now be used on the Microsoft Hyper-V hypervisor. As a result, you can create virtual machines on a RHEL 8 guest system running on a Hyper-V host. Note that currently, this feature only works on Intel and AMD systems. In addition, nested virtualization is in some cases not enabled by default on Hyper-V. To enable it, see the following Microsoft documentation: https://docs.microsoft.com/en-us/virtualization/hyper-v-on-windows/user-guide/nested-virtualization Bugzilla:1519039 [1] AMD SEV and SEV-ES for KVM virtual machines As a Technology Preview, RHEL 8 provides the Secure Encrypted Virtualization (SEV) feature for AMD EPYC host machines that use the KVM hypervisor. If enabled on a virtual machine (VM), SEV encrypts the VM's memory to protect the VM from access by the host. This increases the security of the VM. In addition, the enhanced Encrypted State version of SEV (SEV-ES) is also provided as Technology Preview. SEV-ES encrypts all CPU register contents when a VM stops running. This prevents the host from modifying the VM's CPU registers or reading any information from them. Note that SEV and SEV-ES work only on the 2nd generation of AMD EPYC CPUs (codenamed Rome) or later. Also note that RHEL 8 includes SEV and SEV-ES encryption, but not the SEV and SEV-ES security attestation. Bugzilla:1501618 [1] , Bugzilla:1501607, Jira:RHELPLAN-7677 Intel vGPU available as a Technology Preview As a Technology Preview, it is possible to divide a physical Intel GPU device into multiple virtual devices referred to as mediated devices . These mediated devices can then be assigned to multiple virtual machines (VMs) as virtual GPUs. As a result, these VMs share the performance of a single physical Intel GPU. Note that only selected Intel GPUs are compatible with the vGPU feature. In addition, it is possible to enable a VNC console operated by Intel vGPU. By enabling it, users can connect to a VNC console of the VM and see the VM's desktop hosted by Intel vGPU. However, this currently only works for RHEL guest operating systems. Note that this feature is deprecated and will be removed entirely in a future RHEL major release. Bugzilla:1528684 [1] Creating nested virtual machines Nested KVM virtualization is provided as a Technology Preview for KVM virtual machines (VMs) running on Intel, AMD64, IBM POWER, and IBM Z systems hosts with RHEL 8. With this feature, a RHEL 7 or RHEL 8 VM that runs on a physical RHEL 8 host can act as a hypervisor, and host its own VMs. Jira:RHELPLAN-14047 [1] , Jira:RHELPLAN-24437 Technology Preview: Select Intel network adapters now provide SR-IOV in RHEL guests on Hyper-V As a Technology Preview, Red Hat Enterprise Linux guest operating systems running on a Hyper-V hypervisor can now use the single-root I/O virtualization (SR-IOV) feature for Intel network adapters that are supported by the ixgbevf and iavf drivers. This feature is enabled when the following conditions are met: SR-IOV support is enabled for the network interface controller (NIC) SR-IOV support is enabled for the virtual NIC SR-IOV support is enabled for the virtual switch The virtual function (VF) from the NIC is attached to the virtual machine The feature is currently provided with Microsoft Windows Server 2016 and later. 
Bugzilla:1348508 [1] Intel TDX in RHEL guests As a Technology Preview, the Intel Trust Domain Extension (TDX) feature can now be used in RHEL 8.8 and later guest operating systems. If the host system supports TDX, you can deploy hardware-isolated RHEL 9 virtual machines (VMs), called trust domains (TDs). Note, however, that TDX currently does not work with kdump , and enabling TDX will cause kdump to fail on the VM. Bugzilla:1836977 [1] Sharing files between hosts and VMs using virtiofs As a Technology Preview, RHEL 8 now provides the virtio file system ( virtiofs ). Using virtiofs , you can efficiently share files between your host system and its virtual machines (VM). Bugzilla:1741615 [1] 7.10. RHEL in cloud environments RHEL confidential VMs are now available on Azure as a Technology Preview With the updated RHEL kernel, you can now create and run confidential virtual machines (VMs) on Microsoft Azure as a Technology Preview. However, it is not yet possible to encrypt RHEL confidential VM images during boot on Azure. Jira:RHELPLAN-122316 [1] 7.11. Containers The podman-machine command is unsupported The podman-machine command for managing virtual machines, is available only as a Technology Preview. Instead, run Podman directly from the command line. Jira:RHELDOCS-16861 [1] Building multi-architecture images is available as a Technology Preview The podman farm build command, which you can use to create multi-architecture container images, is available as a Technology Preview. A farm is a group of machines that have a UNIX podman socket running in them. The nodes in the farm can have different machines of different architectures. The podman farm build command is faster than the podman build --arch --platform command. You can use podman farm build to perform the following actions: Build an image on all nodes in a farm. Bundle nodes up into a manifest list. Execute the podman build command on all the farm nodes. Push the images to the registry specified by using the --tag option. Locally create a manifest list. Push the manifest list to the registry. The manifest list contains one image per native architecture type that is present in the farm. Jira:RHELPLAN-154435 [1] | [
"tc filter add dev enp0s1 ingress protocol mpls_uc flower mpls lse depth 1 label 12323 lse depth 2 label 45832 action mpls dec_ttl pipe action mpls modify label 549386 pipe action pedit ex munge eth dst set 00:00:5E:00:53:01 pipe action pedit ex munge eth src set 00:00:5E:00:53:02 pipe action mirred egress redirect dev enp0s2",
"xfs_info /mount-point | grep ftype",
"ipa-acme-manage enable The ipa-acme-manage command was successful",
"ipa-acme-manage disable The ipa-acme-manage command was successful",
"ipa-acme-manage status ACME is enabled The ipa-acme-manage command was successful"
]
| https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/8.10_release_notes/technology-previews |
Chapter 13. High availability and clusters | Chapter 13. High availability and clusters The following chapter contains the most notable changes to high availability and clusters between RHEL 8 and RHEL 9. 13.1. Notable changes to high availability and clusters pcs commands that support the clufter tool have been removed The pcs commands that support the clufter tool for analyzing cluster configuration formats have been removed. The following commands have been removed: pcs config import-cman for importing CMAN / RHEL6 HA cluster configuration pcs config export for exporting cluster configuration to a list of pcs commands which recreate the same cluster pcs support for OCF Resource Agent API 1.1 standard The pcs command-line interface now supports OCF 1.1 resource and STONITH agents. As part of the implementation of this support, any agent's metadata must comply with the OCF schema, whether the agent is an OCF 1.0 or OCF 1.1 agent. If an agent's metadata does not comply with the OCF schema, pcs considers the agent invalid and will not create or update a resource of the agent unless the --force option is specified. The pcsd Web UI and pcs commands for listing agents now omit agents with invalid metadata from the listing. New pcs parsing requires meta keyword when specifying clone meta attributes To ensure consistency in the pcs command format, configuring clone meta attributes with the pcs resource clone , pcs resource promotable , and pcs resource create commands without specifying the meta keyword is now deprecated. Previously, the meta keyword was ignored in the pcs resource clone and pcs resource promotable commands. In the pcs resource create command, however, the meta attributes specified after the meta keyword when it followed the clone keyword were assigned to the resource rather than to the clone. With this updated parsing algorithm, meta attributes specified after the meta keyword when it follows the clone keyword are assigned to the clone. To maintain compatibility with existing scripts which rely on the older format, you must specify the --future command option to enable this new argument processing when creating a cloned resource with the pcs resource create command. The following command now creates a resource with the meta attribute m1=v1 and a clone with the meta attribute m2=v2 : pcs resource create dummy1 ocf:pacemaker:Dummy meta m1=v1 clone meta m2=v2 --future | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/considerations_in_adopting_rhel_9/assembly_high-availability-and-clusters_considerations-in-adopting-rhel-9
23.2. Using the Maintenance Boot Modes | 23.2. Using the Maintenance Boot Modes 23.2.1. Loading the Memory (RAM) Testing Mode Faults in memory (RAM) modules can cause your system to freeze or crash unpredictably. In certain situations, memory faults might only cause errors with particular combinations of software. For this reason, you should test the memory of a computer before you install Red Hat Enterprise Linux for the first time, even if it has previously run other operating systems. Red Hat Enterprise Linux includes the Memtest86+ memory testing application. To start memory testing mode, choose Troubleshooting > Memory test at the boot menu. Testing will begin immediately. By default, Memtest86+ carries out ten tests in every pass; a different configuration can be specified by accessing the configuration screen using the c key. After the first pass completes, a message will appear at the bottom informing you of the current status, and another pass will start automatically. Note Memtest86+ only works on BIOS systems. Support for UEFI systems is currently unavailable. Figure 23.1. Memory Check Using Memtest86+ The main screen displayed while testing is in progress is divided into three main areas: The upper left corner shows information about your system's memory configuration - the amount of detected memory and processor cache and their throughputs and processor and chipset information. This information is detected when Memtest86+ starts. The upper right corner displays information about the tests - progress of the current pass and the currently running test in that pass as well as a description of the test. The central part of the screen is used to display information about the entire set of tests from the moment when the tool has started, such as the total time, the number of completed passes, number of detected errors and your test selection. On some systems, detailed information about the installed memory (such as the number of installed modules, their manufacturer, frequency and latency) will also be displayed here. After each pass completes, a short summary will appear in this location. For example: If Memtest86+ detects an error, it will also be displayed in this area and highlighted in red. The message will include detailed information such as which test detected a problem, the memory location which is failing, and others. In most cases, a single successful pass (that is, a single run of all 10 tests) is sufficient to verify that your RAM is in good condition. In some rare circumstances, however, errors that went undetected on the first pass might appear on subsequent passes. To perform a thorough test on an important system, leave the tests running overnight or even for a few days in order to complete multiple passes. Note The amount of time it takes to complete a single full pass of Memtest86+ varies depending on your system's configuration (notably the RAM size and speed). For example, on a system with 2 GiB of DDR2 memory at 667 MHz, a single pass will take roughly 20 minutes to complete. To halt the tests and reboot your computer, press the Esc key at any time. For more information about using Memtest86+ , see the official website at http://www.memtest.org/ . A README file is also located in /usr/share/doc/memtest86+- version / on Red Hat Enterprise Linux systems with the memtest86+ package installed. 23.2.2. Verifying Boot Media You can test the integrity of an ISO-based installation source before using it to install Red Hat Enterprise Linux.
These sources include DVDs and ISO images stored on a hard drive or NFS server. Verifying that the ISO images are intact before you attempt an installation helps to avoid problems that are often encountered during installation. To test the checksum integrity of an ISO image, append the rd.live.check option to the boot loader command line. Note that this option is used automatically if you select the default installation option from the boot menu ( Test this media & install Red Hat Enterprise Linux 7.0 ). 23.2.3. Booting Your Computer in Rescue Mode You can boot a command-line Linux system from an installation disc without actually installing Red Hat Enterprise Linux on the computer. This enables you to use the utilities and functions of a running Linux system to modify or repair already installed operating systems. To load the rescue system with the installation disc or USB drive, choose Rescue a Red Hat Enterprise Linux system from the Troubleshooting submenu in the boot menu, or use the inst.rescue boot option. Specify the language, keyboard layout and network settings for the rescue system with the screens that follow. The final setup screen configures access to the existing system on your computer. By default, rescue mode attaches an existing operating system to the rescue system under the directory /mnt/sysimage/ . For additional information about rescue mode and other maintenance modes, see Chapter 32, Basic System Recovery . | [
"** Pass complete, no errors, press Esc to exit **"
]
| https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/installation_guide/sect-boot-options-maintenance |
Preface | Preface Important Deploying OpenShift Data Foundation using Red Hat OpenShift Service on AWS with hosted control planes is a technology preview feature. Technology Preview features provide early access to upcoming product innovations, enabling you to test functionality and provide feedback during the development process. However, these features are not fully supported under Red Hat Service Level Agreements, may not be functionally complete, and are not intended for production use. As Red Hat considers making future iterations of Technology Preview features generally available, we will attempt to resolve any issues that customers experience when using these features. See Technology Preview Features Support Scope for more information. | null | https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.17/html/deploying_openshift_data_foundation_using_red_hat_openshift_service_on_aws_with_hosted_control_planes/pr01 |
Configuring | Configuring Red Hat Advanced Cluster Security for Kubernetes 4.6 Configuring Red Hat Advanced Cluster Security for Kubernetes Red Hat OpenShift Documentation Team | [
"-----BEGIN CERTIFICATE----- MIICLDCCAdKgAwIBAgIBADAKBggqhkjOPQQDAjB9MQswCQYDVQQGEwJCRTEPMA0G l4wOuDwKQa+upc8GftXE2C//4mKANBC6It01gUaTIpo= -----END CERTIFICATE-----",
"-n <namespace> create secret tls central-default-tls-cert --cert <tls-cert.pem> --key <tls-key.pem>",
"central: # Configure a default TLS certificate (public cert + private key) for central defaultTLS: cert: | -----BEGIN CERTIFICATE----- EXAMPLE!MIIMIICLDCCAdKgAwIBAgIBADAKBggqhkjOPQQDAjB9MQswCQYDVQQGEwJCRTEPMA0G -----END CERTIFICATE----- key: | -----BEGIN EC PRIVATE KEY----- EXAMPLE!MHcl4wOuDwKQa+upc8GftXE2C//4mKANBC6It01gUaTIpo= -----END EC PRIVATE KEY-----",
"helm install -n stackrox --create-namespace stackrox-central-services rhacs/central-services -f values-private.yaml",
"roxctl central generate --default-tls-cert \"cert.pem\" --default-tls-key \"key.pem\"",
"Enter PEM cert bundle file (optional): <cert.pem> Enter PEM private key file (optional): <key.pem> Enter administrator password (default: autogenerated): Enter orchestrator (k8s, openshift): openshift",
"-n <namespace> create secret tls central-default-tls-cert --cert <tls-cert.pem> --key <tls-key.pem>",
"central: # Configure a default TLS certificate (public cert + private key) for central defaultTLS: cert: | -----BEGIN CERTIFICATE----- EXAMPLE!MIIMIICLDCCAdKgAwIBAgIBADAKBggqhkjOPQQDAjB9MQswCQYDVQQGEwJCRTEPMA0G -----END CERTIFICATE----- key: | -----BEGIN EC PRIVATE KEY----- EXAMPLE!MHcl4wOuDwKQa+upc8GftXE2C//4mKANBC6It01gUaTIpo= -----END EC PRIVATE KEY-----",
"helm upgrade -n stackrox --create-namespace stackrox-central-services rhacs/central-services --reuse-values \\ 1 -f values-private.yaml",
"oc -n stackrox create secret tls central-default-tls-cert --cert <server_cert.pem> --key <server_key.pem> --dry-run -o yaml | oc apply -f -",
"oc delete secret central-default-tls-cert",
"oc -n stackrox create secret tls central-default-tls-cert --cert <server_cert.pem> --key <server_key.pem> --dry-run -o yaml | oc apply -f -",
"oc -n stackrox exec deploy/central -c central -- kill 1",
"oc -n stackrox delete pod -lapp=central",
"unzip -d sensor sensor-<cluster_name>.zip",
"./sensor/sensor.sh",
"unzip -d sensor sensor-<cluster_name>.zip",
"./sensor/ca-setup-sensor.sh -d sensor/additional-cas/ 1",
"./sensor/ca-setup-sensor.sh -d sensor/additional-cas/ -u",
"unzip -d sensor sensor-<cluster_name>.zip",
"./sensor/ca-setup-sensor.sh -d sensor/additional-cas/ 1",
"./sensor/ca-setup-sensor.sh -d sensor/additional-cas/ -u",
"oc -n stackrox deploy/sensor -c sensor -- kill 1",
"kubectl -n stackrox deploy/sensor -c sensor -- kill 1",
"oc -n stackrox delete pod -lapp=sensor",
"kubectl -n stackrox delete pod -lapp=sensor",
"chmod +x ca-setup.sh",
"./ca-setup.sh -f <certificate>",
"./ca-setup.sh -d <directory_name>",
"oc -n stackrox exec deploy/central -c central -- kill 1",
"oc -n stackrox delete pod -lapp=central",
"oc delete pod -n stackrox -l app=scanner",
"kubectl delete pod -n stackrox -l app=scanner",
"./ca-setup-sensor.sh -d ./additional-cas/",
"oc apply -f <secret_file.yaml>",
"oc -n stackrox exec deploy/central -c central -- kill 1",
"oc -n stackrox delete pod -lapp=central",
"oc apply -f <secret_file.yaml>",
"oc delete pod -n stackrox -l app=scanner; oc -n stackrox delete pod -l app=scanner-db",
"kubectl delete pod -n stackrox -l app=scanner; kubectl -n stackrox delete pod -l app=scanner-db",
"roxctl -e <endpoint> -p <admin_password> central init-bundles generate --output-secrets <bundle_name> init-bundle.yaml",
"oc -n stackrox apply -f <init-bundle.yaml>",
"docker tag registry.redhat.io/advanced-cluster-security/rhacs-main-rhel8:4.6.3 <your_registry>/rhacs-main-rhel8:4.6.3",
"docker tag registry.redhat.io/advanced-cluster-security/rhacs-main-rhel8:4.6.3 <your_registry>/other-name:latest",
"docker login registry.redhat.io",
"docker pull <image>",
"docker tag <image> <new_image>",
"docker push <new_image>",
"Enter main image to use (if unset, the default will be used): <your_registry>/rhacs-main-rhel8:4.6.3",
"Enter Scanner DB image to use (if unset, the default will be used): <your_registry>/rhacs-scanner-db-rhel8:4.6.3",
"Enter Scanner image to use (if unset, the default will be used): <your_registry>/rhacs-scanner-rhel8:4.6.3",
"Enter whether to run StackRox in offline mode, which avoids reaching out to the internet (default: \"false\"): true",
"export ROX_API_TOKEN=<api_token>",
"export ROX_CENTRAL_ADDRESS=<address>:<port_number>",
"roxctl scanner upload-db -e \"USDROX_CENTRAL_ADDRESS\" --scanner-db-file=<compressed_scanner_definitions.zip>",
"export ROX_CENTRAL_ADDRESS=<address>:<port_number>",
"roxctl scanner upload-db -p <your_administrator_password> -e \"USDROX_CENTRAL_ADDRESS\" --scanner-db-file=<compressed_scanner_definitions.zip>",
"export ROX_API_TOKEN=<api_token>",
"export ROX_CENTRAL_ADDRESS=<address>:<port_number>",
"roxctl collector support-packages upload <package_file> -e \"USDROX_CENTRAL_ADDRESS\"",
"roxctl central generate interactive --plaintext-endpoints=<endpoints_spec> 1",
"CENTRAL_PLAINTEXT_PATCH=' spec: template: spec: containers: - name: central env: - name: ROX_PLAINTEXT_ENDPOINTS value: <endpoints_spec> 1 '",
"oc -n stackrox patch deploy/central -p \"USDCENTRAL_PLAINTEXT_PATCH\"",
"oc -n stackrox get secret proxy-config -o go-template='{{index .data \"config.yaml\" | base64decode}}{{\"\\n\"}}' > /tmp/proxy-config.yaml",
"oc -n stackrox create secret generic proxy-config --from-file=config.yaml=/tmp/proxy-config.yaml -o yaml --dry-run | oc label -f - --local -o yaml app.kubernetes.io/name=stackrox | oc apply -f -",
"apiVersion: v1 kind: Secret metadata: namespace: stackrox name: proxy-config type: Opaque stringData: config.yaml: |- 1 # # NOTE: Both central and scanner should be restarted if this secret is changed. # # While it is possible that some components will pick up the new proxy configuration # # without a restart, it cannot be guaranteed that this will apply to every possible # # integration etc. # url: http://proxy.name:port 2 # username: username 3 # password: password 4 # # If the following value is set to true, the proxy wil NOT be excluded for the default hosts: # # - *.stackrox, *.stackrox.svc # # - localhost, localhost.localdomain, 127.0.0.0/8, ::1 # # - *.local # omitDefaultExcludes: false # excludes: # hostnames (may include * components) for which you do not 5 # # want to use a proxy, like in-cluster repositories. # - some.domain # # The following configuration sections allow specifying a different proxy to be used for HTTP(S) connections. # # If they are omitted, the above configuration is used for HTTP(S) connections as well as TCP connections. # # If only the `http` section is given, it will be used for HTTPS connections as well. # # Note: in most cases, a single, global proxy configuration is sufficient. # http: # url: http://http-proxy.name:port 6 # username: username 7 # password: password 8 # https: # url: http://https-proxy.name:port 9 # username: username 10 # password: password 11",
"export ROX_PASSWORD= <rox_password> && export ROX_CENTRAL_ADDRESS= <address>:<port_number> 1",
"roxctl -e \"USDROX_CENTRAL_ADDRESS\" -p \"USDROX_PASSWORD\" central debug download-diagnostics",
"export ROX_API_TOKEN= <api_token>",
"roxctl -e \"USDROX_CENTRAL_ADDRESS\" central debug download-diagnostics",
"Sample endpoints.yaml configuration for Central. # # CAREFUL: If the following line is uncommented, do not expose the default endpoint on port 8443 by default. # This will break normal operation. disableDefault: true # if true, do not serve on :8443 1 endpoints: 2 # Serve plaintext HTTP only on port 8080 - listen: \":8080\" 3 # Backend protocols, possible values are 'http' and 'grpc'. If unset or empty, assume both. protocols: 4 - http tls: 5 # Disable TLS. If this is not specified, assume TLS is enabled. disable: true 6 # Serve HTTP and gRPC for sensors only on port 8444 - listen: \":8444\" 7 tls: 8 # Which TLS certificates to serve, possible values are 'service' (For service certificates that Red Hat Advanced Cluster Security for Kubernetes generates) # and 'default' (user-configured default TLS certificate). If unset or empty, assume both. serverCerts: 9 - default - service # Client authentication settings. clientAuth: 10 # Enforce TLS client authentication. If unset, do not enforce, only request certificates # opportunistically. required: true 11 # Which TLS client CAs to serve, possible values are 'service' (CA for service # certificates that Red Hat Advanced Cluster Security for Kubernetes generates) and 'user' (CAs for PKI auth providers). If unset or empty, assume both. certAuthorities: 12 # if not set, assume [\"user\", \"service\"] - service",
"oc -n stackrox get cm/central-endpoints -o go-template='{{index .data \"endpoints.yaml\"}}' > <directory_path>/central_endpoints.yaml",
"oc -n stackrox create cm central-endpoints --from-file=endpoints.yaml=<directory-path>/central-endpoints.yaml -o yaml --dry-run | label -f - --local -o yaml app.kubernetes.io/name=stackrox | apply -f -",
"oc -n stackrox exec deploy/central -c central -- kill 1",
"oc -n stackrox delete pod -lapp=central",
"oc -n stackrox get networkpolicy.networking.k8s.io/allow-ext-to-central -o yaml > <directory_path>/allow-ext-to-central-custom-port.yaml",
"monitoring: openshift: enabled: false",
"monitoring.openshift.enabled: false",
"central.exposeMonitoring: true scanner.exposeMonitoring: true",
"apiVersion: monitoring.coreos.com/v1 kind: ServiceMonitor metadata: name: prometheus-stackrox namespace: stackrox spec: endpoints: - interval: 30s port: monitoring scheme: http selector: matchLabels: app.kubernetes.io/name: <stackrox-service> 1",
"oc apply -f servicemonitor.yaml 1",
"oc get servicemonitor --namespace stackrox 1",
"{ \"headers\": { \"Accept-Encoding\": [ \"gzip\" ], \"Content-Length\": [ \"586\" ], \"Content-Type\": [ \"application/json\" ], \"User-Agent\": [ \"Go-http-client/1.1\" ] }, \"data\": { \"audit\": { \"interaction\": \"CREATE\", \"method\": \"UI\", \"request\": { \"endpoint\": \"/v1/notifiers\", \"method\": \"POST\", \"source\": { \"requestAddr\": \"10.131.0.7:58276\", \"xForwardedFor\": \"8.8.8.8\", }, \"sourceIp\": \"8.8.8.8\", \"payload\": { \"@type\": \"storage.Notifier\", \"enabled\": true, \"generic\": { \"auditLoggingEnabled\": true, \"endpoint\": \"http://samplewebhookserver.com:8080\" }, \"id\": \"b53232ee-b13e-47e0-b077-1e383c84aa07\", \"name\": \"Webhook\", \"type\": \"generic\", \"uiEndpoint\": \"https://localhost:8000\" } }, \"status\": \"REQUEST_SUCCEEDED\", \"time\": \"2019-05-28T16:07:05.500171300Z\", \"user\": { \"friendlyName\": \"John Doe\", \"role\": { \"globalAccess\": \"READ_WRITE_ACCESS\", \"name\": \"Admin\" }, \"username\": \"[email protected]\" } } } }",
"Warn: API Token [token name] (ID [token ID]) will expire in less than X days.",
"roxctl declarative-config create permission-set --name=\"restricted\" --description=\"Restriction permission set that only allows access to Administration and Access resources\" --resource-with-access=Administration=READ_WRITE_ACCESS --resource-with-access=Access=READ_ACCESS > permission-set.yaml",
"roxctl declarative-config create role --name=\"restricted\" --description=\"Restricted role that only allows access to Administration and Access\" --permission-set=\"restricted\" --access-scope=\"Unrestricted\" > role.yaml",
"kubectl create configmap declarative-configurations \\ 1 --from-file permission-set.yaml --from-file role.yaml -o yaml --namespace=stackrox > declarative-configs.yaml",
"kubectl apply -f declarative-configs.yaml 1",
"name: A sample auth provider minimumRole: Analyst 1 uiEndpoint: central.custom-domain.com:443 2 extraUIEndpoints: 3 - central-alt.custom-domain.com:443 groups: 4 - key: email 5 value: [email protected] role: Admin 6 - key: groups value: reviewers role: Analyst requiredAttributes: 7 - key: org_id value: \"12345\" claimMappings: 8 - path: org_id value: my_org_id oidc: 9 issuer: sample.issuer.com 10 mode: auto 11 clientID: CLIENT_ID clientSecret: CLIENT_SECRET clientSecret: CLIENT_SECRET iap: 12 audience: audience saml: 13 spIssuer: sample.issuer.com metadataURL: sample.provider.com/metadata saml: 14 spIssuer: sample.issuer.com cert: | 15 ssoURL: saml.provider.com idpIssuer: idp.issuer.com userpki: certificateAuthorities: | 16 certificate 17 openshift: 18 enable: true",
"name: A sample permission set description: A sample permission set created declaratively resources: - resource: Integration 1 access: READ_ACCESS 2 - resource: Administration access: READ_WRITE_ACCESS",
"name: A sample access scope description: A sample access scope created declaratively rules: included: - cluster: secured-cluster-A 1 namespaces: - namespaceA - cluster: secured-cluster-B 2 clusterLabelSelectors: - requirements: - requirements: - key: kubernetes.io/metadata.name operator: IN 3 values: - production - staging - environment",
"name: A sample role description: A sample role created declaratively permissionSet: A sample permission set 1 accessScope: Unrestricted 2"
]
| https://docs.redhat.com/en/documentation/red_hat_advanced_cluster_security_for_kubernetes/4.6/html-single/configuring/index |
Chapter 4. Recommended cluster scaling practices | Chapter 4. Recommended cluster scaling practices Important The guidance in this section is only relevant for installations with cloud provider integration. These guidelines apply to OpenShift Container Platform with software-defined networking (SDN), not Open Virtual Network (OVN). Apply the following best practices to scale the number of worker machines in your OpenShift Container Platform cluster. You scale the worker machines by increasing or decreasing the number of replicas that are defined in the worker machine set. 4.1. Recommended practices for scaling the cluster When scaling up the cluster to higher node counts: Spread nodes across all of the available zones for higher availability. Scale up by no more than 25 to 50 machines at once. Consider creating new machine sets in each available zone with alternative instance types of similar size to help mitigate any periodic provider capacity constraints. For example, on AWS, use m5.large and m5d.large. Note Cloud providers might implement a quota for API services. Therefore, gradually scale the cluster. The controller might not be able to create the machines if the replicas in the machine sets are set to higher numbers all at one time. The number of requests the cloud platform, which OpenShift Container Platform is deployed on top of, is able to handle impacts the process. The controller will start to query more while trying to create, check, and update the machines with the status. The cloud platform on which OpenShift Container Platform is deployed has API request limits and excessive queries might lead to machine creation failures due to cloud platform limitations. Enable machine health checks when scaling to large node counts. In case of failures, the health checks monitor the condition and automatically repair unhealthy machines. Note When scaling large and dense clusters to lower node counts, it might take large amounts of time as the process involves draining or evicting the objects running on the nodes being terminated in parallel. Also, the client might start to throttle the requests if there are too many objects to evict. The default client QPS and burst rates are currently set to 5 and 10 respectively and they cannot be modified in OpenShift Container Platform. 4.2. Modifying a machine set To make changes to a machine set, edit the MachineSet YAML. Then, remove all machines associated with the machine set by deleting each machine or scaling down the machine set to 0 replicas. Then, scale the replicas back to the desired number. Changes you make to a machine set do not affect existing machines. If you need to scale a machine set without making other changes, you do not need to delete the machines. Note By default, the OpenShift Container Platform router pods are deployed on workers. Because the router is required to access some cluster resources, including the web console, do not scale the worker machine set to 0 unless you first relocate the router pods. Prerequisites Install an OpenShift Container Platform cluster and the oc command line. Log in to oc as a user with cluster-admin permission. Procedure Edit the machine set: USD oc edit machineset <machineset> -n openshift-machine-api Scale down the machine set to 0 : USD oc scale --replicas=0 machineset <machineset> -n openshift-machine-api Or: USD oc edit machineset <machineset> -n openshift-machine-api Wait for the machines to be removed. 
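To confirm that the machines are gone before scaling back up, you can watch the Machine resources; this sketch relies only on the namespace already used above:
oc get machines -n openshift-machine-api -w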
Scale up the machine set as needed: USD oc scale --replicas=2 machineset <machineset> -n openshift-machine-api Or: USD oc edit machineset <machineset> -n openshift-machine-api Wait for the machines to start. The new machines contain changes you made to the machine set. 4.3. About machine health checks Machine health checks automatically repair unhealthy machines in a particular machine pool. To monitor machine health, create a resource to define the configuration for a controller. Set a condition to check, such as staying in the NotReady status for five minutes or displaying a permanent condition in the node-problem-detector, and a label for the set of machines to monitor. Note You cannot apply a machine health check to a machine with the master role. The controller that observes a MachineHealthCheck resource checks for the defined condition. If a machine fails the health check, the machine is automatically deleted and one is created to take its place. When a machine is deleted, you see a machine deleted event. To limit disruptive impact of the machine deletion, the controller drains and deletes only one node at a time. If there are more unhealthy machines than the maxUnhealthy threshold allows for in the targeted pool of machines, remediation stops and therefore enables manual intervention. Note Consider the timeouts carefully, accounting for workloads and requirements. Long timeouts can result in long periods of downtime for the workload on the unhealthy machine. Too short timeouts can result in a remediation loop. For example, the timeout for checking the NotReady status must be long enough to allow the machine to complete the startup process. To stop the check, remove the resource. 4.3.1. Limitations when deploying machine health checks There are limitations to consider before deploying a machine health check: Only machines owned by a machine set are remediated by a machine health check. Control plane machines are not currently supported and are not remediated if they are unhealthy. If the node for a machine is removed from the cluster, a machine health check considers the machine to be unhealthy and remediates it immediately. If the corresponding node for a machine does not join the cluster after the nodeStartupTimeout , the machine is remediated. A machine is remediated immediately if the Machine resource phase is Failed . 4.4. Sample MachineHealthCheck resource The MachineHealthCheck resource for all cloud-based installation types, and other than bare metal, resembles the following YAML file: apiVersion: machine.openshift.io/v1beta1 kind: MachineHealthCheck metadata: name: example 1 namespace: openshift-machine-api spec: selector: matchLabels: machine.openshift.io/cluster-api-machine-role: <role> 2 machine.openshift.io/cluster-api-machine-type: <role> 3 machine.openshift.io/cluster-api-machineset: <cluster_name>-<label>-<zone> 4 unhealthyConditions: - type: "Ready" timeout: "300s" 5 status: "False" - type: "Ready" timeout: "300s" 6 status: "Unknown" maxUnhealthy: "40%" 7 nodeStartupTimeout: "10m" 8 1 Specify the name of the machine health check to deploy. 2 3 Specify a label for the machine pool that you want to check. 4 Specify the machine set to track in <cluster_name>-<label>-<zone> format. For example, prod-node-us-east-1a . 5 6 Specify the timeout duration for a node condition. If a condition is met for the duration of the timeout, the machine will be remediated. Long timeouts can result in long periods of downtime for a workload on an unhealthy machine. 
7 Specify the amount of machines allowed to be concurrently remediated in the targeted pool. This can be set as a percentage or an integer. If the number of unhealthy machines exceeds the limit set by maxUnhealthy , remediation is not performed. 8 Specify the timeout duration that a machine health check must wait for a node to join the cluster before a machine is determined to be unhealthy. Note The matchLabels are examples only; you must map your machine groups based on your specific needs. 4.4.1. Short-circuiting machine health check remediation Short circuiting ensures that machine health checks remediate machines only when the cluster is healthy. Short-circuiting is configured through the maxUnhealthy field in the MachineHealthCheck resource. If the user defines a value for the maxUnhealthy field, before remediating any machines, the MachineHealthCheck compares the value of maxUnhealthy with the number of machines within its target pool that it has determined to be unhealthy. Remediation is not performed if the number of unhealthy machines exceeds the maxUnhealthy limit. Important If maxUnhealthy is not set, the value defaults to 100% and the machines are remediated regardless of the state of the cluster. The appropriate maxUnhealthy value depends on the scale of the cluster you deploy and how many machines the MachineHealthCheck covers. For example, you can use the maxUnhealthy value to cover multiple machine sets across multiple availability zones so that if you lose an entire zone, your maxUnhealthy setting prevents further remediation within the cluster. The maxUnhealthy field can be set as either an integer or percentage. There are different remediation implementations depending on the maxUnhealthy value. 4.4.1.1. Setting maxUnhealthy by using an absolute value If maxUnhealthy is set to 2 : Remediation will be performed if 2 or fewer nodes are unhealthy Remediation will not be performed if 3 or more nodes are unhealthy These values are independent of how many machines are being checked by the machine health check. 4.4.1.2. Setting maxUnhealthy by using percentages If maxUnhealthy is set to 40% and there are 25 machines being checked: Remediation will be performed if 10 or fewer nodes are unhealthy Remediation will not be performed if 11 or more nodes are unhealthy If maxUnhealthy is set to 40% and there are 6 machines being checked: Remediation will be performed if 2 or fewer nodes are unhealthy Remediation will not be performed if 3 or more nodes are unhealthy Note The allowed number of machines is rounded down when the percentage of maxUnhealthy machines that are checked is not a whole number. 4.5. Creating a MachineHealthCheck resource You can create a MachineHealthCheck resource for all MachineSets in your cluster. You should not create a MachineHealthCheck resource that targets control plane machines. Prerequisites Install the oc command line interface. Procedure Create a healthcheck.yml file that contains the definition of your machine health check. Apply the healthcheck.yml file to your cluster: USD oc apply -f healthcheck.yml | [
"oc edit machineset <machineset> -n openshift-machine-api",
"oc scale --replicas=0 machineset <machineset> -n openshift-machine-api",
"oc edit machineset <machineset> -n openshift-machine-api",
"oc scale --replicas=2 machineset <machineset> -n openshift-machine-api",
"oc edit machineset <machineset> -n openshift-machine-api",
"apiVersion: machine.openshift.io/v1beta1 kind: MachineHealthCheck metadata: name: example 1 namespace: openshift-machine-api spec: selector: matchLabels: machine.openshift.io/cluster-api-machine-role: <role> 2 machine.openshift.io/cluster-api-machine-type: <role> 3 machine.openshift.io/cluster-api-machineset: <cluster_name>-<label>-<zone> 4 unhealthyConditions: - type: \"Ready\" timeout: \"300s\" 5 status: \"False\" - type: \"Ready\" timeout: \"300s\" 6 status: \"Unknown\" maxUnhealthy: \"40%\" 7 nodeStartupTimeout: \"10m\" 8",
"oc apply -f healthcheck.yml"
]
| https://docs.redhat.com/en/documentation/openshift_container_platform/4.7/html/scalability_and_performance/recommended-cluster-scaling-practices |
Chapter 11. Configuring an Ingress Controller for manual DNS Management | Chapter 11. Configuring an Ingress Controller for manual DNS Management As a cluster administrator, when you create an Ingress Controller, the Operator manages the DNS records automatically. This has some limitations when the required DNS zone is different from the cluster DNS zone or when the DNS zone is hosted outside the cloud provider. As a cluster administrator, you can configure an Ingress Controller to stop automatic DNS management and start manual DNS management. Set dnsManagementPolicy to specify when it should be automatically or manually managed. When you change an Ingress Controller from Managed to Unmanaged DNS management policy, the Operator does not clean up the wildcard DNS record provisioned on the cloud. When you change an Ingress Controller from Unmanaged to Managed DNS management policy, the Operator attempts to create the DNS record on the cloud provider if it does not exist or updates the DNS record if it already exists. Important When you set dnsManagementPolicy to unmanaged , you have to manually manage the lifecycle of the wildcard DNS record on the cloud provider. 11.1. Managed DNS management policy The Managed DNS management policy for Ingress Controllers ensures that the lifecycle of the wildcard DNS record on the cloud provider is automatically managed by the Operator. 11.2. Unmanaged DNS management policy The Unmanaged DNS management policy for Ingress Controllers ensures that the lifecycle of the wildcard DNS record on the cloud provider is not automatically managed, instead it becomes the responsibility of the cluster administrator. Note On the AWS cloud platform, if the domain on the Ingress Controller does not match with dnsConfig.Spec.BaseDomain then the DNS management policy is automatically set to Unmanaged . 11.3. Creating a custom Ingress Controller with the Unmanaged DNS management policy As a cluster administrator, you can create a new custom Ingress Controller with the Unmanaged DNS management policy. Prerequisites Install the OpenShift CLI ( oc ). Log in as a user with cluster-admin privileges. Procedure Create a custom resource (CR) file named sample-ingress.yaml containing the following: apiVersion: operator.openshift.io/v1 kind: IngressController metadata: namespace: openshift-ingress-operator name: <name> 1 spec: domain: <domain> 2 endpointPublishingStrategy: type: LoadBalancerService loadBalancer: scope: External 3 dnsManagementPolicy: Unmanaged 4 1 Specify the <name> with a name for the IngressController object. 2 Specify the domain based on the DNS record that was created as a prerequisite. 3 Specify the scope as External to expose the load balancer externally. 4 dnsManagementPolicy indicates if the Ingress Controller is managing the lifecycle of the wildcard DNS record associated with the load balancer. The valid values are Managed and Unmanaged . The default value is Managed . Save the file to apply the changes. oc apply -f <name>.yaml 1 11.4. Modifying an existing Ingress Controller As a cluster administrator, you can modify an existing Ingress Controller to manually manage the DNS record lifecycle. Prerequisites Install the OpenShift CLI ( oc ). Log in as a user with cluster-admin privileges. 
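Before you modify the Ingress Controller, you can optionally confirm its current DNS management policy. The following sketch assumes only the field path used by the patch in the procedure below:
oc -n openshift-ingress-operator get ingresscontroller <name> -o jsonpath='{.spec.endpointPublishingStrategy.loadBalancer.dnsManagementPolicy}'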
Procedure Modify the chosen IngressController to set dnsManagementPolicy : SCOPE=$(oc -n openshift-ingress-operator get ingresscontroller <name> -o=jsonpath="{.status.endpointPublishingStrategy.loadBalancer.scope}") oc -n openshift-ingress-operator patch ingresscontrollers/<name> --type=merge --patch='{"spec":{"endpointPublishingStrategy":{"type":"LoadBalancerService","loadBalancer":{"dnsManagementPolicy":"Unmanaged", "scope":"${SCOPE}"}}}}' Optional: You can delete the associated DNS record in the cloud provider. 11.5. Additional resources Ingress Controller configuration parameters | [
"apiVersion: operator.openshift.io/v1 kind: IngressController metadata: namespace: openshift-ingress-operator name: <name> 1 spec: domain: <domain> 2 endpointPublishingStrategy: type: LoadBalancerService loadBalancer: scope: External 3 dnsManagementPolicy: Unmanaged 4",
"oc apply -f <name>.yaml",
"SCOPE=$(oc -n openshift-ingress-operator get ingresscontroller <name> -o=jsonpath=\"{.status.endpointPublishingStrategy.loadBalancer.scope}\") oc -n openshift-ingress-operator patch ingresscontrollers/<name> --type=merge --patch='{\"spec\":{\"endpointPublishingStrategy\":{\"type\":\"LoadBalancerService\",\"loadBalancer\":{\"dnsManagementPolicy\":\"Unmanaged\", \"scope\":\"${SCOPE}\"}}}}'"
]
| https://docs.redhat.com/en/documentation/openshift_container_platform/4.15/html/networking/ingress-controller-dnsmgt |
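When dnsManagementPolicy is Unmanaged , the wildcard DNS record must be created and maintained outside the cluster. The commands below are an illustrative sketch only and are not part of the documented procedure; <name> is a placeholder for the Ingress Controller name, and on some cloud providers the load balancer address is reported under .ip rather than .hostname :
oc -n openshift-ingress-operator get ingresscontroller <name> -o yaml
oc -n openshift-ingress get service router-<name> -o jsonpath='{.status.loadBalancer.ingress[0].hostname}'
The first command shows the current dnsManagementPolicy and status conditions; the second returns the load balancer address to use as the target of the manually managed wildcard DNS record.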
Chapter 2. Securing the Apache Karaf Container | Chapter 2. Securing the Apache Karaf Container Abstract The Apache Karaf container is secured using JAAS. By defining JAAS realms, you can configure the mechanism used to retrieve user credentials. You can also refine access to the container's administrative interfaces by changing the default roles. 2.1. JAAS Authentication Abstract The Java Authentication and Authorization Service (JAAS) provides a general framework for implementing authentication in a Java application. The implementation of authentication is modular, with individual JAAS modules (or plug-ins) providing the authentication implementations. For background information about JAAS, see the JAAS Reference Guide . 2.1.1. Default JAAS Realm This section describes how to manage user data for the default JAAS realm in a Karaf container. Default JAAS realm The Karaf container has a predefined JAAS realm, the karaf realm, which is used by default to secure all aspects of the container. How to integrate an application with JAAS You can use the karaf realm in your own applications. Simply configure karaf as the name of the JAAS realm that you want to use. Default JAAS login modules When you start the Karaf container for the first time, it is configured to use the karaf default realm. In this default configuration, the karaf realm deploys five JAAS login modules, which are enabled simultaneously. To see the deployed login modules, enter the jaas:realms console command, as follows: Whenever a user attempts to log in, authentication proceeds through the five modules in list order. A flag value for each module specifies whether the module must complete successfully for authentication to succeed. Flag values also specify whether the authentication process stops after a module completes, or whether it proceeds to the module. The Optional flag is set for all five authentication modules. The Optional flag setting causes authentication process to always pass from one module to the , regardless of whether the current module completes successfully. The flag values in the Karaf JAAS realm are hard-coded, and cannot be changed. For more information about flags, see Table 2.1, "Flags for Defining a JAAS Module" . Important In a Karaf container, both the properties login module and the public key login module are enabled. When JAAS authenticates a user, it tries first of all to authenticate the user with the properties login module. If that fails, it then tries to authenticate the user with the public key login module. If that module also fails, an error is raised. 2.1.1.1. Authentication audit logging modules Within the list of default modules in a Karaf container, only the first two modules are used to verify user identity. The remaining modules are used to log the audit trail of successful and failed login attempts. The default realm includes the following audit logging modules: org.apache.karaf.jaas.modules.audit.LogAuditLoginModule This module records information about authentication attempts by using the loggers that are configured for the Pax logging infrastructure in the file etc/org.ops4j.pax.logging.cfg . For more information, see JAAS Log Audit Login Module . org.apache.karaf.jaas.modules.audit.FileAuditLoginModule This module records information about authentication attempts directly to a file that you specify. It does not use the logging infrastructure. For more information, see JAAS File Audit Login Module . 
org.apache.karaf.jaas.modules.audit.EventAdminAuditLoginModule This module tracks authentication attempts using the OSGi Event Admin service. Configuring users in the properties login module The properties login module is used to store username/password credentials in a flat file format. To create a new user in the properties login module, open the InstallDir /etc/users.properties file using a text editor and add a line with the following syntax: For example, to create the jdoe user with password, topsecret , and role, admin , you could create an entry like the following: Where the admin role gives full administrative privileges to the jdoe user. Configuring user groups in the properties login module Instead of (or in addition to) assigning roles directly to users, you also have the option of adding users to user groups in the properties login module. To create a user group in the properties login module, open the InstallDir /etc/users.properties file using a text editor and add a line with the following syntax: For example, to create the admingroup user group with the roles, group and admin , you could create an entry like the following: You could then add the majorclanger user to the admingroup , by creating the following user entry: Configuring the public key login module The public key login module is used to store SSH public key credentials in a flat file format. To create a new user in the public key login module, open the InstallDir /etc/keys.properties file using a text editor and add a line with the following syntax: For example, you can create the jdoe user with the admin role by adding the following entry to the InstallDir /etc/keys.properties file (on a single line): Important Do not insert the entire contents of an id_rsa.pub file here. Insert just the block of symbols which represents the public key itself. Configuring user groups in the public key login module Instead of (or in addition to) assigning roles directly to users, you also have the option of adding users to user groups in the public key login module. To create a user group in the public key login module, open the InstallDir /etc/keys.properties file using a text editor and add a line with the following syntax: For example, to create the admingroup user group with the roles, group and admin , you could create an entry like the following: You could then add the jdoe user to the admingroup , by creating the following user entry: Encrypting the stored passwords By default, passwords are stored in the InstallDir /etc/users.properties file in plaintext format. To protect the passwords in this file, you must set the file permissions of the users.properties file so that it can be read only by administrators. To provide additional protection, you can optionally encrypt the stored passwords using a message digest algorithm. To enable the password encryption feature, edit the InstallDir /etc/org.apache.karaf.jaas.cfg file and set the encryption properties as described in the comments. For example, the following settings would enable basic encryption using the MD5 message digest algorithm: Note The encryption settings in the org.apache.karaf.jaas.cfg file are applied only to the default karaf realm in a Karaf container. They have no effect on a custom realm. For more details about password encryption, see Section 2.1.10, "Encrypting Stored Passwords" . 
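The property listings referred to above are not reproduced inline. As a point of reference, the entries described typically take the following shape; this is an illustrative sketch in which the user names, passwords, and group names are placeholders, and the encryption keys are the standard ones from the etc/org.apache.karaf.jaas.cfg template:
# etc/users.properties (Username = Password,Role1,Role2; a group is referenced as _g_:GroupName and defined with an escaped key)
jdoe = topsecret,admin
majorclanger = secretpassword,_g_:admingroup
_g_\:admingroup = group,admin
# etc/org.apache.karaf.jaas.cfg (basic MD5 hashing of stored passwords in the default karaf realm)
encryption.enabled = true
encryption.name = basic
encryption.algorithm = MD5
encryption.encoding = hexadecimal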
Overriding the default realm If you want to customise the JAAS realm, the most convenient approach to take is to override the default karaf realm by defining a higher ranking karaf realm. This ensures that all of the Red Hat Fuse security components switch to use your custom realm. For details of how to define and deploy custom JAAS realms, see Section 2.1.2, "Defining JAAS Realms" . 2.1.2. Defining JAAS Realms When defining a JAAS realm in the OSGi container, you cannot put the definitions in a conventional JAAS login configuration file. Instead, the OSGi container uses a special jaas:config element for defining JAAS realms in a blueprint configuration file. The JAAS realms defined in this way are made available to all of the application bundles deployed in the container, making it possible to share the JAAS security infrastructure across the whole container. Namespace The jaas:config element is defined in the http://karaf.apache.org/xmlns/jaas/v1.0.0 namespace. When defining a JAAS realm you need to include the line shown in Example 2.1, "JAAS Blueprint Namespace" . Example 2.1. JAAS Blueprint Namespace Configuring a JAAS realm The syntax for the jaas:config element is shown in Example 2.2, "Defining a JAAS Realm in Blueprint XML" . Example 2.2. Defining a JAAS Realm in Blueprint XML The elements are used as follows: jaas:config Defines the JAAS realm. It has the following attributes: name - specifies the name of the JAAS realm. rank - specifies an optional rank for resolving naming conflicts between JAAS realms . When two or more JAAS realms are registered under the same name, the OSGi container always picks the realm instance with the highest rank. If you decide to override the default realm, karaf , you should specify a rank of 100 or more, so that it overrides all of the previously installed karaf realms. jaas:module Defines a JAAS login module in the current realm. jaas:module has the following attributes: className - the fully-qualified class name of a JAAS login module. The specified class must be available from the bundle classloader. flags - determines what happens upon success or failure of the login operation. Table 2.1, "Flags for Defining a JAAS Module" describes the valid values. Table 2.1. Flags for Defining a JAAS Module Value Description required Authentication of this login module must succeed. Always proceed to the login module in this entry, irrespective of success or failure. requisite Authentication of this login module must succeed. If success, proceed to the login module; if failure, return immediately without processing the remaining login modules. sufficient Authentication of this login module is not required to succeed. If success, return immediately without processing the remaining login modules; if failure, proceed to the login module. optional Authentication of this login module is not required to succeed. Always proceed to the login module in this entry, irrespective of success or failure. The contents of a jaas:module element is a space separated list of property settings, which are used to initialize the JAAS login module instance. The specific properties are determined by the JAAS login module and must be put into the proper format. Note You can define multiple login modules in a realm. Converting standard JAAS login properties to XML Red Hat Fuse uses the same properties as a standard Java login configuration file, however Red Hat Fuse requires that they are specified slightly differently. 
To see how the Red Hat Fuse approach to defining JAAS realms compares with the standard Java login configuration file approach, consider how to convert the login configuration shown in Example 2.3, "Standard JAAS Properties" , which defines the PropertiesLogin realm using the Red Hat Fuse properties login module class, PropertiesLoginModule : Example 2.3. Standard JAAS Properties The equivalent JAAS realm definition, using the jaas:config element in a blueprint file, is shown in Example 2.4, "Blueprint JAAS Properties" . Example 2.4. Blueprint JAAS Properties <blueprint xmlns="http://www.osgi.org/xmlns/blueprint/v1.0.0" xmlns:jaas="http://karaf.apache.org/xmlns/jaas/v1.0.0" xmlns:ext="http://aries.apache.org/blueprint/xmlns/blueprint-ext/v1.0.0"> <jaas:config name="PropertiesLogin"> <jaas:module flags="required" className="org.apache.activemq.jaas.PropertiesLoginModule"> org.apache.activemq.jaas.properties.user=users.properties org.apache.activemq.jaas.properties.group=groups.properties </jaas:module> </jaas:config> </blueprint> Important Do not use double quotes for JAAS properties in the blueprint configuration. Example Red Hat Fuse also provides an adapter that enables you to store JAAS authentication data in an X.500 server. Example 2.5, "Configuring a JAAS Realm" defines the LDAPLogin realm to use Red Hat Fuse's LDAPLoginModule class, which connects to the LDAP server located at ldap://localhost:10389 . Example 2.5. Configuring a JAAS Realm <?xml version="1.0" encoding="UTF-8"?> <blueprint xmlns="http://www.osgi.org/xmlns/blueprint/v1.0.0" xmlns:jaas="http://karaf.apache.org/xmlns/jaas/v1.0.0" xmlns:ext="http://aries.apache.org/blueprint/xmlns/blueprint-ext/v1.0.0"> <jaas:config name="LDAPLogin" rank="200"> <jaas:module flags="required" className="org.apache.karaf.jaas.modules.ldap.LDAPLoginModule"> initialContextFactory=com.sun.jndi.ldap.LdapCtxFactory connection.username=uid=admin,ou=system connection.password=secret connection.protocol= connection.url = ldap://localhost:10389 user.base.dn = ou=users,ou=system user.filter = (uid=%u) user.search.subtree = true role.base.dn = ou=users,ou=system role.filter = (uid=%u) role.name.attribute = ou role.search.subtree = true authentication = simple </jaas:module> </jaas:config> </blueprint> For a detailed description and example of using the LDAP login module, see Section 2.1.7, "JAAS LDAP Login Module" . 2.1.3. JAAS Properties Login Module The JAAS properties login module stores user data in a flat file format (where the stored passwords can optionally be encrypted using a message digest algorithm). The user data can either be edited directly, using a simple text editor, or managed using the jaas:* console commands. For example, a Karaf container uses the JAAS properties login module by default and stores the associated user data in the InstallDir/etc/users.properties file. Supported credentials The JAAS properties login module authenticates username/password credentials, returning the list of roles associated with the authenticated user. Implementation classes The following classes implement the JAAS properties login module: org.apache.karaf.jaas.modules.properties.PropertiesLoginModule Implements the JAAS login module. org.apache.karaf.jaas.modules.properties.PropertiesBackingEngineFactory Must be exposed as an OSGi service. This service makes it possible for you to manage the user data using the jaas:* console commands from the Apache Karaf shell (see Apache Karaf Console Reference ). 
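For example, because the backing engine factory is registered as an OSGi service, entries for this module can also be maintained from the Karaf shell. A typical session might look like the following sketch, where jdoe , topsecret , and admin are placeholder values:
jaas:realm-manage --realm karaf
jaas:user-add jdoe topsecret
jaas:role-add jdoe admin
jaas:update
The jaas:update command applies the pending changes, which for this module means writing them back to the etc/users.properties file.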
Options The JAAS properties login module supports the following options: users Location of the user properties file. Format of the user properties file The user properties file is used to store username, password, and role data for the properties login module. Each user is represented by a single line in the user properties file, where a line has the following form: User groups can also be defined in this file, where each user group is represented by a single line in the following format: For example, you can define the users, bigcheese and guest , and the user groups, admingroup and guestgroup , as follows: Sample Blueprint configuration The following Blueprint configuration shows how to define a new karaf realm using the properties login module, where the default karaf realm is overridden by setting the rank attribute to 200 : Remember to export the BackingEngineFactory bean as an OSGi service, so that the jaas:* console commands can manage the user data. 2.1.4. JAAS OSGi Config Login Module Overview The JAAS OSGi config login modules leverages the OSGi Config Admin Service to store user data. This login module is fairly similar to the JAAS properties login module (for example, the syntax of the user entries is the same), but the mechanism for retrieving user data is based on the OSGi Config Admin Service. The user data can be edited directly by creating a corresponding OSGi configuration file, etc/ PersistentID .cfg or using any method of configuration that is supported by the OSGi Config Admin Service. The jaas:* console commands are not supported, however. Supported credentials The JAAS OSGi config login module authenticates username/password credentials, returning the list of roles associated with the authenticated user. Implementation classes The following classes implement the JAAS OSGi config login module: org.apache.karaf.jaas.modules.osgi.OsgiConfigLoginModule Implements the JAAS login module. Note There is no backing engine factory for the OSGi config login module, which means that this module cannot be managed using the jaas:* console commands. Options The JAAS OSGi config login module supports the following options: pid The persistent ID of the OSGi configuration containing the user data. In the OSGi Config Admin standard, a persistent ID references a set of related configuration properties. Location of the configuration file The location of the configuration file follows the usual convention where the configuration for the persistent ID, PersistentID , is stored in the following file: Format of the configuration file The PersistentID .cfg configuration file is used to store username, password, and role data for the OSGi config login module. Each user is represented by a single line in the configuration file, where a line has the following form: Note User groups are not supported in the JAAS OSGi config login module. Sample Blueprint configuration The following Blueprint configuration shows how to define a new karaf realm using the OSGi config login module, where the default karaf realm is overridden by setting the rank attribute to 200 : In this example, the user data will be stored in the file, InstallDir /etc/org.jboss.example.osgiconfigloginmodule.cfg , and it is not possible to edit the configuration using the jaas:* console commands. 2.1.5. JAAS Public Key Login Module The JAAS public key login module stores user data in a flat file format, which can be edited directly using a simple text editor. The jaas:* console commands are not supported, however. 
For example, a Karaf container uses the JAAS public key login module by default and stores the associated user data in the InstallDir/etc/keys.properties file. Supported credentials The JAAS public key login module authenticates SSH key credentials. When a user tries to log in, the SSH protocol uses the stored public key to challenge the user. The user must possess the corresponding private key in order to answer the challenge. If login is successful, the login module returns the list of roles associated with the user. Implementation classes The following classes implement the JAAS public key login module: org.apache.karaf.jaas.modules.publickey.PublickeyLoginModule Implements the JAAS login module. Note There is no backing engine factory for the public key login module, which means that this module cannot be managed using the jaas:* console commands. Options The JAAS public key login module supports the following options: users Location of the user properties file for the public key login module. Format of the keys properties file The keys.properties file is used to store username, public key, and role data for the public key login module. Each user is represented by a single line in the keys properties file, where a line has the following form: Where the PublicKey is the public key part of an SSH key pair (typically found in a user's home directory in ~/.ssh/id_rsa.pub in a UNIX system). For example, to create the user jdoe with the admin role, you would create an entry like the following: Important Do not insert the entire contents of the id_rsa.pub file here. Insert just the block of symbols which represents the public key itself. User groups can also be defined in this file, where each user group is represented by a single line in the following format: Sample Blueprint configuration The following Blueprint configuration shows how to define a new karaf realm using the public key login module, where the default karaf realm is overridden by setting the rank attribute to 200 : In this example, the user data will be stored in the file, InstallDir /etc/keys.properties , and it is not possible to edit the configuration using the jaas:* console commands. 2.1.6. JAAS JDBC Login Module Overview The JAAS JDBC login module enables you to store user data in a database back-end, using Java Database Connectivity (JDBC) to connect to the database. Hence, you can use any database that supports JDBC to store your user data. To manage the user data, you can use either the native database client tools or the jaas:* console commands (where the backing engine uses configured SQL queries to perform the relevant database updates). You can combine multiple login modules with each login module providing both the authentication and authorization components. For example, you can combine default PropertiesLoginModule with JDBCLoginModule to ensure access to the system. Note User groups are not supported in the JAAS JDBC login module. Supported credentials The JAAS JDBC Login Module authenticates username/password credentials, returning the list of roles associated with the authenticated user. Implementation classes The following classes implement the JAAS JDBC Login Module: org.apache.karaf.jaas.modules.jdbc.JDBCLoginModule Implements the JAAS login module. org.apache.karaf.jaas.modules.jdbc.JDBCBackingEngineFactory Must be exposed as an OSGi service. This service makes it possible for you to manage the user data using the jaas:* console commands from the Apache Karaf shell (see olink:FMQCommandRef/Consolejaas ). 
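The query and update options described below assume a simple pair of user and role tables in the backing database. The following is a minimal sketch of such a schema; the table and column names are illustrative and must match whatever SQL statements you configure:
CREATE TABLE users (
  username VARCHAR(255) NOT NULL PRIMARY KEY,
  password VARCHAR(255) NOT NULL
);
CREATE TABLE roles (
  username VARCHAR(255) NOT NULL,
  role VARCHAR(255) NOT NULL,
  PRIMARY KEY (username, role)
);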
Options The JAAS JDBC login module supports the following options: datasource The JDBC data source, specified either as an OSGi service or as a JNDI name. You can specify a data source's OSGi service using the following syntax: The ServiceInterfaceName is the interface or class that is exported by the data source's OSGi service (usually javax.sql.DataSource ). Because multiple data sources can be exported as OSGi services in a Karaf container, it is usually necessary to specify a filter, ServicePropertiesFilter , to select the particular data source that you want. Filters on OSGi services are applied to the service property settings and follow a syntax that is borrowed from LDAP filter syntax. query.password The SQL query that retrieves the user's password. The query can contain a single question mark character, ? , which is substituted by the username at run time. query.role The SQL query that retrieves the user's roles. The query can contain a single question mark character, ? , which is substituted by the username at run time. insert.user The SQL query that creates a new user entry. The query can contain two question marks, ? , characters: the first question mark is substituted by the username and the second question mark is substituted by the password at run time. insert.role The SQL query that adds a role to a user entry. The query can contain two question marks, ? , characters: the first question mark is substituted by the username and the second question mark is substituted by the role at run time. delete.user The SQL query that deletes a user entry. The query can contain a single question mark character, ? , which is substituted by the username at run time. delete.role The SQL query that deletes a role from a user entry. The query can contain two question marks, ? , characters: the first question mark is substituted by the username and the second question mark is substituted by the role at run time. delete.roles The SQL query that deletes multiple roles from a user entry. The query can contain a single question mark character, ? , which is substituted by the username at run time. Example of setting up a JDBC login module To set up a JDBC login module, perform the following main steps: the section called "Create the database tables" the section called "Create the data source" the section called "Specify the data source as an OSGi service" Create the database tables Before you can set up the JDBC login module, you must set up a users table and a roles table in the backing database to store the user data. For example, the following SQL commands show how to create a suitable users table and roles table: The users table stores username/password data and the roles table associates a username with one or more roles. Create the data source To use a JDBC datasource with the JDBC login module, the correct approach to take is to create a data source instance and export the data source as an OSGi service. The JDBC login module can then access the data source by referencing the exported OSGi service. For example, you could create a MySQL data source instance and expose it as an OSGi service (of javax.sql.DataSource type) using code like the following in a Blueprint file: The preceding Blueprint configuration should be packaged and installed in the Karaf container as an OSGi bundle. Specify the data source as an OSGi service After the data source has been instantiated and exported as an OSGi service, you are ready to configure the JDBC login module. 
In particular, the datasource option of the JDBC login module can reference the data source's OSGi service using the following syntax: Where javax.sql.DataSource is the interface type of the exported OSGi service and the filter, (osgi.jndi.service.name=jdbc/karafdb) , selects the particular javax.sql.DataSource instance whose osgi.jndi.service.name service property has the value, jdbc/karafdb . For example, you can use the following Blueprint configuration to override the karaf realm with a JDBC login module that references the sample MySQL data source: Note The SQL statements shown in the preceding configuration are in fact the default values of these options. Hence, if you create user and role tables consistent with these SQL statements, you could omit the options settings and rely on the defaults. In addition to creating a JDBCLoginModule, the preceding Blueprint configuration also instantiates and exports a JDBCBackingEngineFactory instance, which enables you to manage the user data using the jaas:* console commands. 2.1.7. JAAS LDAP Login Module Overview The JAAS LDAP login module enables you to store user data in an LDAP database. To manage the stored user data, use a standard LDAP client tool. The jaas:* console commands are not supported. For more details about using LDAP with Red Hat Fuse see LDAP Authentication Tutorial . Note User groups are not supported in the JAAS LDAP login module. Supported credentials The JAAS LDAP Login Module authenticates username/password credentials, returning the list of roles associated with the authenticated user. Implementation classes The following classes implement the JAAS LDAP Login Module: org.apache.karaf.jaas.modules.ldap.LDAPLoginModule Implements the JAAS login module. It is preloaded in the Karaf container, so you do not need to install its bundle. Note There is no backing engine factory for the LDAP Login Module, which means that this module cannot be managed using the jaas:* console commands. Options The JAAS LDAP login module supports the following options: authentication Specifies the authentication method used when binding to the LDAP server. Valid values are simple - bind with user name and password authentication, requiring you to set the connection.username and connection.password properties. none - bind anonymously. In this case the connection.username and connection.password properties can be left unassigned. Note The connection to the directory server is used only for performing searches. In this case, an anonymous bind is often preferred, because it is faster than an authenticated bind (but you would also need to ensure that the directory server is sufficiently protected, for example by deploying it behind a firewall). connection.url Specifies specify the location of the directory server using an ldap URL, ldap:// Host : Port . You can optionally qualify this URL, by adding a forward slash, / , followed by the DN of a particular node in the directory tree. To enable SSL security on the connection, you need to specify the ldaps: scheme in the URL- for example, ldaps:// Host : Port . You can also specify multiple URLs, as a space-separated list, for example: connection.username Specifies the DN of the user that opens the connection to the directory server. For example, uid=admin,ou=system . If the DN contains a whitespace, LDAPLoginModule cannot parse it. The only solution is to add double quotes around the DN name that contains the whitespace and then add the backslash to escape the quotes. 
For example, uid=admin,ou=\"system index\" . connection.password Specifies the password that matches the DN from connection.username . In the directory server, the password is normally stored as a userPassword attribute in the corresponding directory entry. context.com.sun.jndi.ldap.connect.pool If true , enables connection pooling for LDAP connections. Default is false . context.com.sun.jndi.ldap.connect.timeout Specifies the timeout for creating a TCP connection to the LDAP server, in units of milliseconds. We recommend that you set this property explicitly, because the default value is infinite, which can result in a hung connection attempt. context.com.sun.jndi.ldap.read.timeout Specifies the read timeout for an LDAP operation, in units of milliseconds. We recommend that you set this property explicitly, because the default value is infinite. context.java.naming.referral An LDAP referral is a form of indirection supported by some LDAP servers. The LDAP referral is an entry in the LDAP server which contains one or more URLs (usually referencing a node or nodes in another LDAP server). The context.java.naming.referral property can be used to enable or disable referral following. It can be set to one of the following values: follow to follow the referrals (assuming it is supported by the LDAP server), ignore to silently ignore all referrals, throw to throw a PartialResultException whenever a referral is encountered. disableCache The user and role caches can be disabled by setting this property to true . Default is false . initial.context.factory Specifies the class of the context factory used to connect to the LDAP server. This must always be set to com.sun.jndi.ldap.LdapCtxFactory . role.base.dn Specifies the DN of the subtree of the DIT to search for role entries. For example, ou=groups,ou=system . role.filter Specifies the LDAP search filter used to locate roles. It is applied to the subtree selected by role.base.dn . For example, (member=uid=%u) . Before being passed to the LDAP search operation, the value is subjected to string substitution, as follows: %u is replaced by the user name extracted from the incoming credentials, and %dn is replaced by the RDN of the corresponding user in the LDAP server (which was found by matching against the user.filter filter). %fqdn is replaced by the DN of the corresponding user in the LDAP server (which was found by matching against the user.filter filter). role.mapping Specifies the mapping between LDAP groups and JAAS roles. If no mapping is specified, the default mapping is for each LDAP group to map to the corresponding JAAS role of the same name. The role mapping is specified with the following syntax: Where each LDAP group, ldap-group , is specified by its Common Name (CN). For example, given the LDAP groups, admin , devop , and tester , you could map them to JAAS roles, as follows: role.name.attribute Specifies the attribute type of the role entry that contains the name of the role/group. If you omit this option, the role search feature is effectively disabled. For example, cn . role.search.subtree Specifies whether the role entry search scope includes the subtrees of the tree selected by role.base.dn . If true , the role lookup is recursive ( SUBTREE ). If false , the role lookup is performed only at the first level ( ONELEVEL ). ssl Specifies whether the connection to the LDAP server is secured using SSL. If connection.url starts with ldaps:// SSL is used regardless of this property. 
ssl.provider Specifies the SSL provider to use for the LDAP connection. If not specified, the default SSL provider is used. ssl.protocol Specifies the protocol to use for the SSL connection. You must set this property to TLSv1 , in order to prevent the SSLv3 protocol from being used (POODLE vulnerability). ssl.algorithm Specifies the algorithm used by the trust store manager. For example, PKIX . ssl.keystore The ID of the keystore that stores the LDAP client's own X.509 certificate (required only if SSL client authentication is enabled on the LDAP server). The keystore must be deployed using a jaas:keystore element (see the section called "Sample configuration for Apache DS" ). ssl.keyalias The keystore alias of the LDAP client's own X.509 certificate (required only if there is more than one certificate stored in the keystore specified by ssl.keystore ). ssl.truststore The ID of the keystore that stores trusted CA certificates, which are used to verify the LDAP server's certificate (the LDAP server's certificate chain must be signed by one of the certificates in the truststore). The keystore must be deployed using a jaas:keystore element. user.base.dn Specifies the DN of the subtree of the DIT to search for user entries. For example, ou=users,ou=system . user.filter Specifies the LDAP search filter used to locate user credentials. It is applied to the subtree selected by user.base.dn . For example, (uid=%u) . Before being passed to the LDAP search operation, the value is subjected to string substitution, as follows: %u is replaced by the user name extracted from the incoming credentials. user.search.subtree Specifies whether the user entry search scope includes the subtrees of the tree selected by user.base.dn . If true , the user lookup is recursive ( SUBTREE ). If false , the user lookup is performed only at the first level ( ONELEVEL ). Sample configuration for Apache DS The following Blueprint configuration shows how to define a new karaf realm using the LDAP login module, where the default karaf realm is overridden by setting the rank attribute to 200 , and the LDAP login module connects to an Apache Directory Server: Note In order to enable SSL, you must remember to use the ldaps scheme in the connection.url setting. Important You must set ssl.protocol to TLSv1 (or later), in order to protect against the Poodle vulnerability (CVE-2014-3566) Filter settings for different directory servers The most significant differences between directory servers arise in connection with setting the filter options in the LDAP login module. The precise settings depend ultimately on the organisation of your DIT, but the following table gives an idea of the typical role filter settings required for different directory servers: Directory Server Typical Filter Settings 389-DS Red Hat DS MS Active Directory Apache DS OpenLDAP Note In the preceding table, the & symbol (representing the logical And operator) is escaped as & because the option settings will be embedded in a Blueprint XML file. 2.1.8. JAAS Log Audit Login Module The login module org.apache.karaf.jaas.modules.audit.LogAuditLoginModule provides robust logging of authentication attempts. It supports standard log management capabilities such as setting a maximum file size, log rotation, file compression, and filtering. You establish settings for these options in the logging configuration file. By default, authentication audit logging is disabled. 
Enabling logging requires you to define a logging configuration and an audit configuration, and then link the two together. In the logging configuration you specify properties for a file appender process, and a logger process. The file appender publishes information about authentication events to a specified file. The logger is a mechanism that captures information about authentication events and makes it available to the appenders that you specify. You define the logging configuration in the standard Karaf Log4j logging configuration file, etc/org.ops4j.pax.logging.cfg . The audit configuration enables audit logging and links to the logging infrastructure to be used. You define the audit configuration in the file etc/org.apache.karaf.jaas.cfg . Appender configuration By default, the standard Karaf Log4j configuration file ( etc/org.ops4j.pax.logging.cfg ) defines an audit logging appender with the name AuditRollingFile . The following excerpt from a sample configuration file shows the properties of an appender that writes to an audit log file at USD{karaf.data}/security/audit.log : To use the appender, you must configure a logger that provides the information for the appender to publish to a log file. Logger configuration By default, the Karaf Log4j configuration file etc/org.ops4j.pax.logging.cfg establishes an audit logger with the name org.apache.karaf.jaas.modules.audit . In the following excerpt from a sample configuration file, the default logger is configured to provide information about authentication events to an appender with the name AuditRollingFile : The value of log4j2.logger.audit.appenderRef.AuditRollingFile.ref must match the value of log4j2.appender.audit.name in the Audit file appender section of etc/org.ops4j.pax.logging.cfg . 2.1.8.1. Enabling Authentication Audit Logging After you establish the logging configuration, you can turn on audit logging and connect the logging configuration to the audit configuration. To enable audit logging, insert the following lines in etc/org.apache.karaf.jaas.cfg : The <logger.name> represents in dot-separated format any standard logger (category) name that is established by the Apache Log4J and Log4J2 libraries, for example, org.jboss.fuse.audit or com.example.audit . The <level>` represents a log level setting, such as WARN , INFO , TRACE , or DEBUG . For example, in the following excerpt from a sample audit configuration file, the audit log is enabled and it is configured to use the audit logger with the name org.apache.karaf.jaas.modules.audit : The value for audit.log.logger must match the value of log4j2.logger.audit.name in the Karaf Log4j configuration file ( etc/org.ops4j.pax.logging.cfg ). After you update a file, the Apache Felix File Install bundle detects the change and updates the configuration in the Apache Felix Configuration Administration Service ( Config Admin ). The settings from the Config Admin are then passed to the logging infrastructure. Apache Karaf shell commands for updating configuration files You can edit configuration files in <FUSE_HOME>/etc directly, or you can run Apache Karaf config:* commands to update the Config Admin. When you use the config* commands to update the configuration, the Apache Felix File Install bundle is notified about the changes and automatically updates the relevant etc/*.cfg files. 
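The configuration excerpts referred to in the preceding subsections are not reproduced inline. Pulled together, a minimal sketch of the two files looks like the following; the appender settings other than the appender and logger names are illustrative, and the audit.log.enabled switch is an assumption based on the standard template, so verify both against the files shipped with your installation:
# etc/org.ops4j.pax.logging.cfg (audit file appender and audit logger)
log4j2.appender.audit.type = RollingRandomAccessFile
log4j2.appender.audit.name = AuditRollingFile
log4j2.appender.audit.fileName = ${karaf.data}/security/audit.log
log4j2.appender.audit.filePattern = ${karaf.data}/security/audit.log.%i
log4j2.appender.audit.layout.type = PatternLayout
log4j2.appender.audit.policies.type = Policies
log4j2.appender.audit.policies.size.type = SizeBasedTriggeringPolicy
log4j2.appender.audit.policies.size.size = 8MB
log4j2.logger.audit.name = org.apache.karaf.jaas.modules.audit
log4j2.logger.audit.level = INFO
log4j2.logger.audit.additivity = false
log4j2.logger.audit.appenderRef.AuditRollingFile.ref = AuditRollingFile
# etc/org.apache.karaf.jaas.cfg (enable audit logging and link it to the logger above)
audit.log.enabled = true
audit.log.logger = org.apache.karaf.jaas.modules.audit
audit.log.level = INFO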
Example: Using a config command to list the properties for the JAAS realm To list the properties in the JAAS realm, from a shell prompt, type the following command: config:property-list --pid org.apache.karaf.jaas The command returns the current properties for the realm, for example: Example: Using a config command to change the audit log level To change the audit log level for the realm to DEBUG , from a shell prompt, type the following command: config:property-set --pid org.apache.karaf.jaas audit.log.level DEBUG To verify that the change is effective, list the properties again to check the value for audit.log.level . 2.1.9. JAAS File Audit Login Module The authentication module org.apache.karaf.jaas.modules.audit.FileAuditLoginModule provides basic logging of authentication attempts. The File Audit Login module writes directly to a specified file. Configuration is simple, because it does not rely on the Pax logging infrastructure. But unlike the Log Audit Login Module , it does not support log management features, such as pattern filtering, log file rotation, and so forth. To enable audit logging with the FileAuditLoginModule , insert the following lines in etc/org.apache.karaf.jaas.cfg : Note Typically, you would not configure audit logging through both the File Audit Login Module and the Log Audit Login Module . If you enable logging through both modules, you can avoid loss of data by configuring each module to use a unique target log file. 2.1.10. Encrypting Stored Passwords By default, the JAAS login modules store passwords in plaintext format. Although you can (and should) protect such data by setting file permissions appropriately, you can provide additional protection to passwords by storing them in an obscured format (using a message digest algorithm). Red Hat Fuse provides a set of options for enabling password encryption, which can be combined with any of the JAAS login modules (except the public key login module, where it is not needed). Important Although message digest algorithms are difficult to crack, they are not invulnerable to attack (for example, see the Wikipedia article on cryptographic hash functions ). Always use file permissions to protect files containing passwords, in addition to using password encryption. Options You can optionally enable password encryption for JAAS login modules by setting the following login module properties. To do so, either edit the InstallDir /etc/org.apache.karaf.jaas.cfg file or deploy your own blueprint file as described in the section called "Example of a login module with Jasypt encryption" . encryption.enabled Set to true , to enable password encryption. encryption.name Name of the encryption service, which has been registered as an OSGi service. encryption.prefix Prefix for encrypted passwords. encryption.suffix Suffix for encrypted passwords. encryption.algorithm Specifies the name of the encryption algorithm- for example, MD5 or SHA-1 . You can specify one of the following encryption algorithms: MD2 MD5 SHA-1 SHA-256 SHA-384 SHA-512 encryption.encoding Encrypted passwords encoding: hexadecimal or base64 . encryption.providerName (Jasypt only) Name of the java.security.Provider instance that is to provide the digest algorithm. encryption.providerClassName (Jasypt only) Class name of the security provider that is to provide the digest algorithm encryption.iterations (Jasypt only) Number of times to apply the hash function recursively. encryption.saltSizeBytes (Jasypt only) Size of the salt used to compute the digest. 
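The exact lines are not reproduced here. A plausible sketch, using the audit.file.* keys found in the standard etc/org.apache.karaf.jaas.cfg template (verify the key names against your installation), is:
audit.file.enabled = true
audit.file.file = ${karaf.data}/security/audit-file.log
Pointing this module at a file of its own, rather than the file used by the Log Audit Login Module, avoids the clash described in the note that follows.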
encryption.saltGeneratorClassName (Jasypt only) Class name of the salt generator. role.policy Specifies the policy for identifying role principals. Can have the values, prefix or group . role.discriminator Specifies the discriminator value to be used by the role policy. Encryption services There are two encryption services provided by Fuse: encryption.name = basic , described in the section called "Basic encryption service" , encryption.name = jasypt , described in the section called "Jasypt encryption" . You can also create your own encryption service. To do so, you need to: Implement the org.apache.karaf.jaas.modules.EncryptionService interface, and Expose your implementation as OSGI service. The following listing shows how to expose a custom encryption service to the OSGI container: <blueprint xmlns="http://www.osgi.org/xmlns/blueprint/v1.0.0"> <service interface="org.apache.karaf.jaas.modules.EncryptionService"> <service-properties> <entry key="name" value="jasypt" /> </service-properties> <bean class="org.apache.karaf.jaas.jasypt.impl.JasyptEncryptionService"/> </service> ... </blueprint> Basic encryption service The basic encryption service is installed in the Karaf container by default and you can reference it by setting the encryption.name property to the value, basic . In the basic encryption service, the message digest algorithms are provided by the SUN security provider (the default security provider in the Oracle JDK). Jasypt encryption The Jasypt encryption service is normally installed by default on Karaf. If necessary, you can install it explicitly by installing the jasypt-encryption feature, as follows: This command installs the requisite Jasypt bundles and exports Jasypt encryption as an OSGi service, so that it is available for use by JAAS login modules. For more information about Jasypt encryption, see the Jasypt documentation . Example of a login module with Jasypt encryption By default, the passwords are stored in clear form in the etc/users.properties file. It is possible to enable encryption by installing jasypt-encryption feature and modifying the etc/org.apache.karaf.jaas.cfg configuration file. Install feature jasypt-encryption . This will install the jasypt service. Now you can use the jaas commands to create users. Open the USDFUSE_HOME/etc/org.apache.karaf.jaas.cfg file and modify it as follows. Set the encryption.enabled = true , encryption.name = jasypt , and in this case encryption.algorithm = SHA-256 . There are other encryption.algorithm options available, you can set it as per your requirement. Enter jaas:realms command on the Karaf console to view the deployed login modules. Enter the following commands to create users. Now if you look at the USDFUSE_HOME/etc/users.properties file you can see that the user usertest is added to the file. You can test the newly created login in a different terminal as you have already run a jaas:update command. 2.1.11. JAAS integration with HTTP Basic Authentication You can use Servlet REST to define REST endpoints in Camel routes using REST DSL. The following example shows how the REST endpoint which is protected by HTTP Basic Authentication delegates user authentication to the Karaf JAAS service. Procedure Assuming that you have installed Apache Camel in CamelInstallDir , you can find the example in the following directory: Use Maven to build and install the example as an OSGi bundle. 
Open a command prompt, switch the current directory to CamelInstallDir/examples/camel-example-servlet-rest-karaf-jaas , and enter the following command: To copy the security configuration file to the KARAF_HOME/etc folder, enter the following command: To install Apache Camel in Karaf enter the following commands in the Karaf shell console: The camel-servlet , camel-jackson and war Karaf features are also required, enter the following commands to install these features: To install the camel-example-servlet-rest-karaf-jaas example, enter the following command: Result To confirm that the application is running, you can view the application log files by entering the following command (use ctrl+c to stop tailing the log): The REST user endpoint supports the following operations: GET /user/{id} - to view a user with the given id GET /user/final - to view all users PUT /user - to update/create an user Note The view operations uses HTTP GET , and update the update operation uses HTTP PUT . 2.1.11.1. Accessing the REST service from a web browser From a web browser you can access the services using the following examples (you need to input admin as the user and admin as the password in the pop-up dialog box): Example: View user id 123 Example: List all users 2.1.11.2. Accessing the REST service from the command line From the command line you can use curl to access the REST user endpoint as demonstrated in the following examples: Example: View user id 123 Example: View all users Example: Create or update user id 234 2.2. Role-Based Access Control Abstract This section describes the role-based access control (RBAC) feature, which is enabled by default in the Karaf container. You can immediately start taking advantage of the RBAC feature, simply by adding one of the standard roles (such as manager or admin ) to a user's credentials. For more advanced usage, you have the option of customizing the access control lists, in order to control exactly what each role can do. Finally, you have the option of applying custom ACLs to your own OSGi services. 2.2.1. Overview of Role-Based Access Control By default, the Fuse role-based access control protects access through the Fuse Management Console, JMX connections, and the Karaf command console. To use the default levels of access control, simply add any of the standard roles to your user authentication data (for example, by editing the users.properties file). You also have the option of customizing access control, by editing the relevant Access Control List (ACL) files. Mechanisms Role-based access control in Karaf is based on the following mechanisms: JMX Guard The Karaf container is configured with a JMX guard, which intercepts every incoming JMX invocation and filters the invocation through the configured JMX access control lists. The JMX guard is configured at the JVM level, so it intercepts every JMX invocation, without exception. OSGi Service Guard For any OSGi service, it is possible to configure an OSGi service guard. The OSGi service guard is implemented as a proxy object, which interposes itself between the client and the original OSGi service. An OSGi service guard must be explicitly configured for each OSGi service: it is not installed by default (except for the OSGi services that represent Karaf console commands, which are preconfigured for you). 
Types of protection The Fuse implementation of role-based access control is capable of providing the following types of protection: Fuse Console (Hawtio) Container access through the Fuse Console (Hawtio) is controlled by the JMX ACL files. The REST/HTTP service that provides the Fuse Console is implemented using Jolokia technology, which is layered above JMX. Hence, ultimately, all Fuse Console invocations pass through JMX and are regulated by JMX ACLs. JMX Direct access to the Karaf container's JMX port is regulated by the JMX ACLs. Moreover, any additional JMX ports opened by an application running in the Karaf container would also be regulated by the JMX ACLs, because the JMX guard is set at the JVM level. Karaf command console Access to the Karaf command console is regulated by the command console ACL files. Access control is applied no matter how the Karaf console is accessed. Whether accessing the command console through the Fuse Console or through the SSH protocol, access control is applied in both cases. Note In the special case where you start up the Karaf container directly at the command line (for example, using the ./bin/fuse script) and no user authentication is performed, you automatically get the roles specified by the karaf.local.roles property in the etc/system.properties file. OSGi services For any OSGi service deployed in the Karaf container, you can optionally enable an ACL file, which restricts method invocations to specific roles. Adding roles to users In the system of role-based access control, you can give users permissions by adding roles to their user authentication data. For example, the following entry in the etc/users.properties file defines the admin user and grants the admin role. You also have the option of defining user groups and then assigning users to a particular user group. For example, you could define and use an admingroup user group as follows: Note User groups are not supported by every type of JAAS login module. Standard roles Table 2.2, "Standard Roles for Access Control" lists and describes the standard roles that are used throughout the JMX ACLs and the command console ACLs. Table 2.2. Standard Roles for Access Control Roles Description viewer Grants read-only access to the Karaf container. manager Grants read-write access at the appropriate level for ordinary users, who want to deploy and run applications. But blocks access to sensitive Karaf container configuration settings. admin Grants unrestricted access to the Karaf container. ssh Grants users permission to connect to the Karaf command console (through the ssh port). ACL files The standard set of ACL files are located under the etc/auth/ directory of the Fuse installation, as follows: etc/auth/jmx.acl[.*].cfg JMX ACL files. etc/auth/org.apache.karaf.command.acl.*.cfg Command console ACL files. Customizing role-based access control A complete set of JMX ACL files and command console ACL files are provided by default. You are free to customize these ACLs as required to suit the requirements of your system. Details of how to do this are given in the following sections. Additional properties for controlling access The system.properties file under the etc directory provides the following additional properties for controlling access through the Karaf command console and the Fuse Console (Hawtio): karaf.local.roles Specifies the roles that apply when a user starts up the Karaf container console locally (for example, by running the script). 
hawtio.roles Specifies the roles that are allowed to access the Karaf container through the Fuse Console. This constraint is applied in addition to the access control defined by the JMX ACL files. karaf.secured.command.compulsory.roles Specifies the default roles required to invoke a Karaf console command, in case the console command is not configured explicitly by a command ACL file, etc/auth/org.apache.karaf.command.acl.*.cfg . A user must be configured with at least one of the roles from the list in order to invoke the command. The value is specified as a comma-separated list of roles. 2.2.2. Customizing the JMX ACLs The JMX ACLs are stored in the OSGi Config Admin Service and are normally accessible as the files, etc/auth/jmx.acl.*.cfg . This section explains how you can customize the JMX ACLs by editing these files yourself. Architecture Figure 2.1, "Access Control Mechanism for JMX" shows an overview of the role-based access control mechanism for JMX connections to the Karaf container. Figure 2.1. Access Control Mechanism for JMX How it works JMX access control works by providing remote access to JMX through a special javax.management.MBeanServer object. This object acts as a proxy by invoking an org.apache.karaf.management.KarafMBeanServerGuard object, which is referred to as JMX guard. JMX guard is available without special configuration in startup files. JMX access control is applied as follows: For every non-local JMX invocation, JMX guard is called before the actual MBean invocation. The JMX Guard looks up the relevant ACL for the MBean the user is trying to access (where the ACLs are stored in the OSGi Config Admin service). The ACL returns the list of roles that are allowed to make this particular invocation on the MBean. The JMX Guard checks the list of roles against the current security subject (the user that is making the JMX invocation), to see whether the current user has any of the required roles. If no matching role is found, the JMX invocation is blocked and a SecurityException is raised. Location of JMX ACL files The JMX ACL files are located in the InstallDir /etc/auth directory, where the ACL file names obey the following convention: Technically, the ACLs are mapped to OSGi persistent IDs (PIDs), matching the pattern, jmx.acl[.*] . It just so happens that the Karaf container stores OSGi PIDs as files, PID.cfg , under the etc/ directory by default. Mapping MBeans to ACL file names The JMX Guard applies access control to every MBean class that is accessed through JMX (including any MBeans you define in your own application code). The ACL file for a specific MBean class is derived from the MBean's Object Name, by prefixing it with jmx.acl . For example, given the MBean whose Object Name is given by org.apache.camel:type=context , the corresponding PID would be: The OSGi Config Admin service stores this PID data in the following file: ACL file format Each line of a JMX ACL file is an entry in the following format: Where Pattern is a pattern that matches a method invocation on an MBean, and the right-hand side of the equals sign is a comma-separated list of roles that give a user permission to make that invocation. In the simplest cases, the Pattern is simply a method name. For example, as in the following settings for the jmx.acl.hawtio.OSGiTools MBean (from the jmx.acl.hawtio.OSGiTools.cfg file): It is also possible to use the wildcard character, * , to match multiple method names. 
For example, the following entry gives permission to invoke all method names starting with set : But the ACL syntax is also capable of defining much more fine-grained control of method invocations. You can define patterns to match methods invoked with specific arguments or even arguments that match a regular expression. For example, the ACL for the org.apache.karaf.config MBean package exploits this capability to prevent ordinary users from modifying sensitive configuration settings. The create method from this package is restricted, as follows: In this case, the manager role generally has permission to invoke the create method, but only the admin role has permission to invoke create with a PID argument matching jmx.acl.* , org.apache.karaf.command.acl.* , or org.apache.karaf.service.* . For complete details of the ACL file format, please see the comments in the etc/auth/jmx.acl.cfg file. ACL file hierarchy Because it is often impractical to provide an ACL file for every single MBean, you have the option of specifying an ACL file at the level of a Java package, which provides default settings for all of the MBeans in that package. For example, the org.apache.cxf.Bus MBean could be affected by ACL settings at any of the following PID levels: Where the most specific PID (top of the list) takes precedence over the least specific PID (bottom of the list). Root ACL definitions The root ACL file, jmx.acl.cfg , is a special case, because it supplies the default ACL settings for all MBeans. The root ACL has the following settings by default: This implies that the typical read method patterns ( list* , get* , is* ) are accessible to all standard roles, but the typical write method patterns and other methods ( set* and \* ) are accessible only to the admin role, admin . Package ACL definitions Many of the standard JMX ACL files provided in etc/auth/jmx.acl[.*].cfg apply to MBean packages. For example, the ACL for the org.apache.camel.endpoints MBean package is defined with the following permissions: ACL for custom MBeans If you define custom MBeans in your own application, these custom MBeans are automatically integrated with the ACL mechanism and protected by the JMX Guard when you deploy them into the Karaf container. By default, however, your MBeans are typically protected only by the default root ACL file, jmx.acl.cfg . If you want to define a more fine-grained ACL for your MBean, create a new ACL file under etc/auth , using the standard JMX ACL file naming convention. For example, if your custom MBean class has the JMX Object Name, org.example:type=MyMBean , create a new ACL file under the etc/auth directory called: Dynamic configuration at run time Because the OSGi Config Admin service is dynamic, you can change ACL settings while the system is running, and even while a particular user is logged on. Hence, if you discover a security breach while the system is running, you can immediately restrict access to certain parts of the system by editing the relevant ACL file, without having to restart the Karaf container. 2.2.3. Customizing the Command Console ACLs The command console ACLs are stored in the OSGi Config Admin Service and are normally accessible as the files, etc/auth/org.apache.karaf.command.acl.*.cfg . This section explains how you can customize the command console ACLs by editing these files yourself. Architecture Figure 2.2, "Access Control Mechanism for OSGi Services" shows an overview of the role-based access control mechanism for OSGi services in the Karaf container. Figure 2.2. 
Access Control Mechanism for OSGi Services How it works The mechanism for command console access control is, in fact, based on the generic access control mechanism for OSGi services. It so happens that console commands are implemented and exposed as OSGi services. The Karaf console itself discovers the available commands through the OSGi service registry and accesses the commands as OSGi services. Hence, the access control mechanism for OSGi services can be used to control access to console commands. The mechanism for securing OSGi services is based on OSGi Service Registry Hooks. This is an advanced OSGi feature that makes it possible to hide OSGi services from certain consumers and to replace an OSGi service with a proxy service. When a service guard is in place for a particular OSGi service, a client invocation on the OSGi service proceeds as follows: The invocation does not go directly to the requested OSGi service. Instead, the request is routed to a replacement proxy service, which has the same service properties as the original service (and some extra ones). The service guard looks up the relevant ACL for the target OSGi service (where the ACLs are stored in the OSGi Config Admin service). The ACL returns the list of roles that are allowed to make this particular method invocation on the service. If no ACL is found for this command, the service guard defaults to the list of roles specified in the karaf.secured.command.compulsory.roles property in the etc/system.properties file. The service guard checks the list of roles against the current security subject (the user that is making the method invocation), to see whether the current user has any of the required roles. If no matching role is found, the method invocation is blocked and a SecurityException is raised. Alternatively, if a matching role is found, the method invocation is delegated to the original OSGi service. Configuring default security roles For any commands that do not have a corresponding ACL file, you specify a default list of security roles by setting the karaf.secured.command.compulsory.roles property in the etc/system.properties file (specified as a comma-separated list of roles). Location of command console ACL files The command console ACL files are located in the InstallDir /etc/auth directory, with the prefix, org.apache.karaf.command.acl . Mapping command scopes to ACL file names The command console ACL file names obey the following convention: Where the CommandScope corresponds to the prefix for a particular group of Karaf console commands. For example, the feature:install and feature:uninstall commands belong to the feature command scope, which has the corresponding ACL file, org.apache.karaf.command.acl.feature.cfg . ACL file format Each line of a command console ACL file is an entry in the following format: Where Pattern is a pattern that matches a Karaf console command from the current command scope, and the right-hand side of the equals sign is a comma-separated list of roles that give a user permission to make that invocation. In the simplest cases, the Pattern is simply an unscoped command name. For example, the org.apache.karaf.command.acl.feature.cfg ACL file includes the following rules for the feature commands: Important If no match is found for a specific command name, it is assumed that no role is required for this command and it can be invoked by any user. You can also define patterns to match commands invoked with specific arguments or even arguments that match a regular expression.
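Before looking at how the argument-matching patterns are used (see the bundle ACL example that follows), here is a minimal sketch of the simple form. The command names are real feature commands, but the role assignments are hypothetical and do not reproduce the ACL file shipped with Fuse:

```
# etc/auth/org.apache.karaf.command.acl.feature.cfg   (illustrative entries only)
install = admin
uninstall = admin
```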
For example, the org.apache.karaf.command.acl.bundle.cfg ACL file exploits this capability to prevent ordinary users from invoking the bundle:start and bundle:stop commands with the -f (force) flag (which must be specified to manage system bundles). This restriction is coded as follows in the ACL file: In this case, the manager role generally has permission to invoke the bundle:start and bundle:stop commands, but only the admin role has permission to invoke these commands with the force option, -f . For complete details of the ACL file format, please see the comments in the etc/auth/org.apache.karaf.command.acl.bundle.cfg file. Dynamic configuration at run time The command console ACL settings are fully dynamic, which means you can change the ACL settings while the system is running and the changes will take effect within a few seconds, even for users that are already logged on. 2.2.4. Defining ACLs for OSGi Services It is possible to define a custom ACL for any OSGi service (whether system level or application level). By default, OSGi services do not have access control enabled (with the exception of the OSGi services that expose Karaf console commands, which are pre-configured with command console ACL files). This section explains how to define a custom ACL for an OSGi service and how to invoke methods on that service using a specified role. ACL file format An OSGi service ACL file has one special entry, which identifies the OSGi service to which this ACL applies, as follows: Where the value of service.guard is an LDAP search filter that is applied to the registry of OSGi service properties in order to pick out the matching OSGi service. The simplest type of filter, (objectClass= InterfaceName ) , picks out an OSGi service with the specified Java interface name, InterfaceName . The remaining entries in the ACL file are of the following form: Where Pattern is a pattern that matches a service method, and the right-hand side of the equals sign is a comma-separated list of roles that give a user permission to make that invocation. The syntax of these entries is essentially the same as the entries in a JMX ACL file; see the section called "ACL file format" . How to define an ACL for a custom OSGi service To define an ACL for a custom OSGi service, perform the following steps: It is customary to define an OSGi service using a Java interface (you could use a regular Java class, but this is not recommended). For example, consider the Java interface, MyService , which we intend to expose as an OSGi service: To expose the Java interface as an OSGi service, you would typically add a service element to an OSGi Blueprint XML file (where the Blueprint XML file is typically stored under the src/main/resources/OSGI-INF/blueprint directory in a Maven project). For example, assuming that MyServiceImpl is the class that implements the MyService interface, you could expose the MyService OSGi service as follows: To define an ACL for the OSGi service, you must create an OSGi Config Admin PID with the prefix, org.apache.karaf.service.acl . For example, in the case of a Karaf container (where the OSGi Config Admin PIDs are stored as .cfg files under the etc/auth/ directory), you can create the following ACL file for the MyService OSGi service: Note It does not matter exactly how you name this file, as long as it starts with the required prefix, org.apache.karaf.service.acl .
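For instance, you might create the file at etc/auth/org.apache.karaf.service.acl.myservice.cfg (the myservice name matches the PID referenced later in this section). As a preview of the next steps, a minimal sketch of its contents could look like the following; the fully qualified interface name org.example.MyService and the method name doit are assumptions made here for illustration only:

```
# etc/auth/org.apache.karaf.service.acl.myservice.cfg   (sketch)
# Identifies the OSGi service that this ACL applies to
service.guard = (objectClass=org.example.MyService)
# Method patterns mapped to the roles that may invoke them
doit = admin, manager
```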
The corresponding OSGi service for this ACL file is actually specified by a property setting in this file (as you will see in the next step). Specify the contents of the ACL file in a format like the following: The service.guard setting specifies the InterfaceName of the OSGi service (using the syntax of an LDAP search filter, which is applied to the OSGi service properties). The other entries in the ACL file consist of a method Pattern , which associates a matching method to the specified roles. For example, you could define a simple ACL for the MyService OSGi service with the following settings in the org.apache.karaf.service.acl.myservice.cfg file: Finally, in order to enable the ACL for this OSGi service, you must edit the karaf.secured.services property in the etc/system.properties file. The value of the karaf.secured.services property has the syntax of an LDAP search filter (which gets applied to the OSGi service properties). In general, to enable ACLs for an OSGi service, ServiceInterface , you must modify this property as follows: For example, to enable the MyService OSGi service: The initial value of the karaf.secured.services property has the settings to enable the command console ACLs. If you delete or corrupt these entries, the command console ACLs might stop working. How to invoke an OSGi service secured with RBAC If you are writing Java code to invoke methods on a custom OSGi service (that is, implementing a client of the OSGi service), you must use the Java security API to specify the role you are using to invoke the service. For example, to invoke the MyService OSGi service using the manager role, you could use code like the following: Note This example uses the Karaf role type, org.apache.karaf.jaas.boot.principal.RolePrincipal . If necessary, you could use your own custom role class instead, but in that case you would have to specify your roles using the syntax className : roleName in the OSGi service's ACL file. How to discover the roles required by an OSGi service When you are writing code against an OSGi service secured by an ACL, it can sometimes be useful to check what roles are allowed to invoke the service. For this purpose, the proxy service exports an additional OSGi property, org.apache.karaf.service.guard.roles . The value of this property is a java.util.Collection object, which contains a list of all the roles that could possibly invoke a method on that service. 2.3. How to Use Encrypted Property Placeholders When securing a Karaf container, do not use plain text passwords in configuration files. One way to avoid using plain text passwords is to use encrypted property placeholders whenever possible. See the following topics for details: Section 2.3.1, "About the master password for encrypting values" Section 2.3.2, "Using encrypted property placeholders" Section 2.3.3, "Invoking the jasypt:digest command" Section 2.3.4, "Invoking the jasypt:decrypt command" 2.3.1. About the master password for encrypting values To use Jasypt to encrypt a value, a master password is required. It is up to you or an administrator to choose the master password. Jasypt provides several ways to set the master password. One way is to specify the master password in plain text in a Blueprint configuration, for example: Instead of specifying the master password in plain text, you can do one of the following: Set an environment variable to your master password. In the Blueprint configuration file, specify this environment variable as the value of the passwordEnvName property.
For example, if you set the MASTER_PW environment variable to your master password, then you would have this entry in your Blueprint configuration file: <property name="passwordEnvName" value="MASTER_PW"/> Set a Karaf system property to your master password. In the Blueprint configuration file, specify this system property as the value of the passwordSys property. For example, if you set the karaf.password system property to your master password, then you would have this entry in your Blueprint configuration file: <property name="passwordSys" value="karaf.password"/> 2.3.2. Using encrypted property placeholders Use encrypted property placeholders in Blueprint configuration files when securing a Karaf container. Prerequisites You know the master password for encrypting values. Procedure Plan to use the default encryption algorithm, which is PBEWithMD5AndDES , or choose the encryption algorithm to use as follows: Discover which algorithms are supported in your current Java environment by running the jasypt:list-algorithms command: There are no arguments or options. The output is a list of the identifiers for supported digest and Password Based Encryption (PBE) algorithms. The list includes algorithms provided by the Bouncy Castle library, which is part of Fuse 7.13. This list can be long. A short portion of it would look like this: Examine the list and find the identifier for the encryption algorithm that you want to use. You might want to consult with security experts at your site for help with choosing the algorithm. To encrypt a sensitive configuration value, such as a password to be used in a configuration file, run the jasypt:encrypt command. The command has the following format: jasypt:encrypt [ options ] [ input ] When you invoke this command without specifying any options, and you do not specify the value that you want to encrypt, the command prompts you for your master password and for the value to encrypt, and applies defaults for other options. For example: Invoke the jasypt:encrypt command for each value that you want to encrypt. To change the default behavior, specify one or more of the following options: Option Description Example -w or --password-property Follow this option with an environment variable or a system property that is set to the value of your master password. Jasypt uses this value, in conjunction with an encryption algorithm, to derive the encryption key. If you do not specify the -w or the -W option, after you invoke the command, it prompts you to enter and confirm your master password. -w MASTER_PW -W or --password Follow this option with the plain text value of your chosen master password. The plain text value of your master password appears in history. Jasypt uses this value, in conjunction with an encryption algorithm, to derive the encryption key. If you do not specify the -w or the -W option, after you invoke the command, it prompts you to enter and confirm your master password. -W "M@s!erP#" -a or --algorithm Follow this option with the identifier for the algorithm that you want the jasypt:encrypt command to use to derive the initial cryptographic key. The default is PBEWithMD5AndDES . All algorithms that are in the list that the jasypt:list-algorithms command outputs are supported. Auto-completion is available when specifying algorithm names on the command line. For example: -a PBEWITHMD5ANDRC2 -i or --iterations Follow this option with an integer that indicates the number of times to iteratively create a hash of the initial key.
Each iteration takes the hash result and hashes it again. The result is the final encryption key. The default is 1000. For example: -i 5000 -h or --hex Specify this option to obtain hexadecimal output. The default output is Base64. For example: -h --help Displays information about command syntax and options. jasypt:encrypt --help Create a properties file that contains the encrypted values that you obtained by running the jasypt:encrypt command. Wrap each encrypted value in the ENC() function. For example, suppose you want to store some LDAP credentials in the etc/ldap.properties file. The file content would be something like this: Add the required namespaces for the encrypted property placeholders to your blueprint.xml file. These namespaces are for Aries extensions and Apache Karaf Jasypt. For example: Configure the identifier for the Jasypt encryption algorithm that you used and the location of the properties file. The following example shows how to: Configure the ext:property-placeholder element to read properties from the etc/ldap.properties file. Configure the enc:property-placeholder element to: Identify the PBEWithMD5AndDES encryption algorithm. Read the master password from an environment variable, JASYPT_ENCRYPTION_PASSWORD , that you defined in the Karaf bin/setenv file. Configuring the initialization vector property The following algorithms require an initialization vector property named ivGenerator to be added to the blueprint configuration: The following example shows how to add the ivGenerator property to the blueprint configuration, if required: LDAP JAAS realm configuration that uses encrypted property placeholders The following example adds to the blueprint.xml file in the previous example by showing an LDAP JAAS realm configuration that uses Jasypt encrypted property placeholders. Note When you use the process described in this topic to encrypt properties, you cannot use the @PropertyInject annotation to decrypt the properties. Instead, use XML to inject properties into Java objects, as shown in this Blueprint example. In this example, during container initialization, the ${ldap.password} placeholder is replaced with the decrypted value of the ldap.password property from the etc/ldap.properties file. Examples of specifying environment variables or system properties Rather than specifying your plain text master password when you encrypt a value, you can specify an environment variable or a system property that is set to your master password. For example, suppose that the bin/setenv file contains: You can encrypt a value with this command: If your etc/system.properties file contains: You can encrypt a value with this command: 2.3.3. Invoking the jasypt:digest command A Jasypt digest is the result of applying cryptographic hash functions, such as MD5, to a value. Generating a digest is a type of one-way encryption. You cannot generate a digest and then reconstruct the original value from the digest. For especially sensitive values, you might want to generate a digest rather than encrypting a value. You can then specify the digest as a property placeholder. The format for invoking the command to generate a digest is as follows: jasypt:digest [ options ] [ input ] If you do not specify any options, and you do not specify the input for which to create a digest, the command prompts you to specify the value that you want to encrypt and applies default values for options.
For example: The following example shows specification of the input argument on the command line: This command applies default options and generates a digest that provides a one-way encryption of ImportantPassword . The command output looks something like this: Invoke the jasypt:digest command for each value for which you want one-way encryption. To change the default behavior, specify one or more of the following options: Option Description Example -a or --algorithm Follow this option with the identifier for the digest algorithm that you want the jasypt:digest command to use to generate the digest. The default is MD5 . All digest algorithms that are in the list that the jasypt:list-algorithms command outputs are supported. Auto-completion is available when specifying algorithm names on the command line. For example: -a SHA-512 -i or --iterations Follow this option with an integer that indicates the number of times to iteratively create a hash of the initial digest. Each iteration takes the hash result and hashes it again. The result is the final digest. The default is 1000. For example: -i 5000 -s or --salt-size Follow this option with an integer that indicates the number of bytes in the salt that jasypt:digest applies to create the digest. This is useful when you want to generate a digest for a sensitive value and you need to specify the digest in more than one location. For example, you can invoke jasypt:digest with the same input value but with different salt sizes. Each command generates a different digest even though the input was the same. The default is 8. For example: -s 12 -h or --hex Specify this option to obtain hexadecimal output. The default output is Base64. For example: -h --help Displays information about command syntax and options. jasypt:digest --help After you obtain a digest, you can use it in the same way as described in Using encrypted property placeholders . If you use non-default values, the calculation takes longer. For example: 2.3.4. Invoking the jasypt:decrypt command To verify the original value of an encrypted placeholder, use the jasypt:decrypt command on the placeholder. Prerequisites You must have generated the placeholder by invoking the jasypt:encrypt command. You must know: The master password, or the environment variable or system property you use as the master password. The encryption algorithm used with jasypt:encrypt . The number of jasypt:encrypt iterations. The format for invoking the jasypt:decrypt command is as follows: jasypt:decrypt [ options ] [ input ] Note You can run the command without specifying options and input , but only if using the defaults with the jasypt:encrypt command. In this case, you must provide the master password and the value to decrypt. All other options will have default values. Example In this case, you enter the master password and data to decrypt at the prompt. The default algorithm PBEWithMD5AndDES creates a decryption key to decrypt the value: 2.3.4.1. Specifying options for jasypt:decrypt To change the default behavior, specify one or more of the following options: Option Description Note Example -w or --password-property Environment variable or a system property set to the value of your master password. Jasypt uses this value, together with the decryption algorithm, to create the initial decryption key. If you do not specify the -w or the -W option, after you invoke the command, it prompts you to enter and confirm your master password.
-w MASTER_PW -W or --password Follow this option with the plain text value of your chosen master password. The plain text value of your master password appears in history. Jasypt uses this value, in conjunction with the decryption algorithm, to derive the initial decryption key. If you do not specify the -w or the -W option, after you invoke the command, it prompts you to enter and confirm your master password. -W "M@s!erP#" -a or --algorithm Follow this option with the identifier for the algorithm that you want the jasypt:decrypt command to use to derive the initial decryption key. The default is PBEWithMD5AndDES . All algorithms in the list that the jasypt:list-algorithms command outputs are supported. Auto-completion is available when specifying algorithm names on the command line. The jasypt:decrypt command must use the same algorithm that the jasypt:encrypt command used to generate the specified placeholder input. -a PBEWITHMD5ANDRC2 -i or --iterations Follow this option with an integer that indicates the number of times to iteratively create a hash of the initial key. Each iteration takes the hash result and hashes it again. The result is the final decryption key. The default is 1000. The jasypt:decrypt command must use the same number of iterations that the jasypt:encrypt command used to generate the specified placeholder input. -i 5000 -h or --hex Specify this option to obtain hexadecimal output. The default output is Base64. -h -E or --use-empty-iv-generator Use a fixed IV generator for decryption of passwords encrypted with previous versions of Jasypt. -E --help Displays information about command syntax and options. --help 2.3.4.2. Specifying environment variables or system properties You can use environment variables or system properties for the jasypt:decrypt command, instead of adding the values as parameters to the command. 2.3.4.2.1. Using an environment variable To use an environment variable, add the parameter to your bin/setenv file. Example You can use the environment variable MASTER_PASSWORD to decrypt a value: Example 2.3.4.2.2. Using a system property To use a system property, add the parameter to your etc/system.properties file. Example You can use this system property, master.password , to decrypt a value: Example 2.4. Enabling Remote JMX SSL Overview Red Hat JBoss Fuse provides a JMX port that allows remote monitoring and management of Karaf containers using MBeans. By default, however, the credentials that you send over the JMX connection are unencrypted and vulnerable to snooping. To encrypt the JMX connection and protect against password snooping, you need to secure JMX communications by configuring JMX over SSL. To configure JMX over SSL, perform the following steps: Create the jbossweb.keystore file Create and deploy the keystore.xml file Add the required properties to org.apache.karaf.management.cfg Restart the Fuse container After you have configured JMX over SSL access, you should test the connection. Warning If you are planning to enable SSL/TLS security, you must ensure that you explicitly disable the SSLv3 protocol, in order to safeguard against the Poodle vulnerability (CVE-2014-3566) . For more details, see Disabling SSLv3 in JBoss Fuse 6.x and JBoss A-MQ 6.x . Note If you configure JMX over SSL while Red Hat JBoss Fuse is running, you will need to restart it.
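As a preview of the keystore-creation step described below, the keytool invocation typically has the following shape. The alias, passwords, and -dname value shown here are placeholders, not required values; substitute details appropriate for your environment:

```
# Run from the InstallDir/etc directory
keytool -genkey -keyalg RSA -alias jbossweb \
        -keystore jbossweb.keystore \
        -storepass JbossPassword -keypass JbossPassword \
        -dname "CN=localhost, OU=Example, O=example.com, L=City, S=State, C=US"
```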
Prerequisites If you haven't already done so, you need to: Set your JAVA_HOME environment variable Configure a Karaf user with the admin role Edit the InstallDir /etc/users.properties file and add the following entry, on a single line: This creates a new user with username, admin , password, YourPassword , and the admin role. Create the jbossweb.keystore file Open a command prompt and make sure you are in the etc/ directory of your Karaf installation: At the command line, using a -dname value (Distinguished Name) appropriate for your application, type this command: Important Type the entire command on a single command line. The command returns output that looks like this: Check whether InstallDir /etc now contains the file, jbossweb.keystore . Create and deploy the keystore.xml file Using your favorite XML editor, create and save the keystore.xml file in the <installDir> /jboss-fuse-7.13.0.fuse-7_13_0-00012-redhat-00001/etc directory. Include this text in the file: Deploy the keystore.xml file to the Karaf container, by copying it into the InstallDir /deploy directory (the hot deploy directory). Note Subsequently, if you need to undeploy the keystore.xml file, you can do so by deleting the keystore.xml file from the deploy/ directory while the Karaf container is running . Add the required properties to org.apache.karaf.management.cfg Edit the InstallDir /etc/org.apache.karaf.management.cfg file to include these properties at the end of the file: Important You must set secureProtocol to TLSv1 , in order to protect against the Poodle vulnerability (CVE-2014-3566) Note You can optionally set the enabledCipherSuites property to list specific cipher suites to be used for JMX TLS connections. Setting this property will override default cipher suites. Restart the Karaf container You must restart the Karaf container for the new JMX SSL/TLS settings to take effect. Testing the Secure JMX connection Open a command prompt and make sure you are in the etc/ directory of your Fuse installation: Open a terminal, and start up JConsole by entering this command: Where the -J-Djavax.net.ssl.trustStore option specifies the location of the jbossweb.keystore file (make sure this location is specified correctly, or the SSL/TLS handshake will fail). The -J-Djavax.net.debug=ssl setting enables logging of SSL/TLS handshake messages, so you can verify that SSL/TLS has been successfully enabled. Important Type the entire command on the same command line. When JConsole opens, select the option Remote Process in the New Connection wizard. Under the Remote Process option, enter the following value for the service:jmx:<protocol>:<sap> connection URL: And fill in the Username , and Password fields with valid JAAS credentials (as set in the etc/users.properties file): 2.5. Using an Elytron credential store Fuse includes the Elytron credential store feature that is part of JBoss EAP. A credential store can safely secure sensitive text strings by encrypting them in a storage file. Each container can have exactly one credential store. In secure configurations, a typical problem is how to store passwords. For example, consider passwords for database access from various applications. For many authentication methods, passwords must be available in clear text before a server can send credentials to a database server. Storage of clear text passwords in text configuration files is generally not a good idea. An Elytron credential store solves this problem. 
You securely store passwords and other sensitive values in a credential store, which is an encrypted file that complies with the PKCS#12 specification. A credential store does not store unencrypted values. The credential store uses PBE (Password Based Encryption) to encrypt both sensitive values, such as passwords, and the store itself. The following topics provide details: Section 2.5.1, "Putting a credential store into use" Section 2.5.2, "Behavior when system properties hold credential store configuration" Section 2.5.3, "Description of credential store system properties and environment variables" Section 2.5.4, " credential-store:create command reference" Section 2.5.5, " credential-store:store command reference" Section 2.5.6, " credential-store:list command reference" Section 2.5.7, " credential-store:remove command reference" Section 2.5.8, "Example of Configuration Admin properties enabling credential store use" 2.5.1. Putting a credential store into use In an Apache Karaf container that is running Fuse, to put a credential store into use, create and configure the credential store and then add values to it. Fuse continues to run and the credential store is available for use. Prerequisites You want to use the following defaults when you create the credential store: Create a PKCS#12 credential store. Apply the masked-SHA1-DES-EDE algorithm to encrypt the credential store. Iterate through the algorithm 200000 times. Locate the credential store at ${karaf.etc}/credential.store.p12 . You want to save credential store configuration in ${karaf.etc}/system.properties . If you need to change any of these behaviors, see the information about invoking the credential-store:create command . Procedure Choose a credential store password. Later, when you add values to the credential store or when you want to decrypt values, a credential store command uses your credential store password to encrypt and decrypt the values. Invoke the credential-store:create command, which prompts you to enter your chosen credential store password: The command writes something like the following configuration in etc/system.properties : Add an encrypted value to the credential store by invoking the credential-store:store command as follows: credential-store:store alias Replace alias with a unique key value. Later, to retrieve the encrypted value that you are adding to the credential store, tools use this alias. For example, suppose you use the db.password system property in code, and your etc/system.properties file has an entry that sets the db.password property to the actual password for the database. The recommendation is to specify your system property, db.password , as the alias. After you invoke this command, it prompts you to enter and confirm the sensitive value that you want to add to the credential store. Continuing with the db.password alias example, at the prompt, you would enter the actual password for the database: Update an entry in your etc/system.properties file or add a new entry. The entry that you update or add sets the alias that you specified in the credential-store:store command to the reference value that the command outputs. For example: When Fuse is running with a configured credential store, it dynamically replaces each instance of, for example, the db.password system property, with the actual secret value that is in the credential store. In the credential-store:store command, if the alias that you specified is a system property that is already in use, skip to the next step.
If code is not already using the alias that you specified for the secret, then in each file that requires the secret, specify the alias, which you added as a system property in the previous step. For example, code would refer to db.password . Repeat the previous three steps for each value that you want to add to the credential store. Results The credential store is ready for use. When Fuse starts or when the credential store bundle restarts, it processes system properties to find any that reference credential store entries. For each system property that does, Fuse obtains the associated value from the credential store and replaces the system property with the actual secret value. The actual secret value is then available to all components, bundles, and code that contain instances of that system property. 2.5.2. Behavior when system properties hold credential store configuration Suppose that a credential store is in use and you are using system properties to hold its configuration parameters. When Fuse starts, it processes all system properties. Fuse replaces system properties that are set to values that have the CS: prefix with the associated value that is in the credential store. Fuse proxies the java.lang:type=Runtime JMX MBean so that each call to the JMX getSystemProperties() method hides decrypted values. For example, consider a credential store with one entry: Assume that after you added this entry to the credential store, you edited the etc/system.properties file to add this entry: db.password = CS:db.password When Fuse starts or when you restart the org.jboss.fuse.modules.fuse-credential-store-core bundle, Fuse checks for any references to the db.password system property. For each reference, Fuse uses the CS:db.password alias to obtain the associated value from the credential store. You can check this by invoking the following command: However, if you use JMX to check this, the value from the credential store is hidden: 2.5.3. Description of credential store system properties and environment variables You can use system properties or environment variables to hold credential store configuration parameters. The options that you specify when you create a credential store determine: Whether you must set the properties or variables yourself. The exact value that a property or variable is, or must be, set to. An understanding of the properties/variables helps you understand how a credential store works. When you invoke the credential-store:create command and specify only the --persist option, the command sets system properties to credential store configuration parameters. You do not need to explicitly set credential store system properties. To use credential store environment variables instead or to change the default behavior of the credential-store:create command, see credential-store:create command reference for details about options that you can specify when you create a credential store. When you invoke the command that creates a credential store, any options that you specify determine the settings of the credential store properties or variables. If you must set properties or variables yourself, output from the credential-store:create command contains instructions for doing that. In other words, it is never up to you to decide what the setting of credential store system properties or environment variables should be. Execution of the credential-store:create command always determines the settings. The following table describes the credential store properties and variables.
For a particular parameter, if both the environment variable and the system property are set, the environment variable setting has precedence. Name Description Environment variable: CREDENTIAL_STORE_PROTECTION_ALGORITHM System property: credential.store.protection.algorithm The Password Based Encryption (PBE) algorithm that credential store commands use to derive an encryption key. Environment variable: CREDENTIAL_STORE_LOCATION System property: credential.store.location Location of the credential store. Environment variable: CREDENTIAL_STORE_PROTECTION_PARAMS System property: credential.store.protection.params Parameters that a credential store uses to derive an encryption key. Parameters include iteration count, initial vector, and salt. Environment variable: CREDENTIAL_STORE_PROTECTION System property: credential.store.protection Password that a credential store command must decrypt to recover passwords or other secure data from a credential store. When you invoke the credential-store:create command, the command prompts you to specify a password. The encryption of that password is the setting of this environment variable or system property. 2.5.4. credential-store:create command reference To create and configure a credential store, invoke the credential-store:create command, which has the following format: credential-store:create [ options ] When you do not specify any options, the command does the following: Prompts you for your chosen credential store password. Creates a PKCS#12 credential store Uses the masked-SHA1-DES-EDE algorithm to encrypt the credential store Iterates through the algorithm 200000 times Locates the credential store at ${karaf.etc}/credential.store.p12 Does not store the credential store configuration The following table describes the options, which you can specify to change the default behavior. Option Description -w or --password-property Follow this option with an environment variable or a system property that is set to the value of your master password. The credential store uses this value, in conjunction with an algorithm, to derive the encryption or decryption key. If you do not specify the -w or the -W option, after you invoke the command, it prompts you to enter and confirm your master password. For example: -w MASTER_PW -W or --password Follow this option with the plain text value of your chosen master password. The plain text value of your master password appears in history. The credential store uses this value, in conjunction with an algorithm, to derive the encryption or decryption key. If you do not specify the -w or the -W option, after you invoke the command, it prompts you to enter and confirm your master password. For example: -W "M@s!erP#" -f or --force Forces creation of the credential store. If a credential store exists at the intended location of the new credential store, specification of this option causes the command to overwrite the existing credential store. Any content in the existing credential store is lost. The default behavior is that the command does not create a credential store if there is already a credential store in the intended location. -l or --location Specifies the location for the new credential store. The recommendation is to use the default location, which is ${karaf.etc}/credential.store.p12 . -ic or --iteration-count Follow this option with an integer that indicates the number of times to iteratively apply the encryption algorithm being used.
The result is the final masked password. The default is 200000. -a or --algorithm Follow this option with the identifier for the algorithm that you want the credential-store:create command to use to generate the masked password. The recommendation is to use the default, which is masked-SHA1-DES-EDE . -p or --persist Stores the configuration of the new credential store in ${karaf.etc}/system.properties . If you do not specify this option, the credential-store:create command sends the configuration information to the console with instructions for what to do . See the example after this table. A reason to omit this option is that you want to see the credential store configuration parameter values. Or, you might omit this option because you plan to pass credential store configuration parameters to an application without using the etc/system.properties file. --help Displays information about command syntax and options. Example of creating a credential store without specifying --persist The following command creates a credential store but does not save the credential store configuration in ${karaf.etc}/system.properties . The command uses the masked-SHA1-DES-EDE algorithm, which is the default. 2.5.5. credential-store:store command reference To add an encrypted value to the credential store, invoke the credential-store:store command, which has the following format: credential-store:store alias [ secret ] Replace alias with a unique key value. To retrieve the encrypted value that you are adding to the credential store, tools use this alias. Optionally, replace secret with the value that you want to encrypt and add to the credential store. Typically, this is a password, but it can be any value that you want to encrypt. If you specify secret on the command line, its plain text value appears in history. If you do not specify secret on the command line, then the command prompts you for it and the value does not appear in history. To view information about the command, enter: credential-store:store --help . The following command line is an example of adding an entry to the credential store: The credential store now has an entry that can be referenced by specifying CS:db.password . 2.5.6. credential-store:list command reference To obtain the alias for an entry in the credential store, invoke the credential-store:list command, which displays a list of all entries in the credential store. For example: To also list decryptions of the secret values that are encrypted in the credential store, invoke the command as follows: To view information about the command: 2.5.7. credential-store:remove command reference To remove an entry from a credential store, invoke the credential-store:remove command, which has the following format: credential-store:remove alias Replace alias with the unique key value that you specified for the alias argument when you added the entry to the credential store. Do not specify the CS: prefix. You can invoke the credential-store:list command to obtain the alias. The credential-store:remove command checks the credential store for an entry that has the alias that you specified, and if found, removes it. For example: To view information about the command: karaf@root()> credential-store:remove --help 2.5.8. Example of Configuration Admin properties enabling credential store use In a development environment, you can use Configuration Admin service properties to enable the use of a credential store. Configuration Admin properties are defined in etc/*.cfg files.
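For example, a web configuration might reference a credential store alias instead of a literal port value, along the following lines. The property shown is the standard Pax Web port property, and the CS:http.port alias matches the example discussed later in this section; treat the snippet as a sketch rather than a required configuration:

```
# etc/org.ops4j.pax.web.cfg   (sketch)
org.osgi.service.http.port = CS:http.port
```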
Important The use of Configuration Admin properties to enable the use of a credential store is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see https://access.redhat.com/support/offerings/techpreview/ . Preparation Invoke the credential-store:create command to create a credential store. See credential-store:create command reference . Enable the use of Configuration Admin properties by editing the etc/config.properties file to uncomment the line that contains felix.cm.pm = elytron : What happens when Fuse starts The felix.configadmin bundle: Delays registering the ConfigurationAdmin service because the felix.cm.pm property is set. Waits for the availability of the org.apache.felix.cm.PersistenceManagerOSGi service with the name=cm OSGi service registration property. The Fuse credential store bundle: Loads the credential store by using the values set for the credential.store. * system properties or CREDENTIAL_STORE_ * environment variables. Registers an OSGi service that implements the org.apache.felix.cm.PersistenceManagerOSGi service. If anything fails, the credential store bundle registers the PersistenceManager service, which does nothing special. When something is broken or when the credential store is not available, Fuse should be able to read unencrypted configuration values. Encrypted values, specified with the CS: prefix, are lost unless you remember the original values or you are able to recover the credential store and its configuration. The felix.configadmin process uses the new persistence manager service to load and store the credential store configuration. Example Suppose the credential store has two entries: In a Configuration Admin service configuration, you choose to use the alias for a sensitive value instead of the actual value. For example, you change a web configuration property as follows: In logs, the actual value, 8182, can appear, as you can see at the end of the following line. Whether a log shows the actual text value is determined by the component that consumes the encrypted value. In the commands, the second config:property-list --pid org.ops4j.pax.web command displays CS:http.port instead of 8182 , though the property has a numeric value. The pax-web-undertow process starts on this port. This is because OSGi hooks prevent the felix.fileinstall process, which shows the output of the config:property-list --pid org.ops4j.pax.web command, from seeing decrypted (dereferenced) values. This is also the reason why the etc/org.ops4j.pax.web.cfg file does not store decrypted (dereferenced) values, but instead stores, for example: |
"Index │ Realm Name │ Login Module Class Name ──────┼────────────┼─────────────────────────────────────────────────────────────── 1 │ karaf │ org.apache.karaf.jaas.modules.properties.PropertiesLoginModule 2 │ karaf │ org.apache.karaf.jaas.modules.publickey.PublickeyLoginModule 3 │ karaf │ org.apache.karaf.jaas.modules.audit.FileAuditLoginModule 4 │ karaf │ org.apache.karaf.jaas.modules.audit.LogAuditLoginModule 5 │ karaf │ org.apache.karaf.jaas.modules.audit.EventAdminAuditLoginModule",
"Username = Password [, UserGroup | Role ][, UserGroup | Role ]",
"jdoe=topsecret,admin",
"_g_\\: GroupName = Role1 , Role2 ,",
"_g_\\:admingroup=group,admin",
"majorclanger=secretpass,_g_:admingroup",
"Username = PublicKey [, UserGroup | Role ][, UserGroup | Role ]",
"jdoe=AAAAB3NzaC1kc3MAAACBAP1/U4EddRIpUt9KnC7s5Of2EbdSPO9EAMMeP4C2USZpRV1AIlH7WT2NWPq/xfW6MPbLm1Vs14E7gB00b/JmYLdrmVClpJ+f6AR7ECLCT7up1/63xhv4O1fnfqimFQ8E+4P208UewwI1VBNaFpEy9nXzrith1yrv8iIDGZ3RSAHHAAAAFQCXYFCPFSMLzLKSuYKi64QL8Fgc9QAAAnEA9+GghdabPd7LvKtcNrhXuXmUr7v6OuqC+VdMCz0HgmdRWVeOutRZT+ZxBxCBgLRJFnEj6EwoFhO3zwkyjMim4TwWeotifI0o4KOuHiuzpnWRbqN/C/ohNWLx+2J6ASQ7zKTxvqhRkImog9/hWuWfBpKLZl6Ae1UlZAFMO/7PSSoAAACBAKKSU2PFl/qOLxIwmBZPPIcJshVe7bVUpFvyl3BbJDow8rXfskl8wO63OzP/qLmcJM0+JbcRU/53Jj7uyk31drV2qxhIOsLDC9dGCWj47Y7TyhPdXh/0dthTRBy6bqGtRPxGa7gJov1xm/UuYYXPIUR/3x9MAZvZ5xvE0kYXO+rx,admin",
"_g_\\: GroupName = Role1 , Role2 ,",
"_g_\\:admingroup=group,admin",
"jdoe=AAAAB3NzaC1kc3MAAACBAP1/U4EddRIpUt9KnC7s5Of2EbdSPO9EAMMeP4C2USZpRV1AIlH7WT2NWPq/xfW6MPbLm1Vs14E7gB00b/JmYLdrmVClpJ+f6AR7ECLCT7up1/63xhv4O1fnfqimFQ8E+4P208UewwI1VBNaFpEy9nXzrith1yrv8iIDGZ3RSAHHAAAAFQCXYFCPFSMLzLKSuYKi64QL8Fgc9QAAAnEA9+GghdabPd7LvKtcNrhXuXmUr7v6OuqC+VdMCz0HgmdRWVeOutRZT+ZxBxCBgLRJFnEj6EwoFhO3zwkyjMim4TwWeotifI0o4KOuHiuzpnWRbqN/C/ohNWLx+2J6ASQ7zKTxvqhRkImog9/hWuWfBpKLZl6Ae1UlZAFMO/7PSSoAAACBAKKSU2PFl/qOLxIwmBZPPIcJshVe7bVUpFvyl3BbJDow8rXfskl8wO63OzP/qLmcJM0+JbcRU/53Jj7uyk31drV2qxhIOsLDC9dGCWj47Y7TyhPdXh/0dthTRBy6bqGtRPxGa7gJov1xm/UuYYXPIUR/3x9MAZvZ5xvE0kYXO+rx,_g_:admingroup",
"encryption.enabled = true encryption.name = basic encryption.prefix = {CRYPT} encryption.suffix = {CRYPT} encryption.algorithm = MD5 encryption.encoding = hexadecimal",
"xmlns:jaas=\"http://karaf.apache.org/xmlns/jaas/v1.0.0\"",
"<blueprint xmlns=\"http://www.osgi.org/xmlns/blueprint/v1.0.0\" xmlns:jaas=\"http://karaf.apache.org/xmlns/jaas/v1.0.0\"> <jaas:config name=\" JaasRealmName \" rank=\" IntegerRank \"> <jaas:module className=\" LoginModuleClassName \" flags=\"[required|requisite|sufficient|optional]\"> Property = Value </jaas:module> <!-- Can optionally define multiple modules --> </jaas:config> </blueprint>",
"PropertiesLogin { org.apache.activemq.jaas.PropertiesLoginModule required org.apache.activemq.jaas.properties.user=\"users.properties\" org.apache.activemq.jaas.properties.group=\"groups.properties\"; };",
"<blueprint xmlns=\"http://www.osgi.org/xmlns/blueprint/v1.0.0\" xmlns:jaas=\"http://karaf.apache.org/xmlns/jaas/v1.0.0\" xmlns:ext=\"http://aries.apache.org/blueprint/xmlns/blueprint-ext/v1.0.0\"> <jaas:config name=\"PropertiesLogin\"> <jaas:module flags=\"required\" className=\"org.apache.activemq.jaas.PropertiesLoginModule\"> org.apache.activemq.jaas.properties.user=users.properties org.apache.activemq.jaas.properties.group=groups.properties </jaas:module> </jaas:config> </blueprint>",
"<?xml version=\"1.0\" encoding=\"UTF-8\"?> <blueprint xmlns=\"http://www.osgi.org/xmlns/blueprint/v1.0.0\" xmlns:jaas=\"http://karaf.apache.org/xmlns/jaas/v1.0.0\" xmlns:ext=\"http://aries.apache.org/blueprint/xmlns/blueprint-ext/v1.0.0\"> <jaas:config name=\"LDAPLogin\" rank=\"200\"> <jaas:module flags=\"required\" className=\"org.apache.karaf.jaas.modules.ldap.LDAPLoginModule\"> initialContextFactory=com.sun.jndi.ldap.LdapCtxFactory connection.username=uid=admin,ou=system connection.password=secret connection.protocol= connection.url = ldap://localhost:10389 user.base.dn = ou=users,ou=system user.filter = (uid=%u) user.search.subtree = true role.base.dn = ou=users,ou=system role.filter = (uid=%u) role.name.attribute = ou role.search.subtree = true authentication = simple </jaas:module> </jaas:config> </blueprint>",
"Username = Password [, UserGroup | Role ][, UserGroup | Role ]",
"_g_\\: GroupName = Role1 [, Role2 ]",
"Users bigcheese=cheesepass,_g_:admingroup guest=guestpass,_g_:guestgroup Groups _g_\\:admingroup=group,admin _g_\\:guestgroup=viewer",
"<?xml version=\"1.0\" encoding=\"UTF-8\"?> <blueprint xmlns=\"http://www.osgi.org/xmlns/blueprint/v1.0.0\" xmlns:jaas=\"http://karaf.apache.org/xmlns/jaas/v1.0.0\" xmlns:cm=\"http://aries.apache.org/blueprint/xmlns/blueprint-cm/v1.1.0\" xmlns:ext=\"http://aries.apache.org/blueprint/xmlns/blueprint-ext/v1.0.0\"> <type-converters> <bean class=\"org.apache.karaf.jaas.modules.properties.PropertiesConverter\"/> </type-converters> <!--Allow usage of System properties, especially the karaf.base property--> <ext:property-placeholder placeholder-prefix=\"USD[\" placeholder-suffix=\"]\"/> <jaas:config name=\"karaf\" rank=\"200\" > <jaas:module flags=\"required\" className=\"org.apache.karaf.jaas.modules.properties.PropertiesLoginModule\"> users= USD[karaf.base]/etc/users.properties </jaas:module> </jaas:config> <!-- The Backing Engine Factory Service for the PropertiesLoginModule --> <service interface=\"org.apache.karaf.jaas.modules.BackingEngineFactory\"> <bean class=\"org.apache.karaf.jaas.modules.properties.PropertiesBackingEngineFactory\"/> </service> </blueprint>",
"InstallDir /etc/ PersistentID .cfg",
"Username = Password [, Role ][, Role ]",
"<?xml version=\"1.0\" encoding=\"UTF-8\"?> <blueprint xmlns=\"http://www.osgi.org/xmlns/blueprint/v1.0.0\" xmlns:jaas=\"http://karaf.apache.org/xmlns/jaas/v1.0.0\" xmlns:cm=\"http://aries.apache.org/blueprint/xmlns/blueprint-cm/v1.1.0\" xmlns:ext=\"http://aries.apache.org/blueprint/xmlns/blueprint-ext/v1.0.0\"> <jaas:config name=\"karaf\" rank=\"200\" > <jaas:module flags=\"required\" className=\"org.apache.karaf.jaas.modules.osgi.OsgiConfigLoginModule\"> pid = org.jboss.example.osgiconfigloginmodule </jaas:module> </jaas:config> </blueprint>",
"Username = PublicKey [, UserGroup | Role ][, UserGroup | Role ]",
"jdoe=AAAAB3NzaC1kc3MAAACBAP1/U4EddRIpUt9KnC7s5Of2EbdSPO9EAMMeP4C2USZpRV1AIlH7WT2NWPq/xfW6MPbLm1Vs14E7gB00b/JmYLdrmVClpJ+f6AR7ECLCT7up1/63xhv4O1fnfqimFQ8E+4P208UewwI1VBNaFpEy9nXzrith1yrv8iIDGZ3RSAHHAAAAFQCXYFCPFSMLzLKSuYKi64QL8Fgc9QAAAnEA9+GghdabPd7LvKtcNrhXuXmUr7v6OuqC+VdMCz0HgmdRWVeOutRZT+ZxBxCBgLRJFnEj6EwoFhO3zwkyjMim4TwWeotifI0o4KOuHiuzpnWRbqN/C/ohNWLx+2J6ASQ7zKTxvqhRkImog9/hWuWfBpKLZl6Ae1UlZAFMO/7PSSoAAACBAKKSU2PFl/qOLxIwmBZPPIcJshVe7bVUpFvyl3BbJDow8rXfskl8wO63OzP/qLmcJM0+JbcRU/53Jj7uyk31drV2qxhIOsLDC9dGCWj47Y7TyhPdXh/0dthTRBy6bqGtRPxGa7gJov1xm/UuYYXPIUR/3x9MAZvZ5xvE0kYXO+rx,admin",
"_g_\\: GroupName = Role1 [, Role2 ]",
"<?xml version=\"1.0\" encoding=\"UTF-8\"?> <blueprint xmlns=\"http://www.osgi.org/xmlns/blueprint/v1.0.0\" xmlns:jaas=\"http://karaf.apache.org/xmlns/jaas/v1.0.0\" xmlns:cm=\"http://aries.apache.org/blueprint/xmlns/blueprint-cm/v1.1.0\" xmlns:ext=\"http://aries.apache.org/blueprint/xmlns/blueprint-ext/v1.0.0\"> <!--Allow usage of System properties, especially the karaf.base property--> <ext:property-placeholder placeholder-prefix=\"USD[\" placeholder-suffix=\"]\"/> <jaas:config name=\"karaf\" rank=\"200\" > <jaas:module flags=\"required\" className=\"org.apache.karaf.jaas.modules.publickey.PublickeyLoginModule\"> users = USD[karaf.base]/etc/keys.properties </jaas:module> </jaas:config> </blueprint>",
"osgi: ServiceInterfaceName [/ ServicePropertiesFilter ]",
"CREATE TABLE users ( username VARCHAR(255) NOT NULL, password VARCHAR(255) NOT NULL, PRIMARY KEY (username) ); CREATE TABLE roles ( username VARCHAR(255) NOT NULL, role VARCHAR(255) NOT NULL, PRIMARY KEY (username,role) );",
"<blueprint xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\" xmlns=\"http://www.osgi.org/xmlns/blueprint/v1.0.0\"> <bean id=\"mysqlDatasource\" class=\"com.mysql.jdbc.jdbc2.optional.MysqlDataSource\"> <property name=\"serverName\" value=\"localhost\"></property> <property name=\"databaseName\" value=\" DBName \"></property> <property name=\"port\" value=\"3306\"></property> <property name=\"user\" value=\" DBUser \"></property> <property name=\"password\" value=\" DBPassword \"></property> </bean> <service id=\"mysqlDS\" interface=\" javax.sql.DataSource \" ref=\"mysqlDatasource\"> <service-properties> <entry key=\"osgi.jndi.service.name\" value=\"jdbc/karafdb\"/> </service-properties> </service> </blueprint>",
"osgi:javax.sql.DataSource/(osgi.jndi.service.name=jdbc/karafdb)",
"<?xml version=\"1.0\" encoding=\"UTF-8\"?> <blueprint xmlns=\"http://www.osgi.org/xmlns/blueprint/v1.0.0\" xmlns:jaas=\"http://karaf.apache.org/xmlns/jaas/v1.0.0\" xmlns:cm=\"http://aries.apache.org/blueprint/xmlns/blueprint-cm/v1.1.0\" xmlns:ext=\"http://aries.apache.org/blueprint/xmlns/blueprint-ext/v1.0.0\"> <!--Allow usage of System properties, especially the karaf.base property--> <ext:property-placeholder placeholder-prefix=\"USD[\" placeholder-suffix=\"]\"/> <jaas:config name=\"karaf\" rank=\"200\"> <jaas:module flags=\"required\" className=\"org.apache.karaf.jaas.modules.jdbc.JDBCLoginModule\"> datasource = osgi:javax.sql.DataSource/(osgi.jndi.service.name=jdbc/karafdb) query.password = SELECT password FROM users WHERE username=? query.role = SELECT role FROM roles WHERE username=? insert.user = INSERT INTO users VALUES(?,?) insert.role = INSERT INTO roles VALUES(?,?) delete.user = DELETE FROM users WHERE username=? delete.role = DELETE FROM roles WHERE username=? AND role=? delete.roles = DELETE FROM roles WHERE username=? </jaas:module> </jaas:config> <!-- The Backing Engine Factory Service for the JDBCLoginModule --> <service interface=\"org.apache.karaf.jaas.modules.BackingEngineFactory\"> <bean class=\"org.apache.karaf.jaas.modules.jdbc.JDBCBackingEngineFactory\"/> </service> </blueprint>",
"connection.url=ldap://10.0.0.153:2389 ldap://10.10.178.20:389",
"ldap-group = jaas-role (, jaas-role )*(; ldap-group = jaas-role (, jaas-role )*)*",
"role.mapping=admin=admin;devop=admin,manager;tester=viewer",
"<?xml version=\"1.0\" encoding=\"UTF-8\"?> <blueprint xmlns=\"http://www.osgi.org/xmlns/blueprint/v1.0.0\" xmlns:jaas=\"http://karaf.apache.org/xmlns/jaas/v1.0.0\" xmlns:cm=\"http://aries.apache.org/blueprint/xmlns/blueprint-cm/v1.1.0\" xmlns:ext=\"http://aries.apache.org/blueprint/xmlns/blueprint-ext/v1.0.0\"> <jaas:config name=\"karaf\" rank=\"100\"> <jaas:module className=\"org.apache.karaf.jaas.modules.ldap.LDAPLoginModule\" flags=\"sufficient\"> debug=true <!-- LDAP Configuration --> initialContextFactory=com.sun.jndi.ldap.LdapCtxFactory <!-- multiple LDAP servers can be specified as a space separated list of URLs --> connection.url=ldap://10.0.0.153:2389 ldap://10.10.178.20:389 <!-- authentication=none --> authentication=simple connection.username=cn=Directory Manager connection.password=directory <!-- User Info --> user.base.dn=dc=redhat,dc=com user.filter=(&(objectClass=InetOrgPerson)(uid=%u)) user.search.subtree=true <!-- Role/Group Info--> role.base.dn=dc=redhat,dc=com role.name.attribute=cn <!-- The 'dc=redhat,dc=com' used in the role.filter below is the user.base.dn. --> <!-- role.filter=(uniquemember=%dn,dc=redhat,dc=com) --> role.filter=(&(objectClass=GroupOfUniqueNames)(UniqueMember=%fqdn)) role.search.subtree=true <!-- role mappings - a ';' separated list --> role.mapping=JBossAdmin=admin;JBossMonitor=viewer <!-- LDAP context properties --> context.com.sun.jndi.ldap.connect.timeout=5000 context.com.sun.jndi.ldap.read.timeout=5000 <!-- LDAP connection pooling --> <!-- http://docs.oracle.com/javase/jndi/tutorial/ldap/connect/pool.html --> <!-- http://docs.oracle.com/javase/jndi/tutorial/ldap/connect/config.html --> context.com.sun.jndi.ldap.connect.pool=true <!-- How are LDAP referrals handled? Can be `follow`, `ignore` or `throw`. Configuring `follow` may not work on all LDAP servers, `ignore` will silently ignore all referrals, while `throw` will throw a partial results exception if there is a referral. --> context.java.naming.referral=ignore <!-- SSL configuration --> ssl=false ssl.protocol=SSL <!-- matches the keystore/truststore configured below --> ssl.truststore=ks ssl.algorithm=PKIX <!-- The User and Role caches can be disabled - 6.3.0 179 and later --> disableCache=true </jaas:module> </jaas:config> <!-- Location of the SSL truststore/keystore <jaas:keystore name=\"ks\" path=\"file:///USD{karaf.home}/etc/ldap.truststore\" keystorePassword=\"XXXXXX\" /> --> </blueprint>",
"user.filter=(&(objectClass=InetOrgPerson)(uid=%u)) role.filter=(uniquemember=%fqdn)",
"user.filter=(&(objectCategory=person)(samAccountName=%u)) role.filter=(uniquemember=%fqdn)",
"user.filter=(uid=%u) role.filter=(member=uid=%u)",
"user.filter=(uid=%u) role.filter=(member:=uid=%u)",
"Audit file appender log4j2.appender.audit.type = RollingRandomAccessFile log4j2.appender.audit.name = AuditRollingFile log4j2.appender.audit.fileName = USD{karaf.data}/security/audit.log log4j2.appender.audit.filePattern = USD{karaf.data}/security/audit.log.%i log4j2.appender.audit.append = true log4j2.appender.audit.layout.type = PatternLayout log4j2.appender.audit.layout.pattern = USD{log4j2.pattern} log4j2.appender.audit.policies.type = Policies log4j2.appender.audit.policies.size.type = SizeBasedTriggeringPolicy log4j2.appender.audit.policies.size.size = 8MB",
"log4j2.logger.audit.name = org.apache.karaf.jaas.modules.audit log4j2.logger.audit.level = INFO log4j2.logger.audit.additivity = false log4j2.logger.audit.appenderRef.AuditRollingFile.ref = AuditRollingFile",
"audit.log.enabled = true audit.log.logger = <logger.name> audit.log.level = <level>",
"audit.log.enabled = true audit.log.logger = org.apache.karaf.jaas.modules.audit audit.log.level = INFO",
"audit.log.enabled = true audit.log.level = INFO audit.log.logger = org.apache.karaf.jaas.modules.audit encryption.algorithm = MD5 encryption.enabled = false encryption.encoding = hexadecimal encryption.name = encryption.prefix = {CRYPT} encryption.suffix = {CRYPT}",
"audit.file.enabled = true audit.file.file = USD{karaf.data}/security/audit.log",
"<blueprint xmlns=\"http://www.osgi.org/xmlns/blueprint/v1.0.0\"> <service interface=\"org.apache.karaf.jaas.modules.EncryptionService\"> <service-properties> <entry key=\"name\" value=\"jasypt\" /> </service-properties> <bean class=\"org.apache.karaf.jaas.jasypt.impl.JasyptEncryptionService\"/> </service> </blueprint>",
"JBossA-MQ:karaf@root> features:install jasypt-encryption",
"karaf@root> features:install jasypt-encryption",
"# Boolean enabling / disabling encrypted passwords # encryption.enabled = true # Encryption Service name the default one is 'basic' a more powerful one named 'jasypt' is available when installing the encryption feature # encryption.name = jasypt # Encryption prefix # encryption.prefix = {CRYPT} # Encryption suffix # encryption.suffix = {CRYPT} # Set the encryption algorithm to use in Karaf JAAS login module Supported encryption algorithms follow: MD2 MD5 SHA-1 SHA-256 SHA-384 SHA-512 # encryption.algorithm = SHA-256",
"karaf@root()> jaas:realms Index │ Realm Name │ Login Module Class Name ──────┼────────────┼─────────────────────────────────────────────────────────────── 1 │ karaf │ org.apache.karaf.jaas.modules.properties.PropertiesLoginModule 2 │ karaf │ org.apache.karaf.jaas.modules.publickey.PublickeyLoginModule 3 │ karaf │ org.apache.karaf.jaas.modules.audit.FileAuditLoginModule 4 │ karaf │ org.apache.karaf.jaas.modules.audit.LogAuditLoginModule 5 │ karaf │ org.apache.karaf.jaas.modules.audit.EventAdminAuditLoginModule",
"karaf@root()> jaas:realm-manage --index 1 karaf@root()> jaas:user-list User Name │ Group │ Role ──────────┼────────────┼────────────── admin │ admingroup │ admin admin │ admingroup │ manager admin │ admingroup │ viewer admin │ admingroup │ systembundles admin │ admingroup │ ssh karaf@root()> jaas:useradd usertest test123 karaf@root()> jaas:group-add usertest admingroup karaf@root()> jaas:update karaf@root()> jaas:realm-manage --index 1 karaf@root()> jaas:user-list User Name │ Group │ Role ──────────┼────────────┼────────────── admin │ admingroup │ admin admin │ admingroup │ manager admin │ admingroup │ viewer admin │ admingroup │ systembundles admin │ admingroup │ ssh usertest │ admingroup │ admin usertest │ admingroup │ manager usertest │ admingroup │ viewer usertest │ admingroup │ systembundles usertest │ admingroup │ ssh",
"admin = {CRYPT}WXX+4PM2G7nT045ly4iS0EANsv9H/VwmStGIb9bcbGhFH5RgMuL0D3H/GVTigpga{CRYPT},_g_:admingroup _g_\\:admingroup = group,admin,manager,viewer,systembundles,ssh usertest = {CRYPT}33F5E76E5FF97F3D27D790AAA1BEE36057410CCDBDBE2C792239BB2853D17654315354BB8B608AD5{CRYPT},_g_:admingroup",
"CamelInstallDir/examples/camel-example-servlet-rest-karaf-jaas",
"mvn install",
"cp src/main/resources/org.ops4j.pax.web.context-camelrestdsl.cfg USDKARAF_HOME/etc",
"feature:repo-add camel USD{project.version} feature:install camel",
"feature:install camel-servlet feature:install camel-jackson feature:install war",
"install -s mvn:org.apache.camel.example/camel-example-servlet-rest-karaf-jaas/USD{project.version}",
"log:tail",
"http://localhost:8181/camel-example-servlet-rest-blueprint/rest/user/123",
"http://localhost:8181/camel-example-servlet-rest-blueprint/rest/user/findAll",
"curl -X GET -H \"Accept: application/json\" --basic -u admin:admin http://localhost:8181/camel-example-servlet-rest-blueprint/rest/user/123",
"curl -X GET -H \"Accept: application/json\" --basic -u admin:admin http://localhost:8181/camel-example-servlet-rest-blueprint/rest/user/findAll",
"curl -X PUT -d \"{ \\\"id\\\": 234, \\\"name\\\": \\\"John Smith\\\"}\" -H \"Accept: application/json\" --basic -u admin:admin http://localhost:8181/camel-example-servlet-rest-blueprint/rest/user",
"admin = secretpass,group,admin,manager,viewer,systembundles,ssh",
"admin = secretpass, _g_:admingroup _g_\\:admingroup = group,admin,manager,viewer,systembundles,ssh",
"etc/auth/jmx.acl[.*].cfg",
"jmx.acl.org.apache.camel.context",
"etc/auth/jmx.acl.org.apache.camel.context.cfg",
"Pattern = Role1 [, Role2 ][, Role3 ]",
"getResourceURL = admin, manager, viewer getLoadClassOrigin = admin, manager, viewer",
"set* = admin, manager, viewer",
"create(java.lang.String)[/jmx[.]acl.*/] = admin create(java.lang.String)[/org[.]apache[.]karaf[.]command[.]acl.+/] = admin create(java.lang.String)[/org[.]apache[.]karaf[.]service[.]acl.+/] = admin create(java.lang.String) = admin, manager",
"jmx.acl.org.apache.cxf.Bus jmx.acl.org.apache.cxf jmx.acl.org.apache jmx.acl.org jmx.acl",
"list* = admin, manager, viewer get* = admin, manager, viewer is* = admin, manager, viewer set* = admin * = admin",
"is* = admin, manager, viewer get* = admin, manager, viewer set* = admin, manager",
"jmx.acl.org.example.MyMBean.cfg",
"etc/auth/org.apache.karaf.command.acl. CommandScope .cfg",
"Pattern = Role1 [, Role2 ][, Role3 ]",
"list = admin, manager, viewer repo-list = admin, manager, viewer info = admin, manager, viewer version-list = admin, manager, viewer repo-refresh = admin, manager repo-add = admin, manager repo-remove = admin, manager install = admin uninstall = admin",
"start[/.*[-][f].*/] = admin start = admin, manager stop[/.*[-][f].*/] = admin stop = admin, manager",
"service.guard = (objectClass= InterfaceName )",
"Pattern = Role1 [, Role2 ][, Role3 ]",
"package org.example; public interface MyService { void doit(String s); }",
"<?xml version=\"1.0\" encoding=\"UTF-8\"?> <blueprint xmlns=\"http://www.osgi.org/xmlns/blueprint/v1.0.0\" xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\" default-activation=\"lazy\"> <bean id=\"myserviceimpl\" class=\"org.example.MyServiceImpl\"/> <service id=\"myservice\" ref=\"myserviceimpl\" interface=\"org.example.MyService\"/> </blueprint>",
"etc/auth/org.apache.karaf.service.acl.myservice.cfg",
"service.guard = (objectClass= InterfaceName ) Pattern = Role1 [, Role2 ][, Role3 ]",
"service.guard = (objectClass=org.example.MyService) doit = admin, manager, viewer",
"karaf.secured.services=(|(objectClass= ServiceInterface )( ...ExistingPropValue... ))",
"karaf.secured.services=(|(objectClass=org.example.MyService)(&(osgi.command.scope=*)(osgi.command.function=*)))",
"// Java import javax.security.auth.Subject; import org.apache.karaf.jaas.boot.principal.RolePrincipal; // Subject s = new Subject(); s.getPrincipals().add(new RolePrincipal(\"Deployer\")); Subject.doAs(s, new PrivilegedAction() { public Object run() { svc.doit(\"foo\"); // invoke the service } }",
"<blueprint xmlns=\"http://www.osgi.org/xmlns/blueprint/v1.0.0\" xmlns:enc=\"http://karaf.apache.org/xmlns/jasypt/v1.0.0\"> <enc:property-placeholder> <enc:encryptor class=\"org.jasypt.encryption.pbe.StandardPBEStringEncryptor\"> <property name=\"config\"> <bean class=\"org.jasypt.encryption.pbe.config.EnvironmentStringPBEConfig\"> <property name=\"algorithm\" value=\"PBEWithMD5AndDES\" /> <property name=\"password\" value=\"myPassword\" /> </bean> </property> </enc:encryptor> </enc:property-placeholder> </blueprint>",
"karaf@root()> jasypt:list-algorithms",
"karaf@root()> jasypt:list-algorithms DIGEST ALGORITHMS: - 1.0.10118.3.0.55 - 1.2.804.2.1.1.1.1.2.2.1 - 2.16.840.1.101.3.4.2.9 - BLAKE2B-160 - BLAKE2B-256 - MD4 - MD5 - OID.1.0.10118.3.0.55 - SHA3-512 - SKEIN-1024-1024 - SKEIN-1024-384 - TIGER - WHIRLPOOL PBE ALGORITHMS: - PBEWITHHMACSHA1ANDAES_128 - PBEWITHHMACSHA1ANDAES_256 - PBEWITHSHA1ANDRC2_128 - PBEWITHSHA1ANDRC2_40 - PBEWITHSHAANDIDEA-CBC - PBEWITHSHAANDTWOFISH-CBC",
"karaf@root()> jasypt:encrypt Master password: ******** Master password (repeat): ******** Data to encrypt: ***** Data to encrypt (repeat): ***** Algorithm used: PBEWithMD5AndDES Encrypted data: oT8/LImAFQmOfXxuFGRDTAjD1l1+GxKL+TnHxFNwX4A=",
"#ldap.properties ldap.password=ENC(VMJ5S566MEDhQ5r6jiIqTB+fao3NN4pKnQ9xU0wiDCg=) ldap.url=ldap://192.168.1.74:10389",
"<blueprint xmlns=\"http://www.osgi.org/xmlns/blueprint/v1.0.0\" xmlns:ext=\"http://aries.apache.org/blueprint/xmlns/blueprint-ext/v1.0.0\" xmlns:enc=\"http://karaf.apache.org/xmlns/jasypt/v1.0.0\"> </blueprint>",
"<blueprint xmlns=\"http://www.osgi.org/xmlns/blueprint/v1.0.0\" xmlns:ext=\"http://aries.apache.org/blueprint/xmlns/blueprint-ext/v1.0.0\" xmlns:enc=\"http://karaf.apache.org/xmlns/jasypt/v1.0.0\"> <ext:property-placeholder> <ext:location>file:etc/ldap.properties</ext:location> </ext:property-placeholder> <enc:property-placeholder> <enc:encryptor class=\"org.jasypt.encryption.pbe.StandardPBEStringEncryptor\"> <property name=\"config\"> <bean class=\"org.jasypt.encryption.pbe.config.EnvironmentStringPBEConfig\"> <property name=\"algorithm\" value=\"PBEWithMD5AndDES\" /> <property name=\"passwordEnvName\" value=\"JASYPT_ENCRYPTION_PASSWORD\" /> </bean> </property> </enc:encryptor> </enc:property-placeholder> ... </blueprint>",
"PBEWITHHMACSHA1ANDAES_128 PBEWITHHMACSHA1ANDAES_256 PBEWITHHMACSHA224ANDAES_128 PBEWITHHMACSHA224ANDAES_256 PBEWITHHMACSHA256ANDAES_128 PBEWITHHMACSHA256ANDAES_256 PBEWITHHMACSHA384ANDAES_128 PBEWITHHMACSHA384ANDAES_256 PBEWITHHMACSHA512ANDAES_128 PBEWITHHMACSHA512ANDAES_256",
"<enc:property-placeholder> <enc:encryptor class=\"org.jasypt.encryption.pbe.StandardPBEStringEncryptor\"> <property name=\"config\"> <bean class=\"org.jasypt.encryption.pbe.config.EnvironmentStringPBEConfig\"> <property name=\"algorithm\" value=\"PBEWITHHMACSHA1ANDAES_128\"/> <property name=\"passwordEnvName\" value=\"JASYPT_ENCRYPTION_PASSWORD\"/> <property name=\"ivGenerator\"> <bean class=\"org.jasypt.iv.RandomIvGenerator\" /> </property> </bean> </property> </enc:encryptor> </enc:property-placeholder>",
"<blueprint xmlns=\"http://www.osgi.org/xmlns/blueprint/v1.0.0\" xmlns:ext=\"http://aries.apache.org/blueprint/xmlns/blueprint-ext/v1.0.0\" xmlns:enc=\"http://karaf.apache.org/xmlns/jasypt/v1.0.0\"> <ext:property-placeholder> <location>file:etc/ldap.properties</location> </ext:property-placeholder> <enc:property-placeholder> <enc:encryptor class=\"org.jasypt.encryption.pbe.StandardPBEStringEncryptor\"> <property name=\"config\"> <bean class=\"org.jasypt.encryption.pbe.config.EnvironmentStringPBEConfig\"> <property name=\"algorithm\" value=\"PBEWithMD5AndDES\" /> <property name=\"passwordEnvName\" value=\"JASYPT_ENCRYPTION_PASSWORD\" /> </bean> </property> </enc:encryptor> </enc:property-placeholder> <jaas:config name=\"karaf\" rank=\"200\"> <jaas:module className=\"org.apache.karaf.jaas.modules.ldap.LDAPLoginModule\" flags=\"required\"> initialContextFactory=com.sun.jndi.ldap.LdapCtxFactory debug=true connectionURL=USD{ldap.url} connectionUsername=cn=mqbroker,ou=Services,ou=system,dc=jbossfuse,dc=com connectionPassword=USD{ldap.password} connectionProtocol= authentication=simple userRoleName=cn userBase = ou=User,ou=ActiveMQ,ou=system,dc=jbossfuse,dc=com userSearchMatching=(uid={0}) userSearchSubtree=true roleBase = ou=Group,ou=ActiveMQ,ou=system,dc=jbossfuse,dc=com roleName=cn roleSearchMatching= (member:=uid={1}) roleSearchSubtree=true </jaas:module> </jaas:config> </blueprint>",
"export MASTER_PASSWORD=passw0rd",
"karaf@root()> jasypt:encrypt -w MASTER_PASSWORD \"USDenUSD!t!ve\" Algorithm used: PBEWithMD5AndDES Encrypted data: /4DZCwqXD7cQ++TKQjt9QzmmcWv7TwmylCPkHumv2LQ=",
"master.password=passw0rd",
"karaf@root()> jasypt:encrypt -w master.password \"USDenUSD!t!ve\" Algorithm used: PBEWithMD5AndDES Encrypted data: 03+8UTJJtEXxHaJkVCmzhqLMUYtT8TBG2RMvOBQlfmQ=",
"karaf@root()> jasypt:digest Input data to digest: ******** Input data to digest (repeat): ******** Algorithm used: MD5 Digest value: 8D4C0B3D5EE133BCFD7585A90F15C586741F814BC527EAE2A386B9AA6609B926AD9B3C418937251373E08F18729AD2C93815A7F14D878AA0EF3268AA04729A614ECAE95029A112E9AD56FEDD3FD7E28B73291C932B6F4C894737FBDE21AB382",
"karaf@root()> jasypt:digest ImportantPassword",
"karaf@root()> jasypt:digest ImportantPassword Algorithm used: MD5 Digest value: 0bL90nno/nHiTEdzx3dKa61LBDcWQQZMpjaONtY3b1fJBuDWbWTTtZ6tE5eOOPKh7orLTXS7XRt2blA2DrfnjWIlIETjge9n",
"karaf@root()> jasypt:digest --iterations 1000000 --salt-size 32 -a SHA-512 --hex passw0rd Algorithm used: SHA-512 Digest value: 4007A85C4932A399D8376B4F2B3221E34F0AF349BB152BEAC80F03BEB2B368DA7900F0990C186DB36D61741FA147B96DC9F73481991506FAA3662EA1693642CDAB89EB7E6B1DC21E1443D06D70A5842EB2851D37E262D5FC77A1D0909B3B2783",
"karaf@root()> jasypt:decrypt Master password: ******** Data to decrypt: ******************************************** Algorithm used: PBEWithMD5AndDES Decrypted data: USDenUSD!t!ve",
"export MASTER_PASSWORD=passw0rd",
"karaf@root()> jasypt:decrypt -a -w MASTER_PASSWORD Data to decrypt: ******************************************** Algorithm used: PBEWithMD5AndDES Decrypted data: USDenUSD!t!ve",
"master.password=passw0rd",
"karaf@root()> jasypt:decrypt -w master.password Data to decrypt: ******************************************** Algorithm used: PBEWithMD5AndDES Decrypted data: USDenUSD!t!ve",
"admin= YourPassword ,admin",
"cd etc",
"USDJAVA_HOME/bin/keytool -genkey -v -alias jbossalias -keyalg RSA -keysize 1024 -keystore jbossweb.keystore -validity 3650 -keypass JbossPassword -storepass JbossPassword -dname \"CN=127.0.0.1, OU=RedHat Software Unit, O=RedHat, L=Boston, S=Mass, C=USA\"",
"Generating 1,024 bit RSA key pair and self-signed certificate (SHA256withRSA) with a validity of 3,650 days for: CN=127.0.0.1, OU=RedHat Software Unit, O=RedHat, L=Boston, ST=Mass, C=USA New certificate (self-signed): [ [ Version: V3 Subject: CN=127.0.0.1, OU=RedHat Software Unit, O=RedHat, L=Boston, ST=Mass, C=USA Signature Algorithm: SHA256withRSA, OID = 1.2.840.113549.1.1.11 Key: Sun RSA public key, 1024 bits modulus: 1123086025790567043604962990501918169461098372864273201795342440080393808 1594100776075008647459910991413806372800722947670166407814901754459100720279046 3944621813738177324031064260382659483193826177448762030437669318391072619867218 036972335210839062722456085328301058362052369248473659880488338711351959835357 public exponent: 65537 Validity: [From: Thu Jun 05 12:19:52 EDT 2014, To: Sun Jun 02 12:19:52 EDT 2024] Issuer: CN=127.0.0.1, OU=RedHat Software Unit, O=RedHat, L=Boston, ST=Mass, C=USA SerialNumber: [ 4666e4e6] Certificate Extensions: 1 [1]: ObjectId: 2.5.29.14 Criticality=false SubjectKeyIdentifier [ KeyIdentifier [ 0000: AC 44 A5 F2 E6 2F B2 5A 5F 88 FE 69 60 B4 27 7D .D.../.Z_..i`.'. 0010: B9 81 23 9C ..#. ] ] ] Algorithm: [SHA256withRSA] Signature: 0000: 01 1D 95 C0 F2 03 B0 FD CF 3A 1A 14 F5 2E 04 E5 .........:... 0010: DD 18 DD 0E 24 60 00 54 35 AE FE 36 7B 38 69 4C ....USD`.T5..6.8iL 0020: 1E 85 0A AF AE 24 1B 40 62 C9 F4 E5 A9 02 CD D3 .....USD.@b.... 0030: 91 57 60 F6 EF D6 A4 84 56 BA 5D 21 11 F7 EA 09 .W`.....V.]!. 0040: 73 D5 6B 48 4A A9 09 93 8C 05 58 91 6C D0 53 81 s.kHJ.....X.l.S. 0050: 39 D8 29 59 73 C4 61 BE 99 13 12 89 00 1C F8 38 9.)Ys.a........8 0060: E2 BF D5 3C 87 F6 3F FA E1 75 69 DF 37 8E 37 B5 ...<..?..ui.7.7. 0070: B7 8D 10 CC 9E 70 E8 6D C2 1A 90 FF 3C 91 84 50 .....p.m....<..P ] [Storing jbossweb.keystore]",
"<blueprint xmlns=\"http://www.osgi.org/xmlns/blueprint/v1.0.0\" xmlns:jaas=\"http://karaf.apache.org/xmlns/jaas/v1.0.0\"> <jaas:keystore name=\"sample_keystore\" rank=\"1\" path=\"file:etc/jbossweb.keystore\" keystorePassword=\"JbossPassword\" keyPasswords=\"jbossalias=JbossPassword\" /> </blueprint>",
"secured = true secureProtocol = TLSv1 keyAlias = jbossalias keyStore = sample_keystore trustStore = sample_keystore",
"cd <installDir> /jboss-fuse-7.13.0.fuse-7_13_0-00012-redhat-00001/etc",
"jconsole -J-Djavax.net.debug=ssl -J-Djavax.net.ssl.trustStore=jbossweb.keystore -J-Djavax.net.ssl.trustStoreType=JKS -J-Djavax.net.ssl.trustStorePassword=JbossPassword",
"service:jmx:rmi://localhost:44444/jndi/rmi://localhost:1099/karaf-root",
"Username: admin Password: YourPassword",
"karaf@root()> credential-store:create --persist Credential store password: ***** Credential store password (repeat): ***** Credential store configuration was persisted in USD{karaf.etc}/system.properties and is effective. Credential store was written to /data/servers/fuse-karaf-7.4.0.fuse-740060/etc/credential.store.p12 By default, only system properties are encrypted. Encryption of configuration admin properties can be enabled by setting felix.cm.pm=elytron in etc/config.properties.",
"credential.store.location = /data/servers/fuse-karaf-7.4.0.fuse-740060/etc/credential.store.p12 credential.store.protection.algorithm = masked-SHA1-DES-EDE credential.store.protection.params = MDkEKFJId25PaXlVQldKUWw5R2tLclhZQndpTGhhVXJsWG5lNVJMbTFCZEMCAwMNQAQI0Whepb7H1BA= credential.store.protection = m+1BcfRyCnI=",
"karaf@root()> credential-store:store db.password Secret value to store: ****** Secret value to store (repeat): ****** Value stored in the credential store. To reference it use: CS:db.password",
"db.password = CS:db.password",
"karaf@root()> credential-store:list --show-secrets Alias │ Reference │ Secret value ────────────┼────────────────┼───────────── db.password │ CS:db.password │ sec4et",
"karaf@root()> system:property db.password sec4et",
"karaf@root()> credential-store:create Credential store password: ***** Credential store password (repeat): ***** Credential store was written to /data/servers/fuse-karaf-7.4.0.fuse-740060/etc/credential.store.p12 By default, only system properties are encrypted. Encryption of configuration admin properties can be enabled by setting felix.cm.pm=elytron in etc/config.properties. Credential store configuration was not persisted and is not effective. Please use one of the following configuration options and restart Fuse. Option #1: Configure these system properties (e.g., in etc/system.properties): - credential.store.protection.algorithm=masked-SHA1-DES-EDE - credential.store.protection.params=MDkEKGdOSkpRWXpndjhkVVZYbHF4elVpbUszNW0wc3NXczhNS1A5cVlhZzcCAwMNQAQIDPzQ+BDGwX4= - credential.store.protection=0qudlx1XZFM= - credential.store.location=/data/servers/fuse-karaf-7.4.0.fuse-740060/etc/credential.store.p12 Option #2: Configure these environmental variables (e.g., in bin/setenv): - CREDENTIAL_STORE_PROTECTION_ALGORITHM=masked-SHA1-DES-EDE - CREDENTIAL_STORE_PROTECTION_PARAMS=MDkEKGdOSkpRWXpndjhkVVZYbHF4elVpbUszNW0wc3NXczhNS1A5cVlhZzcCAwMNQAQIDPzQ+BDGwX4= - CREDENTIAL_STORE_PROTECTION=0qudlx1XZFM= - CREDENTIAL_STORE_LOCATION=/data/servers/fuse-karaf-7.4.0.fuse-740060/etc/credential.store.p12",
"karaf@root()> credential-store:store db.password sec4et Value stored in the credential store. To reference it use: CS:db.password",
"karaf@root()> credential-store:list Alias │ Reference ─────────────┼─────────────── db.password │ CS:db.password db2.password | CS:db2.password",
"karaf@root()> credential-store:list --show-secrets Alias │ Reference │ Secret value ─────────────┼─────────────────┼───────────── db.password │ CS:db.password │ sec4et db2.password | CS:db2.password | t0pSec4et",
"karaf@root()> credential-store:list --help",
"karaf@root()> credential-store:remove db.password Alias │ Reference │ Secret value ─────────────┼─────────────────┼───────────── db2.password | CS:db2.password | t0pSec4et",
"When uncommented, configuration properties handled by Configuration Admin service will be encrypted when storing in etc/ and in bundle data. Values of the properties will actually be aliases to credential store entries. Please consult the documentation for more details. felix.cm.pm = elytron",
"karaf@root()> credential-store:list --show-secrets Alias │ Reference │ Secret value ────────────┼────────────────┼───────────── db.password │ CS:db.password │ sec4et http.port │ CS:http.port │ 8182",
"karaf@root()> config:property-list --pid org.ops4j.pax.web javax.servlet.context.tempdir = /data/servers/fuse-karaf-7.4.0.fuse-740060/data/pax-web-jsp org.ops4j.pax.web.config.file = /data/servers/fuse-karaf-7.4.0.fuse-740060/etc/undertow.xml org.ops4j.pax.web.session.cookie.httpOnly = true org.osgi.service.http.port = 8181 karaf@root()> config:property-set --pid org.ops4j.pax.web org.osgi.service.http.port CS:http.port karaf@root()> config:property-list --pid org.ops4j.pax.web javax.servlet.context.tempdir = /data/servers/fuse-karaf-7.4.0.fuse-740060/data/pax-web-jsp org.ops4j.pax.web.config.file = /data/servers/fuse-karaf-7.4.0.fuse-740060/etc/undertow.xml org.ops4j.pax.web.session.cookie.httpOnly = true org.osgi.service.http.port = CS:http.port",
"2019-03-12 15:36:25,648 INFO {paxweb-config-2-thread-1} (ServerControllerImpl.java:458) : Starting undertow http listener on 0.0.0.0:8182",
"org.osgi.service.http.port = CS:http.port org.ops4j.pax.web.config.file = USD{karaf.etc}/undertow.xml org.ops4j.pax.web.session.cookie.httpOnly = true javax.servlet.context.tempdir = USD{karaf.data}/pax-web-jsp"
]
| https://docs.redhat.com/en/documentation/red_hat_fuse/7.13/html/apache_karaf_security_guide/esbsecurecontainer |
2.2.3.5. Use Kerberos Authentication | 2.2.3.5. Use Kerberos Authentication One of the issues to consider when NIS is used for authentication is that whenever a user logs into a machine, a password hash from the /etc/shadow map is sent over the network. If an intruder gains access to a NIS domain and sniffs network traffic, they can collect user names and password hashes. With enough time, a password cracking program can guess weak passwords, and an attacker can gain access to a valid account on the network. Since Kerberos uses secret-key cryptography, no password hashes are ever sent over the network, making the system far more secure. Refer to Managing Single Sign-On and Smart Cards for more information about Kerberos. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/security_guide/sect-security_guide-securing_nis-use_kerberos_authentication |
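As a brief, hedged illustration only (this command is not part of the original section, and the realm and KDC host below are assumptions), a Red Hat Enterprise Linux 6 client can be switched to Kerberos-backed password authentication with authconfig:

# Illustrative values - substitute your own realm and KDC host
authconfig --enablekrb5 \
           --krb5realm=EXAMPLE.COM \
           --krb5kdc=kdc.example.com \
           --krb5adminserver=kdc.example.com \
           --update

With this in place, password verification is performed against the KDC using Kerberos tickets, so password hashes never need to traverse the network.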
C.4. Example User Interface Plug-in Deployment | C.4. Example User Interface Plug-in Deployment Follow these instructions to create a user interface plug-in that runs a Hello World! program when you sign in to the Red Hat Virtualization Manager Administration Portal. Deploying a Hello World! Plug-in Create a plug-in descriptor by creating the following file in the Manager at /usr/share/ovirt-engine/ui-plugins/helloWorld.json : { "name": "HelloWorld", "url": "/ovirt-engine/webadmin/plugin/HelloWorld/start.html", "resourcePath": "hello-files" } Create the plug-in host page by creating the following file in the Manager at /usr/share/ovirt-engine/ui-plugins/hello-files/start.html : <!DOCTYPE html><html><head> <script> var api = parent.pluginApi('HelloWorld'); api.register({ UiInit: function() { window.alert('Hello world'); } }); api.ready(); </script> </head><body></body></html> If you have successfully implemented the Hello World! plug-in, you will see this screen when you sign in to the Administration Portal: Figure C.1. A Successful Implementation of the Hello World! Plug-in | [
"{ \"name\": \"HelloWorld\", \"url\": \"/ovirt-engine/webadmin/plugin/HelloWorld/start.html\", \"resourcePath\": \"hello-files\" }",
"<!DOCTYPE html><html><head> <script> var api = parent.pluginApi('HelloWorld'); api.register({ UiInit: function() { window.alert('Hello world'); } }); api.ready(); </script> </head><body></body></html>"
]
| https://docs.redhat.com/en/documentation/red_hat_virtualization/4.4/html/administration_guide/example_rhev_ui_plugin_deployment |
Chapter 12. Manage secure signatures with sigstore | Chapter 12. Manage secure signatures with sigstore You can use the sigstore project with OpenShift Container Platform to improve supply chain security. 12.1. About the sigstore project The sigstore project enables developers to sign off on what they build and administrators to verify signatures and monitor workflows at scale. With the sigstore project, signatures can be stored in the same registry as the build images. A second server is not needed. The identity piece of a signature is tied to the OpenID Connect (OIDC) identity through the Fulcio certificate authority, which simplifies the signature process by allowing keyless signing. Additionally, sigstore includes Rekor, which records signature metadata to an immutable, tamper-resistant ledger. | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.17/html/nodes/nodes-sigstore-using
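The chapter above is conceptual and does not prescribe commands; as a hedged sketch only, signing and verifying an image with the sigstore cosign CLI typically looks like the following (the key file names and image reference are illustrative assumptions, not taken from this guide):

# Generate a key pair (cosign.key / cosign.pub)
cosign generate-key-pair

# Sign the image; the signature is stored in the same registry as the image
cosign sign --key cosign.key quay.io/example/hello-app:latest

# Verify the signature before the image is deployed
cosign verify --key cosign.pub quay.io/example/hello-app:latest

Keyless signing, as described above for the Fulcio certificate authority, drops the --key option and authenticates the signer through an OIDC identity instead.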
Chapter 6. PriorityClass [scheduling.k8s.io/v1] | Chapter 6. PriorityClass [scheduling.k8s.io/v1] Description PriorityClass defines mapping from a priority class name to the priority integer value. The value can be any valid integer. Type object Required value 6.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources description string description is an arbitrary string that usually provides guidelines on when this priority class should be used. globalDefault boolean globalDefault specifies whether this PriorityClass should be considered as the default priority for pods that do not have any priority class. Only one PriorityClass can be marked as globalDefault . However, if more than one PriorityClasses exists with their globalDefault field set to true, the smallest value of such global default PriorityClasses will be used as the default priority. kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata preemptionPolicy string PreemptionPolicy is the Policy for preempting pods with lower priority. One of Never, PreemptLowerPriority. Defaults to PreemptLowerPriority if unset. value integer The value of this priority class. This is the actual priority that pods receive when they have the name of this class in their pod spec. 6.2. API endpoints The following API endpoints are available: /apis/scheduling.k8s.io/v1/priorityclasses DELETE : delete collection of PriorityClass GET : list or watch objects of kind PriorityClass POST : create a PriorityClass /apis/scheduling.k8s.io/v1/watch/priorityclasses GET : watch individual changes to a list of PriorityClass. deprecated: use the 'watch' parameter with a list operation instead. /apis/scheduling.k8s.io/v1/priorityclasses/{name} DELETE : delete a PriorityClass GET : read the specified PriorityClass PATCH : partially update the specified PriorityClass PUT : replace the specified PriorityClass /apis/scheduling.k8s.io/v1/watch/priorityclasses/{name} GET : watch changes to an object of kind PriorityClass. deprecated: use the 'watch' parameter with a list operation instead, filtered to a single item with the 'fieldSelector' parameter. 6.2.1. /apis/scheduling.k8s.io/v1/priorityclasses Table 6.1. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method DELETE Description delete collection of PriorityClass Table 6.2. Query parameters Parameter Type Description continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. 
If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. gracePeriodSeconds integer The duration in seconds before the object should be deleted. Value must be non-negative integer. The value zero indicates delete immediately. If this value is nil, the default grace period for the specified type will be used. Defaults to a per object value if not specified. zero means delete immediately. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. orphanDependents boolean Deprecated: please use the PropagationPolicy, this field will be deprecated in 1.7. Should the dependent objects be orphaned. If true/false, the "orphan" finalizer will be added to/removed from the object's finalizers list. Either this field or PropagationPolicy may be set, but not both. propagationPolicy string Whether and how garbage collection will be performed. Either this field or OrphanDependents may be set, but not both. 
The default policy is decided by the existing finalizer set in the metadata.finalizers and the resource-specific default policy. Acceptable values are: 'Orphan' - orphan the dependents; 'Background' - allow the garbage collector to delete the dependents in the background; 'Foreground' - a cascading policy that deletes all dependents in the foreground. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. Table 6.3. Body parameters Parameter Type Description body DeleteOptions schema Table 6.4. HTTP responses HTTP code Reponse body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list or watch objects of kind PriorityClass Table 6.5. Query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. 
Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. Table 6.6. HTTP responses HTTP code Reponse body 200 - OK PriorityClassList schema 401 - Unauthorized Empty HTTP method POST Description create a PriorityClass Table 6.7. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. 
This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 6.8. Body parameters Parameter Type Description body PriorityClass schema Table 6.9. HTTP responses HTTP code Reponse body 200 - OK PriorityClass schema 201 - Created PriorityClass schema 202 - Accepted PriorityClass schema 401 - Unauthorized Empty 6.2.2. /apis/scheduling.k8s.io/v1/watch/priorityclasses Table 6.10. Global query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. 
The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. pretty string If 'true', then the output is pretty printed. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. HTTP method GET Description watch individual changes to a list of PriorityClass. deprecated: use the 'watch' parameter with a list operation instead. Table 6.11. HTTP responses HTTP code Reponse body 200 - OK WatchEvent schema 401 - Unauthorized Empty 6.2.3. /apis/scheduling.k8s.io/v1/priorityclasses/{name} Table 6.12. Global path parameters Parameter Type Description name string name of the PriorityClass Table 6.13. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method DELETE Description delete a PriorityClass Table 6.14. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed gracePeriodSeconds integer The duration in seconds before the object should be deleted. Value must be non-negative integer. The value zero indicates delete immediately. If this value is nil, the default grace period for the specified type will be used. Defaults to a per object value if not specified. zero means delete immediately. orphanDependents boolean Deprecated: please use the PropagationPolicy, this field will be deprecated in 1.7. Should the dependent objects be orphaned. If true/false, the "orphan" finalizer will be added to/removed from the object's finalizers list. Either this field or PropagationPolicy may be set, but not both. propagationPolicy string Whether and how garbage collection will be performed. Either this field or OrphanDependents may be set, but not both. The default policy is decided by the existing finalizer set in the metadata.finalizers and the resource-specific default policy. 
Acceptable values are: 'Orphan' - orphan the dependents; 'Background' - allow the garbage collector to delete the dependents in the background; 'Foreground' - a cascading policy that deletes all dependents in the foreground. Table 6.15. Body parameters Parameter Type Description body DeleteOptions schema Table 6.16. HTTP responses HTTP code Reponse body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified PriorityClass Table 6.17. HTTP responses HTTP code Reponse body 200 - OK PriorityClass schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified PriorityClass Table 6.18. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . This field is required for apply requests (application/apply-patch) but optional for non-apply patch types (JsonPatch, MergePatch, StrategicMergePatch). fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. force boolean Force is going to "force" Apply requests. It means user will re-acquire conflicting fields owned by other people. Force flag must be unset for non-apply patch requests. Table 6.19. Body parameters Parameter Type Description body Patch schema Table 6.20. HTTP responses HTTP code Reponse body 200 - OK PriorityClass schema 201 - Created PriorityClass schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified PriorityClass Table 6.21. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. 
The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 6.22. Body parameters Parameter Type Description body PriorityClass schema Table 6.23. HTTP responses HTTP code Reponse body 200 - OK PriorityClass schema 201 - Created PriorityClass schema 401 - Unauthorized Empty 6.2.4. /apis/scheduling.k8s.io/v1/watch/priorityclasses/{name} Table 6.24. Global path parameters Parameter Type Description name string name of the PriorityClass Table 6.25. Global query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. 
labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. pretty string If 'true', then the output is pretty printed. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. HTTP method GET Description watch changes to an object of kind PriorityClass. deprecated: use the 'watch' parameter with a list operation instead, filtered to a single item with the 'fieldSelector' parameter. Table 6.26. HTTP responses HTTP code Reponse body 200 - OK WatchEvent schema 401 - Unauthorized Empty | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.13/html/schedule_and_quota_apis/priorityclass-scheduling-k8s-io-v1 |
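As a short, hedged illustration of the schema documented above (the class name and value are arbitrary examples, not taken from this reference), a PriorityClass can be declared as follows and then referenced from a pod through spec.priorityClassName:

apiVersion: scheduling.k8s.io/v1
kind: PriorityClass
metadata:
  name: high-priority                  # example name
value: 1000000                         # priority given to pods that reference this class
globalDefault: false                   # not the cluster-wide default
preemptionPolicy: PreemptLowerPriority
description: "Example class for latency-sensitive workloads."

Creating it with oc create -f high-priority.yaml (or a POST to the /apis/scheduling.k8s.io/v1/priorityclasses endpoint listed above) makes the class available cluster-wide.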
Chapter 5. Remote health monitoring | Chapter 5. Remote health monitoring OpenShift Data Foundation collects anonymized aggregated information about the health, usage, and size of clusters and reports it to Red Hat via an integrated component called Telemetry. This information allows Red Hat to improve OpenShift Data Foundation and to react to issues that impact customers more quickly. A cluster that reports data to Red Hat via Telemetry is considered a connected cluster . 5.1. About Telemetry Telemetry sends a carefully chosen subset of the cluster monitoring metrics to Red Hat. These metrics are sent continuously and describe: The size of an OpenShift Data Foundation cluster The health and status of OpenShift Data Foundation components The health and status of any upgrade being performed Limited usage information about OpenShift Data Foundation components and features Summary info about alerts reported by the cluster monitoring component This continuous stream of data is used by Red Hat to monitor the health of clusters in real time and to react as necessary to problems that impact our customers. It also allows Red Hat to roll out OpenShift Data Foundation upgrades to customers so as to minimize service impact and continuously improve the upgrade experience. This debugging information is available to Red Hat Support and engineering teams with the same restrictions as accessing data reported via support cases. All connected cluster information is used by Red Hat to help make OpenShift Data Foundation better and more intuitive to use. None of the information is shared with third parties. 5.2. Information collected by Telemetry Primary information collected by Telemetry includes: The size of the Ceph cluster in bytes : "ceph_cluster_total_bytes" , The amount of the Ceph cluster storage used in bytes : "ceph_cluster_total_used_raw_bytes" , Ceph cluster health status : "ceph_health_status" , The total count of object storage devices (OSDs) : "job:ceph_osd_metadata:count" , The total number of OpenShift Data Foundation Persistent Volumes (PVs) present in the Red Hat OpenShift Container Platform cluster : "job:kube_pv:count" , The total input/output operations per second (IOPS) (reads+writes) value for all the pools in the Ceph cluster : "job:ceph_pools_iops:total" , The total IOPS (reads+writes) value in bytes for all the pools in the Ceph cluster : "job:ceph_pools_iops_bytes:total" , The total count of the Ceph cluster versions running : "job:ceph_versions_running:count" The total number of unhealthy NooBaa buckets : "job:noobaa_total_unhealthy_buckets:sum" , The total number of NooBaa buckets : "job:noobaa_bucket_count:sum" , The total number of NooBaa objects : "job:noobaa_total_object_count:sum" , The count of NooBaa accounts : "noobaa_accounts_num" , The total usage of storage by NooBaa in bytes : "noobaa_total_usage" , The total amount of storage requested by the persistent volume claims (PVCs) from a particular storage provisioner in bytes: "cluster:kube_persistentvolumeclaim_resource_requests_storage_bytes:provisioner:sum" , The total amount of storage used by the PVCs from a particular storage provisioner in bytes: "cluster:kubelet_volume_stats_used_bytes:provisioner:sum" . Telemetry does not collect identifying information such as user names, passwords, or the names or addresses of user resources. | null | https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.14/html/monitoring_openshift_data_foundation/remote_health_monitoring |
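As an illustrative aside (not part of the original chapter), the values listed above are Prometheus metric names, so simple derived views can be built from them; for example, the fraction of raw Ceph capacity currently in use can be expressed as:

ceph_cluster_total_used_raw_bytes / ceph_cluster_total_bytes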
1.4. Processing | 1.4. Processing 1.4.1. Join Algorithms Nested loop does the most obvious processing - for every row in the outer source, it compares with every row in the inner source. Nested loop is only used when the join criteria has no equi-join predicates. Merge join first sorts the input sources on the joined columns. You can then walk through each side in parallel (effectively one pass through each sorted source) and when you have a match, emit a row. In general, merge join is on the order of n+m rather than n*m in nested loop. Merge join is the default algorithm. Using costing information the engine may also delay the decision to perform a full sort merge join. Based upon the actual row counts involved, the engine can choose to build an index of the smaller side (which will perform similarly to a hash join) or to only partially sort the larger side of the relation. Joins involving equi-join predicates are also eligible to be made into dependent joins (see Section 13.7.3, "Dependent Joins" ). 1.4.2. Sort-Based Algorithms Sorting is used as the basis of the Sort (ORDER BY), Grouping (GROUP BY), and DupRemoval (SELECT DISTINCT) operations. The sort algorithm is a multi-pass merge-sort that does not ever require all of the result set to be in memory, yet uses the maximal amount of memory allowed by the buffer manager. It consists of two phases. The first phase ("sort") will take an unsorted input stream and produce one or more sorted input streams. Each pass reads as much of the unsorted stream as possible, sorts it, and writes it back out as a new stream. Since the stream size may be bigger than that of the memory, it may be written out as many sorted streams. The second phase ("merge") consists of a set of phases that grab the batch from as many sorted input streams as will fit in memory. It then repeatedly grabs the tuple in sorted order from each stream and outputs merged sorted batches to a new sorted stream. At completion of the pass, all input streams are dropped. Hence, each pass reduces the number of sorted streams. The last stream remaining is the final output. | null | https://docs.redhat.com/en/documentation/red_hat_jboss_data_virtualization/6.4/html/development_guide_volume_3_reference_material/sect-processing |
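The two-phase scheme described above is the classic external merge sort. Purely as an illustration (this is not JBoss Data Virtualization code), the same idea can be reproduced with standard shell utilities: first produce sorted runs that each fit in memory, then merge the runs in a single pass. The file names and chunk size are arbitrary examples.

# Phase 1 ("sort"): split the unsorted input into runs and sort each run on its own.
split -l 1000000 unsorted.txt run_
for f in run_*; do
  sort "$f" -o "$f"
done

# Phase 2 ("merge"): merge the pre-sorted runs in one pass. With very many runs,
# this step would itself be repeated on subsets, reducing the number of sorted
# streams each time until only the final output remains.
sort -m run_* -o sorted.txt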
Metadata APIs | Metadata APIs OpenShift Container Platform 4.17 Reference guide for metadata APIs Red Hat OpenShift Documentation Team | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.17/html-single/metadata_apis/index |
Making open source more inclusive | Making open source more inclusive Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright's message . | null | https://docs.redhat.com/en/documentation/red_hat_build_of_quarkus/3.8/html/migrating_applications_to_red_hat_build_of_quarkus_3.8/making-open-source-more-inclusive |
Chapter 27. Configuring Routes | Chapter 27. Configuring Routes 27.1. Route configuration 27.1.1. Creating an HTTP-based route A route allows you to host your application at a public URL. It can either be secure or unsecured, depending on the network security configuration of your application. An HTTP-based route is an unsecured route that uses the basic HTTP routing protocol and exposes a service on an unsecured application port. The following procedure describes how to create a simple HTTP-based route to a web application, using the hello-openshift application as an example. Prerequisites You installed the OpenShift CLI ( oc ). You are logged in as an administrator. You have a web application that exposes a port and a TCP endpoint listening for traffic on the port. Procedure Create a project called hello-openshift by running the following command: USD oc new-project hello-openshift Create a pod in the project by running the following command: USD oc create -f https://raw.githubusercontent.com/openshift/origin/master/examples/hello-openshift/hello-pod.json Create a service called hello-openshift by running the following command: USD oc expose pod/hello-openshift Create an unsecured route to the hello-openshift application by running the following command: USD oc expose svc hello-openshift Verification To verify that the route resource that you created, run the following command: USD oc get routes -o yaml <name of resource> 1 1 In this example, the route is named hello-openshift . Sample YAML definition of the created unsecured route: apiVersion: route.openshift.io/v1 kind: Route metadata: name: hello-openshift spec: host: www.example.com 1 port: targetPort: 8080 2 to: kind: Service name: hello-openshift 1 The host field is an alias DNS record that points to the service. This field can be any valid DNS name, such as www.example.com . The DNS name must follow DNS952 subdomain conventions. If not specified, a route name is automatically generated. 2 The targetPort field is the target port on pods that is selected by the service that this route points to. Note To display your default ingress domain, run the following command: USD oc get ingresses.config/cluster -o jsonpath={.spec.domain} 27.1.2. Creating a route for Ingress Controller sharding A route allows you to host your application at a URL. In this case, the hostname is not set and the route uses a subdomain instead. When you specify a subdomain, you automatically use the domain of the Ingress Controller that exposes the route. For situations where a route is exposed by multiple Ingress Controllers, the route is hosted at multiple URLs. The following procedure describes how to create a route for Ingress Controller sharding, using the hello-openshift application as an example. Ingress Controller sharding is useful when balancing incoming traffic load among a set of Ingress Controllers and when isolating traffic to a specific Ingress Controller. For example, company A goes to one Ingress Controller and company B to another. Prerequisites You installed the OpenShift CLI ( oc ). You are logged in as a project administrator. You have a web application that exposes a port and an HTTP or TLS endpoint listening for traffic on the port. You have configured the Ingress Controller for sharding. 
Procedure Create a project called hello-openshift by running the following command: USD oc new-project hello-openshift Create a pod in the project by running the following command: USD oc create -f https://raw.githubusercontent.com/openshift/origin/master/examples/hello-openshift/hello-pod.json Create a service called hello-openshift by running the following command: USD oc expose pod/hello-openshift Create a route definition called hello-openshift-route.yaml : YAML definition of the created route for sharding: apiVersion: route.openshift.io/v1 kind: Route metadata: labels: type: sharded 1 name: hello-openshift-edge namespace: hello-openshift spec: subdomain: hello-openshift 2 tls: termination: edge to: kind: Service name: hello-openshift 1 Both the label key and its corresponding label value must match the ones specified in the Ingress Controller. In this example, the Ingress Controller has the label key and value type: sharded . 2 The route will be exposed using the value of the subdomain field. When you specify the subdomain field, you must leave the hostname unset. If you specify both the host and subdomain fields, then the route will use the value of the host field, and ignore the subdomain field. Use hello-openshift-route.yaml to create a route to the hello-openshift application by running the following command: USD oc -n hello-openshift create -f hello-openshift-route.yaml Verification Get the status of the route with the following command: USD oc -n hello-openshift get routes/hello-openshift-edge -o yaml The resulting Route resource should look similar to the following: Example output apiVersion: route.openshift.io/v1 kind: Route metadata: labels: type: sharded name: hello-openshift-edge namespace: hello-openshift spec: subdomain: hello-openshift tls: termination: edge to: kind: Service name: hello-openshift status: ingress: - host: hello-openshift.<apps-sharded.basedomain.example.net> 1 routerCanonicalHostname: router-sharded.<apps-sharded.basedomain.example.net> 2 routerName: sharded 3 1 The hostname the Ingress Controller, or router, uses to expose the route. The value of the host field is automatically determined by the Ingress Controller, and uses its domain. In this example, the domain of the Ingress Controller is <apps-sharded.basedomain.example.net> . 2 The hostname of the Ingress Controller. 3 The name of the Ingress Controller. In this example, the Ingress Controller has the name sharded . 27.1.3. Configuring route timeouts You can configure the default timeouts for an existing route when you have services in need of a low timeout, which is required for Service Level Availability (SLA) purposes, or a high timeout, for cases with a slow back end. Prerequisites You need a deployed Ingress Controller on a running cluster. Procedure Using the oc annotate command, add the timeout to the route: USD oc annotate route <route_name> \ --overwrite haproxy.router.openshift.io/timeout=<timeout><time_unit> 1 1 Supported time units are microseconds (us), milliseconds (ms), seconds (s), minutes (m), hours (h), or days (d). The following example sets a timeout of two seconds on a route named myroute : USD oc annotate route myroute --overwrite haproxy.router.openshift.io/timeout=2s 27.1.4. HTTP Strict Transport Security HTTP Strict Transport Security (HSTS) policy is a security enhancement, which signals to the browser client that only HTTPS traffic is allowed on the route host. HSTS also optimizes web traffic by signaling HTTPS transport is required, without using HTTP redirects. 
HSTS is useful for speeding up interactions with websites. When HSTS policy is enforced, HSTS adds a Strict Transport Security header to HTTP and HTTPS responses from the site. You can use the insecureEdgeTerminationPolicy value in a route to redirect HTTP to HTTPS. When HSTS is enforced, the client changes all requests from the HTTP URL to HTTPS before the request is sent, eliminating the need for a redirect. Cluster administrators can configure HSTS to do the following: Enable HSTS per-route Disable HSTS per-route Enforce HSTS per-domain, for a set of domains, or use namespace labels in combination with domains Important HSTS works only with secure routes, either edge-terminated or re-encrypt. The configuration is ineffective on HTTP or passthrough routes. 27.1.4.1. Enabling HTTP Strict Transport Security per-route HTTP strict transport security (HSTS) is implemented in the HAProxy template and applied to edge and re-encrypt routes that have the haproxy.router.openshift.io/hsts_header annotation. Prerequisites You are logged in to the cluster with a user with administrator privileges for the project. You installed the oc CLI. Procedure To enable HSTS on a route, add the haproxy.router.openshift.io/hsts_header value to the edge-terminated or re-encrypt route. You can use the oc annotate tool to do this by running the following command: USD oc annotate route <route_name> -n <namespace> --overwrite=true "haproxy.router.openshift.io/hsts_header"="max-age=31536000;\ 1 includeSubDomains;preload" 1 In this example, the maximum age is set to 31536000 seconds, which is approximately one year. Note In this example, the equal sign ( = ) is in quotes. This is required to properly execute the annotate command. Example route configured with an annotation apiVersion: route.openshift.io/v1 kind: Route metadata: annotations: haproxy.router.openshift.io/hsts_header: max-age=31536000;includeSubDomains;preload 1 2 3 ... spec: host: def.abc.com tls: termination: "reencrypt" ... wildcardPolicy: "Subdomain" 1 Required. max-age measures the length of time, in seconds, that the HSTS policy is in effect. If set to 0 , it negates the policy. 2 Optional. When included, includeSubDomains tells the client that all subdomains of the host must have the same HSTS policy as the host. 3 Optional. When max-age is greater than 0, you can add preload in haproxy.router.openshift.io/hsts_header to allow external services to include this site in their HSTS preload lists. For example, sites such as Google can construct a list of sites that have preload set. Browsers can then use these lists to determine which sites they can communicate with over HTTPS, even before they have interacted with the site. Without preload set, browsers must have interacted with the site over HTTPS, at least once, to get the header.
Procedure To disable HSTS, set the max-age value in the route annotation to 0 , by entering the following command: USD oc annotate route <route_name> -n <namespace> --overwrite=true "haproxy.router.openshift.io/hsts_header"="max-age=0" Tip You can alternatively apply the following YAML to the route: Example of disabling HSTS per-route metadata: annotations: haproxy.router.openshift.io/hsts_header: max-age=0 To disable HSTS for every route in a namespace, enter the following command: USD oc annotate route --all -n <namespace> --overwrite=true "haproxy.router.openshift.io/hsts_header"="max-age=0" Verification To query the annotation for all routes, enter the following command: USD oc get route --all-namespaces -o go-template='{{range .items}}{{if .metadata.annotations}}{{USDa := index .metadata.annotations "haproxy.router.openshift.io/hsts_header"}}{{USDn := .metadata.name}}{{with USDa}}Name: {{USDn}} HSTS: {{USDa}}{{"\n"}}{{else}}{{""}}{{end}}{{end}}{{end}}' Example output Name: routename HSTS: max-age=0 27.1.4.3. Enforcing HTTP Strict Transport Security per-domain To enforce HTTP Strict Transport Security (HSTS) per-domain for secure routes, add a requiredHSTSPolicies record to the Ingress spec to capture the configuration of the HSTS policy. If you configure a requiredHSTSPolicy to enforce HSTS, then any newly created route must be configured with a compliant HSTS policy annotation. Note To handle upgraded clusters with non-compliant HSTS routes, you can update the manifests at the source and apply the updates. Note You cannot use oc expose route or oc create route commands to add a route in a domain that enforces HSTS, because the API for these commands does not accept annotations. Important HSTS cannot be applied to insecure, or non-TLS routes, even if HSTS is requested for all routes globally. Prerequisites You are logged in to the cluster with a user with administrator privileges for the project. You installed the oc CLI. Procedure Edit the Ingress config file: USD oc edit ingresses.config.openshift.io/cluster Example HSTS policy apiVersion: config.openshift.io/v1 kind: Ingress metadata: name: cluster spec: domain: 'hello-openshift-default.apps.username.devcluster.openshift.com' requiredHSTSPolicies: 1 - domainPatterns: 2 - '*hello-openshift-default.apps.username.devcluster.openshift.com' - '*hello-openshift-default2.apps.username.devcluster.openshift.com' namespaceSelector: 3 matchLabels: myPolicy: strict maxAge: 4 smallestMaxAge: 1 largestMaxAge: 31536000 preloadPolicy: RequirePreload 5 includeSubDomainsPolicy: RequireIncludeSubDomains 6 - domainPatterns: 7 - 'abc.example.com' - '*xyz.example.com' namespaceSelector: matchLabels: {} maxAge: {} preloadPolicy: NoOpinion includeSubDomainsPolicy: RequireNoIncludeSubDomains 1 Required. requiredHSTSPolicies are validated in order, and the first matching domainPatterns applies. 2 7 Required. You must specify at least one domainPatterns hostname. Any number of domains can be listed. You can include multiple sections of enforcing options for different domainPatterns . 3 Optional. If you include namespaceSelector , it must match the labels of the project where the routes reside, to enforce the set HSTS policy on the routes. Routes that only match the namespaceSelector and not the domainPatterns are not validated. 4 Required. max-age measures the length of time, in seconds, that the HSTS policy is in effect. This policy setting allows for a smallest and largest max-age to be enforced.
The largestMaxAge value must be between 0 and 2147483647 . It can be left unspecified, which means no upper limit is enforced. The smallestMaxAge value must be between 0 and 2147483647 . Enter 0 to disable HSTS for troubleshooting, otherwise enter 1 if you never want HSTS to be disabled. It can be left unspecified, which means no lower limit is enforced. 5 Optional. Including preload in haproxy.router.openshift.io/hsts_header allows external services to include this site in their HSTS preload lists. Browsers can then use these lists to determine which sites they can communicate with over HTTPS, before they have interacted with the site. Without preload set, browsers need to interact at least once with the site to get the header. preload can be set with one of the following: RequirePreload : preload is required by the RequiredHSTSPolicy . RequireNoPreload : preload is forbidden by the RequiredHSTSPolicy . NoOpinion : preload does not matter to the RequiredHSTSPolicy . 6 Optional. includeSubDomainsPolicy can be set with one of the following: RequireIncludeSubDomains : includeSubDomains is required by the RequiredHSTSPolicy . RequireNoIncludeSubDomains : includeSubDomains is forbidden by the RequiredHSTSPolicy . NoOpinion : includeSubDomains does not matter to the RequiredHSTSPolicy . You can apply HSTS to all routes in the cluster or in a particular namespace by entering the oc annotate command . To apply HSTS to all routes in the cluster, enter the oc annotate command . For example: USD oc annotate route --all --all-namespaces --overwrite=true "haproxy.router.openshift.io/hsts_header"="max-age=31536000" To apply HSTS to all routes in a particular namespace, enter the oc annotate command . For example: USD oc annotate route --all -n my-namespace --overwrite=true "haproxy.router.openshift.io/hsts_header"="max-age=31536000" Verification You can review the HSTS policy you configured. For example: To review the maxAge set for required HSTS policies, enter the following command: USD oc get clusteroperator/ingress -n openshift-ingress-operator -o jsonpath='{range .spec.requiredHSTSPolicies[*]}{.spec.requiredHSTSPolicies.maxAgePolicy.largestMaxAge}{"\n"}{end}' To review the HSTS annotations on all routes, enter the following command: USD oc get route --all-namespaces -o go-template='{{range .items}}{{if .metadata.annotations}}{{USDa := index .metadata.annotations "haproxy.router.openshift.io/hsts_header"}}{{USDn := .metadata.name}}{{with USDa}}Name: {{USDn}} HSTS: {{USDa}}{{"\n"}}{{else}}{{""}}{{end}}{{end}}{{end}}' Example output Name: <_routename_> HSTS: max-age=31536000;preload;includeSubDomains 27.1.5. Throughput issue troubleshooting methods Sometimes applications deployed by using OpenShift Container Platform can cause network throughput issues, such as unusually high latency between specific services. If pod logs do not reveal any cause of the problem, use the following methods to analyze performance issues: Use a packet analyzer, such as ping or tcpdump to analyze traffic between a pod and its node. For example, run the tcpdump tool on each pod while reproducing the behavior that led to the issue. Review the captures on both sides to compare send and receive timestamps to analyze the latency of traffic to and from a pod. Latency can occur in OpenShift Container Platform if a node interface is overloaded with traffic from other pods, storage devices, or the data plane. USD tcpdump -s 0 -i any -w /tmp/dump.pcap host <podip 1> && host <podip 2> 1 1 podip is the IP address for the pod. 
Run the oc get pod <pod_name> -o wide command to get the IP address of a pod. The tcpdump command generates a file at /tmp/dump.pcap containing all traffic between these two pods. You can run the analyzer shortly before the issue is reproduced and stop the analyzer shortly after the issue is finished reproducing to minimize the size of the file. You can also run a packet analyzer between the nodes (eliminating the SDN from the equation) with: USD tcpdump -s 0 -i any -w /tmp/dump.pcap port 4789 Use a bandwidth measuring tool, such as iperf , to measure streaming throughput and UDP throughput. Locate any bottlenecks by running the tool from the pods first, and then running it from the nodes. For information on installing and using iperf , see this Red Hat Solution . In some cases, the cluster may mark the node with the router pod as unhealthy due to latency issues. Use worker latency profiles to adjust the frequency that the cluster waits for a status update from the node before taking action. If your cluster has designated lower-latency and higher-latency nodes, configure the spec.nodePlacement field in the Ingress Controller to control the placement of the router pod. Additional resources Latency spikes or temporary reduction in throughput to remote workers Ingress Controller configuration parameters 27.1.6. Using cookies to keep route statefulness OpenShift Container Platform provides sticky sessions, which enables stateful application traffic by ensuring all traffic hits the same endpoint. However, if the endpoint pod terminates, whether through restart, scaling, or a change in configuration, this statefulness can disappear. OpenShift Container Platform can use cookies to configure session persistence. The Ingress controller selects an endpoint to handle any user requests, and creates a cookie for the session. The cookie is passed back in the response to the request and the user sends the cookie back with the request in the session. The cookie tells the Ingress Controller which endpoint is handling the session, ensuring that client requests use the cookie so that they are routed to the same pod. Note Cookies cannot be set on passthrough routes, because the HTTP traffic cannot be seen. Instead, a number is calculated based on the source IP address, which determines the backend. If backends change, the traffic can be directed to the wrong server, making it less sticky. If you are using a load balancer, which hides source IP, the same number is set for all connections and traffic is sent to the same pod. 27.1.6.1. Annotating a route with a cookie You can set a cookie name to overwrite the default, auto-generated one for the route. This allows the application receiving route traffic to know the cookie name. By deleting the cookie it can force the request to re-choose an endpoint. So, if a server was overloaded it tries to remove the requests from the client and redistribute them. Procedure Annotate the route with the specified cookie name: USD oc annotate route <route_name> router.openshift.io/cookie_name="<cookie_name>" where: <route_name> Specifies the name of the route. <cookie_name> Specifies the name for the cookie. For example, to annotate the route my_route with the cookie name my_cookie : USD oc annotate route my_route router.openshift.io/cookie_name="my_cookie" Capture the route hostname in a variable: USD ROUTE_NAME=USD(oc get route <route_name> -o jsonpath='{.spec.host}') where: <route_name> Specifies the name of the route. 
Save the cookie, and then access the route: USD curl USDROUTE_NAME -k -c /tmp/cookie_jar Use the cookie saved by the command when connecting to the route: USD curl USDROUTE_NAME -k -b /tmp/cookie_jar 27.1.7. Path-based routes Path-based routes specify a path component that can be compared against a URL, which requires that the traffic for the route be HTTP based. Thus, multiple routes can be served using the same hostname, each with a different path. Routers should match routes based on the most specific path to the least. The following table shows example routes and their accessibility: Table 27.1. Route availability Route When Compared to Accessible www.example.com/test www.example.com/test Yes www.example.com No www.example.com/test and www.example.com www.example.com/test Yes www.example.com Yes www.example.com www.example.com/text Yes (Matched by the host, not the route) www.example.com Yes An unsecured route with a path apiVersion: route.openshift.io/v1 kind: Route metadata: name: route-unsecured spec: host: www.example.com path: "/test" 1 to: kind: Service name: service-name 1 The path is the only added attribute for a path-based route. Note Path-based routing is not available when using passthrough TLS, as the router does not terminate TLS in that case and cannot read the contents of the request. 27.1.8. Route-specific annotations The Ingress Controller can set the default options for all the routes it exposes. An individual route can override some of these defaults by providing specific configurations in its annotations. Red Hat does not support adding a route annotation to an operator-managed route. Important To create a whitelist with multiple source IPs or subnets, use a space-delimited list. Any other delimiter type causes the list to be ignored without a warning or error message. Table 27.2. Route annotations Variable Description Environment variable used as default haproxy.router.openshift.io/balance Sets the load-balancing algorithm. Available options are random , source , roundrobin , and leastconn . The default value is source for TLS passthrough routes. For all other routes, the default is random . ROUTER_TCP_BALANCE_SCHEME for passthrough routes. Otherwise, use ROUTER_LOAD_BALANCE_ALGORITHM . haproxy.router.openshift.io/disable_cookies Disables the use of cookies to track related connections. If set to 'true' or 'TRUE' , the balance algorithm is used to choose which back-end serves connections for each incoming HTTP request. router.openshift.io/cookie_name Specifies an optional cookie to use for this route. The name must consist of any combination of upper and lower case letters, digits, "_", and "-". The default is the hashed internal key name for the route. haproxy.router.openshift.io/pod-concurrent-connections Sets the maximum number of connections that are allowed to a backing pod from a router. Note: If there are multiple pods, each can have this many connections. If you have multiple routers, there is no coordination among them, each may connect this many times. If not set, or set to 0, there is no limit. haproxy.router.openshift.io/rate-limit-connections Setting 'true' or 'TRUE' enables rate limiting functionality which is implemented through stick-tables on the specific backend per route. Note: Using this annotation provides basic protection against denial-of-service attacks. haproxy.router.openshift.io/rate-limit-connections.concurrent-tcp Limits the number of concurrent TCP connections made through the same source IP address. It accepts a numeric value. 
Note: Using this annotation provides basic protection against denial-of-service attacks. haproxy.router.openshift.io/rate-limit-connections.rate-http Limits the rate at which a client with the same source IP address can make HTTP requests. It accepts a numeric value. Note: Using this annotation provides basic protection against denial-of-service attacks. haproxy.router.openshift.io/rate-limit-connections.rate-tcp Limits the rate at which a client with the same source IP address can make TCP connections. It accepts a numeric value. Note: Using this annotation provides basic protection against denial-of-service attacks. haproxy.router.openshift.io/timeout Sets a server-side timeout for the route. (TimeUnits) ROUTER_DEFAULT_SERVER_TIMEOUT haproxy.router.openshift.io/timeout-tunnel This timeout applies to a tunnel connection, for example, WebSocket over cleartext, edge, reencrypt, or passthrough routes. With cleartext, edge, or reencrypt route types, this annotation is applied as a timeout tunnel with the existing timeout value. For the passthrough route types, the annotation takes precedence over any existing timeout value set. ROUTER_DEFAULT_TUNNEL_TIMEOUT ingresses.config/cluster ingress.operator.openshift.io/hard-stop-after You can set either an IngressController or the ingress config . This annotation redeploys the router and configures the HA proxy to emit the haproxy hard-stop-after global option, which defines the maximum time allowed to perform a clean soft-stop. ROUTER_HARD_STOP_AFTER router.openshift.io/haproxy.health.check.interval Sets the interval for the back-end health checks. (TimeUnits) ROUTER_BACKEND_CHECK_INTERVAL haproxy.router.openshift.io/ip_whitelist Sets an allowlist for the route. The allowlist is a space-separated list of IP addresses and CIDR ranges for the approved source addresses. Requests from IP addresses that are not in the allowlist are dropped. The maximum number of IP addresses and CIDR ranges directly visible in the haproxy.config file is 61. [ 1 ] haproxy.router.openshift.io/hsts_header Sets a Strict-Transport-Security header for the edge terminated or re-encrypt route. haproxy.router.openshift.io/rewrite-target Sets the rewrite path of the request on the backend. router.openshift.io/cookie-same-site Sets a value to restrict cookies. The values are: Lax : the browser does not send cookies on cross-site requests, but does send cookies when users navigate to the origin site from an external site. This is the default browser behavior when the SameSite value is not specified. Strict : the browser sends cookies only for same-site requests. None : the browser sends cookies for both cross-site and same-site requests. This value is applicable to re-encrypt and edge routes only. For more information, see the SameSite cookies documentation . haproxy.router.openshift.io/set-forwarded-headers Sets the policy for handling the Forwarded and X-Forwarded-For HTTP headers per route. The values are: append : appends the header, preserving any existing header. This is the default value. replace : sets the header, removing any existing header. never : never sets the header, but preserves any existing header. if-none : sets the header if it is not already set. ROUTER_SET_FORWARDED_HEADERS If the number of IP addresses and CIDR ranges in an allowlist exceeds 61, they are written into a separate file that is then referenced from haproxy.config . This file is stored in the var/lib/haproxy/router/whitelists folder. 
Note To ensure that the addresses are written to the allowlist, check that the full list of CIDR ranges is listed in the Ingress Controller configuration file. The etcd object size limit restricts how large a route annotation can be. Because of this, it creates a threshold for the maximum number of IP addresses and CIDR ranges that you can include in an allowlist. Note Environment variables cannot be edited. Router timeout variables TimeUnits are represented by a number followed by the unit: us (microseconds), ms (milliseconds, default), s (seconds), m (minutes), h (hours), d (days). The regular expression is: [1-9][0-9]*(us|ms|s|m|h|d). Variable Default Description ROUTER_BACKEND_CHECK_INTERVAL 5000ms Length of time between subsequent liveness checks on back ends. ROUTER_CLIENT_FIN_TIMEOUT 1s Controls the TCP FIN timeout period for the client connecting to the route. If the FIN sent to close the connection does not answer within the given time, HAProxy closes the connection. This is harmless if set to a low value and uses fewer resources on the router. ROUTER_DEFAULT_CLIENT_TIMEOUT 30s Length of time that a client has to acknowledge or send data. ROUTER_DEFAULT_CONNECT_TIMEOUT 5s The maximum connection time. ROUTER_DEFAULT_SERVER_FIN_TIMEOUT 1s Controls the TCP FIN timeout from the router to the pod backing the route. ROUTER_DEFAULT_SERVER_TIMEOUT 30s Length of time that a server has to acknowledge or send data. ROUTER_DEFAULT_TUNNEL_TIMEOUT 1h Length of time for TCP or WebSocket connections to remain open. This timeout period resets whenever HAProxy reloads. ROUTER_SLOWLORIS_HTTP_KEEPALIVE 300s Set the maximum time to wait for a new HTTP request to appear. If this is set too low, it can cause problems with browsers and applications not expecting a small keepalive value. Some effective timeout values can be the sum of certain variables, rather than the specific expected timeout. For example, ROUTER_SLOWLORIS_HTTP_KEEPALIVE adjusts timeout http-keep-alive . It is set to 300s by default, but HAProxy also waits on tcp-request inspect-delay , which is set to 5s . In this case, the overall timeout would be 300s plus 5s . ROUTER_SLOWLORIS_TIMEOUT 10s Length of time the transmission of an HTTP request can take. RELOAD_INTERVAL 5s Allows the minimum frequency for the router to reload and accept new changes. ROUTER_METRICS_HAPROXY_TIMEOUT 5s Timeout for the gathering of HAProxy metrics. A route setting custom timeout apiVersion: route.openshift.io/v1 kind: Route metadata: annotations: haproxy.router.openshift.io/timeout: 5500ms 1 ... 1 Specifies the new timeout with HAProxy supported units ( us , ms , s , m , h , d ). If the unit is not provided, ms is the default. Note Setting a server-side timeout value for passthrough routes too low can cause WebSocket connections to timeout frequently on that route.
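For routes that carry long-lived WebSocket connections, one hedged way to avoid the problem described in the note above is to pair the server-side timeout with the timeout-tunnel annotation from the table earlier in this section, so that ordinary HTTP requests keep a short timeout while established tunnel connections may stay open longer. The route name and the values below are examples only, not recommendations:

# Keep plain HTTP requests on a 30 second server-side timeout, but let
# established tunnel (WebSocket) connections stay open for up to one hour.
oc annotate route myroute --overwrite \
    haproxy.router.openshift.io/timeout=30s \
    haproxy.router.openshift.io/timeout-tunnel=1h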
A route that allows only one specific IP address metadata: annotations: haproxy.router.openshift.io/ip_whitelist: 192.168.1.10 A route that allows several IP addresses metadata: annotations: haproxy.router.openshift.io/ip_whitelist: 192.168.1.10 192.168.1.11 192.168.1.12 A route that allows an IP address CIDR network metadata: annotations: haproxy.router.openshift.io/ip_whitelist: 192.168.1.0/24 A route that allows both an IP address and IP address CIDR networks metadata: annotations: haproxy.router.openshift.io/ip_whitelist: 180.5.61.153 192.168.1.0/24 10.0.0.0/8 A route specifying a rewrite target apiVersion: route.openshift.io/v1 kind: Route metadata: annotations: haproxy.router.openshift.io/rewrite-target: / 1 ... 1 Sets / as the rewrite path of the request on the backend. Setting the haproxy.router.openshift.io/rewrite-target annotation on a route specifies that the Ingress Controller should rewrite paths in HTTP requests using this route before forwarding the requests to the backend application. The part of the request path that matches the path specified in spec.path is replaced with the rewrite target specified in the annotation. The following table provides examples of the path rewriting behavior for various combinations of spec.path , request path, and rewrite target. Table 27.3. rewrite-target examples: Route.spec.path Request path Rewrite target Forwarded request path /foo /foo / / /foo /foo/ / / /foo /foo/bar / /bar /foo /foo/bar/ / /bar/ /foo /foo /bar /bar /foo /foo/ /bar /bar/ /foo /foo/bar /baz /baz/bar /foo /foo/bar/ /baz /baz/bar/ /foo/ /foo / N/A (request path does not match route path) /foo/ /foo/ / / /foo/ /foo/bar / /bar 27.1.9. Configuring the route admission policy Administrators and application developers can run applications in multiple namespaces with the same domain name. This is for organizations where multiple teams develop microservices that are exposed on the same hostname. Warning Allowing claims across namespaces should only be enabled for clusters with trust between namespaces, otherwise a malicious user could take over a hostname. For this reason, the default admission policy disallows hostname claims across namespaces. Prerequisites Cluster administrator privileges. Procedure Edit the .spec.routeAdmission field of the ingresscontroller resource variable using the following command: USD oc -n openshift-ingress-operator patch ingresscontroller/default --patch '{"spec":{"routeAdmission":{"namespaceOwnership":"InterNamespaceAllowed"}}}' --type=merge Sample Ingress Controller configuration spec: routeAdmission: namespaceOwnership: InterNamespaceAllowed ... Tip You can alternatively apply the following YAML to configure the route admission policy: apiVersion: operator.openshift.io/v1 kind: IngressController metadata: name: default namespace: openshift-ingress-operator spec: routeAdmission: namespaceOwnership: InterNamespaceAllowed 27.1.10. Creating a route through an Ingress object Some ecosystem components have an integration with Ingress resources but not with route resources. To cover this case, OpenShift Container Platform automatically creates managed route objects when an Ingress object is created. These route objects are deleted when the corresponding Ingress objects are deleted.
Procedure Define an Ingress object in the OpenShift Container Platform console or by entering the oc create command: YAML Definition of an Ingress apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: frontend annotations: route.openshift.io/termination: "reencrypt" 1 route.openshift.io/destination-ca-certificate-secret: secret-ca-cert 2 spec: rules: - host: www.example.com 3 http: paths: - backend: service: name: frontend port: number: 443 path: / pathType: Prefix tls: - hosts: - www.example.com secretName: example-com-tls-certificate 1 The route.openshift.io/termination annotation can be used to configure the spec.tls.termination field of the Route as Ingress has no field for this. The accepted values are edge , passthrough and reencrypt . All other values are silently ignored. When the annotation value is unset, edge is the default route. The TLS certificate details must be defined in the template file to implement the default edge route. 3 When working with an Ingress object, you must specify an explicit hostname, unlike when working with routes. You can use the <host_name>.<cluster_ingress_domain> syntax, for example apps.openshiftdemos.com , to take advantage of the *.<cluster_ingress_domain> wildcard DNS record and serving certificate for the cluster. Otherwise, you must ensure that there is a DNS record for the chosen hostname. If you specify the passthrough value in the route.openshift.io/termination annotation, set path to '' and pathType to ImplementationSpecific in the spec: spec: rules: - host: www.example.com http: paths: - path: '' pathType: ImplementationSpecific backend: service: name: frontend port: number: 443 USD oc apply -f ingress.yaml 2 The route.openshift.io/destination-ca-certificate-secret can be used on an Ingress object to define a route with a custom destination certificate (CA). The annotation references a kubernetes secret, secret-ca-cert , that will be inserted into the generated route. To specify a route object with a destination CA from an ingress object, you must create a kubernetes.io/tls or Opaque type secret with a certificate in PEM-encoded format in the data.tls.crt specifier of the secret. List your routes: USD oc get routes The result includes an autogenerated route whose name starts with frontend- : NAME HOST/PORT PATH SERVICES PORT TERMINATION WILDCARD frontend-gnztq www.example.com frontend 443 reencrypt/Redirect None If you inspect this route, it looks like this: YAML Definition of an autogenerated route apiVersion: route.openshift.io/v1 kind: Route metadata: name: frontend-gnztq ownerReferences: - apiVersion: networking.k8s.io/v1 controller: true kind: Ingress name: frontend uid: 4e6c59cc-704d-4f44-b390-617d879033b6 spec: host: www.example.com path: / port: targetPort: https tls: certificate: | -----BEGIN CERTIFICATE----- [...] -----END CERTIFICATE----- insecureEdgeTerminationPolicy: Redirect key: | -----BEGIN RSA PRIVATE KEY----- [...] -----END RSA PRIVATE KEY----- termination: reencrypt destinationCACertificate: | -----BEGIN CERTIFICATE----- [...] -----END CERTIFICATE----- to: kind: Service name: frontend 27.1.11. Creating a route using the default certificate through an Ingress object If you create an Ingress object without specifying any TLS configuration, OpenShift Container Platform generates an insecure route. To create an Ingress object that generates a secure, edge-terminated route using the default ingress certificate, you can specify an empty TLS configuration as follows. Prerequisites You have a service that you want to expose.
You have access to the OpenShift CLI ( oc ). Procedure Create a YAML file for the Ingress object. In this example, the file is called example-ingress.yaml : YAML definition of an Ingress object apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: frontend ... spec: rules: ... tls: - {} 1 1 Use this exact syntax to specify TLS without specifying a custom certificate. Create the Ingress object by running the following command: USD oc create -f example-ingress.yaml Verification Verify that OpenShift Container Platform has created the expected route for the Ingress object by running the following command: USD oc get routes -o yaml Example output apiVersion: v1 items: - apiVersion: route.openshift.io/v1 kind: Route metadata: name: frontend-j9sdd 1 ... spec: ... tls: 2 insecureEdgeTerminationPolicy: Redirect termination: edge 3 ... 1 The name of the route includes the name of the Ingress object followed by a random suffix. 2 In order to use the default certificate, the route should not specify spec.certificate . 3 The route should specify the edge termination policy. 27.1.12. Creating a route using the destination CA certificate in the Ingress annotation The route.openshift.io/destination-ca-certificate-secret annotation can be used on an Ingress object to define a route with a custom destination CA certificate. Prerequisites You may have a certificate/key pair in PEM-encoded files, where the certificate is valid for the route host. You may have a separate CA certificate in a PEM-encoded file that completes the certificate chain. You must have a separate destination CA certificate in a PEM-encoded file. You must have a service that you want to expose. Procedure Create a secret for the destination CA certificate by entering the following command: USD oc create secret generic dest-ca-cert --from-file=tls.crt=<file_path> For example: USD oc -n test-ns create secret generic dest-ca-cert --from-file=tls.crt=tls.crt Example output secret/dest-ca-cert created Add the route.openshift.io/destination-ca-certificate-secret to the Ingress annotations: apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: frontend annotations: route.openshift.io/termination: "reencrypt" route.openshift.io/destination-ca-certificate-secret: secret-ca-cert 1 ... 1 The annotation references a kubernetes secret. The secret referenced in this annotation will be inserted into the generated route. Example output apiVersion: route.openshift.io/v1 kind: Route metadata: name: frontend annotations: route.openshift.io/termination: reencrypt route.openshift.io/destination-ca-certificate-secret: secret-ca-cert spec: ... tls: insecureEdgeTerminationPolicy: Redirect termination: reencrypt destinationCACertificate: | -----BEGIN CERTIFICATE----- [...] -----END CERTIFICATE----- ... 27.1.13. Configuring the OpenShift Container Platform Ingress Controller for dual-stack networking If your OpenShift Container Platform cluster is configured for IPv4 and IPv6 dual-stack networking, your cluster is externally reachable by OpenShift Container Platform routes. The Ingress Controller automatically serves services that have both IPv4 and IPv6 endpoints, but you can configure the Ingress Controller for single-stack or dual-stack services. Prerequisites You deployed an OpenShift Container Platform cluster on bare metal. You installed the OpenShift CLI ( oc ). 
Procedure To have the Ingress Controller serve traffic over IPv4/IPv6 to a workload, you can create a service YAML file or modify an existing service YAML file by setting the ipFamilies and ipFamilyPolicy fields. For example: Sample service YAML file apiVersion: v1 kind: Service metadata: creationTimestamp: yyyy-mm-ddT00:00:00Z labels: name: <service_name> manager: kubectl-create operation: Update time: yyyy-mm-ddT00:00:00Z name: <service_name> namespace: <namespace_name> resourceVersion: "<resource_version_number>" selfLink: "/api/v1/namespaces/<namespace_name>/services/<service_name>" uid: <uid_number> spec: clusterIP: 172.30.0.0/16 clusterIPs: 1 - 172.30.0.0/16 - <second_IP_address> ipFamilies: 2 - IPv4 - IPv6 ipFamilyPolicy: RequireDualStack 3 ports: - port: 8080 protocol: TCP targetport: 8080 selector: name: <namespace_name> sessionAffinity: None type: ClusterIP status: loadbalancer: {} 1 In a dual-stack instance, there are two different clusterIPs provided. 2 For a single-stack instance, enter IPv4 or IPv6 . For a dual-stack instance, enter both IPv4 and IPv6 . 3 For a single-stack instance, enter SingleStack . For a dual-stack instance, enter RequireDualStack . These resources generate corresponding endpoints . The Ingress Controller now watches endpointslices . To view endpoints , enter the following command: USD oc get endpoints To view endpointslices , enter the following command: USD oc get endpointslices Additional resources Specifying an alternative cluster domain using the appsDomain option 27.2. Secured routes Secure routes provide the ability to use several types of TLS termination to serve certificates to the client. The following sections describe how to create re-encrypt, edge, and passthrough routes with custom certificates. Important If you create routes in Microsoft Azure through public endpoints, the resource names are subject to restriction. You cannot create resources that use certain terms. For a list of terms that Azure restricts, see Resolve reserved resource name errors in the Azure documentation. 27.2.1. Creating a re-encrypt route with a custom certificate You can configure a secure route using reencrypt TLS termination with a custom certificate by using the oc create route command. Prerequisites You must have a certificate/key pair in PEM-encoded files, where the certificate is valid for the route host. You may have a separate CA certificate in a PEM-encoded file that completes the certificate chain. You must have a separate destination CA certificate in a PEM-encoded file. You must have a service that you want to expose. Note Password protected key files are not supported. To remove a passphrase from a key file, use the following command: USD openssl rsa -in password_protected_tls.key -out tls.key Procedure This procedure creates a Route resource with a custom certificate and reencrypt TLS termination. The following assumes that the certificate/key pair are in the tls.crt and tls.key files in the current working directory. You must also specify a destination CA certificate to enable the Ingress Controller to trust the service's certificate. You may also specify a CA certificate if needed to complete the certificate chain. Substitute the actual path names for tls.crt , tls.key , cacert.crt , and (optionally) ca.crt . Substitute the name of the Service resource that you want to expose for frontend . Substitute the appropriate hostname for www.example.com . 
Create a secure Route resource using reencrypt TLS termination and a custom certificate: USD oc create route reencrypt --service=frontend --cert=tls.crt --key=tls.key --dest-ca-cert=destca.crt --ca-cert=ca.crt --hostname=www.example.com If you examine the resulting Route resource, it should look similar to the following: YAML Definition of the Secure Route apiVersion: route.openshift.io/v1 kind: Route metadata: name: frontend spec: host: www.example.com to: kind: Service name: frontend tls: termination: reencrypt key: |- -----BEGIN PRIVATE KEY----- [...] -----END PRIVATE KEY----- certificate: |- -----BEGIN CERTIFICATE----- [...] -----END CERTIFICATE----- caCertificate: |- -----BEGIN CERTIFICATE----- [...] -----END CERTIFICATE----- destinationCACertificate: |- -----BEGIN CERTIFICATE----- [...] -----END CERTIFICATE----- See oc create route reencrypt --help for more options. 27.2.2. Creating an edge route with a custom certificate You can configure a secure route using edge TLS termination with a custom certificate by using the oc create route command. With an edge route, the Ingress Controller terminates TLS encryption before forwarding traffic to the destination pod. The route specifies the TLS certificate and key that the Ingress Controller uses for the route. Prerequisites You must have a certificate/key pair in PEM-encoded files, where the certificate is valid for the route host. You may have a separate CA certificate in a PEM-encoded file that completes the certificate chain. You must have a service that you want to expose. Note Password protected key files are not supported. To remove a passphrase from a key file, use the following command: USD openssl rsa -in password_protected_tls.key -out tls.key Procedure This procedure creates a Route resource with a custom certificate and edge TLS termination. The following assumes that the certificate/key pair are in the tls.crt and tls.key files in the current working directory. You may also specify a CA certificate if needed to complete the certificate chain. Substitute the actual path names for tls.crt , tls.key , and (optionally) ca.crt . Substitute the name of the service that you want to expose for frontend . Substitute the appropriate hostname for www.example.com . Create a secure Route resource using edge TLS termination and a custom certificate. USD oc create route edge --service=frontend --cert=tls.crt --key=tls.key --ca-cert=ca.crt --hostname=www.example.com If you examine the resulting Route resource, it should look similar to the following: YAML Definition of the Secure Route apiVersion: route.openshift.io/v1 kind: Route metadata: name: frontend spec: host: www.example.com to: kind: Service name: frontend tls: termination: edge key: |- -----BEGIN PRIVATE KEY----- [...] -----END PRIVATE KEY----- certificate: |- -----BEGIN CERTIFICATE----- [...] -----END CERTIFICATE----- caCertificate: |- -----BEGIN CERTIFICATE----- [...] -----END CERTIFICATE----- See oc create route edge --help for more options. 27.2.3. Creating a passthrough route You can configure a secure route using passthrough termination by using the oc create route command. With passthrough termination, encrypted traffic is sent straight to the destination without the router providing TLS termination. Therefore no key or certificate is required on the route. Prerequisites You must have a service that you want to expose. 
Procedure Create a Route resource: USD oc create route passthrough route-passthrough-secured --service=frontend --port=8080 If you examine the resulting Route resource, it should look similar to the following: A Secured Route Using Passthrough Termination apiVersion: route.openshift.io/v1 kind: Route metadata: name: route-passthrough-secured 1 spec: host: www.example.com port: targetPort: 8080 tls: termination: passthrough 2 insecureEdgeTerminationPolicy: None 3 to: kind: Service name: frontend 1 The name of the object, which is limited to 63 characters. 2 The termination field is set to passthrough . This is the only required tls field. 3 Optional insecureEdgeTerminationPolicy . The only valid values are None , Redirect , or empty for disabled. The destination pod is responsible for serving certificates for the traffic at the endpoint. This is currently the only method that can support requiring client certificates, also known as two-way authentication. | [
"oc new-project hello-openshift",
"oc create -f https://raw.githubusercontent.com/openshift/origin/master/examples/hello-openshift/hello-pod.json",
"oc expose pod/hello-openshift",
"oc expose svc hello-openshift",
"oc get routes -o yaml <name of resource> 1",
"apiVersion: route.openshift.io/v1 kind: Route metadata: name: hello-openshift spec: host: www.example.com 1 port: targetPort: 8080 2 to: kind: Service name: hello-openshift",
"oc get ingresses.config/cluster -o jsonpath={.spec.domain}",
"oc new-project hello-openshift",
"oc create -f https://raw.githubusercontent.com/openshift/origin/master/examples/hello-openshift/hello-pod.json",
"oc expose pod/hello-openshift",
"apiVersion: route.openshift.io/v1 kind: Route metadata: labels: type: sharded 1 name: hello-openshift-edge namespace: hello-openshift spec: subdomain: hello-openshift 2 tls: termination: edge to: kind: Service name: hello-openshift",
"oc -n hello-openshift create -f hello-openshift-route.yaml",
"oc -n hello-openshift get routes/hello-openshift-edge -o yaml",
"apiVersion: route.openshift.io/v1 kind: Route metadata: labels: type: sharded name: hello-openshift-edge namespace: hello-openshift spec: subdomain: hello-openshift tls: termination: edge to: kind: Service name: hello-openshift status: ingress: - host: hello-openshift.<apps-sharded.basedomain.example.net> 1 routerCanonicalHostname: router-sharded.<apps-sharded.basedomain.example.net> 2 routerName: sharded 3",
"oc annotate route <route_name> --overwrite haproxy.router.openshift.io/timeout=<timeout><time_unit> 1",
"oc annotate route myroute --overwrite haproxy.router.openshift.io/timeout=2s",
"oc annotate route <route_name> -n <namespace> --overwrite=true \"haproxy.router.openshift.io/hsts_header\"=\"max-age=31536000;\\ 1 includeSubDomains;preload\"",
"apiVersion: route.openshift.io/v1 kind: Route metadata: annotations: haproxy.router.openshift.io/hsts_header: max-age=31536000;includeSubDomains;preload 1 2 3 spec: host: def.abc.com tls: termination: \"reencrypt\" wildcardPolicy: \"Subdomain\"",
"oc annotate route <route_name> -n <namespace> --overwrite=true \"haproxy.router.openshift.io/hsts_header\"=\"max-age=0\"",
"metadata: annotations: haproxy.router.openshift.io/hsts_header: max-age=0",
"oc annotate route --all -n <namespace> --overwrite=true \"haproxy.router.openshift.io/hsts_header\"=\"max-age=0\"",
"oc get route --all-namespaces -o go-template='{{range .items}}{{if .metadata.annotations}}{{USDa := index .metadata.annotations \"haproxy.router.openshift.io/hsts_header\"}}{{USDn := .metadata.name}}{{with USDa}}Name: {{USDn}} HSTS: {{USDa}}{{\"\\n\"}}{{else}}{{\"\"}}{{end}}{{end}}{{end}}'",
"Name: routename HSTS: max-age=0",
"oc edit ingresses.config.openshift.io/cluster",
"apiVersion: config.openshift.io/v1 kind: Ingress metadata: name: cluster spec: domain: 'hello-openshift-default.apps.username.devcluster.openshift.com' requiredHSTSPolicies: 1 - domainPatterns: 2 - '*hello-openshift-default.apps.username.devcluster.openshift.com' - '*hello-openshift-default2.apps.username.devcluster.openshift.com' namespaceSelector: 3 matchLabels: myPolicy: strict maxAge: 4 smallestMaxAge: 1 largestMaxAge: 31536000 preloadPolicy: RequirePreload 5 includeSubDomainsPolicy: RequireIncludeSubDomains 6 - domainPatterns: 7 - 'abc.example.com' - '*xyz.example.com' namespaceSelector: matchLabels: {} maxAge: {} preloadPolicy: NoOpinion includeSubDomainsPolicy: RequireNoIncludeSubDomains",
"oc annotate route --all --all-namespaces --overwrite=true \"haproxy.router.openshift.io/hsts_header\"=\"max-age=31536000\"",
"oc annotate route --all -n my-namespace --overwrite=true \"haproxy.router.openshift.io/hsts_header\"=\"max-age=31536000\"",
"oc get clusteroperator/ingress -n openshift-ingress-operator -o jsonpath='{range .spec.requiredHSTSPolicies[*]}{.spec.requiredHSTSPolicies.maxAgePolicy.largestMaxAge}{\"\\n\"}{end}'",
"oc get route --all-namespaces -o go-template='{{range .items}}{{if .metadata.annotations}}{{USDa := index .metadata.annotations \"haproxy.router.openshift.io/hsts_header\"}}{{USDn := .metadata.name}}{{with USDa}}Name: {{USDn}} HSTS: {{USDa}}{{\"\\n\"}}{{else}}{{\"\"}}{{end}}{{end}}{{end}}'",
"Name: <_routename_> HSTS: max-age=31536000;preload;includeSubDomains",
"tcpdump -s 0 -i any -w /tmp/dump.pcap host <podip 1> && host <podip 2> 1",
"tcpdump -s 0 -i any -w /tmp/dump.pcap port 4789",
"oc annotate route <route_name> router.openshift.io/cookie_name=\"<cookie_name>\"",
"oc annotate route my_route router.openshift.io/cookie_name=\"my_cookie\"",
"ROUTE_NAME=USD(oc get route <route_name> -o jsonpath='{.spec.host}')",
"curl USDROUTE_NAME -k -c /tmp/cookie_jar",
"curl USDROUTE_NAME -k -b /tmp/cookie_jar",
"apiVersion: route.openshift.io/v1 kind: Route metadata: name: route-unsecured spec: host: www.example.com path: \"/test\" 1 to: kind: Service name: service-name",
"apiVersion: route.openshift.io/v1 kind: Route metadata: annotations: haproxy.router.openshift.io/timeout: 5500ms 1",
"metadata: annotations: haproxy.router.openshift.io/ip_whitelist: 192.168.1.10",
"metadata: annotations: haproxy.router.openshift.io/ip_whitelist: 192.168.1.10 192.168.1.11 192.168.1.12",
"metadata: annotations: haproxy.router.openshift.io/ip_whitelist: 192.168.1.0/24",
"metadata: annotations: haproxy.router.openshift.io/ip_whitelist: 180.5.61.153 192.168.1.0/24 10.0.0.0/8",
"apiVersion: route.openshift.io/v1 kind: Route metadata: annotations: haproxy.router.openshift.io/rewrite-target: / 1",
"oc -n openshift-ingress-operator patch ingresscontroller/default --patch '{\"spec\":{\"routeAdmission\":{\"namespaceOwnership\":\"InterNamespaceAllowed\"}}}' --type=merge",
"spec: routeAdmission: namespaceOwnership: InterNamespaceAllowed",
"apiVersion: operator.openshift.io/v1 kind: IngressController metadata: name: default namespace: openshift-ingress-operator spec: routeAdmission: namespaceOwnership: InterNamespaceAllowed",
"apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: frontend annotations: route.openshift.io/termination: \"reencrypt\" 1 route.openshift.io/destination-ca-certificate-secret: secret-ca-cert 2 spec: rules: - host: www.example.com 3 http: paths: - backend: service: name: frontend port: number: 443 path: / pathType: Prefix tls: - hosts: - www.example.com secretName: example-com-tls-certificate",
"spec: rules: - host: www.example.com http: paths: - path: '' pathType: ImplementationSpecific backend: service: name: frontend port: number: 443",
"oc apply -f ingress.yaml",
"oc get routes",
"NAME HOST/PORT PATH SERVICES PORT TERMINATION WILDCARD frontend-gnztq www.example.com frontend 443 reencrypt/Redirect None",
"apiVersion: route.openshift.io/v1 kind: Route metadata: name: frontend-gnztq ownerReferences: - apiVersion: networking.k8s.io/v1 controller: true kind: Ingress name: frontend uid: 4e6c59cc-704d-4f44-b390-617d879033b6 spec: host: www.example.com path: / port: targetPort: https tls: certificate: | -----BEGIN CERTIFICATE----- [...] -----END CERTIFICATE----- insecureEdgeTerminationPolicy: Redirect key: | -----BEGIN RSA PRIVATE KEY----- [...] -----END RSA PRIVATE KEY----- termination: reencrypt destinationCACertificate: | -----BEGIN CERTIFICATE----- [...] -----END CERTIFICATE----- to: kind: Service name: frontend",
"apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: frontend spec: rules: tls: - {} 1",
"oc create -f example-ingress.yaml",
"oc get routes -o yaml",
"apiVersion: v1 items: - apiVersion: route.openshift.io/v1 kind: Route metadata: name: frontend-j9sdd 1 spec: tls: 2 insecureEdgeTerminationPolicy: Redirect termination: edge 3",
"oc create secret generic dest-ca-cert --from-file=tls.crt=<file_path>",
"oc -n test-ns create secret generic dest-ca-cert --from-file=tls.crt=tls.crt",
"secret/dest-ca-cert created",
"apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: frontend annotations: route.openshift.io/termination: \"reencrypt\" route.openshift.io/destination-ca-certificate-secret: secret-ca-cert 1",
"apiVersion: route.openshift.io/v1 kind: Route metadata: name: frontend annotations: route.openshift.io/termination: reencrypt route.openshift.io/destination-ca-certificate-secret: secret-ca-cert spec: tls: insecureEdgeTerminationPolicy: Redirect termination: reencrypt destinationCACertificate: | -----BEGIN CERTIFICATE----- [...] -----END CERTIFICATE-----",
"apiVersion: v1 kind: Service metadata: creationTimestamp: yyyy-mm-ddT00:00:00Z labels: name: <service_name> manager: kubectl-create operation: Update time: yyyy-mm-ddT00:00:00Z name: <service_name> namespace: <namespace_name> resourceVersion: \"<resource_version_number>\" selfLink: \"/api/v1/namespaces/<namespace_name>/services/<service_name>\" uid: <uid_number> spec: clusterIP: 172.30.0.0/16 clusterIPs: 1 - 172.30.0.0/16 - <second_IP_address> ipFamilies: 2 - IPv4 - IPv6 ipFamilyPolicy: RequireDualStack 3 ports: - port: 8080 protocol: TCP targetport: 8080 selector: name: <namespace_name> sessionAffinity: None type: ClusterIP status: loadbalancer: {}",
"oc get endpoints",
"oc get endpointslices",
"openssl rsa -in password_protected_tls.key -out tls.key",
"oc create route reencrypt --service=frontend --cert=tls.crt --key=tls.key --dest-ca-cert=destca.crt --ca-cert=ca.crt --hostname=www.example.com",
"apiVersion: route.openshift.io/v1 kind: Route metadata: name: frontend spec: host: www.example.com to: kind: Service name: frontend tls: termination: reencrypt key: |- -----BEGIN PRIVATE KEY----- [...] -----END PRIVATE KEY----- certificate: |- -----BEGIN CERTIFICATE----- [...] -----END CERTIFICATE----- caCertificate: |- -----BEGIN CERTIFICATE----- [...] -----END CERTIFICATE----- destinationCACertificate: |- -----BEGIN CERTIFICATE----- [...] -----END CERTIFICATE-----",
"openssl rsa -in password_protected_tls.key -out tls.key",
"oc create route edge --service=frontend --cert=tls.crt --key=tls.key --ca-cert=ca.crt --hostname=www.example.com",
"apiVersion: route.openshift.io/v1 kind: Route metadata: name: frontend spec: host: www.example.com to: kind: Service name: frontend tls: termination: edge key: |- -----BEGIN PRIVATE KEY----- [...] -----END PRIVATE KEY----- certificate: |- -----BEGIN CERTIFICATE----- [...] -----END CERTIFICATE----- caCertificate: |- -----BEGIN CERTIFICATE----- [...] -----END CERTIFICATE-----",
"oc create route passthrough route-passthrough-secured --service=frontend --port=8080",
"apiVersion: route.openshift.io/v1 kind: Route metadata: name: route-passthrough-secured 1 spec: host: www.example.com port: targetPort: 8080 tls: termination: passthrough 2 insecureEdgeTerminationPolicy: None 3 to: kind: Service name: frontend"
]
| https://docs.redhat.com/en/documentation/openshift_container_platform/4.13/html/networking/configuring-routes |
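The commands above can be combined into a quick verification pass: read back a route's hostname and confirm the router is actually emitting the Strict-Transport-Security header you annotated. A minimal bash sketch; the route name (my-route), namespace (my-namespace), and the assumption that the route serves HTTPS are placeholders, not values from the source.

# Hypothetical route and namespace; substitute your own values.
ROUTE_HOST=$(oc get route my-route -n my-namespace -o jsonpath='{.spec.host}')
# A correctly annotated route should answer with a Strict-Transport-Security
# header that matches the max-age set in the annotation.
curl -skI "https://${ROUTE_HOST}" | grep -i '^strict-transport-security'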
Chapter 6. Using RHEL image builder to create system images from different releases | Chapter 6. Using RHEL image builder to create system images from different releases You can use RHEL image builder to create images of multiple RHEL minor releases that are different from the host, such as RHEL 8.8 and RHEL 8.7. For that, you can add source system repositories with the release distribution fields set and also, you can create blueprints with the correct release distribution fields set. Additionally, if you have existing blueprint or source system repositories in an old format, you can create new blueprints with the correct release distribution fields set. To list the supported release distribution, you can run the following command: The output shows you a list with supported release distribution names: Note Cross-distribution image building, such as building a CentOS image on RHEL is not supported. 6.1. Creating an image with a different distribution in the CLI To select the distribution you want to use when composing an image in the RHEL image builder CLI, you must set the distro field in the blueprint. For that, follow the steps: Procedure If you are creating a new blueprint Create a blueprint. For example: By setting the distro field to "rhel-88", you ensure that it always builds a RHEL 8.8 image, no matter which version is running on the host. Note If the distro field is blank, it uses the same distribution of the host. If you are updating an existing blueprint Save (export) the existing blueprint to a local text file: Edit the existing blueprint file with a text editor of your choice, setting the distro field with the distribution of your choice, for example: Save the file and close the editor. Push (import) the blueprint back into RHEL image builder: Start the image creation: Wait until the compose is finished. Check the status of the compose: After the compose finishes, it shows a FINISHED status value. Identify the compose in the list by its UUID. Download the resulting image file: Replace UUID with the UUID value shown in the steps. 6.2. Using system repositories with specific distributions You can specify a list of distribution strings that the system repository source uses when resolving dependencies and building images. For that, follow the step: Procedure Create a TOML file with the following structure, for example: For example: Additional resources Managing repositories | [
"composer-cli distros list",
"rhel-8 rhel-84 rhel-85 rhel-86 rhel-87 rhel-88 rhel-89",
"name = \" <blueprint_name> \" description = \" <image-description> \" version = \"0.0.1\" modules = [] groups = [] distro = \" <distro-version> \"",
"composer-cli blueprints save EXISTING-BLUEPRINT",
"name = \"blueprint_84\" description = \"A 8.8 base image\" version = \"0.0.1\" modules = [] groups = [] distro = \"rhel-88\"",
"composer-cli blueprints push EXISTING-BLUEPRINT .toml",
"composer-cli compose start BLUEPRINT-NAME IMAGE-TYPE",
"composer-cli compose status",
"composer-cli compose image UUID",
"check_gpg = true check_ssl = true distros = [\" <distro-version> \"] id = \" <image-id> \" name = \" <image-name >_\" system = false type = \" <image-type> \" url = \" \\http://local/repos/rhel- <distro-version>_/ <project-repo> /\"",
"check_gpg = true check_ssl = true distros = [\"rhel-84\"] id = \"rhel-84-local\" name = \"local packages for rhel-84\" system = false type = \"yum-baseurl\" url = \" \\http://local/repos/rhel-84/projectrepo/ \""
]
| https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/composing_a_customized_rhel_system_image/using-image-builder-to-create-system-images-with-from-different-releases_composing-a-customized-rhel-system-image |
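The individual composer-cli steps listed above can be chained into a single pass. A minimal bash sketch, assuming a hypothetical blueprint name (rhel88-base) and the qcow2 image type; adjust both for your environment.

# Write a blueprint that pins the distribution to RHEL 8.8.
cat > rhel88-base.toml << 'EOF'
name = "rhel88-base"
description = "A RHEL 8.8 base image"
version = "0.0.1"
modules = []
groups = []
distro = "rhel-88"
EOF

# Push the blueprint, start a compose, and check its status.
composer-cli blueprints push rhel88-base.toml
composer-cli compose start rhel88-base qcow2
composer-cli compose status

# When the status shows FINISHED, download the image by its UUID:
# composer-cli compose image <UUID>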
Making open source more inclusive | Making open source more inclusive Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright's message . | null | https://docs.redhat.com/en/documentation/red_hat_openstack_platform/16.2/html/block_storage_backup_guide/making-open-source-more-inclusive |
14.5.17. Using blockresize to Change the Size of a Domain Path | 14.5.17. Using blockresize to Change the Size of a Domain Path blockresize can be used to re-size a block device of a domain while the domain is running, using the absolute path of the block device, which also corresponds to a unique target name ( <target dev="name"/> ) or source file ( <source file="name"/> ). This can be applied to one of the disk devices attached to the domain (you can use the domblklist command to print a table showing brief information about all block devices associated with a given domain). Note Live image re-sizing always re-sizes the image, but the change may not immediately be picked up by guests. With recent guest kernels, the size of virtio-blk devices is automatically updated (older kernels require a guest reboot). With SCSI devices, you must manually trigger a re-scan in the guest with the command echo > /sys/class/scsi_device/0:0:0:0/device/rescan . With IDE, you must reboot the guest before it picks up the new size. Run the following command: blockresize [domain] [path] [size] where: Domain is the name, ID, or UUID of the domain Path is the absolute path or unique target name of the block device you want to re-size Size is a scaled integer that defaults to KiB (blocks of 1024 bytes) if there is no suffix. You must use a suffix of "B" for bytes. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/virtualization_administration_guide/sub-sect-domain_commands-using_blockresize_to_change_the_size_of_a_domain_path
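A short sketch of the workflow just described, assuming a hypothetical guest named guest1, a disk with target name vda, and a 100 GiB target size; all three are illustrative values, not taken from the source.

# List the block devices attached to the running guest.
virsh domblklist guest1
# Grow the device identified by its target name while the guest is running.
virsh blockresize guest1 vda 100G
# For SCSI devices, trigger a rescan inside the guest so it sees the new size.
echo > /sys/class/scsi_device/0:0:0:0/device/rescan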
Chapter 6. New features in RHEL 8 | Chapter 6. New features in RHEL 8 This section documents the most notable changes in RPM packaging between Red Hat Enterprise Linux 7 and 8. 6.1. Support for Weak dependencies Weak dependencies are variants of the Requires directive. These variants are matched against virtual Provides: and package names using Epoch-Version-Release range comparisons. Weak dependencies have two strengths ( weak and hint ) and two directions ( forward and backward ), as summarized in the following table. Note The forward direction is analogous to Requires: . The backward has no analog in the dependency system. Table 6.1. Possible combinations of Weak dependencies' strengths and directions Strength/Direction Forward Backward Weak Recommends: Supplements: Hint Suggests: Enhances: The main advantages of the Weak dependencies policy are: It allows smaller minimal installations while keeping the default installation feature rich. Packages can specify preferences for specific providers while maintaining the flexibility of virtual provides. 6.1.1. Introduction to Weak dependencies By default, Weak dependencies are treated similarly to regular Requires: . Matching packages are included in the YUM transaction. If adding the package leads to an error, YUM by default ignores the dependency. Hence, users can exclude packages that would be added by Weak dependencies or remove them later. Conditions of use You can use Weak dependencies only if the package still functions without the dependency. Note It is acceptable to create packages with very limited functionality without adding any of its weak requirements. Use cases Use Weak dependencies especially where it is possible to minimize the installation for reasonable use cases, such as building virtual machines or containers that have a single purpose and do not require the full feature set of the package. Typical use cases for Weak dependencies are: Documentation Documentation viewers if missing them is handled gracefully Examples Plug-ins or add-ons Support for file formats Support for protocols 6.1.2. The Hints strength Hints are by default ignored by YUM . They can be used by GUI tools to offer add-on packages that are not installed by default but can be useful in combination with the installed packages. Do not use Hints for the requirements of the main use cases of a package. Include such requirements in the strong or Weak dependencies instead. Package Preference YUM uses Weak dependencies and Hints to decide which package to use if there is a choice between multiple equally valid packages. Packages that are pointed at by dependencies from installed or to be installed packages are preferred. Note, the normal rules of dependency resolution are not influenced by this feature. For example, Weak dependencies cannot enforce an older version of a package to be chosen. If there are multiple providers for a dependency, the requiring package can add a Suggests: to provide a hint to the dependency resolver about which option is preferred. Enhances: is only used when the main package and other providers agree that adding the hint to the required package is for some reason the cleaner solution. Example 6.1. Using Hints to prefer one package over another If you want to prefer the mariadb package over the community-mysql package use: 6.1.3. Forward and Backward dependencies Forward dependencies are, similarly to Requires , evaluated for packages that are being installed. The best of the matching packages are also installed. 
In general, prefer Forward dependencies . Add the dependency to the package that, when installed, should pull the other package onto the system. For Backward dependencies , the packages containing the dependency are installed if a matching package is installed as well. Backward dependencies are mainly designed for third party vendors who can attach their plug-ins, add-ons, or extensions to distribution or other third party packages. 6.2. Support for Boolean dependencies Starting with version 4.13, RPM is able to process boolean expressions in the following dependencies: Requires Recommends Suggests Supplements Enhances Conflicts The following sections describe boolean dependencies syntax , provide a list of boolean operators , and explain boolean dependencies nesting as well as boolean dependencies semantics . 6.2.1. Boolean dependencies syntax Boolean expressions are always enclosed in parentheses. They are built out of normal dependencies: Name only or name Comparison Version description 6.2.2. Boolean operators RPM 4.13 introduced the following boolean operators: Table 6.2. Boolean operators introduced with RPM 4.13 Boolean operator Description Example use and Requires all operands to be fulfilled for the term to be true. Conflicts: (pkgA and pkgB) or Requires one of the operands to be fulfilled for the term to be true. Requires: (pkgA >= 3.2 or pkgB) if Requires the first operand to be fulfilled if the second is. (reverse implication) Recommends: (myPkg-langCZ if langsupportCZ) if else Same as the if operator, plus requires the third operand to be fulfilled if the second is not. Requires: myPkg-backend-mariaDB if mariaDB else sqlite RPM 4.14 introduced the following additional boolean operators: Table 6.3. Boolean operators introduced with RPM 4.14 Boolean operator Description Example use with Requires all operands to be fulfilled by the same package for the term to be true. Requires: (pkgA-foo with pkgA-bar) without Requires a single package that satisfies the first operand but not the second. (set subtraction) Requires: (pkgA-foo without pkgA-bar) unless Requires the first operand to be fulfilled if the second is not. (reverse negative implication) Conflicts: (myPkg-driverA unless driverB) unless else Same as the unless operator, plus requires the third operand to be fulfilled if the second is. Conflicts: (myPkg-backend-SDL1 unless myPkg-backend-SDL2 else SDL2) Important The if operator cannot be used in the same context with the or operator, and the unless operator cannot be used in the same context with and . 6.2.3. Nesting Operands themselves can be used as boolean expressions, as shown in the examples below. Note that in such cases, operands also need to be surrounded by parentheses. You can chain the and and or operators together, repeating the same operator with only one set of surrounding parentheses. Example 6.2. Example use of operands applied as boolean expressions
Understanding the output of the if operator The if operator is also returning a boolean value, which is usually close to what the intuitive understanding is. However, the below examples show that in some cases intuitive understanding of if can be misleading. Example 6.3. Misleading outputs of the if operator This statement is true if pkgB is not installed. However, if this statement is used where the default result is false, things become complicated: This statement is a conflict unless pkgB is installed and pkgA is not: So you might rather want to use: The same is true if the if operator is nested in or terms: This also makes the whole term true, because the if term is true if pkgB is not installed. If pkgA only helps if pkgB is installed, use and instead: 6.3. Support for File triggers File triggers are a kind of RPM scriptlets , which are defined in a spec file of a package. Similar to Triggers , they are declared in one package but executed when another package that contains the matching files is installed or removed. A common use of File triggers is to update registries or caches. In such use case, the package containing or managing the registry or cache should contain also one or more File triggers . Including File triggers saves time compared to the situation when the package controls updating itself. 6.3.1. File triggers syntax File triggers have the following syntax: Where: file_trigger_tag defines a type of file trigger. Allowed types are: filetriggerin filetriggerun filetriggerpostun transfiletriggerin transfiletriggerun transfiletriggerpostun FILE_TRIGGER_OPTIONS have the same purpose as RPM scriptlets options, except for the -P option. The priority of a trigger is defined by a number. The bigger number, the sooner the file trigger script is executed. Triggers with priority greater than 100000 are executed before standard scriptlets, and the other triggers are executed after standard scriptlets. The default priority is set to 1000000. Every file trigger of each type must contain one or more path prefixes and scripts. 6.3.2. Examples of File triggers syntax The following example shows the File triggers syntax: This file trigger executes /usr/bin/ldconfig directly after the installation of a package that contains a file having a path starting with /usr/lib or /lib . The file trigger is executed just once even if the package includes multiple files with the path starting with /usr/lib or /lib . However, all file names starting with /usr/lib or /lib are passed to standard input of trigger script so that you can filter inside of your script as shown below: This file trigger executes /usr/bin/ldconfig for each package containing files starting with /usr/lib and containing foo at the same time. Note that the prefix-matched files include all types of files including regular files, directories, symlinks and others. 6.3.3. File triggers types File triggers have two main types: File triggers executed once per package File triggers executed once per transaction File triggers are further divided based on the time of execution as follows: Before or after installation or erasure of a package Before or after a transaction 6.3.3.1. Executed once per package File triggers File triggers executed once per package are: %filetriggerin %filetriggerun %filetriggerpostun %filetriggerin This file trigger is executed after installation of a package if this package contains one or more files that match the prefix of this trigger. 
It is also executed after installation of a package that contains this file trigger and there is one or more files matching the prefix of this file trigger in the rpmdb database. %filetriggerun This file trigger is executed before uninstallation of a package if this package contains one or more files that match the prefix of this trigger. It is also executed before uninstallation of a package that contains this file trigger and there is one or more files matching the prefix of this file trigger in rpmdb . %filetriggerpostun This file trigger is executed after uninstallation of a package if this package contains one or more files that match the prefix of this trigger. 6.3.3.2. Executed once per transaction File triggers File triggers executed once per transaction are: %transfiletriggerin %transfiletriggerun %transfiletriggerpostun %transfiletriggerin This file trigger is executed once after a transaction for all installed packages that contain one or more files that match the prefix of this trigger. It is also executed after a transaction if there was a package containing this file trigger in that transaction and there is one or more files matching the prefix of this trigger in rpmdb . %transfiletriggerun This file trigger is executed once before a transaction for all packages that meet the following conditions: The package will be uninstalled in this transaction The package contains one or more files that match the prefix of this trigger It is also executed before a transaction if there is a package containing this file trigger in that transaction and there is one or more files matching the prefix of this trigger in rpmdb . %transfiletriggerpostun This file trigger is executed once after a transaction for all uninstalled packages that contain one or more file that matches the prefix of this trigger. Note The list of triggering files is not available in this trigger type. Therefore, if you install or uninstall multiple packages that contain libraries, the ldconfig cache is updated at the end of the whole transaction. This significantly improves the performance compared to RHEL 7 where the cache was updated for each package separately. Also the scriptlets which called ldconfig in %post and %postun in spec file of every package are no longer needed. 6.3.4. Example use of File triggers in glibc The following example shows a real-world usage of File triggers within the glibc package. In RHEL 8, File triggers are implemented in glibc to call the ldconfig command at the end of an installation or uninstallation transaction. This is ensured by including the following scriptlets in the glibc's SPEC file: Therefore, if you install or uninstall multiple packages, the ldconfig cache is updated for all installed libraries after the whole transaction is finished. Consequently, it is no longer necessary to include the scriptlets calling ldconfig in RPM spec files of individual packages. This improves the performance compared to RHEL 7, where the cache was updated for each package separately. 6.4. Stricter SPEC parser The SPEC parser has now some changes incorporated. Hence, it can identify new issues that were previously ignored. 6.5. Support for files above 4 GB On Red Hat Enterprise Linux 8, RPM can use 64-bit variables and tags, which enables operating on files and packages bigger than 4 GB. 6.5.1. 64-bit RPM tags Several RPM tags exist in both 64-bit versions and 32-bit versions. Note that the 64-bit versions have the LONG string in front of their name. Table 6.4. 
RPM tags available in both 32-bit and 64-bit versions 32-bit variant tag name 64-bit variant tag name Tag description RPMTAG_SIGSIZE RPMTAG_LONGSIGSIZE Header and compressed payload size. RPMTAG_ARCHIVESIZE RPMTAG_LONGARCHIVESIZE Uncompressed payload size. RPMTAG_FILESIZES RPMTAG_LONGFILESIZES Array of file sizes. RPMTAG_SIZE RPMTAG_LONGSIZE Sum of all file sizes. 6.5.2. Using 64-bit tags on command line The LONG extensions are always enabled on the command line. If you previously used scripts containing the rpm -q --qf command, you can add long to the name of such tags: 6.6. Other features Other new features related to RPM packaging in Red Hat Enterprise Linux 8 are: Simplified signature checking output in non-verbose mode Support for the enforced payload verification Support for the enforcing signature checking mode Additions and deprecations in macros Additional resources See the following references to various topics related to RPMs, RPM packaging, and RPM building. Some of these are advanced and extend the introductory material included in this documentation. Red Hat Software Collections Overview - The Red Hat Software Collections offering provides continuously updated development tools in the latest stable versions. Red Hat Software Collections - The Packaging Guide provides an explanation of Software Collections and details how to build and package them. Developers and system administrators with basic understanding of software packaging with RPM can use this Guide to get started with Software Collections. Mock - Mock provides a community-supported package building solution for various architectures and different Fedora or RHEL versions than the build host has. RPM Documentation - The official RPM documentation. Fedora Packaging Guidelines - The official packaging guidelines for Fedora, useful for all RPM-based distributions. | [
"Package A: Requires: mysql Package mariadb: Provides: mysql Package community-mysql: Provides: mysql",
"Suggests: mariadb to Package A.",
"Requires: (pkgA or pkgB or pkgC)",
"Requires: (pkgA or (pkgB and pkgC))",
"Supplements: (foo and (lang-support-cz or lang-support-all))",
"Requires: (pkgA with capB) or (pkgB without capA)",
"Supplements: ((driverA and driverA-tools) unless driverB)",
"Recommends: myPkg-langCZ and (font1-langCZ or font2-langCZ) if langsupportCZ",
"Requires: (pkgA if pkgB)",
"Conflicts: (pkgA if pkgB)",
"Conflicts: (pkgA and pkgB)",
"Requires: ((pkgA if pkgB) or pkgC or pkg)",
"Requires: ((pkgA and pkgB) or pkgC or pkg)",
"%file_trigger_tag [FILE_TRIGGER_OPTIONS] - PATHPREFIX... body_of_script",
"%filetriggerin - /lib, /lib64, /usr/lib, /usr/lib64 /usr/sbin/ldconfig",
"%filetriggerin - /lib, /lib64, /usr/lib, /usr/lib64 grep \"foo\" && /usr/sbin/ldconfig",
"%transfiletriggerin common -P 2000000 - /lib /usr/lib /lib64 /usr/lib64 /sbin/ldconfig %end %transfiletriggerpostun common -P 2000000 - /lib /usr/lib /lib64 /usr/lib64 /sbin/ldconfig %end",
"-qp --qf=\"[%{filenames} %{longfilesizes}\\n]\""
]
| https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/packaging_and_distributing_software/new-features-in-rhel-8_packaging-and-distributing-software |
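The weak and boolean dependencies described above can be inspected on a RHEL 8 system with plain rpm queries. A minimal bash sketch, using a hypothetical installed package name mypkg; substitute a real package.

# Weak dependencies declared by the package (both strengths, both directions).
rpm -q --recommends mypkg
rpm -q --suggests mypkg
rpm -q --supplements mypkg
rpm -q --enhances mypkg
# Hard requirements and conflicts; boolean expressions such as
# "(pkgA >= 3.2 or pkgB)" appear verbatim when the spec file used them.
rpm -q --requires mypkg
rpm -q --conflicts mypkg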
Managing clusters | Managing clusters OpenShift Cluster Manager 1-latest Using Red Hat OpenShift Cluster Manager to work with your OpenShift clusters Red Hat Customer Content Services | null | https://docs.redhat.com/en/documentation/openshift_cluster_manager/1-latest/html/managing_clusters/index |
Chapter 10. Client registration service | Chapter 10. Client registration service In order for an application or service to utilize Red Hat build of Keycloak it has to register a client in Red Hat build of Keycloak. An admin can do this through the admin console (or admin REST endpoints), but clients can also register themselves through the Red Hat build of Keycloak client registration service. The Client Registration Service provides built-in support for Red Hat build of Keycloak Client Representations, OpenID Connect Client Meta Data and SAML Entity Descriptors. The Client Registration Service endpoint is /realms/<realm>/clients-registrations/<provider> . The built-in supported providers are: default - Red Hat build of Keycloak Client Representation (JSON) install - Red Hat build of Keycloak Adapter Configuration (JSON) openid-connect - OpenID Connect Client Metadata Description (JSON) saml2-entity-descriptor - SAML Entity Descriptor (XML) The following sections will describe how to use the different providers. 10.1. Authentication To invoke the Client Registration Services you usually need a token. The token can be a bearer token, an initial access token or a registration access token. There is an alternative to register new client without any token as well, but then you need to configure Client Registration Policies (see below). 10.1.1. Bearer token The bearer token can be issued on behalf of a user or a Service Account. The following permissions are required to invoke the endpoints (see Server Administration Guide for more details): create-client or manage-client - To create clients view-client or manage-client - To view clients manage-client - To update or delete client If you are using a bearer token to create clients it's recommend to use a token from a Service Account with only the create-client role (see Server Administration Guide for more details). 10.1.2. Initial Access Token The recommended approach to registering new clients is by using initial access tokens. An initial access token can only be used to create clients and has a configurable expiration as well as a configurable limit on how many clients can be created. An initial access token can be created through the admin console. To create a new initial access token first select the realm in the admin console, then click on Client in the menu on the left, followed by Initial access token in the tabs displayed in the page. You will now be able to see any existing initial access tokens. If you have access you can delete tokens that are no longer required. You can only retrieve the value of the token when you are creating it. To create a new token click on Create . You can now optionally add how long the token should be valid, also how many clients can be created using the token. After you click on Save the token value is displayed. It is important that you copy/paste this token now as you won't be able to retrieve it later. If you forget to copy/paste it, then delete the token and create another one. The token value is used as a standard bearer token when invoking the Client Registration Services, by adding it to the Authorization header in the request. For example: 10.1.3. Registration Access Token When you create a client through the Client Registration Service the response will include a registration access token. The registration access token provides access to retrieve the client configuration later, but also to update or delete the client. 
The registration access token is included with the request in the same way as a bearer token or initial access token. By default, registration access token rotation is enabled. This means a registration access token is only valid once. When the token is used, the response will include a new token. Note that registration access token rotation can be disabled by using Client Policies . If a client was created outside of the Client Registration Service it won't have a registration access token associated with it. You can create one through the admin console. This can also be useful if you lose the token for a particular client. To create a new token find the client in the admin console and click on Credentials . Then click on Generate registration access token . 10.2. Red Hat build of Keycloak Representations The default client registration provider can be used to create, retrieve, update and delete a client. It uses Red Hat build of Keycloak Client Representation format which provides support for configuring clients exactly as they can be configured through the admin console, including for example configuring protocol mappers. To create a client create a Client Representation (JSON) then perform an HTTP POST request to /realms/<realm>/clients-registrations/default . It will return a Client Representation that also includes the registration access token. You should save the registration access token somewhere if you want to retrieve the config, update or delete the client later. To retrieve the Client Representation perform an HTTP GET request to /realms/<realm>/clients-registrations/default/<client id> . It will also return a new registration access token. To update the Client Representation perform an HTTP PUT request with the updated Client Representation to: /realms/<realm>/clients-registrations/default/<client id> . It will also return a new registration access token. To delete the Client Representation perform an HTTP DELETE request to: /realms/<realm>/clients-registrations/default/<client id> 10.3. Red Hat build of Keycloak adapter configuration The installation client registration provider can be used to retrieve the adapter configuration for a client. In addition to token authentication you can also authenticate with client credentials using HTTP basic authentication. To do this include the following header in the request: To retrieve the Adapter Configuration then perform an HTTP GET request to /realms/<realm>/clients-registrations/install/<client id> . No authentication is required for public clients. This means that for the JavaScript adapter you can load the client configuration directly from Red Hat build of Keycloak using the above URL. 10.4. OpenID Connect Dynamic Client Registration Red Hat build of Keycloak implements OpenID Connect Dynamic Client Registration , which extends OAuth 2.0 Dynamic Client Registration Protocol and OAuth 2.0 Dynamic Client Registration Management Protocol . The endpoint to use these specifications to register clients in Red Hat build of Keycloak is /realms/<realm>/clients-registrations/openid-connect[/<client id>] . This endpoint can also be found in the OpenID Connect Discovery endpoint for the realm, /realms/<realm>/.well-known/openid-configuration . 10.5. SAML Entity Descriptors The SAML Entity Descriptor endpoint only supports using SAML v2 Entity Descriptors to create clients. It doesn't support retrieving, updating or deleting clients. For those operations the Red Hat build of Keycloak representation endpoints should be used. 
When creating a client a Red Hat build of Keycloak Client Representation is returned with details about the created client, including a registration access token. To create a client perform an HTTP POST request with the SAML Entity Descriptor to /realms/<realm>/clients-registrations/saml2-entity-descriptor . 10.6. Example using CURL The following example creates a client with the clientId myclient using CURL. You need to replace eyJhbGciOiJSUz... with a proper initial access token or bearer token. curl -X POST \ -d '{ "clientId": "myclient" }' \ -H "Content-Type:application/json" \ -H "Authorization: bearer eyJhbGciOiJSUz..." \ http://localhost:8080/realms/master/clients-registrations/default 10.7. Example using Java Client Registration API The Client Registration Java API makes it easy to use the Client Registration Service using Java. To use include the dependency org.keycloak:keycloak-client-registration-api:>VERSION< from Maven. For full instructions on using the Client Registration refer to the JavaDocs. Below is an example of creating a client. You need to replace eyJhbGciOiJSUz... with a proper initial access token or bearer token. String token = "eyJhbGciOiJSUz..."; ClientRepresentation client = new ClientRepresentation(); client.setClientId(CLIENT_ID); ClientRegistration reg = ClientRegistration.create() .url("http://localhost:8080", "myrealm") .build(); reg.auth(Auth.token(token)); client = reg.create(client); String registrationAccessToken = client.getRegistrationAccessToken(); 10.8. Client Registration Policies Note The current plans are for the Client Registration Policies to be removed in favor of the Client Policies described in the Server Administration Guide . Client Policies are more flexible and support more use cases. Red Hat build of Keycloak currently supports two ways how new clients can be registered through Client Registration Service. Authenticated requests - Request to register new client must contain either Initial Access Token or Bearer Token as mentioned above. Anonymous requests - Request to register new client doesn't need to contain any token at all Anonymous client registration requests are very interesting and powerful feature, however you usually don't want that anyone is able to register new client without any limitations. Hence we have Client Registration Policy SPI , which provide a way to limit who can register new clients and under which conditions. In Red Hat build of Keycloak admin console, you can click to Client Registration tab and then Client Registration Policies sub-tab. Here you will see what policies are configured by default for anonymous requests and what policies are configured for authenticated requests. Note The anonymous requests (requests without any token) are allowed just for creating (registration) of new clients. So when you register new client through anonymous request, the response will contain Registration Access Token, which must be used for Read, Update or Delete request of particular client. However using this Registration Access Token from anonymous registration will be then subject to Anonymous Policy too! This means that for example request for update client also needs to come from Trusted Host if you have Trusted Hosts policy. Also for example it won't be allowed to disable Consent Required when updating client and when Consent Required policy is present etc. Currently we have these policy implementations: Trusted Hosts Policy - You can configure list of trusted hosts and trusted domains. 
Request to Client Registration Service can be sent just from those hosts or domains. Request sent from some untrusted IP will be rejected. URLs of newly registered client must also use just those trusted hosts or domains. For example it won't be allowed to set Redirect URI of client pointing to some untrusted host. By default, there is not any whitelisted host, so anonymous client registration is de-facto disabled. Consent Required Policy - Newly registered clients will have Consent Allowed switch enabled. So after successful authentication, user will always see consent screen when he needs to approve permissions (client scopes). It means that client won't have access to any personal info or permission of user unless user approves it. Protocol Mappers Policy - Allows to configure list of whitelisted protocol mapper implementations. New client can't be registered or updated if it contains some non-whitelisted protocol mapper. Note that this policy is used for authenticated requests as well, so even for authenticated request there are some limitations which protocol mappers can be used. Client Scope Policy - Allow to whitelist Client Scopes , which can be used with newly registered or updated clients. There are no whitelisted scopes by default; only the client scopes, which are defined as Realm Default Client Scopes are whitelisted by default. Full Scope Policy - Newly registered clients will have Full Scope Allowed switch disabled. This means they won't have any scoped realm roles or client roles of other clients. Max Clients Policy - Rejects registration if current number of clients in the realm is same or bigger than specified limit. It's 200 by default for anonymous registrations. Client Disabled Policy - Newly registered client will be disabled. This means that admin needs to manually approve and enable all newly registered clients. This policy is not used by default even for anonymous registration. | [
"Authorization: bearer eyJhbGciOiJSUz",
"Authorization: basic BASE64(client-id + ':' + client-secret)",
"curl -X POST -d '{ \"clientId\": \"myclient\" }' -H \"Content-Type:application/json\" -H \"Authorization: bearer eyJhbGciOiJSUz...\" http://localhost:8080/realms/master/clients-registrations/default",
"String token = \"eyJhbGciOiJSUz...\"; ClientRepresentation client = new ClientRepresentation(); client.setClientId(CLIENT_ID); ClientRegistration reg = ClientRegistration.create() .url(\"http://localhost:8080\", \"myrealm\") .build(); reg.auth(Auth.token(token)); client = reg.create(client); String registrationAccessToken = client.getRegistrationAccessToken();"
]
| https://docs.redhat.com/en/documentation/red_hat_build_of_keycloak/26.0/html/securing_applications_and_services_guide/client-registration- |
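Building on the curl example above, the following bash sketch registers a client with an initial access token and then reads it back with the registration access token returned in the response. The host, realm, token value, the jq dependency, and the registrationAccessToken JSON field name (inferred from the Java getter shown above) are assumptions for illustration, not values from the source.

# Hypothetical values; replace with your server, realm, and initial access token.
KC=http://localhost:8080
REALM=master
INITIAL_TOKEN=eyJhbGciOiJSUz...

# Create the client and keep the JSON response.
RESPONSE=$(curl -s -X POST \
  -d '{ "clientId": "myclient" }' \
  -H "Content-Type: application/json" \
  -H "Authorization: bearer ${INITIAL_TOKEN}" \
  "${KC}/realms/${REALM}/clients-registrations/default")

# The response carries the registration access token needed for later reads and updates.
REG_TOKEN=$(echo "${RESPONSE}" | jq -r .registrationAccessToken)
curl -s -H "Authorization: bearer ${REG_TOKEN}" \
  "${KC}/realms/${REALM}/clients-registrations/default/myclient"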
5.2. XFS File System Performance Analysis with Performance Co-Pilot | 5.2. XFS File System Performance Analysis with Performance Co-Pilot This section describes PCP XFS performance metrics and how to use them. Once started, the Performance Metric Collector Daemon (PMCD) begins collecting performance data from the installed Performance Metric Domain Agents (PMDAs). PMDAs can be individually loaded or unloaded on the system and are controlled by the PMCD on the same host. The XFS PMDA, which is part of the default PCP installation, is used to gather performance metric data of XFS file systems in PCP. For a list of system services and tools that are distributed with PCP, see Table A.1, "System Services Distributed with Performance Co-Pilot in Red Hat Enterprise Linux 7" and Table A.2, "Tools Distributed with Performance Co-Pilot in Red Hat Enterprise Linux 7" . 5.2.1. Installing XFS PMDA to Gather XFS Data with PCP The XFS PMDA ships as part of the pcp package and is enabled by default on installation. To install PCP, enter: To enable and start the PMDA service on the host machine after the pcp and pcp-gui packages are installed, use the following commands: To query the PCP environment to verify that the PMCD process is running on the host and that the XFS PMDA is listed as enabled in the configuration, enter: Installing XFS PMDA Manually If the XFS PMDA is not listed in PCP configuration readout, install the PMDA agent manually. The PMDA installation script prompts you to specify the PMDA role: collector, monitor, or both. The collector role allows the collection of performance metrics on the current system The monitor role allows the system to monitor local systems, remote systems, or both. The default option is both collector and monitor, which allows the XFS PMDA to operate correctly in most scenarios. To install XFS PMDA manually, change to the xfs directory: In the xfs directory, enter: 5.2.2. Configuring and Examining XFS Performance Metrics Examining Metrics with pminfo With PCP installed and the XFS PMDA enabled, instructions are available in Section 5.2.1, "Installing XFS PMDA to Gather XFS Data with PCP" , the easiest way to start looking at the performance metrics available for PCP and XFS is to use the pminfo tool, which displays information about available performance metrics. The command displays a list of all available metrics provided by the XFS PMDA. To display a list of all available metrics provided by the XFS PMDA: Use the following options to display information on selected metrics: -t metric Displays one-line help information describing the selected metric. -T metric Displays more verbose help text describing the selected metric. -f metric Displays the current reading of the performance value that corresponds to the metric. You can use the -t , -T , and -f options with a group of metrics or an individual metric. Most metric data is provided for each mounted XFS file system on the system at time of probing. There are different groups of XFS metrics , which are arranged so that each different group is a new leaf node from the root XFS metric, using a dot ( . ) as a separator. The leaf node semantics (dots) applies to all PCP metrics. For an overview of the types of metrics that are available in each of the groups, see Table A.3, "PCP Metric Groups for XFS" . Example 5.1. 
Using the pminfo Tool to Examine XFS Read and Write Metrics To display one-line help information describing the xfs.write_bytes metric: To display more verbose help text describing the xfs.read_bytes metric: To obtain the current reading of the performance value that corresponds to the xfs.read_bytes metric: Configuring Metrics with pmstore With PCP, you can modify the values of certain metrics, especially if the metric acts as a control variable, for example the xfs.control.reset metric. To modify a metric value, use the pmstore tool. Example 5.2. Using pmstore to Reset the xfs.control.reset Metric This example shows how to use pmstore with the xfs.control.reset metric to reset the recorded counter values for the XFS PMDA back to zero. 5.2.3. Examining XFS Metrics Available per File System Starting with Red Hat Enterprise Linux 7.3, PCP enables XFS PMDA to allow the reporting of certain XFS metrics per each of the mounted XFS file systems. This makes it easier to pinpoint specific mounted file system issues and evaluate performance. For an overview of the types of metrics available per file system in each of the groups, see Table A.4, "PCP Metric Groups for XFS per Device" . Example 5.3. Obtaining per-Device XFS Metrics with pminfo The pminfo command provides per-device XFS metrics that give instance values for each mounted XFS file system. 5.2.4. Logging Performance Data with pmlogger PCP allows you to log performance metric values that can be replayed later and used for a retrospective performance analysis. Use the pmlogger tool to create archived logs of selected metrics on the system. With pmlogger, you can specify which metrics are recorded on the system and how often. The default pmlogger configuration file is /var/lib/pcp/config/pmlogger/config.default . The configuration file specifies which metrics are logged by the primary logging instance. To log metric values on the local machine with pmlogger , start a primary logging instance: When pmlogger is enabled and a default configuration file is set, a pmlogger line is included in the PCP configuration: Modifying the pmlogger Configuration File with pmlogconf When the pmlogger service is running, PCP logs a default set of metrics on the host. You can use the pmlogconf utility to check the default configuration, and enable XFS logging groups as needed. Important XFS groups to enable include the XFS information , XFS data , and log I/O traffic groups. Follow pmlogconf prompts to enable or disable groups of related performance metrics, and to control the logging interval for each enabled group. Group selection is made by pressing y (yes) or n (no) in response to the prompt. To create or modify the generic PCP archive logger configuration file with pmlogconf, enter: Modifying the pmlogger Configuration File Manually You can edit the pmlogger configuration file manually and add specific metrics with given intervals to create a tailored logging configuration. Example 5.4. The pmlogger Configuration File with XFS Metrics The following example shows an extract of the pmlogger config.default file with some specific XFS metrics added. Replaying the PCP Log Archives After recording metric data, you can replay the PCP log archives on the system in the following ways: You can export the logs to text files and import them into spreadsheets by using PCP utilities such as pmdumptext , pmrep , or pmlogsummary . You can replay the data in the PCP Charts application and use graphs to visualize the retrospective data alongside live data of the system. 
See Section 5.2.5, "Visual Tracing with PCP Charts" . You can use the pmdumptext tool to view the log files. With pmdumptext, you can parse the selected PCP log archive and export the values into an ASCII table. The pmdumptext tool enables you to dump the entire archive log, or only select metric values from the log by specifying individual metrics on the command line. Example 5.5. Displaying a Specific XFS Metric Log Information For example, to show data on the xfs.perdev.log metric collected in an archive at a 5 second interval and display all headers: For more information, see the pmdumptext (1) manual page, which is available from the pcp-doc package. 5.2.5. Visual Tracing with PCP Charts To be able to use the graphical PCP Charts application, install the pcp-gui package: You can use the PCP Charts application to plot performance metric values into graphs. The PCP Charts application allows multiple charts to be displayed simultaneously. The metrics are sourced from one or more live hosts with alternative options to use metric data from PCP log archives as a source of historical data. To launch PCP Charts from the command line, use the pmchart command. After starting PCP Charts, the GUI appears: The PCP Charts application The pmtime server settings are located at the bottom. The start and pause button allows you to control: The interval in which PCP polls the metric data The date and time for the metrics of historical data Go to File New Chart to select metric from both the local machine and remote machines by specifying their host name or address. Then, select performance metrics from the remote hosts. Advanced configuration options include the ability to manually set the axis values for the chart, and to manually choose the color of the plots. There are multiple options to take images or record the views created in PCP Charts: Click File Export to save an image of the current view. Click Record Start to start a recording. Click Record Stop to stop the recording. After stopping the recording, the recorded metrics are archived to be viewed later. You can customize the PCP Charts interface to display the data from performance metrics in multiple ways, including: line plot bar graphs utilization graphs In PCP Charts, the main configuration file, known as the view , allows the metadata associated with one or more charts to be saved. This metadata describes all chart aspects, including the metrics used and the chart columns. You can create a custom view configuration, save it by clicking File Save View , and load the view configuration later. For more information about view configuration files and their syntax, see the pmchart (1) manual page. Example 5.6. Stacking Chart Graph in PCP Charts View Configuration The example PCP Charts view configuration file describes a stacking chart graph showing the total number of bytes read and written to the given XFS file system loop1 . | [
"yum install pcp",
"systemctl enable pmcd.service",
"systemctl start pmcd.service",
"pcp Performance Co-Pilot configuration on workstation: platform: Linux workstation 3.10.0-123.20.1.el7.x86_64 #1 SMP Thu Jan 29 18:05:33 UTC 2015 x86_64 hardware: 2 cpus, 2 disks, 1 node, 2048MB RAM timezone: BST-1 services pmcd pmcd: Version 3.10.6-1, 7 agents pmda: root pmcd proc xfs linux mmv jbd2",
"cd /var/lib/pcp/pmdas/xfs/",
"xfs]# ./Install You will need to choose an appropriate configuration for install of the \"xfs\" Performance Metrics Domain Agent (PMDA). collector collect performance statistics on this system monitor allow this system to monitor local and/or remote systems both collector and monitor configuration for this system Please enter c(ollector) or m(onitor) or (both) [b] Updating the Performance Metrics Name Space (PMNS) Terminate PMDA if already installed Updating the PMCD control file, and notifying PMCD Waiting for pmcd to terminate Starting pmcd Check xfs metrics have appeared ... 149 metrics and 149 values",
"pminfo xfs",
"pminfo -t xfs.write_bytes xfs.write_bytes [number of bytes written in XFS file system write operations]",
"pminfo -T xfs.read_bytes xfs.read_bytes Help: This is the number of bytes read via read(2) system calls to files in XFS file systems. It can be used in conjunction with the read_calls count to calculate the average size of the read operations to file in XFS file systems.",
"pminfo -f xfs.read_bytes xfs.read_bytes value 4891346238",
"pminfo -f xfs.write xfs.write value 325262",
"pmstore xfs.control.reset 1 xfs.control.reset old value=0 new value=1",
"pminfo -f xfs.write xfs.write value 0",
"pminfo -f -t xfs.perdev.read xfs.perdev.write xfs.perdev.read [number of XFS file system read operations] inst [0 or \"loop1\"] value 0 inst [0 or \"loop2\"] value 0 xfs.perdev.write [number of XFS file system write operations] inst [0 or \"loop1\"] value 86 inst [0 or \"loop2\"] value 0",
"systemctl start pmlogger.service",
"systemctl enable pmlogger.service",
"pcp Performance Co-Pilot configuration on workstation: platform: Linux workstation 3.10.0-123.20.1.el7.x86_64 #1 SMP Thu Jan [...] pmlogger: primary logger:/var/log/pcp/pmlogger/workstation/20160820.10.15",
"pmlogconf -r /var/lib/pcp/config/pmlogger/config.default",
"It is safe to make additions from here on # log mandatory on every 5 seconds { xfs.write xfs.write_bytes xfs.read xfs.read_bytes } log mandatory on every 10 seconds { xfs.allocs xfs.block_map xfs.transactions xfs.log } [access] disallow * : all; allow localhost : enquire;",
"pmdumptext -t 5seconds -H -a 20170605 xfs.perdev.log.writes Time local::xfs.perdev.log.writes[\"/dev/mapper/fedora-home\"] local::xfs.perdev.log.writes[\"/dev/mapper/fedora-root\"] ? 0.000 0.000 none count / second count / second Mon Jun 5 12:28:45 ? ? Mon Jun 5 12:28:50 0.000 0.000 Mon Jun 5 12:28:55 0.200 0.200 Mon Jun 5 12:29:00 6.800 1.000",
"yum install pcp-gui",
"#kmchart version 1 chart title \"Filesystem Throughput /loop1\" style stacking antialiasing off plot legend \"Read rate\" metric xfs.read_bytes instance \"loop1\" plot legend \"Write rate\" metric xfs.write_bytes instance \"loop1\""
]
| https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/performance_tuning_guide/sec-xfs-file-system-performance-analysis-with-performance-co-pilot |
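A compact bash sketch of the setup and inspection flow covered above: install PCP, enable and start the collector and the primary logger, then sample one XFS metric live. The two-second interval and five-sample count are arbitrary choices, not values from the source.

# Install PCP, then enable and start the collector and the primary logger.
yum install pcp
systemctl enable pmcd.service pmlogger.service
systemctl start pmcd.service pmlogger.service
# Confirm the XFS PMDA is listed, then read a metric once.
pcp | grep xfs
pminfo -f xfs.write_bytes
# Take five live readings, two seconds apart.
pmval -t 2sec -s 5 xfs.write_bytes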