title | content | commands | url |
---|---|---|---|
13.2.4. SSSD and System Services | 13.2.4. SSSD and System Services SSSD and its associated services are configured in the sssd.conf file. The services directive in the [sssd] section lists the services that are active and should be started when sssd starts. SSSD can provide credential caches for several system services: A Name Service Switch (NSS) provider service that answers name service requests from the sssd_nss module. This is configured in the [nss] section of the SSSD configuration. This is described in Section 13.2.5, "Configuring Services: NSS". A PAM provider service that manages a PAM conversation through the sssd_pam module. This is configured in the [pam] section of the configuration. This is described in Section 13.2.6, "Configuring Services: PAM". An SSH provider service that defines how SSSD manages the known_hosts file and other key-related configuration. Using SSSD with OpenSSH is described in Section 13.2.9, "Configuring Services: OpenSSH and Cached Keys". An autofs provider service that connects to an LDAP server to retrieve configured mount locations. This is configured as part of an LDAP identity provider in a [domain/NAME] section in the configuration file. This is described in Section 13.2.7, "Configuring Services: autofs". A sudo provider service that connects to an LDAP server to retrieve configured sudo policies. This is configured as part of an LDAP identity provider in a [domain/NAME] section in the configuration file. This is described in Section 13.2.8, "Configuring Services: sudo". A PAC responder service that defines how SSSD works with Kerberos to manage Active Directory users and groups. This is specifically part of managing Active Directory identity providers with domains, as described in Section 13.2.13, "Creating Domains: Active Directory". | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/deployment_guide/Configuring_Services |
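The section above maps several responders and providers onto sections of sssd.conf. The fragment below is a minimal sketch of how those pieces fit together; the domain name EXAMPLE, the particular set of enabled services, and the LDAP provider options are illustrative assumptions rather than defaults.

```ini
# /etc/sssd/sssd.conf (sketch)
[sssd]
# Responders listed in the services directive are started together with SSSD
services = nss, pam, ssh, sudo, autofs, pac
domains = EXAMPLE

[nss]
# NSS responder options (see Section 13.2.5)

[pam]
# PAM responder options (see Section 13.2.6)

[domain/EXAMPLE]
# autofs and sudo data are retrieved through the LDAP identity provider
id_provider = ldap
autofs_provider = ldap
sudo_provider = ldap
```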
Chapter 17. Uninstalling a cluster on AWS | Chapter 17. Uninstalling a cluster on AWS You can remove a cluster that you deployed to Amazon Web Services (AWS). 17.1. Removing a cluster that uses installer-provisioned infrastructure You can remove a cluster that uses installer-provisioned infrastructure from your cloud. Note After uninstallation, check your cloud provider for any resources not removed properly, especially with user-provisioned infrastructure clusters. There might be resources that the installation program did not create or that the installation program is unable to access. Prerequisites You have a copy of the installation program that you used to deploy the cluster. You have the files that the installation program generated when you created your cluster. Procedure On the computer that you used to install the cluster, go to the directory that contains the installation program, and run the following command: USD ./openshift-install destroy cluster \ --dir <installation_directory> --log-level info 1 2 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. 2 To view different details, specify warn , debug , or error instead of info . Note You must specify the directory that contains the cluster definition files for your cluster. The installation program requires the metadata.json file in this directory to delete the cluster. Optional: Delete the <installation_directory> directory and the OpenShift Container Platform installation program. 17.2. Deleting AWS resources with the Cloud Credential Operator utility To clean up resources after uninstalling an OpenShift Container Platform cluster with the Cloud Credential Operator (CCO) in manual mode with STS, you can use the CCO utility ( ccoctl ) to remove the AWS resources that ccoctl created during installation. Prerequisites Extract and prepare the ccoctl binary. Install an OpenShift Container Platform cluster with the CCO in manual mode with STS. Procedure Delete the AWS resources that ccoctl created: USD ccoctl aws delete \ --name=<name> \ 1 --region=<aws_region> 2 1 <name> matches the name that was originally used to create and tag the cloud resources. 2 <aws_region> is the AWS region in which to delete cloud resources. 
Example output: 2021/04/08 17:50:41 Identity Provider object .well-known/openid-configuration deleted from the bucket <name>-oidc 2021/04/08 17:50:42 Identity Provider object keys.json deleted from the bucket <name>-oidc 2021/04/08 17:50:43 Identity Provider bucket <name>-oidc deleted 2021/04/08 17:51:05 Policy <name>-openshift-cloud-credential-operator-cloud-credential-o associated with IAM Role <name>-openshift-cloud-credential-operator-cloud-credential-o deleted 2021/04/08 17:51:05 IAM Role <name>-openshift-cloud-credential-operator-cloud-credential-o deleted 2021/04/08 17:51:07 Policy <name>-openshift-cluster-csi-drivers-ebs-cloud-credentials associated with IAM Role <name>-openshift-cluster-csi-drivers-ebs-cloud-credentials deleted 2021/04/08 17:51:07 IAM Role <name>-openshift-cluster-csi-drivers-ebs-cloud-credentials deleted 2021/04/08 17:51:08 Policy <name>-openshift-image-registry-installer-cloud-credentials associated with IAM Role <name>-openshift-image-registry-installer-cloud-credentials deleted 2021/04/08 17:51:08 IAM Role <name>-openshift-image-registry-installer-cloud-credentials deleted 2021/04/08 17:51:09 Policy <name>-openshift-ingress-operator-cloud-credentials associated with IAM Role <name>-openshift-ingress-operator-cloud-credentials deleted 2021/04/08 17:51:10 IAM Role <name>-openshift-ingress-operator-cloud-credentials deleted 2021/04/08 17:51:11 Policy <name>-openshift-machine-api-aws-cloud-credentials associated with IAM Role <name>-openshift-machine-api-aws-cloud-credentials deleted 2021/04/08 17:51:11 IAM Role <name>-openshift-machine-api-aws-cloud-credentials deleted 2021/04/08 17:51:39 Identity Provider with ARN arn:aws:iam::<aws_account_id>:oidc-provider/<name>-oidc.s3.<aws_region>.amazonaws.com deleted Verification To verify that the resources are deleted, query AWS. For more information, refer to AWS documentation. 17.3. Deleting a cluster with a configured AWS Local Zone infrastructure After you install a cluster on Amazon Web Services (AWS) into an existing Virtual Private Cloud (VPC), and you set subnets for each Local Zone location, you can delete the cluster and any AWS resources associated with it. The example in the procedure assumes that you created a VPC and its subnets by using a CloudFormation template. Prerequisites You know the name of the CloudFormation stacks, <local_zone_stack_name> and <vpc_stack_name> , that were used during the creation of the network. You need the name of the stack to delete the cluster. You have access rights to the directory that contains the installation files that were created by the installation program. Your account includes a policy that provides you with permissions to delete the CloudFormation stack. Procedure Change to the directory that contains the stored installation program, and delete the cluster by using the destroy cluster command: USD ./openshift-install destroy cluster --dir <installation_directory> \ 1 --log-level=debug 2 1 For <installation_directory> , specify the directory that stored any files created by the installation program. 2 To view different log details, specify error , info , or warn instead of debug . Delete the CloudFormation stack for the Local Zone subnet: USD aws cloudformation delete-stack --stack-name <local_zone_stack_name> Delete the stack of resources that represent the VPC: USD aws cloudformation delete-stack --stack-name <vpc_stack_name> Verification Check that you removed the stack resources by issuing the following commands in the AWS CLI. 
The AWS CLI output shows that the stacks no longer exist. $ aws cloudformation describe-stacks --stack-name <local_zone_stack_name> $ aws cloudformation describe-stacks --stack-name <vpc_stack_name> Additional resources See Working with stacks in the AWS documentation for more information about AWS CloudFormation stacks. Opt into AWS Local Zones AWS Local Zones available locations AWS Local Zones features | [
"./openshift-install destroy cluster --dir <installation_directory> --log-level info 1 2",
"ccoctl aws delete --name=<name> \\ 1 --region=<aws_region> 2",
"2021/04/08 17:50:41 Identity Provider object .well-known/openid-configuration deleted from the bucket <name>-oidc 2021/04/08 17:50:42 Identity Provider object keys.json deleted from the bucket <name>-oidc 2021/04/08 17:50:43 Identity Provider bucket <name>-oidc deleted 2021/04/08 17:51:05 Policy <name>-openshift-cloud-credential-operator-cloud-credential-o associated with IAM Role <name>-openshift-cloud-credential-operator-cloud-credential-o deleted 2021/04/08 17:51:05 IAM Role <name>-openshift-cloud-credential-operator-cloud-credential-o deleted 2021/04/08 17:51:07 Policy <name>-openshift-cluster-csi-drivers-ebs-cloud-credentials associated with IAM Role <name>-openshift-cluster-csi-drivers-ebs-cloud-credentials deleted 2021/04/08 17:51:07 IAM Role <name>-openshift-cluster-csi-drivers-ebs-cloud-credentials deleted 2021/04/08 17:51:08 Policy <name>-openshift-image-registry-installer-cloud-credentials associated with IAM Role <name>-openshift-image-registry-installer-cloud-credentials deleted 2021/04/08 17:51:08 IAM Role <name>-openshift-image-registry-installer-cloud-credentials deleted 2021/04/08 17:51:09 Policy <name>-openshift-ingress-operator-cloud-credentials associated with IAM Role <name>-openshift-ingress-operator-cloud-credentials deleted 2021/04/08 17:51:10 IAM Role <name>-openshift-ingress-operator-cloud-credentials deleted 2021/04/08 17:51:11 Policy <name>-openshift-machine-api-aws-cloud-credentials associated with IAM Role <name>-openshift-machine-api-aws-cloud-credentials deleted 2021/04/08 17:51:11 IAM Role <name>-openshift-machine-api-aws-cloud-credentials deleted 2021/04/08 17:51:39 Identity Provider with ARN arn:aws:iam::<aws_account_id>:oidc-provider/<name>-oidc.s3.<aws_region>.amazonaws.com deleted",
"./openshift-install destroy cluster --dir <installation_directory> \\ 1 --log-level=debug 2",
"aws cloudformation delete-stack --stack-name <local_zone_stack_name>",
"aws cloudformation delete-stack --stack-name <vpc_stack_name>",
"aws cloudformation describe-stacks --stack-name <local_zone_stack_name>",
"aws cloudformation describe-stacks --stack-name <vpc_stack_name>"
]
| https://docs.redhat.com/en/documentation/openshift_container_platform/4.12/html/installing_on_aws/uninstalling-cluster-aws |
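The individual commands from this chapter can be chained into a single clean-up pass. The sketch below is illustrative only: the directory and stack names are placeholders, and the check for metadata.json simply mirrors the note above that the installation program needs that file to delete the cluster.

```bash
#!/bin/bash
# Sketch of the tear-down flow from this chapter; values in <> are placeholders.
INSTALL_DIR=<installation_directory>

# The installation program requires metadata.json from the installation directory.
test -f "${INSTALL_DIR}/metadata.json" || { echo "metadata.json not found in ${INSTALL_DIR}"; exit 1; }

./openshift-install destroy cluster --dir "${INSTALL_DIR}" --log-level info

# Only relevant when the network was created from the CloudFormation templates.
aws cloudformation delete-stack --stack-name <local_zone_stack_name>
aws cloudformation delete-stack --stack-name <vpc_stack_name>

# Verification: both calls should eventually report that the stack does not exist.
aws cloudformation describe-stacks --stack-name <local_zone_stack_name>
aws cloudformation describe-stacks --stack-name <vpc_stack_name>
```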
Chapter 10. Distributed tracing | Chapter 10. Distributed tracing The client offers distributed tracing based on the Jaeger implementation of the OpenTracing standard. 10.1. Enabling distributed tracing Use the following steps to enable tracing in your application: Procedure Add the Jaeger client dependency to your POM file. <dependency> <groupId>io.jaegertracing</groupId> <artifactId>jaeger-client</artifactId> <version>${jaeger-version}</version> </dependency> ${jaeger-version} must be 1.0.0 or later. Add the jms.tracing option to your connection URI. Set the value to opentracing. Example: A connection URI with tracing enabled Register the global tracer. Example: Global tracer registration import io.jaegertracing.Configuration; import io.opentracing.Tracer; import io.opentracing.util.GlobalTracer; public class Example { public static void main(String[] args) { Tracer tracer = Configuration.fromEnv(" <service-name> ").getTracer(); GlobalTracer.registerIfAbsent(tracer); // ... } } Configure your environment for tracing. Example: Tracing configuration $ export JAEGER_SAMPLER_TYPE=const $ export JAEGER_SAMPLER_PARAM=1 $ java -jar example.jar net.example.Example The configuration shown here is for demonstration purposes. For more information about Jaeger configuration, see Configuration via Environment and Jaeger Sampling. To view the traces your application captures, use the Jaeger Getting Started documentation to run the Jaeger infrastructure and console. | [
"<dependency> <groupId>io.jaegertracing</groupId> <artifactId>jaeger-client</artifactId> <version> USD{jaeger-version} </version> </dependency>",
"amqps://example.net? jms.tracing=opentracing",
"import io.jaegertracing.Configuration; import io.opentracing.Tracer; import io.opentracing.util.GlobalTracer; public class Example { public static void main(String[] args) { Tracer tracer = Configuration.fromEnv(\" <service-name> \").getTracer(); GlobalTracer.registerIfAbsent(tracer); // } }",
"export JAEGER_SAMPLER_TYPE=const export JAEGER_SAMPLER_PARAM=1 java -jar example.jar net.example.Example"
]
| https://docs.redhat.com/en/documentation/red_hat_amq/2021.q3/html/using_the_amq_jms_client/distributed_tracing |
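Putting the steps above together, a demonstration run might look like the following. Whether your main class reads the connection URI from an argument, a properties file, or a hard-coded value is application-specific, so treat the last line as a sketch rather than a fixed invocation.

```bash
# Demonstration sampler settings from this chapter: sample every trace.
export JAEGER_SAMPLER_TYPE=const
export JAEGER_SAMPLER_PARAM=1

# Hypothetical: assumes the application accepts the traced connection URI as an argument.
java -jar example.jar net.example.Example "amqps://example.net?jms.tracing=opentracing"
```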
Chapter 4. RoleBinding [rbac.authorization.k8s.io/v1] | Chapter 4. RoleBinding [rbac.authorization.k8s.io/v1] Description RoleBinding references a role, but does not contain it. It can reference a Role in the same namespace or a ClusterRole in the global namespace. It adds who information via Subjects and namespace information by which namespace it exists in. RoleBindings in a given namespace only have effect in that namespace. Type object Required roleRef 4.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard object's metadata. roleRef object RoleRef contains information that points to the role being used subjects array Subjects holds references to the objects the role applies to. subjects[] object Subject contains a reference to the object or user identities a role binding applies to. This can either hold a direct API object reference, or a value for non-objects such as user and group names. 4.1.1. .roleRef Description RoleRef contains information that points to the role being used Type object Required apiGroup kind name Property Type Description apiGroup string APIGroup is the group for the resource being referenced kind string Kind is the type of resource being referenced name string Name is the name of resource being referenced 4.1.2. .subjects Description Subjects holds references to the objects the role applies to. Type array 4.1.3. .subjects[] Description Subject contains a reference to the object or user identities a role binding applies to. This can either hold a direct API object reference, or a value for non-objects such as user and group names. Type object Required kind name Property Type Description apiGroup string APIGroup holds the API group of the referenced subject. Defaults to "" for ServiceAccount subjects. Defaults to "rbac.authorization.k8s.io" for User and Group subjects. kind string Kind of object being referenced. Values defined by this API group are "User", "Group", and "ServiceAccount". If the Authorizer does not recognized the kind value, the Authorizer should report an error. name string Name of the object being referenced. namespace string Namespace of the referenced object. If the object kind is non-namespace, such as "User" or "Group", and this value is not empty the Authorizer should report an error. 4.2. API endpoints The following API endpoints are available: /apis/rbac.authorization.k8s.io/v1/rolebindings GET : list or watch objects of kind RoleBinding /apis/rbac.authorization.k8s.io/v1/watch/rolebindings GET : watch individual changes to a list of RoleBinding. deprecated: use the 'watch' parameter with a list operation instead. 
/apis/rbac.authorization.k8s.io/v1/namespaces/{namespace}/rolebindings DELETE : delete collection of RoleBinding GET : list or watch objects of kind RoleBinding POST : create a RoleBinding /apis/rbac.authorization.k8s.io/v1/watch/namespaces/{namespace}/rolebindings GET : watch individual changes to a list of RoleBinding. deprecated: use the 'watch' parameter with a list operation instead. /apis/rbac.authorization.k8s.io/v1/namespaces/{namespace}/rolebindings/{name} DELETE : delete a RoleBinding GET : read the specified RoleBinding PATCH : partially update the specified RoleBinding PUT : replace the specified RoleBinding /apis/rbac.authorization.k8s.io/v1/watch/namespaces/{namespace}/rolebindings/{name} GET : watch changes to an object of kind RoleBinding. deprecated: use the 'watch' parameter with a list operation instead, filtered to a single item with the 'fieldSelector' parameter. 4.2.1. /apis/rbac.authorization.k8s.io/v1/rolebindings HTTP method GET Description list or watch objects of kind RoleBinding Table 4.1. HTTP responses HTTP code Reponse body 200 - OK RoleBindingList schema 401 - Unauthorized Empty 4.2.2. /apis/rbac.authorization.k8s.io/v1/watch/rolebindings HTTP method GET Description watch individual changes to a list of RoleBinding. deprecated: use the 'watch' parameter with a list operation instead. Table 4.2. HTTP responses HTTP code Reponse body 200 - OK WatchEvent schema 401 - Unauthorized Empty 4.2.3. /apis/rbac.authorization.k8s.io/v1/namespaces/{namespace}/rolebindings HTTP method DELETE Description delete collection of RoleBinding Table 4.3. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed Table 4.4. HTTP responses HTTP code Reponse body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list or watch objects of kind RoleBinding Table 4.5. HTTP responses HTTP code Reponse body 200 - OK RoleBindingList schema 401 - Unauthorized Empty HTTP method POST Description create a RoleBinding Table 4.6. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 4.7. 
Body parameters Parameter Type Description body RoleBinding schema Table 4.8. HTTP responses HTTP code Reponse body 200 - OK RoleBinding schema 201 - Created RoleBinding schema 202 - Accepted RoleBinding schema 401 - Unauthorized Empty 4.2.4. /apis/rbac.authorization.k8s.io/v1/watch/namespaces/{namespace}/rolebindings HTTP method GET Description watch individual changes to a list of RoleBinding. deprecated: use the 'watch' parameter with a list operation instead. Table 4.9. HTTP responses HTTP code Reponse body 200 - OK WatchEvent schema 401 - Unauthorized Empty 4.2.5. /apis/rbac.authorization.k8s.io/v1/namespaces/{namespace}/rolebindings/{name} Table 4.10. Global path parameters Parameter Type Description name string name of the RoleBinding HTTP method DELETE Description delete a RoleBinding Table 4.11. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed Table 4.12. HTTP responses HTTP code Reponse body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified RoleBinding Table 4.13. HTTP responses HTTP code Reponse body 200 - OK RoleBinding schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified RoleBinding Table 4.14. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 4.15. HTTP responses HTTP code Reponse body 200 - OK RoleBinding schema 201 - Created RoleBinding schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified RoleBinding Table 4.16. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. 
Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 4.17. Body parameters Parameter Type Description body RoleBinding schema Table 4.18. HTTP responses HTTP code Reponse body 200 - OK RoleBinding schema 201 - Created RoleBinding schema 401 - Unauthorized Empty 4.2.6. /apis/rbac.authorization.k8s.io/v1/watch/namespaces/{namespace}/rolebindings/{name} Table 4.19. Global path parameters Parameter Type Description name string name of the RoleBinding HTTP method GET Description watch changes to an object of kind RoleBinding. deprecated: use the 'watch' parameter with a list operation instead, filtered to a single item with the 'fieldSelector' parameter. Table 4.20. HTTP responses HTTP code Reponse body 200 - OK WatchEvent schema 401 - Unauthorized Empty | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.15/html/rbac_apis/rolebinding-rbac-authorization-k8s-io-v1 |
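For orientation, a minimal manifest that exercises the required roleRef fields and a single subject is shown below; all names and the namespace are placeholders, and the Role it references is assumed to exist already.

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: read-pods            # placeholder name
  namespace: my-namespace    # placeholder namespace
roleRef:                     # required: apiGroup, kind, and name
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: pod-reader           # assumed to exist in the same namespace
subjects:
- kind: User
  name: jane                 # placeholder user
  apiGroup: rbac.authorization.k8s.io
```

Such a manifest would typically be posted to the namespaced rolebindings endpoint listed above, for example with oc create -f rolebinding.yaml -n my-namespace.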
Chapter 5. Contexts and Dependency Injection (CDI) in Camel Quarkus | Chapter 5. Contexts and Dependency Injection (CDI) in Camel Quarkus CDI plays a central role in Quarkus and Camel Quarkus offers a first class support for it too. You may use @Inject , @ConfigProperty and similar annotations e.g. to inject beans and configuration values to your Camel RouteBuilder , for example: import jakarta.enterprise.context.ApplicationScoped; import jakarta.inject.Inject; import org.apache.camel.builder.RouteBuilder; import org.eclipse.microprofile.config.inject.ConfigProperty; @ApplicationScoped 1 public class TimerRoute extends RouteBuilder { @ConfigProperty(name = "timer.period", defaultValue = "1000") 2 String period; @Inject Counter counter; @Override public void configure() throws Exception { fromF("timer:foo?period=%s", period) .setBody(exchange -> "Incremented the counter: " + counter.increment()) .to("log:cdi-example?showExchangePattern=false&showBodyType=false"); } } 1 The @ApplicationScoped annotation is required for @Inject and @ConfigProperty to work in a RouteBuilder . Note that the @ApplicationScoped beans are managed by the CDI container and their life cycle is thus a bit more complex than the one of the plain RouteBuilder . In other words, using @ApplicationScoped in RouteBuilder comes with some boot time penalty and you should therefore only annotate your RouteBuilder with @ApplicationScoped when you really need it. 2 The value for the timer.period property is defined in src/main/resources/application.properties of the example project. Tip Refer to the Quarkus Dependency Injection guide for more details. 5.1. Accessing CamelContext To access CamelContext just inject it into your bean: import jakarta.inject.Inject; import jakarta.enterprise.context.ApplicationScoped; import java.util.stream.Collectors; import java.util.List; import org.apache.camel.CamelContext; @ApplicationScoped public class MyBean { @Inject CamelContext context; public List<String> listRouteIds() { return context.getRoutes().stream().map(Route::getId).sorted().collect(Collectors.toList()); } } 5.2. @EndpointInject and @Produce If you are used to @org.apache.camel.EndpointInject and @org.apache.camel.Produce from plain Camel or from Camel on SpringBoot, you can continue using them on Quarkus too. The following use cases are supported by org.apache.camel.quarkus:camel-quarkus-core : import jakarta.enterprise.context.ApplicationScoped; import org.apache.camel.EndpointInject; import org.apache.camel.FluentProducerTemplate; import org.apache.camel.Produce; import org.apache.camel.ProducerTemplate; @ApplicationScoped class MyBean { @EndpointInject("direct:myDirect1") ProducerTemplate producerTemplate; @EndpointInject("direct:myDirect2") FluentProducerTemplate fluentProducerTemplate; @EndpointInject("direct:myDirect3") DirectEndpoint directEndpoint; @Produce("direct:myDirect4") ProducerTemplate produceProducer; @Produce("direct:myDirect5") FluentProducerTemplate produceProducerFluent; } You can use any other Camel producer endpoint URI instead of direct:myDirect* . 
Warning @EndpointInject and @Produce are not supported on setter methods - see #2579 The following use case is supported by org.apache.camel.quarkus:camel-quarkus-bean : import jakarta.enterprise.context.ApplicationScoped; import org.apache.camel.Produce; @ApplicationScoped class MyProduceBean { public interface ProduceInterface { String sayHello(String name); } @Produce("direct:myDirect6") ProduceInterface produceInterface; void doSomething() { produceInterface.sayHello("Kermit") } } 5.3. CDI and the Camel Bean component 5.3.1. Refer to a bean by name To refer to a bean in a route definition by name, just annotate the bean with @Named("myNamedBean") and @ApplicationScoped (or some other supported scope). The @RegisterForReflection annotation is important for the native mode. import jakarta.enterprise.context.ApplicationScoped; import jakarta.inject.Named; import io.quarkus.runtime.annotations.RegisterForReflection; @ApplicationScoped @Named("myNamedBean") @RegisterForReflection public class NamedBean { public String hello(String name) { return "Hello " + name + " from the NamedBean"; } } Then you can use the myNamedBean name in a route definition: import org.apache.camel.builder.RouteBuilder; public class CamelRoute extends RouteBuilder { @Override public void configure() { from("direct:named") .bean("myNamedBean", "hello"); /* ... which is an equivalent of the following: */ from("direct:named") .to("bean:myNamedBean?method=hello"); } } As an alternative to @Named , you may also use io.smallrye.common.annotation.Identifier to name and identify a bean. import jakarta.enterprise.context.ApplicationScoped; import io.quarkus.runtime.annotations.RegisterForReflection; import io.smallrye.common.annotation.Identifier; @ApplicationScoped @Identifier("myBeanIdentifier") @RegisterForReflection public class MyBean { public String hello(String name) { return "Hello " + name + " from MyBean"; } } Then refer to the identifier value within the Camel route: import org.apache.camel.builder.RouteBuilder; public class CamelRoute extends RouteBuilder { @Override public void configure() { from("direct:start") .bean("myBeanIdentifier", "Camel"); } } Note We aim at supporting all use cases listed in Bean binding section of Camel documentation. Do not hesitate to file an issue if some bean binding scenario does not work for you. 5.3.2. @Consume Since Camel Quarkus 2.0.0, the camel-quarkus-bean artifact brings support for @org.apache.camel.Consume - see the Pojo consuming section of Camel documentation. Declaring a class like the following import org.apache.camel.Consume; public class Foo { @Consume("activemq:cheese") public void onCheese(String name) { ... } } will automatically create the following Camel route from("activemq:cheese").bean("foo1234", "onCheese") for you. Note that Camel Quarkus will implicitly add @jakarta.inject.Singleton and jakarta.inject.Named("foo1234") to the bean class, where 1234 is a hash code obtained from the fully qualified class name. If your bean has some CDI scope (such as @ApplicationScoped ) or @Named("someName") set already, those will be honored in the auto-created route. | [
"import jakarta.enterprise.context.ApplicationScoped; import jakarta.inject.Inject; import org.apache.camel.builder.RouteBuilder; import org.eclipse.microprofile.config.inject.ConfigProperty; @ApplicationScoped 1 public class TimerRoute extends RouteBuilder { @ConfigProperty(name = \"timer.period\", defaultValue = \"1000\") 2 String period; @Inject Counter counter; @Override public void configure() throws Exception { fromF(\"timer:foo?period=%s\", period) .setBody(exchange -> \"Incremented the counter: \" + counter.increment()) .to(\"log:cdi-example?showExchangePattern=false&showBodyType=false\"); } }",
"import jakarta.inject.Inject; import jakarta.enterprise.context.ApplicationScoped; import java.util.stream.Collectors; import java.util.List; import org.apache.camel.CamelContext; @ApplicationScoped public class MyBean { @Inject CamelContext context; public List<String> listRouteIds() { return context.getRoutes().stream().map(Route::getId).sorted().collect(Collectors.toList()); } }",
"import jakarta.enterprise.context.ApplicationScoped; import org.apache.camel.EndpointInject; import org.apache.camel.FluentProducerTemplate; import org.apache.camel.Produce; import org.apache.camel.ProducerTemplate; @ApplicationScoped class MyBean { @EndpointInject(\"direct:myDirect1\") ProducerTemplate producerTemplate; @EndpointInject(\"direct:myDirect2\") FluentProducerTemplate fluentProducerTemplate; @EndpointInject(\"direct:myDirect3\") DirectEndpoint directEndpoint; @Produce(\"direct:myDirect4\") ProducerTemplate produceProducer; @Produce(\"direct:myDirect5\") FluentProducerTemplate produceProducerFluent; }",
"import jakarta.enterprise.context.ApplicationScoped; import org.apache.camel.Produce; @ApplicationScoped class MyProduceBean { public interface ProduceInterface { String sayHello(String name); } @Produce(\"direct:myDirect6\") ProduceInterface produceInterface; void doSomething() { produceInterface.sayHello(\"Kermit\") } }",
"import jakarta.enterprise.context.ApplicationScoped; import jakarta.inject.Named; import io.quarkus.runtime.annotations.RegisterForReflection; @ApplicationScoped @Named(\"myNamedBean\") @RegisterForReflection public class NamedBean { public String hello(String name) { return \"Hello \" + name + \" from the NamedBean\"; } }",
"import org.apache.camel.builder.RouteBuilder; public class CamelRoute extends RouteBuilder { @Override public void configure() { from(\"direct:named\") .bean(\"myNamedBean\", \"hello\"); /* ... which is an equivalent of the following: */ from(\"direct:named\") .to(\"bean:myNamedBean?method=hello\"); } }",
"import jakarta.enterprise.context.ApplicationScoped; import io.quarkus.runtime.annotations.RegisterForReflection; import io.smallrye.common.annotation.Identifier; @ApplicationScoped @Identifier(\"myBeanIdentifier\") @RegisterForReflection public class MyBean { public String hello(String name) { return \"Hello \" + name + \" from MyBean\"; } }",
"import org.apache.camel.builder.RouteBuilder; public class CamelRoute extends RouteBuilder { @Override public void configure() { from(\"direct:start\") .bean(\"myBeanIdentifier\", \"Camel\"); } }",
"import org.apache.camel.Consume; public class Foo { @Consume(\"activemq:cheese\") public void onCheese(String name) { } }",
"from(\"activemq:cheese\").bean(\"foo1234\", \"onCheese\")"
]
| https://docs.redhat.com/en/documentation/red_hat_build_of_apache_camel/4.0/html/developing_applications_with_red_hat_build_of_apache_camel_for_quarkus/camel-quarkus-extensions-cdi |
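The timer.period value referenced in callout 2 of the first example is read from src/main/resources/application.properties. A minimal entry might look like the following, with 5000 chosen purely for illustration; when the property is absent, the defaultValue of 1000 applies.

```properties
# src/main/resources/application.properties
timer.period = 5000
```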
Chapter 9. Optimizing storage | Chapter 9. Optimizing storage Optimizing storage helps to minimize storage use across all resources. By optimizing storage, administrators help ensure that existing storage resources are working in an efficient manner. 9.1. Available persistent storage options Understand your persistent storage options so that you can optimize your OpenShift Container Platform environment. Table 9.1. Available storage options Storage type Description Examples Block Presented to the operating system (OS) as a block device Suitable for applications that need full control of storage and operate at a low level on files bypassing the file system Also referred to as a Storage Area Network (SAN) Non-shareable, which means that only one client at a time can mount an endpoint of this type AWS EBS and VMware vSphere support dynamic persistent volume (PV) provisioning natively in OpenShift Container Platform. File Presented to the OS as a file system export to be mounted Also referred to as Network Attached Storage (NAS) Concurrency, latency, file locking mechanisms, and other capabilities vary widely between protocols, implementations, vendors, and scales. RHEL NFS, NetApp NFS [1] , and Vendor NFS Object Accessible through a REST API endpoint Configurable for use in the OpenShift image registry Applications must build their drivers into the application and/or container. AWS S3 NetApp NFS supports dynamic PV provisioning when using the Trident plugin. Important Currently, CNS is not supported in OpenShift Container Platform 4.10. 9.2. Recommended configurable storage technology The following table summarizes the recommended and configurable storage technologies for the given OpenShift Container Platform cluster application. Table 9.2. Recommended and configurable storage technology Storage type ROX 1 RWX 2 Registry Scaled registry Metrics 3 Logging Apps 1 ReadOnlyMany 2 ReadWriteMany 3 Prometheus is the underlying technology used for metrics. 4 This does not apply to physical disk, VM physical disk, VMDK, loopback over NFS, AWS EBS, and Azure Disk. 5 For metrics, using file storage with the ReadWriteMany (RWX) access mode is unreliable. If you use file storage, do not configure the RWX access mode on any persistent volume claims (PVCs) that are configured for use with metrics. 6 For logging, using any shared storage would be an anti-pattern. One volume per elasticsearch is required. 7 Object storage is not consumed through OpenShift Container Platform's PVs or PVCs. Apps must integrate with the object storage REST API. Block Yes 4 No Configurable Not configurable Recommended Recommended Recommended File Yes 4 Yes Configurable Configurable Configurable 5 Configurable 6 Recommended Object Yes Yes Recommended Recommended Not configurable Not configurable Not configurable 7 Note A scaled registry is an OpenShift image registry where two or more pod replicas are running. 9.2.1. Specific application storage recommendations Important Testing shows issues with using the NFS server on Red Hat Enterprise Linux (RHEL) as storage backend for core services. This includes the OpenShift Container Registry and Quay, Prometheus for monitoring storage, and Elasticsearch for logging storage. Therefore, using RHEL NFS to back PVs used by core services is not recommended. Other NFS implementations on the marketplace might not have these issues. 
Contact the individual NFS implementation vendor for more information on any testing that was possibly completed against these OpenShift Container Platform core components. 9.2.1.1. Registry In a non-scaled/high-availability (HA) OpenShift image registry cluster deployment: The storage technology does not have to support RWX access mode. The storage technology must ensure read-after-write consistency. The preferred storage technology is object storage followed by block storage. File storage is not recommended for OpenShift image registry cluster deployment with production workloads. 9.2.1.2. Scaled registry In a scaled/HA OpenShift image registry cluster deployment: The storage technology must support RWX access mode. The storage technology must ensure read-after-write consistency. The preferred storage technology is object storage. Red Hat OpenShift Data Foundation (ODF), Amazon Simple Storage Service (Amazon S3), Google Cloud Storage (GCS), Microsoft Azure Blob Storage, and OpenStack Swift are supported. Object storage should be S3 or Swift compliant. For non-cloud platforms, such as vSphere and bare metal installations, the only configurable technology is file storage. Block storage is not configurable. 9.2.1.3. Metrics In an OpenShift Container Platform hosted metrics cluster deployment: The preferred storage technology is block storage. Object storage is not configurable. Important It is not recommended to use file storage for a hosted metrics cluster deployment with production workloads. 9.2.1.4. Logging In an OpenShift Container Platform hosted logging cluster deployment: The preferred storage technology is block storage. Object storage is not configurable. 9.2.1.5. Applications Application use cases vary from application to application, as described in the following examples: Storage technologies that support dynamic PV provisioning have low mount time latencies, and are not tied to nodes to support a healthy cluster. Application developers are responsible for knowing and understanding the storage requirements for their application, and how it works with the provided storage to ensure that issues do not occur when an application scales or interacts with the storage layer. 9.2.2. Other specific application storage recommendations Important It is not recommended to use RAID configurations on Write intensive workloads, such as etcd . If you are running etcd with a RAID configuration, you might be at risk of encountering performance issues with your workloads. Red Hat OpenStack Platform (RHOSP) Cinder: RHOSP Cinder tends to be adept in ROX access mode use cases. Databases: Databases (RDBMSs, NoSQL DBs, etc.) tend to perform best with dedicated block storage. The etcd database must have enough storage and adequate performance capacity to enable a large cluster. Information about monitoring and benchmarking tools to establish ample storage and a high-performance environment is described in Recommended etcd practices . 9.3. Data storage management The following table summarizes the main directories that OpenShift Container Platform components write data to. Table 9.3. Main directories for storing OpenShift Container Platform data Directory Notes Sizing Expected growth /var/log Log files for all components. 10 to 30 GB. Log files can grow quickly; size can be managed by growing disks or by using log rotate. /var/lib/etcd Used for etcd storage when storing the database. Less than 20 GB. Database can grow up to 8 GB. Will grow slowly with the environment. Only storing metadata. 
Additional 20-25 GB for every additional 8 GB of memory. /var/lib/containers This is the mount point for the CRI-O runtime. Storage used for active container runtimes, including pods, and storage of local images. Not used for registry storage. 50 GB for a node with 16 GB memory. Note that this sizing should not be used to determine minimum cluster requirements. Additional 20-25 GB for every additional 8 GB of memory. Growth is limited by capacity for running containers. /var/lib/kubelet Ephemeral volume storage for pods. This includes anything external that is mounted into a container at runtime. Includes environment variables, kube secrets, and data volumes not backed by persistent volumes. Varies Minimal if pods requiring storage are using persistent volumes. If using ephemeral storage, this can grow quickly. 9.4. Optimizing storage performance for Microsoft Azure OpenShift Container Platform and Kubernetes are sensitive to disk performance, and faster storage is recommended, particularly for etcd on the control plane nodes. For production Azure clusters and clusters with intensive workloads, the virtual machine operating system disk for control plane machines should be able to sustain a tested and recommended minimum throughput of 5000 IOPS / 200MBps. This throughput can be provided by having a minimum of 1 TiB Premium SSD (P30). In Azure and Azure Stack Hub, disk performance is directly dependent on SSD disk sizes. To achieve the throughput supported by a Standard_D8s_v3 virtual machine, or other similar machine types, and the target of 5000 IOPS, at least a P30 disk is required. Host caching must be set to ReadOnly for low latency and high IOPS and throughput when reading data. Reading data from the cache, which is present either in the VM memory or in the local SSD disk, is much faster than reading from the disk, which is in the blob storage. | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.10/html/scalability_and_performance/optimizing-storage |
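When sizing the directories from Table 9.3, it can help to check current usage on a node. The following one-liner is a sketch; it assumes shell access to the node (for example through a debug session) and only reports the directories named above.

```bash
# Report current usage of the main OpenShift Container Platform data directories.
# /var/lib/etcd exists only on control plane nodes; drop it when checking workers.
du -sh /var/log /var/lib/etcd /var/lib/containers /var/lib/kubelet
```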
3.4.6. Displaying Comprehensive User Information | 3.4.6. Displaying Comprehensive User Information When administering users and groups on your system, you need a good tool to monitor their configuration and activity on the system. Red Hat Enterprise Linux 6 provides you with the lslogins command-line utility, which gives you a comprehensive overview of users and groups, not only regarding user or group account configuration but also their activity on the system. The general syntax of lslogins is the following: lslogins [ OPTIONS ] where OPTIONS can be one or more available options and their related parameters. See the lslogins (1) manual page or the output of the lslogins --help command for the complete list of available options and their usage. The lslogins utility gives versatile information in a variety of formats based on the chosen options. The following examples introduce the most basic as well as some of the most useful combinations. Running the lslogins command without any options shows default information about all system and user accounts on the system. Specifically, their UID, user name, and GECOS information, as well as information about the user's last login to the system, and whether their password is locked or login by password disabled. Example 3.13. Displaying basic information about all accounts on the system To display detailed information about a single user, run the lslogins LOGIN command, where LOGIN is either a UID or a user name. The following example displays detailed information about John Doe 's account and his activity on the system: Example 3.14. Displaying detailed information about a single account If you use the --logins= LOGIN option, you can display information about a group of accounts that are specified as a list of UIDs or user names. Specifying the --output= COLUMNS option, where COLUMNS is a list of available output parameters, you can customize the output of the lslogins command. For example, the following command shows login activity of the users root, jsmith, jdoe, and esmith: Example 3.15. Displaying specific information about a group of users The lslogins utility also distinguishes between system and user accounts. To address system accounts in your query, use the --system-accs option. To address user accounts, use the --user-accs . For example, the following command displays information about supplementary groups and password expirations for all user accounts: Example 3.16. Displaying information about supplementary groups and password expiration for all user accounts The ability to format the output of lslogins commands according to the user's needs makes lslogins an ideal tool to use in scripts and for automatic processing. For example, the following command returns a single string that represents the time and date of the last login. This string can be passed as input to another utility for further processing. Example 3.17. Displaying a single piece of information without the heading | [
"~]# lslogins UID USER PWD-LOCK PWD-DENY LAST-LOGIN GECOS 0 root 0 0 root 1 bin 0 1 bin 2 daemon 0 1 daemon 3 adm 0 1 adm 4 lp 0 1 lp 5 sync 0 1 sync 6 shutdown 0 1 Jul21/16:20 shutdown 7 halt 0 1 halt 8 mail 0 1 mail 10 uucp 0 1 uucp 11 operator 0 1 operator 12 games 0 1 games 13 gopher 0 1 gopher 14 ftp 0 1 FTP User 29 rpcuser 0 1 RPC Service User 32 rpc 0 1 Rpcbind Daemon 38 ntp 0 1 42 gdm 0 1 48 apache 0 1 Apache 68 haldaemon 0 1 HAL daemon 69 vcsa 0 1 virtual console memory owner 72 tcpdump 0 1 74 sshd 0 1 Privilege-separated SSH 81 dbus 0 1 System message bus 89 postfix 0 1 99 nobody 0 1 Nobody 113 usbmuxd 0 1 usbmuxd user 170 avahi-autoipd 0 1 Avahi IPv4LL Stack 173 abrt 0 1 497 pulse 0 1 PulseAudio System Daemon 498 saslauth 0 1 Saslauthd user 499 rtkit 0 1 RealtimeKit 500 jsmith 0 0 10:56:12 John Smith 501 jdoe 0 0 12:13:53 John Doe 502 esmith 0 0 12:59:05 Emily Smith 503 jeyre 0 0 12:22:14 Jane Eyre 65534 nfsnobody 0 1 Anonymous NFS User",
"~]# lslogins jdoe Username: jdoe UID: 501 Gecos field: John Doe Home directory: /home/jdoe Shell: /bin/bash No login: no Password is locked: no Password no required: no Login by password disabled: no Primary group: jdoe GID: 501 Supplementary groups: users Supplementary group IDs: 100 Last login: 12:13:53 Last terminal: pts/3 Last hostname: 192.168.100.1 Hushed: no Password expiration warn interval: 7 Password changed: Aug01/02:00 Maximal change time: 99999 Password expiration: Sep01/02:00 Selinux context: unconfined_u:unconfined_r:unconfined_t:s0-s0:c0.c1023",
"~]# lslogins --logins=0,500,jdoe,esmith > --output=UID,USER,LAST-LOGIN,LAST-TTY,FAILED-LOGIN,FAILED-TTY UID USER LAST-LOGIN LAST-TTY FAILED-LOGIN FAILED-TTY 0 root 500 jsmith 10:56:12 pts/2 501 jdoe 12:13:53 pts/3 502 esmith 15:46:16 pts/3 15:46:09 ssh:notty",
"~]# lslogins --user-accs --supp-groups --acc-expiration UID USER GID GROUP SUPP-GIDS SUPP-GROUPS PWD-WARN PWD-MIN PWD-MAX PWD-CHANGE PWD-EXPIR 0 root 0 root 7 99999 Jul21/02:00 500 jsmith 500 jsmith 1000,100 staff,users 7 99999 Jul21/02:00 501 jdoe 501 jdoe 100 users 7 99999 Aug01/02:00 Sep01/02:00 502 esmith 502 esmith 100 users 7 99999 Aug01/02:00 503 jeyre 503 jeyre 1000,100 staff,users 7 99999 Jul28/02:00 Sep01/02:00 65534 nfsnobody 65534 nfsnobody Jul21/02:00",
"~]# lslogins --logins=jsmith --output=LAST-LOGIN --time-format=iso | tail -1 2014-08-06T10:56:12+0200"
]
| https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/deployment_guide/sec-displaying_comprehensive_user_information |
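Because the output columns and time format are selectable, lslogins combines well with other tools in scripts. The following sketch reuses only options shown in the examples above to dump last-login times for regular user accounts in a machine-readable form; any redirection or post-processing is left to the caller.

```bash
#!/bin/sh
# List regular (non-system) user accounts with their last login time in ISO format.
lslogins --user-accs --output=USER,LAST-LOGIN --time-format=iso
```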
Chapter 4. Configuring Red Hat High Availability clusters on AWS | Chapter 4. Configuring Red Hat High Availability clusters on AWS This chapter includes information and procedures for configuring a Red Hat High Availability (HA) cluster on Amazon Web Services (AWS) using EC2 instances as cluster nodes. You have a number of options for obtaining the Red Hat Enterprise Linux (RHEL) images you use for your cluster. For information on image options for AWS, see Red Hat Enterprise Linux Image Options on AWS . This chapter includes prerequisite procedures for setting up your environment for AWS. Once you have set up your environment, you can create and configure EC2 instances. This chapter also includes procedures specific to the creation of HA clusters, which transform individual nodes into a cluster of HA nodes on AWS. These include procedures for installing the High Availability packages and agents on each cluster node, configuring fencing, and installing AWS network resource agents. This chapter refers to the Amazon documentation in a number of places. For many procedures, see the referenced Amazon documentation for more information. Prerequisites You need to install the AWS command line interface (CLI). For more information on installing AWS CLI, see Installing the AWS CLI . Enable your subscriptions in the Red Hat Cloud Access program . The Red Hat Cloud Access program allows you to move your Red Hat subscriptions from physical or on-premise systems onto AWS with full support from Red Hat. Additional resources Red Hat Cloud Access Reference Guide Red Hat in the Public Cloud Red Hat Enterprise Linux on Amazon EC2 - FAQs Setting up with Amazon EC2 Red Hat on Amazon Web Services Support Policies for RHEL High Availability Clusters 4.1. Creating the AWS Access Key and AWS Secret Access Key You need to create an AWS Access Key and AWS Secret Access Key before you install the AWS CLI. The fencing and resource agent APIs use the AWS Access Key and Secret Access Key to connect to each node in the cluster. Complete the following steps to create these keys. Prerequisites Your IAM user account must have Programmatic access. See Setting up the AWS Environment for more information. Procedure Launch the AWS Console . Click on your AWS Account ID to display the drop-down menu and select My Security Credentials . Click Users . Select the user to open the Summary screen. Click the Security credentials tab. Click Create access key . Download the .csv file (or save both keys). You need to enter these keys when creating the fencing device. 4.2. Installing the HA packages and agents Complete the following steps on all nodes to install the HA packages and agents. Procedure Enter the following command to remove the AWS Red Hat Update Infrastructure (RHUI) client. Because you are going to use a Red Hat Cloud Access subscription, you should not use AWS RHUI in addition to your subscription. Register the VM with Red Hat. Disable all repositories. Enable the RHEL 7 Server and RHEL 7 Server HA repositories. Update all packages. Reboot if the kernel is updated. Install pcs, pacemaker, fence agent, and resource agent. The user hacluster was created during the pcs and pacemaker installation in the step. Create a password for hacluster on all cluster nodes. Use the same password for all nodes. Add the high availability service to the RHEL Firewall if firewalld.service is enabled. Start the pcs service and enable it to start on boot. Verification step Ensure the pcs service is running. 4.3. 
Creating a cluster Complete the following steps to create the cluster of nodes. Procedure On one of the nodes, enter the following command to authenticate the pcs user hacluster . Specify the name of each node in the cluster. Example: Create the cluster. Example: Verification steps Enable the cluster. Start the cluster. Example: 4.4. Creating a fencing device Complete the following steps to configure fencing. Procedure Enter the following AWS metadata query to get the Instance ID for each node. You need these IDs to configure the fence device. See Instance Metadata and User Data for additional information. Example: Create a fence device. Use the pcmk_host_map command to map the RHEL host name to the Instance ID. Use the AWS Access Key and AWS Secret Access Key you previously set up in Creating the AWS Access Key and AWS Secret Access Key . Example: Verification steps Test the fencing agent for one of the other nodes. Example: Check the status to verify that the node is fenced. Example: 4.5. Installing the AWS CLI on cluster nodes Previously, you installed the AWS CLI on your host system. You now need to install the AWS CLI on cluster nodes before you configure the network resource agents. Complete the following procedure on each cluster node. Prerequisites You must have created an AWS Access Key and AWS Secret Access Key. For more information, see Creating the AWS Access Key and AWS Secret Access Key . Procedure Perform the procedure Installing the AWS CLI . Enter the following command to verify that the AWS CLI is configured properly. The instance IDs and instance names should display. Example: 4.6. Installing network resource agents For HA operations to work, the cluster uses AWS networking resource agents to enable failover functionality. If a node does not respond to a heartbeat check in a set time, the node is fenced and operations fail over to an additional node in the cluster. Network resource agents need to be configured for this to work. Add the two resources to the same group to enforce order and colocation constraints. Create a secondary private IP resource and virtual IP resource Complete the following procedure to add a secondary private IP address and create a virtual IP. You can complete this procedure from any node in the cluster. Procedure Enter the following command to view the AWS Secondary Private IP Address resource agent (awsvip) description. This shows the options and default operations for this agent. Enter the following command to create the Secondary Private IP address using an unused private IP address in the VPC CIDR block. Example: Create a virtual IP resource. This is a VPC IP address that can be rapidly remapped from the fenced node to the failover node, masking the failure of the fenced node within the subnet. Example: Verification step Enter the pcs status command to verify that the resources are running. Example: Create an elastic IP address An elastic IP address is a public IP address that can be rapidly remapped from the fenced node to the failover node, masking the failure of the fenced node. Note that this is different from the virtual IP resource created earlier. The elastic IP address is used for public-facing Internet connections instead of subnet connections. Add the two resources to the same group that was previously created to enforce order and colocation constraints. Enter the following AWS CLI command to create an elastic IP address. Enter the following command to view the AWS Secondary Elastic IP Address resource agent (awseip) description. 
This shows the options and default operations for this agent. Create the Secondary Elastic IP address resource using the allocated IP address created in Step 1. Example: Verification step Enter the pcs status command to verify that the resource is running. Example: Test the elastic IP address Enter the following commands to verify the virtual IP (awsvip) and elastic IP (awseip) resources are working. Procedure Launch an SSH session from your local workstation to the elastic IP address previously created. Example: Verify that the host you connected to via SSH is the host associated with the elastic resource created. Additional resources High Availability Add-On Overview High Availability Add-On Administration High Availability Add-On Reference 4.7. Configuring shared block storage This section provides an optional procedure for configuring shared block storage for a Red Hat High Availability cluster with Amazon EBS Multi-Attach volumes. The procedure assumes three instances (a three-node cluster) with a 1TB shared disk. Procedure Create a shared block volume using the AWS command create-volume . For example, the following command creates a volume in the us-east-1a availability zone. Note You need the VolumeId in the step. For each instance in your cluster, attach a shared block volume using the AWS command attach-volume . Use your <instance_id> and <volume_id> . For example, the following command attaches a shared block volume vol-042a5652867304f09 to instance i-0eb803361c2c887f2 . Verification steps For each instance in your cluster, verify that the block device is available by using the SSH command with your instance <ip_address> . For example, the following command lists details including the host name and block device for the instance IP 198.51.100.3 . Use the ssh command to verify that each instance in your cluster uses the same shared disk. For example, the following command lists details including the host name and shared disk volume ID for the instance IP address 198.51.100.3 . After you have verified that the shared disk is attached to each instance, you can configure resilient storage for the cluster. For information on configuring resilient storage for a Red Hat High Availability cluster, see Configuring a GFS2 File System in a Cluster . For general information on GFS2 file systems, see Configuring and managing GFS2 file systems . | [
"sudo -i yum -y remove rh-amazon-rhui-client*",
"subscription-manager register --auto-attach",
"subscription-manager repos --disable=*",
"subscription-manager repos --enable=rhel-7-server-rpms subscription-manager repos --enable=rhel-ha-for-rhel-7-server-rpms",
"yum update -y",
"reboot",
"yum -y install pcs pacemaker fence-agents-aws resource-agents",
"passwd hacluster",
"firewall-cmd --permanent --add-service=high-availability firewall-cmd --reload",
"systemctl enable pcsd.service --now",
"systemctl is-active pcsd.service",
"pcs host auth _hostname1_ _hostname2_ _hostname3_",
"pcs host auth node01 node02 node03 Username: hacluster Password: node01: Authorized node02: Authorized node03: Authorized",
"pcs cluster setup --name _hostname1_ _hostname2_ _hostname3_",
"pcs cluster setup --name newcluster node01 node02 node03 ...omitted Synchronizing pcsd certificates on nodes node01, node02, node03 node02: Success node03: Success node01: Success Restarting pcsd on the nodes in order to reload the certificates node02: Success node03: Success node01: Success",
"pcs cluster enable --all",
"pcs cluster start --all",
"pcs cluster enable --all node02: Cluster Enabled node03: Cluster Enabled node01: Cluster Enabled pcs cluster start --all node02: Starting Cluster node03: Starting Cluster node01: Starting Cluster",
"echo USD(curl -s http://169.254.169.254/latest/meta-data/instance-id)",
"echo USD(curl -s http://169.254.169.254/latest/meta-data/instance-id) i-07f1ac63af0ec0ac6",
"pcs stonith create cluster_fence fence_aws access_key=access-key secret_key=_secret-access-key_ region=_region_ pcmk_host_map=\"rhel-hostname-1:Instance-ID-1;rhel-hostname-2:Instance-ID-2;rhel-hostname-3:Instance-ID-3\"",
"pcs stonith create clusterfence fence_aws access_key=AKIAI*******6MRMJA secret_key=a75EYIG4RVL3h*******K7koQ8dzaDyn5yoIZ/ region=us-east-1 pcmk_host_map=\"ip-10-0-0-48:i-07f1ac63af0ec0ac6;ip-10-0-0-46:i-063fc5fe93b4167b2;ip-10-0-0-58:i-08bd39eb03a6fd2c7\" power_timeout=240 pcmk_reboot_timeout=480 pcmk_reboot_retries=4",
"pcs stonith fence _awsnodename_",
"pcs stonith fence ip-10-0-0-58 Node: ip-10-0-0-58 fenced",
"watch pcs status",
"pcs status Cluster name: newcluster Stack: corosync Current DC: ip-10-0-0-46 (version 1.1.18-11.el7-2b07d5c5a9) - partition with quorum Last updated: Fri Mar 2 20:01:31 2018 Last change: Fri Mar 2 19:24:59 2018 by root via cibadmin on ip-10-0-0-48 3 nodes configured 1 resource configured Online: [ ip-10-0-0-46 ip-10-0-0-48 ip-10-0-0-58 ] Full list of resources: clusterfence (stonith:fence_aws): Started ip-10-0-0-46 Daemon Status: corosync: active/disabled pacemaker: active/disabled pcsd: active/enabled",
"aws ec2 describe-instances --output text --query 'Reservations[*].Instances[*].[InstanceId,Tags[?Key==`Name`].Value]' i-07f1ac63af0ec0ac6 ip-10-0-0-48 i-063fc5fe93b4167b2 ip-10-0-0-46 i-08bd39eb03a6fd2c7 ip-10-0-0-58",
"pcs resource describe awsvip",
"pcs resource create privip awsvip secondary_private_ip=_Unused-IP-Address_ --group _group-name_",
"pcs resource create privip awsvip secondary_private_ip=10.0.0.68 --group networking-group",
"pcs resource create vip IPaddr2 ip=_secondary-private-IP_ --group _group-name_",
"root@ip-10-0-0-48 ~]# pcs resource create vip IPaddr2 ip=10.0.0.68 --group networking-group",
"pcs status",
"pcs status Cluster name: newcluster Stack: corosync Current DC: ip-10-0-0-46 (version 1.1.18-11.el7-2b07d5c5a9) - partition with quorum Last updated: Fri Mar 2 22:34:24 2018 Last change: Fri Mar 2 22:14:58 2018 by root via cibadmin on ip-10-0-0-46 3 nodes configured 3 resources configured Online: [ ip-10-0-0-46 ip-10-0-0-48 ip-10-0-0-58 ] Full list of resources: clusterfence (stonith:fence_aws): Started ip-10-0-0-46 Resource Group: networking-group privip (ocf::heartbeat:awsvip): Started ip-10-0-0-48 vip (ocf::heartbeat:IPaddr2): Started ip-10-0-0-58 Daemon Status: corosync: active/disabled pacemaker: active/disabled pcsd: active/enabled",
"aws ec2 allocate-address --domain vpc --output text eipalloc-4c4a2c45 vpc 35.169.153.122",
"pcs resource describe awseip",
"pcs resource create elastic awseip elastic_ip=_Elastic-IP-Address_allocation_id=_Elastic-IP-Association-ID_ --group networking-group",
"pcs resource create elastic awseip elastic_ip=35.169.153.122 allocation_id=eipalloc-4c4a2c45 --group networking-group",
"pcs status",
"pcs status Cluster name: newcluster Stack: corosync Current DC: ip-10-0-0-58 (version 1.1.18-11.el7-2b07d5c5a9) - partition with quorum Last updated: Mon Mar 5 16:27:55 2018 Last change: Mon Mar 5 15:57:51 2018 by root via cibadmin on ip-10-0-0-46 3 nodes configured 4 resources configured Online: [ ip-10-0-0-46 ip-10-0-0-48 ip-10-0-0-58 ] Full list of resources: clusterfence (stonith:fence_aws): Started ip-10-0-0-46 Resource Group: networking-group privip (ocf::heartbeat:awsvip): Started ip-10-0-0-48 vip (ocf::heartbeat:IPaddr2): Started ip-10-0-0-48 elastic (ocf::heartbeat:awseip): Started ip-10-0-0-48 Daemon Status: corosync: active/disabled pacemaker: active/disabled pcsd: active/enabled",
"ssh -l ec2-user -i ~/.ssh/<KeyName>.pem elastic-IP",
"ssh -l ec2-user -i ~/.ssh/cluster-admin.pem 35.169.153.122",
"aws ec2 create-volume --availability-zone availability_zone --no-encrypted --size 1024 --volume-type io1 --iops 51200 --multi-attach-enabled",
"aws ec2 create-volume --availability-zone us-east-1a --no-encrypted --size 1024 --volume-type io1 --iops 51200 --multi-attach-enabled { \"AvailabilityZone\": \"us-east-1a\", \"CreateTime\": \"2020-08-27T19:16:42.000Z\", \"Encrypted\": false, \"Size\": 1024, \"SnapshotId\": \"\", \"State\": \"creating\", \"VolumeId\": \"vol-042a5652867304f09\", \"Iops\": 51200, \"Tags\": [ ], \"VolumeType\": \"io1\" }",
"aws ec2 attach-volume --device /dev/xvdd --instance-id instance_id --volume-id volume_id",
"aws ec2 attach-volume --device /dev/xvdd --instance-id i-0eb803361c2c887f2 --volume-id vol-042a5652867304f09 { \"AttachTime\": \"2020-08-27T19:26:16.086Z\", \"Device\": \"/dev/xvdd\", \"InstanceId\": \"i-0eb803361c2c887f2\", \"State\": \"attaching\", \"VolumeId\": \"vol-042a5652867304f09\" }",
"ssh <ip_address> \"hostname ; lsblk -d | grep ' 1T '\"",
"ssh 198.51.100.3 \"hostname ; lsblk -d | grep ' 1T '\" nodea nvme2n1 259:1 0 1T 0 disk",
"ssh ip_address \"hostname ; lsblk -d | grep ' 1T ' | awk '{print \\USD1}' | xargs -i udevadm info --query=all --name=/dev/{} | grep '^E: ID_SERIAL='\"",
"ssh 198.51.100.3 \"hostname ; lsblk -d | grep ' 1T ' | awk '{print \\USD1}' | xargs -i udevadm info --query=all --name=/dev/{} | grep '^E: ID_SERIAL='\" nodea E: ID_SERIAL=Amazon Elastic Block Store_vol0fa5342e7aedf09f7"
]
| https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/deploying_red_hat_enterprise_linux_7_on_public_cloud_platforms/configuring-a-red-hat-high-availability-cluster-on-aws_cloud-content |
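For the shared block storage procedure, the attach-volume call has to be repeated once per cluster node. The following is a minimal sketch of doing that in one loop from the workstation where the AWS CLI is configured; it reuses the example instance IDs and volume ID shown above, which you must replace with your own values:
# Attach the shared Multi-Attach volume to every node in the example cluster.
for instance in i-07f1ac63af0ec0ac6 i-063fc5fe93b4167b2 i-08bd39eb03a6fd2c7; do
    aws ec2 attach-volume --device /dev/xvdd --instance-id "$instance" --volume-id vol-042a5652867304f09
done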
Chapter 3. Repositories | Chapter 3. Repositories Red Hat Enterprise Linux 9 is distributed through two main repositories: BaseOS AppStream Both repositories are required for a basic RHEL installation, and are available with all RHEL subscriptions. Content in the BaseOS repository is intended to provide the core set of the underlying OS functionality that forms the foundation for all installations. This content is available in the RPM format and is subject to support terms similar to those in previous releases of RHEL. For more information, see the Scope of Coverage Details document. Content in the AppStream repository includes additional user-space applications, runtime languages, and databases in support of the varied workloads and use cases. In addition, the CodeReady Linux Builder repository is available with all RHEL subscriptions. It provides additional packages for use by developers. Packages included in the CodeReady Linux Builder repository are unsupported. Additional resources Package manifest | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/considerations_in_adopting_rhel_9/ref_repositories_considerations-in-adopting-rhel-9
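For example, on a registered RHEL 9 system the CodeReady Linux Builder repository can be enabled with subscription-manager. The repository ID below follows the usual naming pattern for x86_64 and should be confirmed against the listing on your own system first:
# List candidate repository IDs, then enable the builder repository and verify.
subscription-manager repos --list | grep -i codeready
subscription-manager repos --enable codeready-builder-for-rhel-9-x86_64-rpms
dnf repolist enabled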
function::cmdline_str | function::cmdline_str Name function::cmdline_str - Fetch all command line arguments from current process Synopsis Arguments None Description Returns all arguments from the current process delimited by spaces. Returns the empty string when the arguments cannot be retrieved. | [
"cmdline_str:string()"
]
| https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/systemtap_tapset_reference/api-cmdline-str |
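As an illustrative one-liner (not part of the tapset reference itself), cmdline_str can be printed from a syscall probe to show the full command lines of processes opening files while a sample command runs; the sample command is arbitrary:
# Print the caller's full command line on every openat() system call.
stap -e 'probe syscall.openat { printf("%s\n", cmdline_str()) }' -c "cat /etc/hostname"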
Appendix C. Building cloud images for Red Hat Satellite | Appendix C. Building cloud images for Red Hat Satellite Use this section to build and register images to Red Hat Satellite. You can use a preconfigured Red Hat Enterprise Linux KVM guest QCOW2 image: Latest RHEL 9 KVM Guest Image Latest RHEL 8 KVM Guest Image These images contain cloud-init . To function properly, they must use ec2-compatible metadata services for provisioning an SSH key. Note For the KVM guest images: The root account in the image is disabled, but sudo access is granted to a special user named cloud-user . There is no root password set for this image. The root password is locked in /etc/shadow by placing !! in the second field. If you want to create custom Red Hat Enterprise Linux images, see Composing a customized Red Hat Enterprise Linux 9 Image or Composing a customized Red Hat Enterprise Linux 8 Image . C.1. Creating custom Red Hat Enterprise Linux images Prerequisites Use a Linux host machine to create an image. In this example, we use a Red Hat Enterprise Linux 7 Workstation. Use virt-manager on your workstation to complete this procedure. If you create the image on a remote server, connect to the server from your workstation with virt-manager . A Red Hat Enterprise Linux 7 or 6 ISO file (see Red Hat Enterprise Linux 7.4 Binary DVD or Red Hat Enterprise Linux 6.9 Binary DVD ). For more information about installing a Red Hat Enterprise Linux Workstation, see the Red Hat Enterprise Linux 7 Installation Guide . Before you can create custom images, install the following packages: Install libvirt , qemu-kvm , and graphical tools: Install the following command line tools: Note In the following procedures, enter all commands with the [root@host]# prompt on the workstation that hosts the libvirt environment. C.2. Supported clients in registration Satellite supports the following operating systems and architectures for registration. Supported host operating systems The hosts can use the following operating systems: Red Hat Enterprise Linux 9 and 8 Red Hat Enterprise Linux 7 and 6 with the ELS Add-On Supported host architectures The hosts can use the following architectures: AMD and Intel 64-bit architectures The 64-bit ARM architecture IBM Power Systems, Little Endian 64-bit IBM Z architectures C.3. Configuring a host for registration Configure your host for registration to Satellite Server or Capsule Server. You can use a configuration management tool to configure multiple hosts at once. Prerequisites The host must be using a supported operating system. For more information, see Section C.2, "Supported clients in registration" . The system clock on your Satellite Server and any Capsule Servers must be synchronized across the network. If the system clock is not synchronized, SSL certificate verification might fail. For example, you can use the Chrony suite for timekeeping. Procedure Enable and start a time-synchronization tool on your host. The host must be synchronized with the same NTP server as Satellite Server and any Capsule Servers. On Red Hat Enterprise Linux 7 and later: On Red Hat Enterprise Linux 6: Deploy the SSL CA file on your host so that the host can make a secured registration call. Find where Satellite stores the SSL CA file by navigating to Administer > Settings > Authentication and locating the value of the SSL CA file setting. Transfer the SSL CA file to your host securely, for example by using scp . Login to your host by using SSH. 
Copy the certificate to the truststore: Update the truststore: C.4. Registering a host You can register a host by using registration templates and set up various integration features and host tools during the registration process. Prerequisites Your Satellite account has the Register hosts role assigned or a role with equivalent permissions. You must have root privileges on the host that you want to register. You have configured the host for registration. For more information, see Section C.3, "Configuring a host for registration" . An activation key must be available for the host. For more information, see Managing Activation Keys in Managing content . Optional: If you want to register hosts to Red Hat Insights, you must synchronize the rhel-8-for-x86_64-baseos-rpms and rhel-8-for-x86_64-appstream-rpms repositories and make them available in the activation key that you use. This is required to install the insights-client package on hosts. Red Hat Satellite Client 6 repository for the operating system version of the host is synchronized on Satellite Server and enabled in the activation key you use. For more information, see Importing Content in Managing content . This repository is required for the remote execution pull client, Puppet agent, Tracer, and other tools. If you want to use Capsule Servers instead of your Satellite Server, ensure that you have configured your Capsule Servers accordingly. For more information, see Configuring Capsule for Host Registration and Provisioning in Installing Capsule Server . If your Satellite Server or Capsule Server is behind an HTTP proxy, configure the Subscription Manager on your host to use the HTTP proxy for connection. For more information, see How to access Red Hat Subscription Manager (RHSM) through a firewall or proxy in the Red Hat Knowledgebase . Procedure In the Satellite web UI, navigate to Hosts > Register Host . Enter the details for how you want the registered hosts to be configured. On the General tab, in the Activation Keys field, enter one or more activation keys to assign to hosts. Click Generate to generate a curl command. Run the curl command as root on the host that you want to register. After registration completes, any Ansible roles assigned to a host group you specified when configuring the registration template will run on the host. The registration details that you can specify include the following: On the General tab, in the Capsule field, you can select the Capsule to register hosts through. A Capsule behind a load balancer takes precedence over a Capsule selected in the Satellite web UI as the content source of the host. On the General tab, you can select the Insecure option to make the first call insecure. During this first call, the host downloads the CA file from Satellite. The host will use this CA file to connect to Satellite with all future calls making them secure. Red Hat recommends that you avoid insecure calls. If an attacker, located in the network between Satellite and a host, fetches the CA file from the first insecure call, the attacker will be able to access the content of the API calls to and from the registered host and the JSON Web Tokens (JWT). Therefore, if you have chosen to deploy SSH keys during registration, the attacker will be able to access the host using the SSH key. On the Advanced tab, in the Repositories field, you can list repositories to be added before the registration is performed. You do not have to specify repositories if you provide them in an activation key. 
On the Advanced tab, in the Token lifetime (hours) field, you can change the validity duration of the JSON Web Token (JWT) that Satellite uses for authentication. The duration of this token defines how long the generated curl command works. Note that Satellite applies the permissions of the user who generates the curl command to authorization of hosts. If the user loses or gains additional permissions, the permissions of the JWT change too. Therefore, do not delete, block, or change permissions of the user during the token duration. The scope of the JWTs is limited to the registration endpoints only and cannot be used anywhere else. CLI procedure Use the hammer host-registration generate-command to generate the curl command to register the host. On the host that you want to register, run the curl command as root . For more information, see the Hammer CLI help with hammer host-registration generate-command --help . Ansible procedure Use the redhat.satellite.registration_command module. For more information, see the Ansible module documentation with ansible-doc redhat.satellite.registration_command . API procedure Use the POST /api/registration_commands resource. For more information, see the full API reference at https://satellite.example.com/apidoc/v2.html . C.5. Installing and configuring Puppet agent manually You can install and configure the Puppet agent on a host manually. A configured Puppet agent is required on the host for Puppet integration with your Satellite. For more information about Puppet, see Managing configurations by using Puppet integration . Prerequisites Puppet must be enabled in your Satellite. For more information, see Enabling Puppet Integration with Satellite in Managing configurations by using Puppet integration . The host must have a Puppet environment assigned to it. Red Hat Satellite Client 6 repository for the operating system version of the host is synchronized on Satellite Server, available in the content view and the lifecycle environment of the host, and enabled for the host. For more information, see Changing the repository sets status for a host in Satellite in Managing content . Procedure Log in to the host as the root user. Install the Puppet agent package. On hosts running Red Hat Enterprise Linux 8 and above: On hosts running Red Hat Enterprise Linux 7 and below: Add the Puppet agent to PATH in your current shell using the following script: Configure the Puppet agent. Set the environment parameter to the name of the Puppet environment to which the host belongs: Start the Puppet agent service: Create a certificate for the host: In the Satellite web UI, navigate to Infrastructure > Capsules . From the list in the Actions column for the required Capsule Server, select Certificates . Click Sign to the right of the required host to sign the SSL certificate for the Puppet agent. On the host, run the Puppet agent again: C.6. Completing the Red Hat Enterprise Linux 7 image Procedure Update the system: Install the cloud-init packages: Open the /etc/cloud/cloud.cfg configuration file: Under the heading cloud_init_modules , add: The resolv-conf option automatically configures the resolv.conf when an instance boots for the first time. This file contains information related to the instance such as nameservers , domain and other options. 
Open the /etc/sysconfig/network file: Add the following line to avoid problems accessing the EC2 metadata service: Un-register the virtual machine so that the resulting image does not contain the same subscription details for every instance cloned based on it: Power off the instance: On your Red Hat Enterprise Linux Workstation, connect to the terminal as the root user and navigate to the /var/lib/libvirt/images/ directory: Reset and clean the image using the virt-sysprep command so it can be used to create instances without issues: Reduce image size using the virt-sparsify command. This command converts any free space within the disk image back to free space within the host: This creates a new rhel7-cloud.qcow2 file in the location where you enter the command. C.7. Completing the Red Hat Enterprise Linux 6 image Procedure Update the system: Install the cloud-init packages: Edit the /etc/cloud/cloud.cfg configuration file and under cloud_init_modules add: The resolv-conf option automatically configures the resolv.conf configuration file when an instance boots for the first time. This file contains information related to the instance such as nameservers , domain , and other options. To prevent network issues, create the /etc/udev/rules.d/75-persistent-net-generator.rules file as follows: This prevents /etc/udev/rules.d/70-persistent-net.rules file from being created. If /etc/udev/rules.d/70-persistent-net.rules is created, networking might not function properly when booting from snapshots (the network interface is created as "eth1" rather than "eth0" and IP address is not assigned). Add the following line to /etc/sysconfig/network to avoid problems accessing the EC2 metadata service: Un-register the virtual machine so that the resulting image does not contain the same subscription details for every instance cloned based on it: Power off the instance: On your Red Hat Enterprise Linux Workstation, log in as root and reset and clean the image using the virt-sysprep command so it can be used to create instances without issues: Reduce image size using the virt-sparsify command. This command converts any free space within the disk image back to free space within the host: This creates a new rhel6-cloud.qcow2 file in the location where you enter the command. Note You must manually resize the partitions of instances based on the image in accordance with the disk space in the flavor that is applied to the instance. C.8. steps Repeat the procedures for every image that you want to provision with Satellite. Move the image to the location where you want to store for future use. | [
"yum install virt-manager virt-viewer libvirt qemu-kvm",
"yum install virt-install libguestfs-tools-c",
"systemctl enable --now chronyd",
"chkconfig --add ntpd chkconfig ntpd on service ntpd start",
"cp My_SSL_CA_file .pem /etc/pki/ca-trust/source/anchors",
"update-ca-trust",
"dnf install puppet-agent",
"yum install puppet-agent",
". /etc/profile.d/puppet-agent.sh",
"puppet config set server satellite.example.com --section agent puppet config set environment My_Puppet_Environment --section agent",
"puppet resource service puppet ensure=running enable=true",
"puppet ssl bootstrap",
"puppet ssl bootstrap",
"yum update",
"yum install cloud-utils-growpart cloud-init",
"vi /etc/cloud/cloud.cfg",
"- resolv-conf",
"vi /etc/sysconfig/network",
"NOZEROCONF=yes",
"subscription-manager repos --disable=* subscription-manager unregister",
"poweroff",
"cd /var/lib/libvirt/images/",
"virt-sysprep -d rhel7",
"virt-sparsify --compress rhel7.qcow2 rhel7-cloud.qcow2",
"yum update",
"yum install cloud-utils-growpart cloud-init",
"- resolv-conf",
"echo \"#\" > /etc/udev/rules.d/75-persistent-net-generator.rules",
"NOZEROCONF=yes",
"subscription-manager repos --disable=* subscription-manager unregister yum clean all",
"poweroff",
"virt-sysprep -d rhel6",
"virt-sparsify --compress rhel6.qcow2 rhel6-cloud.qcow2"
]
| https://docs.redhat.com/en/documentation/red_hat_satellite/6.16/html/provisioning_hosts/Building_Cloud_Images_provisioning |
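Before uploading a finished image, it can be worth confirming that virt-sparsify actually reduced the file. A quick check, assuming the file names used in the examples above:
# Compare the original and sparsified images and inspect the qcow2 metadata.
ls -lh rhel7.qcow2 rhel7-cloud.qcow2
qemu-img info rhel7-cloud.qcow2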
10.5.61. ProxyRequests | 10.5.61. ProxyRequests To configure the Apache HTTP Server to function as a proxy server, remove the hash mark ( # ) from the beginning of the <IfModule mod_proxy.c> line, the ProxyRequests line, and each line in the <Proxy> stanza. Set the ProxyRequests directive to On , and set which domains are allowed access to the server in the Allow from directive of the <Proxy> stanza. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/reference_guide/s2-apache-proxyrequests
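After editing httpd.conf, a quick way to confirm the stanza is active, check the syntax, and apply the change is sketched below; the configuration path and init-script style assumed here are the defaults on that release:
grep -E 'ProxyRequests|<Proxy' /etc/httpd/conf/httpd.conf
apachectl configtest
service httpd restart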
Chapter 56. JSON Gson | Chapter 56. JSON Gson Gson is a Data Format which uses the Gson Library . from("activemq:My.Queue"). marshal().json(JsonLibrary.Gson). to("mqseries:Another.Queue"); 56.1. Dependencies When using json-gson with Red Hat build of Camel Spring Boot make sure to use the following Maven dependency to have support for auto configuration: <dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-gson-starter</artifactId> </dependency> 56.2. Gson Options The JSON Gson dataformat supports 3 options, which are listed below. Name Default Java Type Description prettyPrint Boolean To enable pretty printing output nicely formatted. Is by default false. unmarshalType String Class name of the java type to use when unmarshalling. contentTypeHeader Boolean Whether the data format should set the Content-Type header with the type from the data format. For example application/xml for data formats marshalling to XML, or application/json for data formats marshalling to JSON. 56.3. Spring Boot Auto-Configuration The component supports 4 options, which are listed below. Name Description Default Type camel.dataformat.json-gson.content-type-header Whether the data format should set the Content-Type header with the type from the data format. For example application/xml for data formats marshalling to XML, or application/json for data formats marshalling to JSON. true Boolean camel.dataformat.json-gson.enabled Whether to enable auto configuration of the json-gson data format. This is enabled by default. Boolean camel.dataformat.json-gson.pretty-print To enable pretty printing output nicely formatted. Is by default false. false Boolean camel.dataformat.json-gson.unmarshal-type Class name of the java type to use when unmarshalling. String | [
"from(\"activemq:My.Queue\"). marshal().json(JsonLibrary.Gson). to(\"mqseries:Another.Queue\");",
"<dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-gson-starter</artifactId> </dependency>"
]
| https://docs.redhat.com/en/documentation/red_hat_build_of_apache_camel/4.4/html/red_hat_build_of_apache_camel_for_spring_boot_reference/csb-camel-json-gson-dataformat-starter |
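For example, with Red Hat build of Camel Spring Boot the options above map directly to application.properties entries; the property names come from the tables in this chapter, while com.example.Order is a placeholder for your own POJO class:
camel.dataformat.json-gson.pretty-print=true
camel.dataformat.json-gson.unmarshal-type=com.example.Order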
Chapter 17. command | Chapter 17. command This chapter describes the commands under the command command. 17.1. command list List recognized commands by group Usage: Table 17.1. Optional Arguments Value Summary -h, --help Show this help message and exit --group <group-keyword> Show commands filtered by a command group, for example: identity, volume, compute, image, network and other keywords Table 17.2. Output Formatters Value Summary -f {csv,json,table,value,yaml}, --format {csv,json,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated --sort-column SORT_COLUMN Specify the column(s) to sort the data (columns specified first have a priority, non-existing columns are ignored), can be repeated Table 17.3. CSV Formatter Value Summary --quote {all,minimal,none,nonnumeric} When to include quotes, defaults to nonnumeric Table 17.4. JSON Formatter Value Summary --noindent Whether to disable indenting the JSON Table 17.5. Table Formatter Value Summary --max-width <integer> Maximum display width, <1 to disable. You can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. Implied if --max-width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. | [
"openstack command list [-h] [-f {csv,json,table,value,yaml}] [-c COLUMN] [--quote {all,minimal,none,nonnumeric}] [--noindent] [--max-width <integer>] [--fit-width] [--print-empty] [--sort-column SORT_COLUMN] [--group <group-keyword>]"
]
| https://docs.redhat.com/en/documentation/red_hat_openstack_platform/16.0/html/command_line_interface_reference/command |
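For instance, combining the documented options narrows the listing to one command group and changes the output format; the group keywords here are only examples:
openstack command list --group volume
openstack command list --group image -f json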
Chapter 16. Azure Storage Blob Source | Chapter 16. Azure Storage Blob Source Consume Files from Azure Storage Blob. Important The Azure Storage Blob Source Kamelet is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see https://access.redhat.com/support/offerings/techpreview . 16.1. Configuration Options The following table summarizes the configuration options available for the azure-storage-blob-source Kamelet: Property Name Description Type Default Example accessKey * Access Key The Azure Storage Blob access Key. string accountName * Account Name The Azure Storage Blob account name. string containerName * Container Name The Azure Storage Blob container name. string period * Period Between Polls The interval between fetches to the Azure Storage Container in milliseconds integer 10000 credentialType Credential Type Determines the credential strategy to adopt. Possible values are SHARED_ACCOUNT_KEY, SHARED_KEY_CREDENTIAL and AZURE_IDENTITY string "SHARED_ACCOUNT_KEY" Note Fields marked with an asterisk (*) are mandatory. 16.2. Dependencies At runtime, the azure-storage-blob-source Kamelet relies upon the presence of the following dependencies: camel:azure-storage-blob camel:jsonpath camel:core camel:timer camel:kamelet 16.3. Usage This section describes how you can use the azure-storage-blob-source . 16.3.1. Knative Source You can use the azure-storage-blob-source Kamelet as a Knative source by binding it to a Knative object. azure-storage-blob-source-binding.yaml apiVersion: camel.apache.org/v1alpha1 kind: KameletBinding metadata: name: azure-storage-blob-source-binding spec: source: ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: azure-storage-blob-source properties: accessKey: "The Access Key" accountName: "The Account Name" containerName: "The Container Name" sink: ref: kind: Channel apiVersion: messaging.knative.dev/v1 name: mychannel 16.3.1.1. Prerequisite Make sure you have "Red Hat Integration - Camel K" installed into the OpenShift cluster you're connected to. 16.3.1.2. Procedure for using the cluster CLI Save the azure-storage-blob-source-binding.yaml file to your local drive, and then edit it as needed for your configuration. Run the source by using the following command: oc apply -f azure-storage-blob-source-binding.yaml 16.3.1.3. Procedure for using the Kamel CLI Configure and run the source by using the following command: kamel bind azure-storage-blob-source -p "source.accessKey=The Access Key" -p "source.accountName=The Account Name" -p "source.containerName=The Container Name" channel:mychannel This command creates the KameletBinding in the current namespace on the cluster. 16.3.2. Kafka Source You can use the azure-storage-blob-source Kamelet as a Kafka source by binding it to a Kafka topic. 
azure-storage-blob-source-binding.yaml apiVersion: camel.apache.org/v1alpha1 kind: KameletBinding metadata: name: azure-storage-blob-source-binding spec: source: ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: azure-storage-blob-source properties: accessKey: "The Access Key" accountName: "The Account Name" containerName: "The Container Name" sink: ref: kind: KafkaTopic apiVersion: kafka.strimzi.io/v1beta1 name: my-topic 16.3.2.1. Prerequisites Ensure that you've installed the AMQ Streams operator in your OpenShift cluster and created a topic named my-topic in the current namespace. Make also sure you have "Red Hat Integration - Camel K" installed into the OpenShift cluster you're connected to. 16.3.2.2. Procedure for using the cluster CLI Save the azure-storage-blob-source-binding.yaml file to your local drive, and then edit it as needed for your configuration. Run the source by using the following command: oc apply -f azure-storage-blob-source-binding.yaml 16.3.2.3. Procedure for using the Kamel CLI Configure and run the source by using the following command: kamel bind azure-storage-blob-source -p "source.accessKey=The Access Key" -p "source.accountName=The Account Name" -p "source.containerName=The Container Name" kafka.strimzi.io/v1beta1:KafkaTopic:my-topic This command creates the KameletBinding in the current namespace on the cluster. 16.4. Kamelet source file https://github.com/openshift-integration/kamelet-catalog/azure-storage-blob-source.kamelet.yaml | [
"apiVersion: camel.apache.org/v1alpha1 kind: KameletBinding metadata: name: azure-storage-blob-source-binding spec: source: ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: azure-storage-blob-source properties: accessKey: \"The Access Key\" accountName: \"The Account Name\" containerName: \"The Container Name\" sink: ref: kind: Channel apiVersion: messaging.knative.dev/v1 name: mychannel",
"apply -f azure-storage-blob-source-binding.yaml",
"kamel bind azure-storage-blob-source -p \"source.accessKey=The Access Key\" -p \"source.accountName=The Account Name\" -p \"source.containerName=The Container Name\" channel:mychannel",
"apiVersion: camel.apache.org/v1alpha1 kind: KameletBinding metadata: name: azure-storage-blob-source-binding spec: source: ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: azure-storage-blob-source properties: accessKey: \"The Access Key\" accountName: \"The Account Name\" containerName: \"The Container Name\" sink: ref: kind: KafkaTopic apiVersion: kafka.strimzi.io/v1beta1 name: my-topic",
"apply -f azure-storage-blob-source-binding.yaml",
"kamel bind azure-storage-blob-source -p \"source.accessKey=The Access Key\" -p \"source.accountName=The Account Name\" -p \"source.containerName=The Container Name\" kafka.strimzi.io/v1beta1:KafkaTopic:my-topic"
]
| https://docs.redhat.com/en/documentation/red_hat_build_of_apache_camel_k/1.10.9/html/kamelets_reference/azure-storage-blob-source |
5.6. Configuring PPP (Point-to-Point) Settings | 5.6. Configuring PPP (Point-to-Point) Settings Authentication Methods In most cases, the provider's PPP servers supports all the allowed authentication methods. If a connection fails, the user should disable support for some methods, depending on the PPP server configuration. Use point-to-point encryption (MPPE) Microsoft Point-To-Point Encryption protocol ( RFC 3078 ). Allow BSD data compression PPP BSD Compression Protocol ( RFC 1977 ). Allow Deflate data compression PPP Deflate Protocol ( RFC 1979 ). Use TCP header compression Compressing TCP/IP Headers for Low-Speed Serial Links ( RFC 1144 ). Send PPP echo packets LCP Echo-Request and Echo-Reply Codes for loopback tests ( RFC 1661 ). Note Since the PPP support in NetworkManager is optional, to configure PPP settings, make sure that the NetworkManager-ppp package is already installed. | null | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/networking_guide/sec-configuring_ppp_point-to-point_settings |
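A short sketch of preparing a RHEL 7 system for these settings — install the optional plugin that the note refers to, restart NetworkManager, and list connections to confirm it is running:
yum install NetworkManager-ppp
systemctl restart NetworkManager
nmcli connection show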
Chapter 13. Red Hat Quay quota management and enforcement overview | Chapter 13. Red Hat Quay quota management and enforcement overview With Red Hat Quay, users have the ability to report storage consumption and to contain registry growth by establishing configured storage quota limits. On-premise Red Hat Quay users are now equipped with the following capabilities to manage the capacity limits of their environment: Quota reporting: With this feature, a superuser can track the storage consumption of all their organizations. Additionally, users can track the storage consumption of their assigned organization. Quota management: With this feature, a superuser can define soft and hard checks for Red Hat Quay users. Soft checks tell users if the storage consumption of an organization reaches their configured threshold. Hard checks prevent users from pushing to the registry when storage consumption reaches the configured limit. Together, these features allow service owners of a Red Hat Quay registry to define service level agreements and support a healthy resource budget. 13.1. Quota management architecture With the quota management feature enabled, individual blob sizes are summed at the repository and namespace level. For example, if two tags in the same repository reference the same blob, the size of that blob is only counted once towards the repository total. Additionally, manifest list totals are counted toward the repository total. Important Because manifest list totals are counted toward the repository total, the total quota consumed when upgrading from a version of Red Hat Quay might be reportedly differently in Red Hat Quay 3.9. In some cases, the new total might go over a repository's previously-set limit. Red Hat Quay administrators might have to adjust the allotted quota of a repository to account for these changes. The quota management feature works by calculating the size of existing repositories and namespace with a backfill worker, and then adding or subtracting from the total for every image that is pushed or garbage collected afterwords. Additionally, the subtraction from the total happens when the manifest is garbage collected. Note Because subtraction occurs from the total when the manifest is garbage collected, there is a delay in the size calculation until it is able to be garbage collected. For more information about garbage collection, see Red Hat Quay garbage collection . The following database tables hold the quota repository size, quota namespace size, and quota registry size, in bytes, of a Red Hat Quay repository within an organization: QuotaRepositorySize QuotaNameSpaceSize QuotaRegistrySize The organization size is calculated by the backfill worker to ensure that it is not duplicated. When an image push is initialized, the user's organization storage is validated to check if it is beyond the configured quota limits. If an image push exceeds defined quota limitations, a soft or hard check occurs: For a soft check, users are notified. For a hard check, the push is stopped. If storage consumption is within configured quota limits, the push is allowed to proceed. Image manifest deletion follows a similar flow, whereby the links between associated image tags and the manifest are deleted. Additionally, after the image manifest is deleted, the repository size is recalculated and updated in the QuotaRepositorySize , QuotaNameSpaceSize , and QuotaRegistrySize tables. 13.2. Quota management limitations Quota management helps organizations to maintain resource consumption. 
One limitation of quota management is that calculating resource consumption on push results in the calculation becoming part of the push's critical path. Without this, usage data might drift. The maximum storage quota size is dependent on the selected database: Table 13.1. Worker count environment variables Variable Description Postgres 8388608 TB MySQL 8388608 TB SQL Server 16777216 TB 13.3. Quota management configuration fields Table 13.2. Quota management configuration Field Type Description FEATURE_QUOTA_MANAGEMENT Boolean Enables configuration, caching, and validation for quota management feature. DEFAULT_SYSTEM_REJECT_QUOTA_BYTES String Enables system default quota reject byte allowance for all organizations. By default, no limit is set. QUOTA_BACKFILL Boolean Enables the quota backfill worker to calculate the size of pre-existing blobs. Default : True QUOTA_TOTAL_DELAY_SECONDS String The time delay for starting the quota backfill. Rolling deployments can cause incorrect totals. This field must be set to a time longer than it takes for the rolling deployment to complete. Default : 1800 PERMANENTLY_DELETE_TAGS Boolean Enables functionality related to the removal of tags from the time machine window. Default : False RESET_CHILD_MANIFEST_EXPIRATION Boolean Resets the expirations of temporary tags targeting the child manifests. With this feature set to True , child manifests are immediately garbage collected. Default : False 13.3.1. Example quota management configuration The following YAML is the suggested configuration when enabling quota management. Quota management YAML configuration FEATURE_QUOTA_MANAGEMENT: true FEATURE_GARBAGE_COLLECTION: true PERMANENTLY_DELETE_TAGS: true QUOTA_TOTAL_DELAY_SECONDS: 1800 RESET_CHILD_MANIFEST_EXPIRATION: true 13.4. Establishing quota for an organization with the Red Hat Quay API When an organization is first created, it does not have an established quota. You can use the API to check, create, change, or delete quota limitations for an organization. Prerequisites You have generated an OAuth access token. Procedure To set a quota for an organization, you can use the POST /api/v1/organization/{orgname}/quota endpoint: USD curl -X POST "https://<quay-server.example.com>/api/v1/organization/<orgname>/quota" \ -H "Authorization: Bearer <access_token>" \ -H "Content-Type: application/json" \ -d '{ "limit_bytes": 10737418240, "limits": "10 Gi" }' Example output "Created" Use the GET /api/v1/organization/{orgname}/quota command to see if your organization already has an established quota: USD curl -k -X GET -H "Authorization: Bearer <token>" -H 'Content-Type: application/json' https://example-registry-quay-quay-enterprise.apps.docs.gcp.quaydev.org/api/v1/organization/testorg/quota | jq Example output [{"id": 1, "limit_bytes": 10737418240, "limit": "10.0 GiB", "default_config": false, "limits": [], "default_config_exists": false}] You can use the PUT /api/v1/organization/{orgname}/quota/{quota_id} command to modify the existing quota limitation. For example: USD curl -X PUT "https://<quay-server.example.com>/api/v1/organization/<orgname>/quota/<quota_id>" \ -H "Authorization: Bearer <access_token>" \ -H "Content-Type: application/json" \ -d '{ "limit_bytes": <limit_in_bytes> }' Example output {"id": 1, "limit_bytes": 21474836480, "limit": "20.0 GiB", "default_config": false, "limits": [], "default_config_exists": false} 13.4.1. Pushing images To see the storage consumed, push various images to the organization. 13.4.1.1. 
Pushing ubuntu:18.04 Push ubuntu:18.04 to the organization from the command line: Sample commands USD podman pull ubuntu:18.04 USD podman tag docker.io/library/ubuntu:18.04 example-registry-quay-quay-enterprise.apps.docs.gcp.quaydev.org/testorg/ubuntu:18.04 USD podman push --tls-verify=false example-registry-quay-quay-enterprise.apps.docs.gcp.quaydev.org/testorg/ubuntu:18.04 13.4.1.2. Using the API to view quota usage To view the storage consumed, GET data from the /api/v1/repository endpoint: Sample command USD curl -k -X GET -H "Authorization: Bearer <token>" -H 'Content-Type: application/json' 'https://example-registry-quay-quay-enterprise.apps.docs.gcp.quaydev.org/api/v1/repository?last_modified=true&namespace=testorg&popularity=true&public=true' | jq Sample output { "repositories": [ { "namespace": "testorg", "name": "ubuntu", "description": null, "is_public": false, "kind": "image", "state": "NORMAL", "quota_report": { "quota_bytes": 27959066, "configured_quota": 104857600 }, "last_modified": 1651225630, "popularity": 0, "is_starred": false } ] } 13.4.1.3. Pushing another image Pull, tag, and push a second image, for example, nginx : Sample commands USD podman pull nginx USD podman tag docker.io/library/nginx example-registry-quay-quay-enterprise.apps.docs.gcp.quaydev.org/testorg/nginx USD podman push --tls-verify=false example-registry-quay-quay-enterprise.apps.docs.gcp.quaydev.org/testorg/nginx To view the quota report for the repositories in the organization, use the /api/v1/repository endpoint: Sample command USD curl -k -X GET -H "Authorization: Bearer <token>" -H 'Content-Type: application/json' 'https://example-registry-quay-quay-enterprise.apps.docs.gcp.quaydev.org/api/v1/repository?last_modified=true&namespace=testorg&popularity=true&public=true' Sample output { "repositories": [ { "namespace": "testorg", "name": "ubuntu", "description": null, "is_public": false, "kind": "image", "state": "NORMAL", "quota_report": { "quota_bytes": 27959066, "configured_quota": 104857600 }, "last_modified": 1651225630, "popularity": 0, "is_starred": false }, { "namespace": "testorg", "name": "nginx", "description": null, "is_public": false, "kind": "image", "state": "NORMAL", "quota_report": { "quota_bytes": 59231659, "configured_quota": 104857600 }, "last_modified": 1651229507, "popularity": 0, "is_starred": false } ] } To view the quota information in the organization details, use the /api/v1/organization/{orgname} endpoint: Sample command USD curl -k -X GET -H "Authorization: Bearer <token>" -H 'Content-Type: application/json' 'https://example-registry-quay-quay-enterprise.apps.docs.gcp.quaydev.org/api/v1/organization/testorg' | jq Sample output { "name": "testorg", ... "quotas": [ { "id": 1, "limit_bytes": 104857600, "limits": [] } ], "quota_report": { "quota_bytes": 87190725, "configured_quota": 104857600 } } 13.4.2. Rejecting pushes using quota limits If an image push exceeds defined quota limitations, a soft or hard check occurs: For a soft check, or warning , users are notified. For a hard check, or reject , the push is terminated. 13.4.2.1. 
Setting reject and warning limits To set reject and warning limits, POST data to the /api/v1/organization/{orgname}/quota/{quota_id}/limit endpoint: Sample reject limit command USD curl -k -X POST -H "Authorization: Bearer <token>" -H 'Content-Type: application/json' -d '{"type":"Reject","threshold_percent":80}' https://example-registry-quay-quay-enterprise.apps.docs.gcp.quaydev.org/api/v1/organization/testorg/quota/1/limit Sample warning limit command USD curl -k -X POST -H "Authorization: Bearer <token>" -H 'Content-Type: application/json' -d '{"type":"Warning","threshold_percent":50}' https://example-registry-quay-quay-enterprise.apps.docs.gcp.quaydev.org/api/v1/organization/testorg/quota/1/limit 13.4.2.2. Viewing reject and warning limits To view the reject and warning limits, use the /api/v1/organization/{orgname}/quota endpoint: View quota limits USD curl -k -X GET -H "Authorization: Bearer <token>" -H 'Content-Type: application/json' https://example-registry-quay-quay-enterprise.apps.docs.gcp.quaydev.org/api/v1/organization/testorg/quota | jq Sample output for quota limits [ { "id": 1, "limit_bytes": 104857600, "default_config": false, "limits": [ { "id": 2, "type": "Warning", "limit_percent": 50 }, { "id": 1, "type": "Reject", "limit_percent": 80 } ], "default_config_exists": false } ] 13.4.2.3. Pushing an image when the reject limit is exceeded In this example, the reject limit (80%) has been set to below the current repository size (~83%), so the push should automatically be rejected. Push a sample image to the organization from the command line: Sample image push USD podman pull ubuntu:20.04 USD podman tag docker.io/library/ubuntu:20.04 example-registry-quay-quay-enterprise.apps.docs.gcp.quaydev.org/testorg/ubuntu:20.04 USD podman push --tls-verify=false example-registry-quay-quay-enterprise.apps.docs.gcp.quaydev.org/testorg/ubuntu:20.04 Sample output when quota exceeded Getting image source signatures Copying blob d4dfaa212623 [--------------------------------------] 8.0b / 3.5KiB Copying blob cba97cc5811c [--------------------------------------] 8.0b / 15.0KiB Copying blob 0c78fac124da [--------------------------------------] 8.0b / 71.8MiB WARN[0002] failed, retrying in 1s ... (1/3). Error: Error writing blob: Error initiating layer upload to /v2/testorg/ubuntu/blobs/uploads/ in example-registry-quay-quay-enterprise.apps.docs.gcp.quaydev.org: denied: Quota has been exceeded on namespace Getting image source signatures Copying blob d4dfaa212623 [--------------------------------------] 8.0b / 3.5KiB Copying blob cba97cc5811c [--------------------------------------] 8.0b / 15.0KiB Copying blob 0c78fac124da [--------------------------------------] 8.0b / 71.8MiB WARN[0005] failed, retrying in 1s ... (2/3). Error: Error writing blob: Error initiating layer upload to /v2/testorg/ubuntu/blobs/uploads/ in example-registry-quay-quay-enterprise.apps.docs.gcp.quaydev.org: denied: Quota has been exceeded on namespace Getting image source signatures Copying blob d4dfaa212623 [--------------------------------------] 8.0b / 3.5KiB Copying blob cba97cc5811c [--------------------------------------] 8.0b / 15.0KiB Copying blob 0c78fac124da [--------------------------------------] 8.0b / 71.8MiB WARN[0009] failed, retrying in 1s ... (3/3). 
Error: Error writing blob: Error initiating layer upload to /v2/testorg/ubuntu/blobs/uploads/ in example-registry-quay-quay-enterprise.apps.docs.gcp.quaydev.org: denied: Quota has been exceeded on namespace Getting image source signatures Copying blob d4dfaa212623 [--------------------------------------] 8.0b / 3.5KiB Copying blob cba97cc5811c [--------------------------------------] 8.0b / 15.0KiB Copying blob 0c78fac124da [--------------------------------------] 8.0b / 71.8MiB Error: Error writing blob: Error initiating layer upload to /v2/testorg/ubuntu/blobs/uploads/ in example-registry-quay-quay-enterprise.apps.docs.gcp.quaydev.org: denied: Quota has been exceeded on namespace 13.4.2.4. Notifications for limits exceeded When limits are exceeded, a notification appears: Quota notifications | [
"**Default:** `False`",
"FEATURE_QUOTA_MANAGEMENT: true FEATURE_GARBAGE_COLLECTION: true PERMANENTLY_DELETE_TAGS: true QUOTA_TOTAL_DELAY_SECONDS: 1800 RESET_CHILD_MANIFEST_EXPIRATION: true",
"curl -X POST \"https://<quay-server.example.com>/api/v1/organization/<orgname>/quota\" -H \"Authorization: Bearer <access_token>\" -H \"Content-Type: application/json\" -d '{ \"limit_bytes\": 10737418240, \"limits\": \"10 Gi\" }'",
"\"Created\"",
"curl -k -X GET -H \"Authorization: Bearer <token>\" -H 'Content-Type: application/json' https://example-registry-quay-quay-enterprise.apps.docs.gcp.quaydev.org/api/v1/organization/testorg/quota | jq",
"[{\"id\": 1, \"limit_bytes\": 10737418240, \"limit\": \"10.0 GiB\", \"default_config\": false, \"limits\": [], \"default_config_exists\": false}]",
"curl -X PUT \"https://<quay-server.example.com>/api/v1/organization/<orgname>/quota/<quota_id>\" -H \"Authorization: Bearer <access_token>\" -H \"Content-Type: application/json\" -d '{ \"limit_bytes\": <limit_in_bytes> }'",
"{\"id\": 1, \"limit_bytes\": 21474836480, \"limit\": \"20.0 GiB\", \"default_config\": false, \"limits\": [], \"default_config_exists\": false}",
"podman pull ubuntu:18.04 podman tag docker.io/library/ubuntu:18.04 example-registry-quay-quay-enterprise.apps.docs.gcp.quaydev.org/testorg/ubuntu:18.04 podman push --tls-verify=false example-registry-quay-quay-enterprise.apps.docs.gcp.quaydev.org/testorg/ubuntu:18.04",
"curl -k -X GET -H \"Authorization: Bearer <token>\" -H 'Content-Type: application/json' 'https://example-registry-quay-quay-enterprise.apps.docs.gcp.quaydev.org/api/v1/repository?last_modified=true&namespace=testorg&popularity=true&public=true' | jq",
"{ \"repositories\": [ { \"namespace\": \"testorg\", \"name\": \"ubuntu\", \"description\": null, \"is_public\": false, \"kind\": \"image\", \"state\": \"NORMAL\", \"quota_report\": { \"quota_bytes\": 27959066, \"configured_quota\": 104857600 }, \"last_modified\": 1651225630, \"popularity\": 0, \"is_starred\": false } ] }",
"podman pull nginx podman tag docker.io/library/nginx example-registry-quay-quay-enterprise.apps.docs.gcp.quaydev.org/testorg/nginx podman push --tls-verify=false example-registry-quay-quay-enterprise.apps.docs.gcp.quaydev.org/testorg/nginx",
"curl -k -X GET -H \"Authorization: Bearer <token>\" -H 'Content-Type: application/json' 'https://example-registry-quay-quay-enterprise.apps.docs.gcp.quaydev.org/api/v1/repository?last_modified=true&namespace=testorg&popularity=true&public=true'",
"{ \"repositories\": [ { \"namespace\": \"testorg\", \"name\": \"ubuntu\", \"description\": null, \"is_public\": false, \"kind\": \"image\", \"state\": \"NORMAL\", \"quota_report\": { \"quota_bytes\": 27959066, \"configured_quota\": 104857600 }, \"last_modified\": 1651225630, \"popularity\": 0, \"is_starred\": false }, { \"namespace\": \"testorg\", \"name\": \"nginx\", \"description\": null, \"is_public\": false, \"kind\": \"image\", \"state\": \"NORMAL\", \"quota_report\": { \"quota_bytes\": 59231659, \"configured_quota\": 104857600 }, \"last_modified\": 1651229507, \"popularity\": 0, \"is_starred\": false } ] }",
"curl -k -X GET -H \"Authorization: Bearer <token>\" -H 'Content-Type: application/json' 'https://example-registry-quay-quay-enterprise.apps.docs.gcp.quaydev.org/api/v1/organization/testorg' | jq",
"{ \"name\": \"testorg\", \"quotas\": [ { \"id\": 1, \"limit_bytes\": 104857600, \"limits\": [] } ], \"quota_report\": { \"quota_bytes\": 87190725, \"configured_quota\": 104857600 } }",
"curl -k -X POST -H \"Authorization: Bearer <token>\" -H 'Content-Type: application/json' -d '{\"type\":\"Reject\",\"threshold_percent\":80}' https://example-registry-quay-quay-enterprise.apps.docs.gcp.quaydev.org/api/v1/organization/testorg/quota/1/limit",
"curl -k -X POST -H \"Authorization: Bearer <token>\" -H 'Content-Type: application/json' -d '{\"type\":\"Warning\",\"threshold_percent\":50}' https://example-registry-quay-quay-enterprise.apps.docs.gcp.quaydev.org/api/v1/organization/testorg/quota/1/limit",
"curl -k -X GET -H \"Authorization: Bearer <token>\" -H 'Content-Type: application/json' https://example-registry-quay-quay-enterprise.apps.docs.gcp.quaydev.org/api/v1/organization/testorg/quota | jq",
"[ { \"id\": 1, \"limit_bytes\": 104857600, \"default_config\": false, \"limits\": [ { \"id\": 2, \"type\": \"Warning\", \"limit_percent\": 50 }, { \"id\": 1, \"type\": \"Reject\", \"limit_percent\": 80 } ], \"default_config_exists\": false } ]",
"podman pull ubuntu:20.04 podman tag docker.io/library/ubuntu:20.04 example-registry-quay-quay-enterprise.apps.docs.gcp.quaydev.org/testorg/ubuntu:20.04 podman push --tls-verify=false example-registry-quay-quay-enterprise.apps.docs.gcp.quaydev.org/testorg/ubuntu:20.04",
"Getting image source signatures Copying blob d4dfaa212623 [--------------------------------------] 8.0b / 3.5KiB Copying blob cba97cc5811c [--------------------------------------] 8.0b / 15.0KiB Copying blob 0c78fac124da [--------------------------------------] 8.0b / 71.8MiB WARN[0002] failed, retrying in 1s ... (1/3). Error: Error writing blob: Error initiating layer upload to /v2/testorg/ubuntu/blobs/uploads/ in example-registry-quay-quay-enterprise.apps.docs.gcp.quaydev.org: denied: Quota has been exceeded on namespace Getting image source signatures Copying blob d4dfaa212623 [--------------------------------------] 8.0b / 3.5KiB Copying blob cba97cc5811c [--------------------------------------] 8.0b / 15.0KiB Copying blob 0c78fac124da [--------------------------------------] 8.0b / 71.8MiB WARN[0005] failed, retrying in 1s ... (2/3). Error: Error writing blob: Error initiating layer upload to /v2/testorg/ubuntu/blobs/uploads/ in example-registry-quay-quay-enterprise.apps.docs.gcp.quaydev.org: denied: Quota has been exceeded on namespace Getting image source signatures Copying blob d4dfaa212623 [--------------------------------------] 8.0b / 3.5KiB Copying blob cba97cc5811c [--------------------------------------] 8.0b / 15.0KiB Copying blob 0c78fac124da [--------------------------------------] 8.0b / 71.8MiB WARN[0009] failed, retrying in 1s ... (3/3). Error: Error writing blob: Error initiating layer upload to /v2/testorg/ubuntu/blobs/uploads/ in example-registry-quay-quay-enterprise.apps.docs.gcp.quaydev.org: denied: Quota has been exceeded on namespace Getting image source signatures Copying blob d4dfaa212623 [--------------------------------------] 8.0b / 3.5KiB Copying blob cba97cc5811c [--------------------------------------] 8.0b / 15.0KiB Copying blob 0c78fac124da [--------------------------------------] 8.0b / 71.8MiB Error: Error writing blob: Error initiating layer upload to /v2/testorg/ubuntu/blobs/uploads/ in example-registry-quay-quay-enterprise.apps.docs.gcp.quaydev.org: denied: Quota has been exceeded on namespace"
]
| https://docs.redhat.com/en/documentation/red_hat_quay/3.13/html/use_red_hat_quay/red-hat-quay-quota-management-and-enforcement |
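As a convenience, the quota_report returned by the organization endpoint above can be reduced to a usage percentage with jq. This is a sketch that reuses the placeholder server, token, and testorg organization from the earlier examples:
curl -k -s -H "Authorization: Bearer <token>" https://<quay-server.example.com>/api/v1/organization/testorg | jq '.quota_report | {used: .quota_bytes, limit: .configured_quota, percent_used: (.quota_bytes / .configured_quota * 100 | round)}'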
8.6. Managed Resources | 8.6. Managed Resources You can set a resource to unmanaged mode, which indicates that the resource is still in the configuration but Pacemaker does not manage the resource. The following command sets the indicated resources to unmanaged mode. The following command sets resources to managed mode, which is the default state. You can specify the name of a resource group with the pcs resource manage or pcs resource unmanage command. The command will act on all of the resources in the group, so that you can set all of the resources in a group to managed or unmanaged mode with a single command and then manage the contained resources individually. | [
"pcs resource unmanage resource1 [ resource2 ]",
"pcs resource manage resource1 [ resource2 ]"
]
| https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/high_availability_add-on_reference/s1-managedresource-HAAR |
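For example, to take every resource in a group out of Pacemaker's control during maintenance and then hand control back, where mygroup is a placeholder group name:
pcs resource unmanage mygroup
pcs status resources
pcs resource manage mygroup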
Chapter 1. Customizing nodes | Chapter 1. Customizing nodes OpenShift Container Platform supports both cluster-wide and per-machine configuration via Ignition, which allows arbitrary partitioning and file content changes to the operating system. In general, if a configuration file is documented in Red Hat Enterprise Linux (RHEL), then modifying it via Ignition is supported. There are two ways to deploy machine config changes: Creating machine configs that are included in manifest files to start up a cluster during openshift-install . Creating machine configs that are passed to running OpenShift Container Platform nodes via the Machine Config Operator. Additionally, modifying the reference config, such as the Ignition config that is passed to coreos-installer when installing bare-metal nodes allows per-machine configuration. These changes are currently not visible to the Machine Config Operator. The following sections describe features that you might want to configure on your nodes in this way. 1.1. Creating machine configs with Butane Machine configs are used to configure control plane and worker machines by instructing machines how to create users and file systems, set up the network, install systemd units, and more. Because modifying machine configs can be difficult, you can use Butane configs to create machine configs for you, thereby making node configuration much easier. 1.1.1. About Butane Butane is a command-line utility that OpenShift Container Platform uses to provide convenient, short-hand syntax for writing machine configs, as well as for performing additional validation of machine configs. The format of the Butane config file that Butane accepts is defined in the OpenShift Butane config spec . 1.1.2. Installing Butane You can install the Butane tool ( butane ) to create OpenShift Container Platform machine configs from a command-line interface. You can install butane on Linux, Windows, or macOS by downloading the corresponding binary file. Tip Butane releases are backwards-compatible with older releases and with the Fedora CoreOS Config Transpiler (FCCT). Procedure Navigate to the Butane image download page at https://mirror.openshift.com/pub/openshift-v4/clients/butane/ . Get the butane binary: For the newest version of Butane, save the latest butane image to your current directory: USD curl https://mirror.openshift.com/pub/openshift-v4/clients/butane/latest/butane --output butane Optional: For a specific type of architecture you are installing Butane on, such as aarch64 or ppc64le, indicate the appropriate URL. For example: USD curl https://mirror.openshift.com/pub/openshift-v4/clients/butane/latest/butane-aarch64 --output butane Make the downloaded binary file executable: USD chmod +x butane Move the butane binary file to a directory on your PATH . To check your PATH , open a terminal and execute the following command: USD echo USDPATH Verification steps You can now use the Butane tool by running the butane command: USD butane <butane_file> 1.1.3. Creating a MachineConfig object by using Butane You can use Butane to produce a MachineConfig object so that you can configure worker or control plane nodes at installation time or via the Machine Config Operator. Prerequisites You have installed the butane utility. Procedure Create a Butane config file. 
The following example creates a file named 99-worker-custom.bu that configures the system console to show kernel debug messages and specifies custom settings for the chrony time service: variant: openshift version: 4.15.0 metadata: name: 99-worker-custom labels: machineconfiguration.openshift.io/role: worker openshift: kernel_arguments: - loglevel=7 storage: files: - path: /etc/chrony.conf mode: 0644 overwrite: true contents: inline: | pool 0.rhel.pool.ntp.org iburst driftfile /var/lib/chrony/drift makestep 1.0 3 rtcsync logdir /var/log/chrony Note The 99-worker-custom.bu file is set to create a machine config for worker nodes. To deploy on control plane nodes, change the role from worker to master . To do both, you could repeat the whole procedure using different file names for the two types of deployments. Create a MachineConfig object by giving Butane the file that you created in the step: USD butane 99-worker-custom.bu -o ./99-worker-custom.yaml A MachineConfig object YAML file is created for you to finish configuring your machines. Save the Butane config in case you need to update the MachineConfig object in the future. If the cluster is not running yet, generate manifest files and add the MachineConfig object YAML file to the openshift directory. If the cluster is already running, apply the file as follows: USD oc create -f 99-worker-custom.yaml Additional resources Adding kernel modules to nodes Encrypting and mirroring disks during installation 1.2. Adding day-1 kernel arguments Although it is often preferable to modify kernel arguments as a day-2 activity, you might want to add kernel arguments to all master or worker nodes during initial cluster installation. Here are some reasons you might want to add kernel arguments during cluster installation so they take effect before the systems first boot up: You need to do some low-level network configuration before the systems start. You want to disable a feature, such as SELinux, so it has no impact on the systems when they first come up. Warning Disabling SELinux on RHCOS in production is not supported. Once SELinux has been disabled on a node, it must be re-provisioned before re-inclusion in a production cluster. To add kernel arguments to master or worker nodes, you can create a MachineConfig object and inject that object into the set of manifest files used by Ignition during cluster setup. For a listing of arguments you can pass to a RHEL 8 kernel at boot time, see Kernel.org kernel parameters . It is best to only add kernel arguments with this procedure if they are needed to complete the initial OpenShift Container Platform installation. Procedure Change to the directory that contains the installation program and generate the Kubernetes manifests for the cluster: USD ./openshift-install create manifests --dir <installation_directory> Decide if you want to add kernel arguments to worker or control plane nodes. In the openshift directory, create a file (for example, 99-openshift-machineconfig-master-kargs.yaml ) to define a MachineConfig object to add the kernel settings. This example adds a loglevel=7 kernel argument to control plane nodes: USD cat << EOF > 99-openshift-machineconfig-master-kargs.yaml apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: master name: 99-openshift-machineconfig-master-kargs spec: kernelArguments: - loglevel=7 EOF You can change master to worker to add kernel arguments to worker nodes instead. 
Create a separate YAML file to add to both master and worker nodes. You can now continue on to create the cluster. 1.3. Adding kernel modules to nodes For most common hardware, the Linux kernel includes the device driver modules needed to use that hardware when the computer starts up. For some hardware, however, modules are not available in Linux. Therefore, you must find a way to provide those modules to each host computer. This procedure describes how to do that for nodes in an OpenShift Container Platform cluster. When a kernel module is first deployed by following these instructions, the module is made available for the current kernel. If a new kernel is installed, the kmods-via-containers software will rebuild and deploy the module so a compatible version of that module is available with the new kernel. The way that this feature is able to keep the module up to date on each node is by: Adding a systemd service to each node that starts at boot time to detect if a new kernel has been installed and If a new kernel is detected, the service rebuilds the module and installs it to the kernel For information on the software needed for this procedure, see the kmods-via-containers github site. A few important issues to keep in mind: This procedure is Technology Preview. Software tools and examples are not yet available in official RPM form and can only be obtained for now from unofficial github.com sites noted in the procedure. Third-party kernel modules you might add through these procedures are not supported by Red Hat. In this procedure, the software needed to build your kernel modules is deployed in a RHEL 8 container. Keep in mind that modules are rebuilt automatically on each node when that node gets a new kernel. For that reason, each node needs access to a yum repository that contains the kernel and related packages needed to rebuild the module. That content is best provided with a valid RHEL subscription. 1.3.1. Building and testing the kernel module container Before deploying kernel modules to your OpenShift Container Platform cluster, you can test the process on a separate RHEL system. Gather the kernel module's source code, the KVC framework, and the kmod-via-containers software. Then build and test the module. To do that on a RHEL 8 system, do the following: Procedure Register a RHEL 8 system: # subscription-manager register Attach a subscription to the RHEL 8 system: # subscription-manager attach --auto Install software that is required to build the software and container: # yum install podman make git -y Clone the kmod-via-containers repository: Create a folder for the repository: USD mkdir kmods; cd kmods Clone the repository: USD git clone https://github.com/kmods-via-containers/kmods-via-containers Install a KVC framework instance on your RHEL 8 build host to test the module. This adds a kmods-via-container systemd service and loads it: Change to the kmod-via-containers directory: USD cd kmods-via-containers/ Install the KVC framework instance: USD sudo make install Reload the systemd manager configuration: USD sudo systemctl daemon-reload Get the kernel module source code. The source code might be used to build a third-party module that you do not have control over, but is supplied by others. You will need content similar to the content shown in the kvc-simple-kmod example that can be cloned to your system as follows: USD cd .. 
; git clone https://github.com/kmods-via-containers/kvc-simple-kmod Edit the configuration file, simple-kmod.conf file, in this example, and change the name of the Dockerfile to Dockerfile.rhel : Change to the kvc-simple-kmod directory: USD cd kvc-simple-kmod Rename the Dockerfile: USD cat simple-kmod.conf Example Dockerfile KMOD_CONTAINER_BUILD_CONTEXT="https://github.com/kmods-via-containers/kvc-simple-kmod.git" KMOD_CONTAINER_BUILD_FILE=Dockerfile.rhel KMOD_SOFTWARE_VERSION=dd1a7d4 KMOD_NAMES="simple-kmod simple-procfs-kmod" Create an instance of [email protected] for your kernel module, simple-kmod in this example: USD sudo make install Enable the [email protected] instance: USD sudo kmods-via-containers build simple-kmod USD(uname -r) Enable and start the systemd service: USD sudo systemctl enable [email protected] --now Review the service status: USD sudo systemctl status [email protected] Example output ● [email protected] - Kmods Via Containers - simple-kmod Loaded: loaded (/etc/systemd/system/[email protected]; enabled; vendor preset: disabled) Active: active (exited) since Sun 2020-01-12 23:49:49 EST; 5s ago... To confirm that the kernel modules are loaded, use the lsmod command to list the modules: USD lsmod | grep simple_ Example output simple_procfs_kmod 16384 0 simple_kmod 16384 0 Optional. Use other methods to check that the simple-kmod example is working: Look for a "Hello world" message in the kernel ring buffer with dmesg : USD dmesg | grep 'Hello world' Example output [ 6420.761332] Hello world from simple_kmod. Check the value of simple-procfs-kmod in /proc : USD sudo cat /proc/simple-procfs-kmod Example output simple-procfs-kmod number = 0 Run the spkut command to get more information from the module: USD sudo spkut 44 Example output KVC: wrapper simple-kmod for 4.18.0-147.3.1.el8_1.x86_64 Running userspace wrapper using the kernel module container... + podman run -i --rm --privileged simple-kmod-dd1a7d4:4.18.0-147.3.1.el8_1.x86_64 spkut 44 simple-procfs-kmod number = 0 simple-procfs-kmod number = 44 Going forward, when the system boots this service will check if a new kernel is running. If there is a new kernel, the service builds a new version of the kernel module and then loads it. If the module is already built, it will just load it. 1.3.2. Provisioning a kernel module to OpenShift Container Platform Depending on whether or not you must have the kernel module in place when OpenShift Container Platform cluster first boots, you can set up the kernel modules to be deployed in one of two ways: Provision kernel modules at cluster install time (day-1) : You can create the content as a MachineConfig object and provide it to openshift-install by including it with a set of manifest files. Provision kernel modules via Machine Config Operator (day-2) : If you can wait until the cluster is up and running to add your kernel module, you can deploy the kernel module software via the Machine Config Operator (MCO). In either case, each node needs to be able to get the kernel packages and related software packages at the time that a new kernel is detected. There are a few ways you can set up each node to be able to obtain that content. Provide RHEL entitlements to each node. Get RHEL entitlements from an existing RHEL host, from the /etc/pki/entitlement directory and copy them to the same location as the other files you provide when you build your Ignition config. Inside the Dockerfile, add pointers to a yum repository containing the kernel and other packages. 
This must include new kernel packages as they are needed to match newly installed kernels. 1.3.2.1. Provision kernel modules via a MachineConfig object By packaging kernel module software with a MachineConfig object, you can deliver that software to worker or control plane nodes at installation time or via the Machine Config Operator. Procedure Register a RHEL 8 system: # subscription-manager register Attach a subscription to the RHEL 8 system: # subscription-manager attach --auto Install software needed to build the software: # yum install podman make git -y Create a directory to host the kernel module and tooling: USD mkdir kmods; cd kmods Get the kmods-via-containers software: Clone the kmods-via-containers repository: USD git clone https://github.com/kmods-via-containers/kmods-via-containers Clone the kvc-simple-kmod repository: USD git clone https://github.com/kmods-via-containers/kvc-simple-kmod Get your module software. In this example, kvc-simple-kmod is used. Create a fakeroot directory and populate it with files that you want to deliver via Ignition, using the repositories cloned earlier: Create the directory: USD FAKEROOT=USD(mktemp -d) Change to the kmod-via-containers directory: USD cd kmods-via-containers Install the KVC framework instance: USD make install DESTDIR=USD{FAKEROOT}/usr/local CONFDIR=USD{FAKEROOT}/etc/ Change to the kvc-simple-kmod directory: USD cd ../kvc-simple-kmod Create the instance: USD make install DESTDIR=USD{FAKEROOT}/usr/local CONFDIR=USD{FAKEROOT}/etc/ Clone the fakeroot directory, replacing any symbolic links with copies of their targets, by running the following command: USD cd .. && rm -rf kmod-tree && cp -Lpr USD{FAKEROOT} kmod-tree Create a Butane config file, 99-simple-kmod.bu , that embeds the kernel module tree and enables the systemd service. Note See "Creating machine configs with Butane" for information about Butane. variant: openshift version: 4.15.0 metadata: name: 99-simple-kmod labels: machineconfiguration.openshift.io/role: worker 1 storage: trees: - local: kmod-tree systemd: units: - name: [email protected] enabled: true 1 To deploy on control plane nodes, change worker to master . To deploy on both control plane and worker nodes, perform the remainder of these instructions once for each node type. Use Butane to generate a machine config YAML file, 99-simple-kmod.yaml , containing the files and configuration to be delivered: USD butane 99-simple-kmod.bu --files-dir . -o 99-simple-kmod.yaml If the cluster is not up yet, generate manifest files and add this file to the openshift directory. If the cluster is already running, apply the file as follows: USD oc create -f 99-simple-kmod.yaml Your nodes will start the [email protected] service and the kernel modules will be loaded. To confirm that the kernel modules are loaded, you can log in to a node (using oc debug node/<openshift-node> , then chroot /host ). To list the modules, use the lsmod command: USD lsmod | grep simple_ Example output simple_procfs_kmod 16384 0 simple_kmod 16384 0 1.4. Encrypting and mirroring disks during installation During an OpenShift Container Platform installation, you can enable boot disk encryption and mirroring on the cluster nodes. 1.4.1. About disk encryption You can enable encryption for the boot disks on the control plane and compute nodes at installation time. OpenShift Container Platform supports the Trusted Platform Module (TPM) v2 and Tang encryption modes. TPM v2 This is the preferred mode. 
TPM v2 stores passphrases in a secure cryptoprocessor on the server. You can use this mode to prevent decryption of the boot disk data on a cluster node if the disk is removed from the server. Tang Tang and Clevis are server and client components that enable network-bound disk encryption (NBDE). You can bind the boot disk data on your cluster nodes to one or more Tang servers. This prevents decryption of the data unless the nodes are on a secure network where the Tang servers are accessible. Clevis is an automated decryption framework used to implement decryption on the client side. Important The use of the Tang encryption mode to encrypt your disks is only supported for bare metal and vSphere installations on user-provisioned infrastructure. In earlier versions of Red Hat Enterprise Linux CoreOS (RHCOS), disk encryption was configured by specifying /etc/clevis.json in the Ignition config. That file is not supported in clusters created with OpenShift Container Platform 4.7 or later. Configure disk encryption by using the following procedure. When the TPM v2 or Tang encryption modes are enabled, the RHCOS boot disks are encrypted using the LUKS2 format. This feature: Is available for installer-provisioned infrastructure, user-provisioned infrastructure, and Assisted Installer deployments For Assisted installer deployments: Each cluster can only have a single encryption method, Tang or TPM Encryption can be enabled on some or all nodes There is no Tang threshold; all servers must be valid and operational Encryption applies to the installation disks only, not to the workload disks Is supported on Red Hat Enterprise Linux CoreOS (RHCOS) systems only Sets up disk encryption during the manifest installation phase, encrypting all data written to disk, from first boot forward Requires no user intervention for providing passphrases Uses AES-256-XTS encryption, or AES-256-CBC if FIPS mode is enabled 1.4.1.1. Configuring an encryption threshold In OpenShift Container Platform, you can specify a requirement for more than one Tang server. You can also configure the TPM v2 and Tang encryption modes simultaneously. This enables boot disk data decryption only if the TPM secure cryptoprocessor is present and the Tang servers are accessible over a secure network. You can use the threshold attribute in your Butane configuration to define the minimum number of TPM v2 and Tang encryption conditions required for decryption to occur. The threshold is met when the stated value is reached through any combination of the declared conditions. In the case of offline provisioning, the offline server is accessed using an included advertisement, and only uses that supplied advertisement if the number of online servers do not meet the set threshold. 
For example, the threshold value of 2 in the following configuration can be reached by accessing two Tang servers, with the offline server available as a backup, or by accessing the TPM secure cryptoprocessor and one of the Tang servers: Example Butane configuration for disk encryption variant: openshift version: 4.15.0 metadata: name: worker-storage labels: machineconfiguration.openshift.io/role: worker boot_device: layout: x86_64 1 luks: tpm2: true 2 tang: 3 - url: http://tang1.example.com:7500 thumbprint: jwGN5tRFK-kF6pIX89ssF3khxxX - url: http://tang2.example.com:7500 thumbprint: VCJsvZFjBSIHSldw78rOrq7h2ZF - url: http://tang3.example.com:7500 thumbprint: PLjNyRdGw03zlRoGjQYMahSZGu9 advertisement: "{\"payload\": \"...\", \"protected\": \"...\", \"signature\": \"...\"}" 4 threshold: 2 5 openshift: fips: true 1 Set this field to the instruction set architecture of the cluster nodes. Some examples include, x86_64 , aarch64 , or ppc64le . 2 Include this field if you want to use a Trusted Platform Module (TPM) to encrypt the root file system. 3 Include this section if you want to use one or more Tang servers. 4 Optional: Include this field for offline provisioning. Ignition will provision the Tang server binding rather than fetching the advertisement from the server at runtime. This lets the server be unavailable at provisioning time. 5 Specify the minimum number of TPM v2 and Tang encryption conditions required for decryption to occur. Important The default threshold value is 1 . If you include multiple encryption conditions in your configuration but do not specify a threshold, decryption can occur if any of the conditions are met. Note If you require TPM v2 and Tang for decryption, the value of the threshold attribute must equal the total number of stated Tang servers plus one. If the threshold value is lower, it is possible to reach the threshold value by using a single encryption mode. For example, if you set tpm2 to true and specify two Tang servers, a threshold of 2 can be met by accessing the two Tang servers, even if the TPM secure cryptoprocessor is not available. 1.4.2. About disk mirroring During OpenShift Container Platform installation on control plane and worker nodes, you can enable mirroring of the boot and other disks to two or more redundant storage devices. A node continues to function after storage device failure provided one device remains available. Mirroring does not support replacement of a failed disk. Reprovision the node to restore the mirror to a pristine, non-degraded state. Note For user-provisioned infrastructure deployments, mirroring is available only on RHCOS systems. Support for mirroring is available on x86_64 nodes booted with BIOS or UEFI and on ppc64le nodes. 1.4.3. Configuring disk encryption and mirroring You can enable and configure encryption and mirroring during an OpenShift Container Platform installation. Prerequisites You have downloaded the OpenShift Container Platform installation program on your installation node. You installed Butane on your installation node. Note Butane is a command-line utility that OpenShift Container Platform uses to offer convenient, short-hand syntax for writing and validating machine configs. For more information, see "Creating machine configs with Butane". You have access to a Red Hat Enterprise Linux (RHEL) 8 machine that can be used to generate a thumbprint of the Tang exchange key. 
Procedure If you want to use TPM v2 to encrypt your cluster, check to see if TPM v2 encryption needs to be enabled in the host firmware for each node. This is required on most Dell systems. Check the manual for your specific system. If you want to use Tang to encrypt your cluster, follow these preparatory steps: Set up a Tang server or access an existing one. See Network-bound disk encryption for instructions. Install the clevis package on a RHEL 8 machine, if it is not already installed: USD sudo yum install clevis On the RHEL 8 machine, run the following command to generate a thumbprint of the exchange key. Replace http://tang1.example.com:7500 with the URL of your Tang server: USD clevis-encrypt-tang '{"url":"http://tang1.example.com:7500"}' < /dev/null > /dev/null 1 1 In this example, tangd.socket is listening on port 7500 on the Tang server. Note The clevis-encrypt-tang command generates a thumbprint of the exchange key. No data passes to the encryption command during this step; /dev/null exists here as an input instead of plain text. The encrypted output is also sent to /dev/null , because it is not required for this procedure. Example output The advertisement contains the following signing keys: PLjNyRdGw03zlRoGjQYMahSZGu9 1 1 The thumbprint of the exchange key. When the Do you wish to trust these keys? [ynYN] prompt displays, type Y . Optional: For offline Tang provisioning: Obtain the advertisement from the server using the curl command. Replace http://tang2.example.com:7500 with the URL of your Tang server: USD curl -f http://tang2.example.com:7500/adv > adv.jws && cat adv.jws Expected output {"payload": "eyJrZXlzIjogW3siYWxnIjogIkV", "protected": "eyJhbGciOiJFUzUxMiIsImN0eSI", "signature": "ADLgk7fZdE3Yt4FyYsm0pHiau7Q"} Provide the advertisement file to Clevis for encryption: USD clevis-encrypt-tang '{"url":"http://tang2.example.com:7500","adv":"adv.jws"}' < /dev/null > /dev/null If the nodes are configured with static IP addressing, run coreos-installer iso customize --dest-karg-append or use the coreos-installer --append-karg option when installing RHCOS nodes to set the IP address of the installed system. Append the ip= and other arguments needed for your network. Important Some methods for configuring static IPs do not affect the initramfs after the first boot and will not work with Tang encryption. These include the coreos-installer --copy-network option, the coreos-installer iso customize --network-keyfile option, and the coreos-installer pxe customize --network-keyfile option, as well as adding ip= arguments to the kernel command line of the live ISO or PXE image during installation. Incorrect static IP configuration causes the second boot of the node to fail. On your installation node, change to the directory that contains the installation program and generate the Kubernetes manifests for the cluster: USD ./openshift-install create manifests --dir <installation_directory> 1 1 Replace <installation_directory> with the path to the directory that you want to store the installation files in. Create a Butane config that configures disk encryption, mirroring, or both. For example, to configure storage for compute nodes, create a USDHOME/clusterconfig/worker-storage.bu file. 
Butane config example for a boot device variant: openshift version: 4.15.0 metadata: name: worker-storage 1 labels: machineconfiguration.openshift.io/role: worker 2 boot_device: layout: x86_64 3 luks: 4 tpm2: true 5 tang: 6 - url: http://tang1.example.com:7500 7 thumbprint: PLjNyRdGw03zlRoGjQYMahSZGu9 8 - url: http://tang2.example.com:7500 thumbprint: VCJsvZFjBSIHSldw78rOrq7h2ZF advertisement: "{"payload": "eyJrZXlzIjogW3siYWxnIjogIkV", "protected": "eyJhbGciOiJFUzUxMiIsImN0eSI", "signature": "ADLgk7fZdE3Yt4FyYsm0pHiau7Q"}" 9 threshold: 1 10 mirror: 11 devices: 12 - /dev/sda - /dev/sdb openshift: fips: true 13 1 2 For control plane configurations, replace worker with master in both of these locations. 3 Set this field to the instruction set architecture of the cluster nodes. Some examples include, x86_64 , aarch64 , or ppc64le . 4 Include this section if you want to encrypt the root file system. For more details, see "About disk encryption". 5 Include this field if you want to use a Trusted Platform Module (TPM) to encrypt the root file system. 6 Include this section if you want to use one or more Tang servers. 7 Specify the URL of a Tang server. In this example, tangd.socket is listening on port 7500 on the Tang server. 8 Specify the exchange key thumbprint, which was generated in a preceding step. 9 Optional: Specify the advertisement for your offline Tang server in valid JSON format. 10 Specify the minimum number of TPM v2 and Tang encryption conditions that must be met for decryption to occur. The default value is 1 . For more information about this topic, see "Configuring an encryption threshold". 11 Include this section if you want to mirror the boot disk. For more details, see "About disk mirroring". 12 List all disk devices that should be included in the boot disk mirror, including the disk that RHCOS will be installed onto. 13 Include this directive to enable FIPS mode on your cluster. Important To enable FIPS mode for your cluster, you must run the installation program from a Red Hat Enterprise Linux (RHEL) computer configured to operate in FIPS mode. For more information about configuring FIPS mode on RHEL, see Installing the system in FIPS mode . If you are configuring nodes to use both disk encryption and mirroring, both features must be configured in the same Butane configuration file. If you are configuring disk encryption on a node with FIPS mode enabled, you must include the fips directive in the same Butane configuration file, even if FIPS mode is also enabled in a separate manifest. Create a control plane or compute node manifest from the corresponding Butane configuration file and save it to the <installation_directory>/openshift directory. For example, to create a manifest for the compute nodes, run the following command: USD butane USDHOME/clusterconfig/worker-storage.bu -o <installation_directory>/openshift/99-worker-storage.yaml Repeat this step for each node type that requires disk encryption or mirroring. Save the Butane configuration file in case you need to update the manifests in the future. Continue with the remainder of the OpenShift Container Platform installation. Tip You can monitor the console log on the RHCOS nodes during installation for error messages relating to disk encryption or mirroring. Important If you configure additional data partitions, they will not be encrypted unless encryption is explicitly requested. 
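If you keep one Butane config per node role, for example a master-storage.bu file alongside the worker-storage.bu file shown above, you can generate both manifests in one pass instead of repeating the butane command for each role. The following shell sketch is illustrative only: the master-storage.bu file name is an assumption, and <installation_directory> stands for the directory you passed to openshift-install create manifests.
for role in master worker; do
  # Convert each role-specific Butane config into a MachineConfig manifest
  butane "$HOME/clusterconfig/${role}-storage.bu" \
    -o "<installation_directory>/openshift/99-${role}-storage.yaml"
done
This keeps the generated files aligned with the 99-<role>-storage.yaml naming convention used in this procedure.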
Verification After installing OpenShift Container Platform, you can verify if boot disk encryption or mirroring is enabled on the cluster nodes. From the installation host, access a cluster node by using a debug pod: Start a debug pod for the node, for example: USD oc debug node/compute-1 Set /host as the root directory within the debug shell. The debug pod mounts the root file system of the node in /host within the pod. By changing the root directory to /host , you can run binaries contained in the executable paths on the node: # chroot /host Note OpenShift Container Platform cluster nodes running Red Hat Enterprise Linux CoreOS (RHCOS) are immutable and rely on Operators to apply cluster changes. Accessing cluster nodes using SSH is not recommended. However, if the OpenShift Container Platform API is not available, or kubelet is not properly functioning on the target node, oc operations will be impacted. In such situations, it is possible to access nodes using ssh core@<node>.<cluster_name>.<base_domain> instead. If you configured boot disk encryption, verify if it is enabled: From the debug shell, review the status of the root mapping on the node: # cryptsetup status root Example output /dev/mapper/root is active and is in use. type: LUKS2 1 cipher: aes-xts-plain64 2 keysize: 512 bits key location: keyring device: /dev/sda4 3 sector size: 512 offset: 32768 sectors size: 15683456 sectors mode: read/write 1 The encryption format. When the TPM v2 or Tang encryption modes are enabled, the RHCOS boot disks are encrypted using the LUKS2 format. 2 The encryption algorithm used to encrypt the LUKS2 volume. The aes-cbc-essiv:sha256 cipher is used if FIPS mode is enabled. 3 The device that contains the encrypted LUKS2 volume. If mirroring is enabled, the value will represent a software mirror device, for example /dev/md126 . List the Clevis plugins that are bound to the encrypted device: # clevis luks list -d /dev/sda4 1 1 Specify the device that is listed in the device field in the output of the preceding step. Example output 1: sss '{"t":1,"pins":{"tang":[{"url":"http://tang.example.com:7500"}]}}' 1 1 In the example output, the Tang plugin is used by the Shamir's Secret Sharing (SSS) Clevis plugin for the /dev/sda4 device. If you configured mirroring, verify if it is enabled: From the debug shell, list the software RAID devices on the node: # cat /proc/mdstat Example output Personalities : [raid1] md126 : active raid1 sdb3[1] sda3[0] 1 393152 blocks super 1.0 [2/2] [UU] md127 : active raid1 sda4[0] sdb4[1] 2 51869632 blocks super 1.2 [2/2] [UU] unused devices: <none> 1 The /dev/md126 software RAID mirror device uses the /dev/sda3 and /dev/sdb3 disk devices on the cluster node. 2 The /dev/md127 software RAID mirror device uses the /dev/sda4 and /dev/sdb4 disk devices on the cluster node. Review the details of each of the software RAID devices listed in the output of the preceding command. 
The following example lists the details of the /dev/md126 device: # mdadm --detail /dev/md126 Example output /dev/md126: Version : 1.0 Creation Time : Wed Jul 7 11:07:36 2021 Raid Level : raid1 1 Array Size : 393152 (383.94 MiB 402.59 MB) Used Dev Size : 393152 (383.94 MiB 402.59 MB) Raid Devices : 2 Total Devices : 2 Persistence : Superblock is persistent Update Time : Wed Jul 7 11:18:24 2021 State : clean 2 Active Devices : 2 3 Working Devices : 2 4 Failed Devices : 0 5 Spare Devices : 0 Consistency Policy : resync Name : any:md-boot 6 UUID : ccfa3801:c520e0b5:2bee2755:69043055 Events : 19 Number Major Minor RaidDevice State 0 252 3 0 active sync /dev/sda3 7 1 252 19 1 active sync /dev/sdb3 8 1 Specifies the RAID level of the device. raid1 indicates RAID 1 disk mirroring. 2 Specifies the state of the RAID device. 3 4 States the number of underlying disk devices that are active and working. 5 States the number of underlying disk devices that are in a failed state. 6 The name of the software RAID device. 7 8 Provides information about the underlying disk devices used by the software RAID device. List the file systems mounted on the software RAID devices: # mount | grep /dev/md Example output /dev/md127 on / type xfs (rw,relatime,seclabel,attr2,inode64,logbufs=8,logbsize=32k,prjquota) /dev/md127 on /etc type xfs (rw,relatime,seclabel,attr2,inode64,logbufs=8,logbsize=32k,prjquota) /dev/md127 on /usr type xfs (ro,relatime,seclabel,attr2,inode64,logbufs=8,logbsize=32k,prjquota) /dev/md127 on /sysroot type xfs (ro,relatime,seclabel,attr2,inode64,logbufs=8,logbsize=32k,prjquota) /dev/md127 on /var type xfs (rw,relatime,seclabel,attr2,inode64,logbufs=8,logbsize=32k,prjquota) /dev/md127 on /var/lib/containers/storage/overlay type xfs (rw,relatime,seclabel,attr2,inode64,logbufs=8,logbsize=32k,prjquota) /dev/md127 on /var/lib/kubelet/pods/e5054ed5-f882-4d14-b599-99c050d4e0c0/volume-subpaths/etc/tuned/1 type xfs (rw,relatime,seclabel,attr2,inode64,logbufs=8,logbsize=32k,prjquota) /dev/md127 on /var/lib/kubelet/pods/e5054ed5-f882-4d14-b599-99c050d4e0c0/volume-subpaths/etc/tuned/2 type xfs (rw,relatime,seclabel,attr2,inode64,logbufs=8,logbsize=32k,prjquota) /dev/md127 on /var/lib/kubelet/pods/e5054ed5-f882-4d14-b599-99c050d4e0c0/volume-subpaths/etc/tuned/3 type xfs (rw,relatime,seclabel,attr2,inode64,logbufs=8,logbsize=32k,prjquota) /dev/md127 on /var/lib/kubelet/pods/e5054ed5-f882-4d14-b599-99c050d4e0c0/volume-subpaths/etc/tuned/4 type xfs (rw,relatime,seclabel,attr2,inode64,logbufs=8,logbsize=32k,prjquota) /dev/md127 on /var/lib/kubelet/pods/e5054ed5-f882-4d14-b599-99c050d4e0c0/volume-subpaths/etc/tuned/5 type xfs (rw,relatime,seclabel,attr2,inode64,logbufs=8,logbsize=32k,prjquota) /dev/md126 on /boot type ext4 (rw,relatime,seclabel) In the example output, the /boot file system is mounted on the /dev/md126 software RAID device and the root file system is mounted on /dev/md127 . Repeat the verification steps for each OpenShift Container Platform node type. Additional resources For more information about the TPM v2 and Tang encryption modes, see Configuring automated unlocking of encrypted volumes using policy-based decryption . 1.4.4. Configuring a RAID-enabled data volume You can enable software RAID partitioning to provide an external data volume. OpenShift Container Platform supports RAID 0, RAID 1, RAID 4, RAID 5, RAID 6, and RAID 10 for data protection and fault tolerance. See "About disk mirroring" for more details. 
Note OpenShift Container Platform 4.15 does not support software RAIDs on the installation drive. Prerequisites You have downloaded the OpenShift Container Platform installation program on your installation node. You have installed Butane on your installation node. Note Butane is a command-line utility that OpenShift Container Platform uses to provide convenient, short-hand syntax for writing machine configs, as well as for performing additional validation of machine configs. For more information, see the Creating machine configs with Butane section. Procedure Create a Butane config that configures a data volume by using software RAID. To configure a data volume with RAID 1 on the same disks that are used for a mirrored boot disk, create a USDHOME/clusterconfig/raid1-storage.bu file, for example: RAID 1 on mirrored boot disk variant: openshift version: 4.15.0 metadata: name: raid1-storage labels: machineconfiguration.openshift.io/role: worker boot_device: mirror: devices: - /dev/disk/by-id/scsi-3600508b400105e210000900000490000 - /dev/disk/by-id/scsi-SSEAGATE_ST373453LW_3HW1RHM6 storage: disks: - device: /dev/disk/by-id/scsi-3600508b400105e210000900000490000 partitions: - label: root-1 size_mib: 25000 1 - label: var-1 - device: /dev/disk/by-id/scsi-SSEAGATE_ST373453LW_3HW1RHM6 partitions: - label: root-2 size_mib: 25000 2 - label: var-2 raid: - name: md-var level: raid1 devices: - /dev/disk/by-partlabel/var-1 - /dev/disk/by-partlabel/var-2 filesystems: - device: /dev/md/md-var path: /var format: xfs wipe_filesystem: true with_mount_unit: true 1 2 When adding a data partition to the mirrored boot disk, a minimum value of 25000 mebibytes is recommended. If no value is specified, or if the specified value is smaller than the recommended minimum, the resulting root file system will be too small, and future reinstalls of RHCOS might overwrite the beginning of the data partition. To configure a data volume with RAID 1 on secondary disks, create a USDHOME/clusterconfig/raid1-alt-storage.bu file, for example: RAID 1 on secondary disks variant: openshift version: 4.15.0 metadata: name: raid1-alt-storage labels: machineconfiguration.openshift.io/role: worker storage: disks: - device: /dev/sdc wipe_table: true partitions: - label: data-1 - device: /dev/sdd wipe_table: true partitions: - label: data-2 raid: - name: md-var-lib-containers level: raid1 devices: - /dev/disk/by-partlabel/data-1 - /dev/disk/by-partlabel/data-2 filesystems: - device: /dev/md/md-var-lib-containers path: /var/lib/containers format: xfs wipe_filesystem: true with_mount_unit: true Create a RAID manifest from the Butane config you created in the step and save it to the <installation_directory>/openshift directory. For example, to create a manifest for the compute nodes, run the following command: USD butane USDHOME/clusterconfig/<butane_config>.bu -o <installation_directory>/openshift/<manifest_name>.yaml 1 1 Replace <butane_config> and <manifest_name> with the file names from the step. For example, raid1-alt-storage.bu and raid1-alt-storage.yaml for secondary disks. Save the Butane config in case you need to update the manifest in the future. Continue with the remainder of the OpenShift Container Platform installation. 1.4.5. Configuring an Intel(R) Virtual RAID on CPU (VROC) data volume Intel(R) VROC is a type of hybrid RAID, where some of the maintenance is offloaded to the hardware, but appears as software RAID to the operating system. Important Support for Intel(R) VROC is a Technology Preview feature only. 
Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . The following procedure configures an Intel(R) VROC-enabled RAID1. Prerequisites You have a system with Intel(R) Volume Management Device (VMD) enabled. Procedure Create the Intel(R) Matrix Storage Manager (IMSM) RAID container by running the following command: USD mdadm -CR /dev/md/imsm0 -e \ imsm -n2 /dev/nvme0n1 /dev/nvme1n1 1 1 The RAID device names. In this example, there are two devices listed. If you provide more than two device names, you must adjust the -n flag. For example, listing three devices would use the flag -n3 . Create the RAID1 storage inside the container: Create a dummy RAID0 volume in front of the real RAID1 volume by running the following command: USD mdadm -CR /dev/md/dummy -l0 -n2 /dev/md/imsm0 -z10M --assume-clean Create the real RAID1 array by running the following command: USD mdadm -CR /dev/md/coreos -l1 -n2 /dev/md/imsm0 Stop both RAID0 and RAID1 member arrays and delete the dummy RAID0 array with the following commands: USD mdadm -S /dev/md/dummy \ mdadm -S /dev/md/coreos \ mdadm --kill-subarray=0 /dev/md/imsm0 Restart the RAID1 arrays by running the following command: USD mdadm -A /dev/md/coreos /dev/md/imsm0 Install RHCOS on the RAID1 device: Get the UUID of the IMSM container by running the following command: USD mdadm --detail --export /dev/md/imsm0 Install RHCOS and include the rd.md.uuid kernel argument by running the following command: USD coreos-installer install /dev/md/coreos \ --append-karg rd.md.uuid=<md_UUID> 1 ... 1 The UUID of the IMSM container. Include any additional coreos-installer arguments you need to install RHCOS. 1.5. Configuring chrony time service You can set the time server and related settings used by the chrony time service ( chronyd ) by modifying the contents of the chrony.conf file and passing those contents to your nodes as a machine config. Procedure Create a Butane config including the contents of the chrony.conf file. For example, to configure chrony on worker nodes, create a 99-worker-chrony.bu file. Note See "Creating machine configs with Butane" for information about Butane. variant: openshift version: 4.15.0 metadata: name: 99-worker-chrony 1 labels: machineconfiguration.openshift.io/role: worker 2 storage: files: - path: /etc/chrony.conf mode: 0644 3 overwrite: true contents: inline: | pool 0.rhel.pool.ntp.org iburst 4 driftfile /var/lib/chrony/drift makestep 1.0 3 rtcsync logdir /var/log/chrony 1 2 On control plane nodes, substitute master for worker in both of these locations. 3 Specify an octal value mode for the mode field in the machine config file. After creating the file and applying the changes, the mode is converted to a decimal value. You can check the YAML file with the command oc get mc <mc-name> -o yaml . 4 Specify any valid, reachable time source, such as the one provided by your DHCP server. Note For all-machine to all-machine communication, the Network Time Protocol (NTP) on UDP is port 123 . If an external NTP time server is configured, you must open UDP port 123 . 
Alternately, you can specify any of the following NTP servers: 1.rhel.pool.ntp.org , 2.rhel.pool.ntp.org , or 3.rhel.pool.ntp.org . Use Butane to generate a MachineConfig object file, 99-worker-chrony.yaml , containing the configuration to be delivered to the nodes: USD butane 99-worker-chrony.bu -o 99-worker-chrony.yaml Apply the configurations in one of two ways: If the cluster is not running yet, after you generate manifest files, add the MachineConfig object file to the <installation_directory>/openshift directory, and then continue to create the cluster. If the cluster is already running, apply the file: USD oc apply -f ./99-worker-chrony.yaml 1.6. Additional resources For information on Butane, see Creating machine configs with Butane . For information on FIPS support, see Support for FIPS cryptography . | [
"curl https://mirror.openshift.com/pub/openshift-v4/clients/butane/latest/butane --output butane",
"curl https://mirror.openshift.com/pub/openshift-v4/clients/butane/latest/butane-aarch64 --output butane",
"chmod +x butane",
"echo USDPATH",
"butane <butane_file>",
"variant: openshift version: 4.15.0 metadata: name: 99-worker-custom labels: machineconfiguration.openshift.io/role: worker openshift: kernel_arguments: - loglevel=7 storage: files: - path: /etc/chrony.conf mode: 0644 overwrite: true contents: inline: | pool 0.rhel.pool.ntp.org iburst driftfile /var/lib/chrony/drift makestep 1.0 3 rtcsync logdir /var/log/chrony",
"butane 99-worker-custom.bu -o ./99-worker-custom.yaml",
"oc create -f 99-worker-custom.yaml",
"./openshift-install create manifests --dir <installation_directory>",
"cat << EOF > 99-openshift-machineconfig-master-kargs.yaml apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: master name: 99-openshift-machineconfig-master-kargs spec: kernelArguments: - loglevel=7 EOF",
"subscription-manager register",
"subscription-manager attach --auto",
"yum install podman make git -y",
"mkdir kmods; cd kmods",
"git clone https://github.com/kmods-via-containers/kmods-via-containers",
"cd kmods-via-containers/",
"sudo make install",
"sudo systemctl daemon-reload",
"cd .. ; git clone https://github.com/kmods-via-containers/kvc-simple-kmod",
"cd kvc-simple-kmod",
"cat simple-kmod.conf",
"KMOD_CONTAINER_BUILD_CONTEXT=\"https://github.com/kmods-via-containers/kvc-simple-kmod.git\" KMOD_CONTAINER_BUILD_FILE=Dockerfile.rhel KMOD_SOFTWARE_VERSION=dd1a7d4 KMOD_NAMES=\"simple-kmod simple-procfs-kmod\"",
"sudo make install",
"sudo kmods-via-containers build simple-kmod USD(uname -r)",
"sudo systemctl enable [email protected] --now",
"sudo systemctl status [email protected]",
"● [email protected] - Kmods Via Containers - simple-kmod Loaded: loaded (/etc/systemd/system/[email protected]; enabled; vendor preset: disabled) Active: active (exited) since Sun 2020-01-12 23:49:49 EST; 5s ago",
"lsmod | grep simple_",
"simple_procfs_kmod 16384 0 simple_kmod 16384 0",
"dmesg | grep 'Hello world'",
"[ 6420.761332] Hello world from simple_kmod.",
"sudo cat /proc/simple-procfs-kmod",
"simple-procfs-kmod number = 0",
"sudo spkut 44",
"KVC: wrapper simple-kmod for 4.18.0-147.3.1.el8_1.x86_64 Running userspace wrapper using the kernel module container + podman run -i --rm --privileged simple-kmod-dd1a7d4:4.18.0-147.3.1.el8_1.x86_64 spkut 44 simple-procfs-kmod number = 0 simple-procfs-kmod number = 44",
"subscription-manager register",
"subscription-manager attach --auto",
"yum install podman make git -y",
"mkdir kmods; cd kmods",
"git clone https://github.com/kmods-via-containers/kmods-via-containers",
"git clone https://github.com/kmods-via-containers/kvc-simple-kmod",
"FAKEROOT=USD(mktemp -d)",
"cd kmods-via-containers",
"make install DESTDIR=USD{FAKEROOT}/usr/local CONFDIR=USD{FAKEROOT}/etc/",
"cd ../kvc-simple-kmod",
"make install DESTDIR=USD{FAKEROOT}/usr/local CONFDIR=USD{FAKEROOT}/etc/",
"cd .. && rm -rf kmod-tree && cp -Lpr USD{FAKEROOT} kmod-tree",
"variant: openshift version: 4.15.0 metadata: name: 99-simple-kmod labels: machineconfiguration.openshift.io/role: worker 1 storage: trees: - local: kmod-tree systemd: units: - name: [email protected] enabled: true",
"butane 99-simple-kmod.bu --files-dir . -o 99-simple-kmod.yaml",
"oc create -f 99-simple-kmod.yaml",
"lsmod | grep simple_",
"simple_procfs_kmod 16384 0 simple_kmod 16384 0",
"variant: openshift version: 4.15.0 metadata: name: worker-storage labels: machineconfiguration.openshift.io/role: worker boot_device: layout: x86_64 1 luks: tpm2: true 2 tang: 3 - url: http://tang1.example.com:7500 thumbprint: jwGN5tRFK-kF6pIX89ssF3khxxX - url: http://tang2.example.com:7500 thumbprint: VCJsvZFjBSIHSldw78rOrq7h2ZF - url: http://tang3.example.com:7500 thumbprint: PLjNyRdGw03zlRoGjQYMahSZGu9 advertisement: \"{\\\"payload\\\": \\\"...\\\", \\\"protected\\\": \\\"...\\\", \\\"signature\\\": \\\"...\\\"}\" 4 threshold: 2 5 openshift: fips: true",
"sudo yum install clevis",
"clevis-encrypt-tang '{\"url\":\"http://tang1.example.com:7500\"}' < /dev/null > /dev/null 1",
"The advertisement contains the following signing keys: PLjNyRdGw03zlRoGjQYMahSZGu9 1",
"curl -f http://tang2.example.com:7500/adv > adv.jws && cat adv.jws",
"{\"payload\": \"eyJrZXlzIjogW3siYWxnIjogIkV\", \"protected\": \"eyJhbGciOiJFUzUxMiIsImN0eSI\", \"signature\": \"ADLgk7fZdE3Yt4FyYsm0pHiau7Q\"}",
"clevis-encrypt-tang '{\"url\":\"http://tang2.example.com:7500\",\"adv\":\"adv.jws\"}' < /dev/null > /dev/null",
"./openshift-install create manifests --dir <installation_directory> 1",
"variant: openshift version: 4.15.0 metadata: name: worker-storage 1 labels: machineconfiguration.openshift.io/role: worker 2 boot_device: layout: x86_64 3 luks: 4 tpm2: true 5 tang: 6 - url: http://tang1.example.com:7500 7 thumbprint: PLjNyRdGw03zlRoGjQYMahSZGu9 8 - url: http://tang2.example.com:7500 thumbprint: VCJsvZFjBSIHSldw78rOrq7h2ZF advertisement: \"{\"payload\": \"eyJrZXlzIjogW3siYWxnIjogIkV\", \"protected\": \"eyJhbGciOiJFUzUxMiIsImN0eSI\", \"signature\": \"ADLgk7fZdE3Yt4FyYsm0pHiau7Q\"}\" 9 threshold: 1 10 mirror: 11 devices: 12 - /dev/sda - /dev/sdb openshift: fips: true 13",
"butane USDHOME/clusterconfig/worker-storage.bu -o <installation_directory>/openshift/99-worker-storage.yaml",
"oc debug node/compute-1",
"chroot /host",
"cryptsetup status root",
"/dev/mapper/root is active and is in use. type: LUKS2 1 cipher: aes-xts-plain64 2 keysize: 512 bits key location: keyring device: /dev/sda4 3 sector size: 512 offset: 32768 sectors size: 15683456 sectors mode: read/write",
"clevis luks list -d /dev/sda4 1",
"1: sss '{\"t\":1,\"pins\":{\"tang\":[{\"url\":\"http://tang.example.com:7500\"}]}}' 1",
"cat /proc/mdstat",
"Personalities : [raid1] md126 : active raid1 sdb3[1] sda3[0] 1 393152 blocks super 1.0 [2/2] [UU] md127 : active raid1 sda4[0] sdb4[1] 2 51869632 blocks super 1.2 [2/2] [UU] unused devices: <none>",
"mdadm --detail /dev/md126",
"/dev/md126: Version : 1.0 Creation Time : Wed Jul 7 11:07:36 2021 Raid Level : raid1 1 Array Size : 393152 (383.94 MiB 402.59 MB) Used Dev Size : 393152 (383.94 MiB 402.59 MB) Raid Devices : 2 Total Devices : 2 Persistence : Superblock is persistent Update Time : Wed Jul 7 11:18:24 2021 State : clean 2 Active Devices : 2 3 Working Devices : 2 4 Failed Devices : 0 5 Spare Devices : 0 Consistency Policy : resync Name : any:md-boot 6 UUID : ccfa3801:c520e0b5:2bee2755:69043055 Events : 19 Number Major Minor RaidDevice State 0 252 3 0 active sync /dev/sda3 7 1 252 19 1 active sync /dev/sdb3 8",
"mount | grep /dev/md",
"/dev/md127 on / type xfs (rw,relatime,seclabel,attr2,inode64,logbufs=8,logbsize=32k,prjquota) /dev/md127 on /etc type xfs (rw,relatime,seclabel,attr2,inode64,logbufs=8,logbsize=32k,prjquota) /dev/md127 on /usr type xfs (ro,relatime,seclabel,attr2,inode64,logbufs=8,logbsize=32k,prjquota) /dev/md127 on /sysroot type xfs (ro,relatime,seclabel,attr2,inode64,logbufs=8,logbsize=32k,prjquota) /dev/md127 on /var type xfs (rw,relatime,seclabel,attr2,inode64,logbufs=8,logbsize=32k,prjquota) /dev/md127 on /var/lib/containers/storage/overlay type xfs (rw,relatime,seclabel,attr2,inode64,logbufs=8,logbsize=32k,prjquota) /dev/md127 on /var/lib/kubelet/pods/e5054ed5-f882-4d14-b599-99c050d4e0c0/volume-subpaths/etc/tuned/1 type xfs (rw,relatime,seclabel,attr2,inode64,logbufs=8,logbsize=32k,prjquota) /dev/md127 on /var/lib/kubelet/pods/e5054ed5-f882-4d14-b599-99c050d4e0c0/volume-subpaths/etc/tuned/2 type xfs (rw,relatime,seclabel,attr2,inode64,logbufs=8,logbsize=32k,prjquota) /dev/md127 on /var/lib/kubelet/pods/e5054ed5-f882-4d14-b599-99c050d4e0c0/volume-subpaths/etc/tuned/3 type xfs (rw,relatime,seclabel,attr2,inode64,logbufs=8,logbsize=32k,prjquota) /dev/md127 on /var/lib/kubelet/pods/e5054ed5-f882-4d14-b599-99c050d4e0c0/volume-subpaths/etc/tuned/4 type xfs (rw,relatime,seclabel,attr2,inode64,logbufs=8,logbsize=32k,prjquota) /dev/md127 on /var/lib/kubelet/pods/e5054ed5-f882-4d14-b599-99c050d4e0c0/volume-subpaths/etc/tuned/5 type xfs (rw,relatime,seclabel,attr2,inode64,logbufs=8,logbsize=32k,prjquota) /dev/md126 on /boot type ext4 (rw,relatime,seclabel)",
"variant: openshift version: 4.15.0 metadata: name: raid1-storage labels: machineconfiguration.openshift.io/role: worker boot_device: mirror: devices: - /dev/disk/by-id/scsi-3600508b400105e210000900000490000 - /dev/disk/by-id/scsi-SSEAGATE_ST373453LW_3HW1RHM6 storage: disks: - device: /dev/disk/by-id/scsi-3600508b400105e210000900000490000 partitions: - label: root-1 size_mib: 25000 1 - label: var-1 - device: /dev/disk/by-id/scsi-SSEAGATE_ST373453LW_3HW1RHM6 partitions: - label: root-2 size_mib: 25000 2 - label: var-2 raid: - name: md-var level: raid1 devices: - /dev/disk/by-partlabel/var-1 - /dev/disk/by-partlabel/var-2 filesystems: - device: /dev/md/md-var path: /var format: xfs wipe_filesystem: true with_mount_unit: true",
"variant: openshift version: 4.15.0 metadata: name: raid1-alt-storage labels: machineconfiguration.openshift.io/role: worker storage: disks: - device: /dev/sdc wipe_table: true partitions: - label: data-1 - device: /dev/sdd wipe_table: true partitions: - label: data-2 raid: - name: md-var-lib-containers level: raid1 devices: - /dev/disk/by-partlabel/data-1 - /dev/disk/by-partlabel/data-2 filesystems: - device: /dev/md/md-var-lib-containers path: /var/lib/containers format: xfs wipe_filesystem: true with_mount_unit: true",
"butane USDHOME/clusterconfig/<butane_config>.bu -o <installation_directory>/openshift/<manifest_name>.yaml 1",
"mdadm -CR /dev/md/imsm0 -e imsm -n2 /dev/nvme0n1 /dev/nvme1n1 1",
"mdadm -CR /dev/md/dummy -l0 -n2 /dev/md/imsm0 -z10M --assume-clean",
"mdadm -CR /dev/md/coreos -l1 -n2 /dev/md/imsm0",
"mdadm -S /dev/md/dummy mdadm -S /dev/md/coreos mdadm --kill-subarray=0 /dev/md/imsm0",
"mdadm -A /dev/md/coreos /dev/md/imsm0",
"mdadm --detail --export /dev/md/imsm0",
"coreos-installer install /dev/md/coreos --append-karg rd.md.uuid=<md_UUID> 1",
"variant: openshift version: 4.15.0 metadata: name: 99-worker-chrony 1 labels: machineconfiguration.openshift.io/role: worker 2 storage: files: - path: /etc/chrony.conf mode: 0644 3 overwrite: true contents: inline: | pool 0.rhel.pool.ntp.org iburst 4 driftfile /var/lib/chrony/drift makestep 1.0 3 rtcsync logdir /var/log/chrony",
"butane 99-worker-chrony.bu -o 99-worker-chrony.yaml",
"oc apply -f ./99-worker-chrony.yaml"
]
| https://docs.redhat.com/en/documentation/openshift_container_platform/4.15/html/installation_configuration/installing-customizing |
Chapter 23. Multiple networks | Chapter 23. Multiple networks 23.1. Understanding multiple networks In Kubernetes, container networking is delegated to networking plugins that implement the Container Network Interface (CNI). OpenShift Container Platform uses the Multus CNI plugin to allow chaining of CNI plugins. During cluster installation, you configure your default pod network. The default network handles all ordinary network traffic for the cluster. You can define an additional network based on the available CNI plugins and attach one or more of these networks to your pods. You can define more than one additional network for your cluster, depending on your needs. This gives you flexibility when you configure pods that deliver network functionality, such as switching or routing. 23.1.1. Usage scenarios for an additional network You can use an additional network in situations where network isolation is needed, including data plane and control plane separation. Isolating network traffic is useful for the following performance and security reasons: Performance You can send traffic on two different planes to manage how much traffic is along each plane. Security You can send sensitive traffic onto a network plane that is managed specifically for security considerations, and you can separate private data that must not be shared between tenants or customers. All of the pods in the cluster still use the cluster-wide default network to maintain connectivity across the cluster. Every pod has an eth0 interface that is attached to the cluster-wide pod network. You can view the interfaces for a pod by using the oc exec -it <pod_name> -- ip a command. If you add additional network interfaces that use Multus CNI, they are named net1 , net2 , ... , netN . To attach additional network interfaces to a pod, you must create configurations that define how the interfaces are attached. You specify each interface by using a NetworkAttachmentDefinition custom resource (CR). A CNI configuration inside each of these CRs defines how that interface is created. 23.1.2. Additional networks in OpenShift Container Platform OpenShift Container Platform provides the following CNI plugins for creating additional networks in your cluster: bridge : Configure a bridge-based additional network to allow pods on the same host to communicate with each other and the host. host-device : Configure a host-device additional network to allow pods access to a physical Ethernet network device on the host system. ipvlan : Configure an ipvlan-based additional network to allow pods on a host to communicate with other hosts and pods on those hosts, similar to a macvlan-based additional network. Unlike a macvlan-based additional network, each pod shares the same MAC address as the parent physical network interface. vlan : Configure a vlan-based additional network to allow VLAN-based network isolation and connectivity for pods. macvlan : Configure a macvlan-based additional network to allow pods on a host to communicate with other hosts and pods on those hosts by using a physical network interface. Each pod that is attached to a macvlan-based additional network is provided a unique MAC address. SR-IOV : Configure an SR-IOV based additional network to allow pods to attach to a virtual function (VF) interface on SR-IOV capable hardware on the host system. 23.2. Configuring an additional network As a cluster administrator, you can configure an additional network for your cluster. 
The following network types are supported: Bridge Host device VLAN IPVLAN MACVLAN OVN-Kubernetes 23.2.1. Approaches to managing an additional network You can manage the lifecycle of an additional network in OpenShift Container Platform by using one of two approaches: modifying the Cluster Network Operator (CNO) configuration or applying a YAML manifest. Each approach is mutually exclusive and you can only use one approach for managing an additional network at a time. For either approach, the additional network is managed by a Container Network Interface (CNI) plugin that you configure. The two different approaches are summarized here: Modifying the Cluster Network Operator (CNO) configuration: Configuring additional networks through CNO is only possible for cluster administrators. The CNO automatically creates and manages the NetworkAttachmentDefinition object. By using this approach, you can define NetworkAttachmentDefinition objects at install time through configuration of the install-config . Applying a YAML manifest: You can manage the additional network directly by creating an NetworkAttachmentDefinition object. Compared to modifying the CNO configuration, this approach gives you more granular control and flexibility when it comes to configuration. Note When deploying OpenShift Container Platform nodes with multiple network interfaces on Red Hat OpenStack Platform (RHOSP) with OVN Kubernetes, DNS configuration of the secondary interface might take precedence over the DNS configuration of the primary interface. In this case, remove the DNS nameservers for the subnet ID that is attached to the secondary interface: USD openstack subnet set --dns-nameserver 0.0.0.0 <subnet_id> 23.2.2. IP address assignment for additional networks For additional networks, IP addresses can be assigned using an IP Address Management (IPAM) CNI plugin, which supports various assignment methods, including Dynamic Host Configuration Protocol (DHCP) and static assignment. The DHCP IPAM CNI plugin responsible for dynamic assignment of IP addresses operates with two distinct components: CNI Plugin : Responsible for integrating with the Kubernetes networking stack to request and release IP addresses. DHCP IPAM CNI Daemon : A listener for DHCP events that coordinates with existing DHCP servers in the environment to handle IP address assignment requests. This daemon is not a DHCP server itself. For networks requiring type: dhcp in their IPAM configuration, ensure the following: A DHCP server is available and running in the environment. The DHCP server is external to the cluster and is expected to be part of the customer's existing network infrastructure. The DHCP server is appropriately configured to serve IP addresses to the nodes. In cases where a DHCP server is unavailable in the environment, it is recommended to use the Whereabouts IPAM CNI plugin instead. The Whereabouts CNI provides similar IP address management capabilities without the need for an external DHCP server. Note Use the Whereabouts CNI plugin when there is no external DHCP server or where static IP address management is preferred. The Whereabouts plugin includes a reconciler daemon to manage stale IP address allocations. A DHCP lease must be periodically renewed throughout the container's lifetime, so a separate daemon, the DHCP IPAM CNI Daemon, is required. To deploy the DHCP IPAM CNI daemon, modify the Cluster Network Operator (CNO) configuration to trigger the deployment of this daemon as part of the additional network setup. 
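To make the distinction concrete, the ipam stanza inside a CNI plugin configuration is where you select one approach or the other. Both fragments below are minimal sketches rather than complete attachment definitions, and the 192.0.2.0/24 range is only an example value. An ipam stanza that delegates address assignment to an external DHCP server:
"ipam": {
  "type": "dhcp"
}
An ipam stanza that uses the Whereabouts plugin to assign addresses from a cluster-managed range:
"ipam": {
  "type": "whereabouts",
  "range": "192.0.2.0/24"
}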
Additional resources Dynamic IP address (DHCP) assignment configuration Dynamic IP address assignment configuration with Whereabouts 23.2.3. Configuration for an additional network attachment An additional network is configured by using the NetworkAttachmentDefinition API in the k8s.cni.cncf.io API group. Important Do not store any sensitive information or a secret in the NetworkAttachmentDefinition object because this information is accessible by the project administration user. The configuration for the API is described in the following table: Table 23.1. NetworkAttachmentDefinition API fields Field Type Description metadata.name string The name for the additional network. metadata.namespace string The namespace that the object is associated with. spec.config string The CNI plugin configuration in JSON format. 23.2.3.1. Configuration of an additional network through the Cluster Network Operator The configuration for an additional network attachment is specified as part of the Cluster Network Operator (CNO) configuration. The following YAML describes the configuration parameters for managing an additional network with the CNO: Cluster Network Operator configuration apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: # ... additionalNetworks: 1 - name: <name> 2 namespace: <namespace> 3 rawCNIConfig: |- 4 { ... } type: Raw 1 An array of one or more additional network configurations. 2 The name for the additional network attachment that you are creating. The name must be unique within the specified namespace . 3 The namespace to create the network attachment in. If you do not specify a value then the default namespace is used. Important To prevent namespace issues for the OVN-Kubernetes network plugin, do not name your additional network attachment default , because this namespace is reserved for the default additional network attachment. 4 A CNI plugin configuration in JSON format. 23.2.3.2. Configuration of an additional network from a YAML manifest The configuration for an additional network is specified from a YAML configuration file, such as in the following example: apiVersion: k8s.cni.cncf.io/v1 kind: NetworkAttachmentDefinition metadata: name: <name> 1 spec: config: |- 2 { ... } 1 The name for the additional network attachment that you are creating. 2 A CNI plugin configuration in JSON format. 23.2.4. Configurations for additional network types The specific configuration fields for additional networks is described in the following sections. 23.2.4.1. Configuration for a bridge additional network The following object describes the configuration parameters for the bridge CNI plugin: Table 23.2. Bridge CNI plugin JSON configuration object Field Type Description cniVersion string The CNI specification version. The 0.3.1 value is required. name string The value for the name parameter you provided previously for the CNO configuration. type string The name of the CNI plugin to configure: bridge . ipam object The configuration object for the IPAM CNI plugin. The plugin manages IP address assignment for the attachment definition. bridge string Optional: Specify the name of the virtual bridge to use. If the bridge interface does not exist on the host, it is created. The default value is cni0 . ipMasq boolean Optional: Set to true to enable IP masquerading for traffic that leaves the virtual network. The source IP address for all traffic is rewritten to the bridge's IP address. If the bridge does not have an IP address, this setting has no effect. 
The default value is false . isGateway boolean Optional: Set to true to assign an IP address to the bridge. The default value is false . isDefaultGateway boolean Optional: Set to true to configure the bridge as the default gateway for the virtual network. The default value is false . If isDefaultGateway is set to true , then isGateway is also set to true automatically. forceAddress boolean Optional: Set to true to allow assignment of a previously assigned IP address to the virtual bridge. When set to false , if an IPv4 address or an IPv6 address from overlapping subsets is assigned to the virtual bridge, an error occurs. The default value is false . hairpinMode boolean Optional: Set to true to allow the virtual bridge to send an Ethernet frame back through the virtual port it was received on. This mode is also known as reflective relay . The default value is false . promiscMode boolean Optional: Set to true to enable promiscuous mode on the bridge. The default value is false . vlan string Optional: Specify a virtual LAN (VLAN) tag as an integer value. By default, no VLAN tag is assigned. preserveDefaultVlan string Optional: Indicates whether the default vlan must be preserved on the veth end connected to the bridge. Defaults to true. mtu integer Optional: Set the maximum transmission unit (MTU) to the specified value. The default value is automatically set by the kernel. enabledad boolean Optional: Enables duplicate address detection for the container side veth . The default value is false . macspoofchk boolean Optional: Enables mac spoof check, limiting the traffic originating from the container to the mac address of the interface. The default value is false . Note The VLAN parameter configures the VLAN tag on the host end of the veth and also enables the vlan_filtering feature on the bridge interface. Note To configure uplink for a L2 network you need to allow the vlan on the uplink interface by using the following command: USD bridge vlan add vid VLAN_ID dev DEV 23.2.4.1.1. bridge configuration example The following example configures an additional network named bridge-net : { "cniVersion": "0.3.1", "name": "bridge-net", "type": "bridge", "isGateway": true, "vlan": 2, "ipam": { "type": "dhcp" } } 23.2.4.2. Configuration for a host device additional network Note Specify your network device by setting only one of the following parameters: device , hwaddr , kernelpath , or pciBusID . The following object describes the configuration parameters for the host-device CNI plugin: Table 23.3. Host device CNI plugin JSON configuration object Field Type Description cniVersion string The CNI specification version. The 0.3.1 value is required. name string The value for the name parameter you provided previously for the CNO configuration. type string The name of the CNI plugin to configure: host-device . device string Optional: The name of the device, such as eth0 . hwaddr string Optional: The device hardware MAC address. kernelpath string Optional: The Linux kernel device path, such as /sys/devices/pci0000:00/0000:00:1f.6 . pciBusID string Optional: The PCI address of the network device, such as 0000:00:1f.6 . 23.2.4.2.1. host-device configuration example The following example configures an additional network named hostdev-net : { "cniVersion": "0.3.1", "name": "hostdev-net", "type": "host-device", "device": "eth1" } 23.2.4.3. Configuration for an VLAN additional network The following object describes the configuration parameters for the VLAN CNI plugin: Table 23.4. 
VLAN CNI plugin JSON configuration object Field Type Description cniVersion string The CNI specification version. The 0.3.1 value is required. name string The value for the name parameter you provided previously for the CNO configuration. type string The name of the CNI plugin to configure: vlan . master string The Ethernet interface to associate with the network attachment. If a master is not specified, the interface for the default network route is used. vlanId integer Set the id of the vlan. ipam object The configuration object for the IPAM CNI plugin. The plugin manages IP address assignment for the attachment definition. mtu integer Optional: Set the maximum transmission unit (MTU) to the specified value. The default value is automatically set by the kernel. dns integer Optional: DNS information to return, for example, a priority-ordered list of DNS nameservers. linkInContainer boolean Optional: Specifies if the master interface is in the container network namespace or the main network namespace. 23.2.4.3.1. vlan configuration example The following example configures an additional network named vlan-net : { "name": "vlan-net", "cniVersion": "0.3.1", "type": "vlan", "master": "eth0", "mtu": 1500, "vlanId": 5, "linkInContainer": false, "ipam": { "type": "host-local", "subnet": "10.1.1.0/24" }, "dns": { "nameservers": [ "10.1.1.1", "8.8.8.8" ] } } 23.2.4.4. Configuration for an IPVLAN additional network The following object describes the configuration parameters for the IPVLAN CNI plugin: Table 23.5. IPVLAN CNI plugin JSON configuration object Field Type Description cniVersion string The CNI specification version. The 0.3.1 value is required. name string The value for the name parameter you provided previously for the CNO configuration. type string The name of the CNI plugin to configure: ipvlan . ipam object The configuration object for the IPAM CNI plugin. The plugin manages IP address assignment for the attachment definition. This is required unless the plugin is chained. mode string Optional: The operating mode for the virtual network. The value must be l2 , l3 , or l3s . The default value is l2 . master string Optional: The Ethernet interface to associate with the network attachment. If a master is not specified, the interface for the default network route is used. mtu integer Optional: Set the maximum transmission unit (MTU) to the specified value. The default value is automatically set by the kernel. Note The ipvlan object does not allow virtual interfaces to communicate with the master interface. Therefore the container will not be able to reach the host by using the ipvlan interface. Be sure that the container joins a network that provides connectivity to the host, such as a network supporting the Precision Time Protocol ( PTP ). A single master interface cannot simultaneously be configured to use both macvlan and ipvlan . For IP allocation schemes that cannot be interface agnostic, the ipvlan plugin can be chained with an earlier plugin that handles this logic. If the master is omitted, then the result must contain a single interface name for the ipvlan plugin to enslave. If ipam is omitted, then the result is used to configure the ipvlan interface. 23.2.4.4.1. ipvlan configuration example The following example configures an additional network named ipvlan-net : { "cniVersion": "0.3.1", "name": "ipvlan-net", "type": "ipvlan", "master": "eth1", "mode": "l3", "ipam": { "type": "static", "addresses": [ { "address": "192.168.10.10/24" } ] } } 23.2.4.5. 
Configuration for a MACVLAN additional network The following object describes the configuration parameters for the MAC Virtual LAN (MACVLAN) Container Network Interface (CNI) plugin: Table 23.6. MACVLAN CNI plugin JSON configuration object Field Type Description cniVersion string The CNI specification version. The 0.3.1 value is required. name string The value for the name parameter you provided previously for the CNO configuration. type string The name of the CNI plugin to configure: macvlan . ipam object The configuration object for the IPAM CNI plugin. The plugin manages IP address assignment for the attachment definition. mode string Optional: Configures traffic visibility on the virtual network. Must be either bridge , passthru , private , or vepa . If a value is not provided, the default value is bridge . master string Optional: The host network interface to associate with the newly created macvlan interface. If a value is not specified, then the default route interface is used. mtu integer Optional: The maximum transmission unit (MTU) to the specified value. The default value is automatically set by the kernel. Note If you specify the master key for the plugin configuration, use a different physical network interface than the one that is associated with your primary network plugin to avoid possible conflicts. 23.2.4.5.1. MACVLAN configuration example The following example configures an additional network named macvlan-net : { "cniVersion": "0.3.1", "name": "macvlan-net", "type": "macvlan", "master": "eth1", "mode": "bridge", "ipam": { "type": "dhcp" } } 23.2.4.6. Configuration for an OVN-Kubernetes additional network The Red Hat OpenShift Networking OVN-Kubernetes network plugin allows the configuration of secondary network interfaces for pods. To configure secondary network interfaces, you must define the configurations in the NetworkAttachmentDefinition custom resource definition (CRD). Important Configuration for an OVN-Kubernetes additional network is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . Note Pod and multi-network policy creation might remain in a pending state until the OVN-Kubernetes control plane agent in the nodes processes the associated network-attachment-definition CR. The following sections provide example configurations for each of the topologies that OVN-Kubernetes currently allows for secondary networks. Note Networks names must be unique. For example, creating multiple NetworkAttachmentDefinition CRDs with different configurations that reference the same network is unsupported. 23.2.4.6.1. OVN-Kubernetes network plugin JSON configuration table The following table describes the configuration parameters for the OVN-Kubernetes CNI network plugin: Table 23.7. OVN-Kubernetes network plugin JSON configuration table Field Type Description cniVersion string The CNI specification version. The required value is 0.3.1 . name string The name of the network. These networks are not namespaced. 
For example, you can have a network named l2-network referenced from two different NetworkAttachmentDefinitions that exist on two different namespaces. This ensures that pods making use of the NetworkAttachmentDefinition on their own different namespaces can communicate over the same secondary network. However, those two different NetworkAttachmentDefinitions must also share the same network specific parameters such as topology , subnets , mtu , and excludeSubnets . type string The name of the CNI plugin to configure. The required value is ovn-k8s-cni-overlay . topology string The topological configuration for the network. The required value is layer2 . subnets string The subnet to use for the network across the cluster. For "topology":"layer2" deployments, IPv6 ( 2001:DBB::/64 ) and dual-stack ( 192.168.100.0/24,2001:DBB::/64 ) subnets are supported. mtu string The maximum transmission unit (MTU) to the specified value. The default value, 1300 , is automatically set by the kernel. netAttachDefName string The metadata namespace and name of the network attachment definition object where this configuration is included. For example, if this configuration is defined in a NetworkAttachmentDefinition in namespace ns1 named l2-network , this should be set to ns1/l2-network . excludeSubnets string A comma-separated list of CIDRs and IPs. IPs are removed from the assignable IP pool, and are never passed to the pods. When omitted, the logical switch implementing the network only provides layer 2 communication, and users must configure IPs for the pods. Port security only prevents MAC spoofing. 23.2.4.6.2. Configuration for a switched topology The switched (layer 2) topology networks interconnect the workloads through a cluster-wide logical switch. This configuration can be used for IPv6 and dual-stack deployments. Note Layer 2 switched topology networks only allow for the transfer of data packets between pods within a cluster. The following NetworkAttachmentDefinition custom resource definition (CRD) YAML describes the fields needed to configure a switched secondary network. { "cniVersion": "0.3.1", "name": "l2-network", "type": "ovn-k8s-cni-overlay", "topology":"layer2", "subnets": "10.100.200.0/24", "mtu": 1300, "netAttachDefName": "ns1/l2-network", "excludeSubnets": "10.100.200.0/29" } 23.2.4.6.3. Configuring pods for additional networks You must specify the secondary network attachments through the k8s.v1.cni.cncf.io/networks annotation. The following example provisions a pod with two secondary attachments, one for each of the attachment configurations presented in this guide. apiVersion: v1 kind: Pod metadata: annotations: k8s.v1.cni.cncf.io/networks: l2-network name: tinypod namespace: ns1 spec: containers: - args: - pause image: k8s.gcr.io/e2e-test-images/agnhost:2.36 imagePullPolicy: IfNotPresent name: agnhost-container 23.2.4.6.4. Configuring pods with a static IP address The following example provisions a pod with a static IP address. Note You can only specify the IP address for a pod's secondary network attachment for layer 2 attachments. Specifying a static IP address for the pod is only possible when the attachment configuration does not feature subnets. 
apiVersion: v1 kind: Pod metadata: annotations: k8s.v1.cni.cncf.io/networks: '[ { "name": "l2-network", 1 "mac": "02:03:04:05:06:07", 2 "interface": "myiface1", 3 "ips": [ "192.0.2.20/24" ] 4 } ]' name: tinypod namespace: ns1 spec: containers: - args: - pause image: k8s.gcr.io/e2e-test-images/agnhost:2.36 imagePullPolicy: IfNotPresent name: agnhost-container 1 The name of the network. This value must be unique across all NetworkAttachmentDefinitions . 2 The MAC address to be assigned for the interface. 3 The name of the network interface to be created for the pod. 4 The IP addresses to be assigned to the network interface. 23.2.5. Configuration of IP address assignment for an additional network The IP address management (IPAM) Container Network Interface (CNI) plugin provides IP addresses for other CNI plugins. You can use the following IP address assignment types: Static assignment. Dynamic assignment through a DHCP server. The DHCP server you specify must be reachable from the additional network. Dynamic assignment through the Whereabouts IPAM CNI plugin. 23.2.5.1. Static IP address assignment configuration The following table describes the configuration for static IP address assignment: Table 23.8. ipam static configuration object Field Type Description type string The IPAM address type. The value static is required. addresses array An array of objects specifying IP addresses to assign to the virtual interface. Both IPv4 and IPv6 IP addresses are supported. routes array An array of objects specifying routes to configure inside the pod. dns array Optional: An array of objects specifying the DNS configuration. The addresses array requires objects with the following fields: Table 23.9. ipam.addresses[] array Field Type Description address string An IP address and network prefix that you specify. For example, if you specify 10.10.21.10/24 , then the additional network is assigned an IP address of 10.10.21.10 and the netmask is 255.255.255.0 . gateway string The default gateway to route egress network traffic to. Table 23.10. ipam.routes[] array Field Type Description dst string The IP address range in CIDR format, such as 192.168.17.0/24 or 0.0.0.0/0 for the default route. gw string The gateway where network traffic is routed. Table 23.11. ipam.dns object Field Type Description nameservers array An array of one or more IP addresses for to send DNS queries to. domain array The default domain to append to a hostname. For example, if the domain is set to example.com , a DNS lookup query for example-host is rewritten as example-host.example.com . search array An array of domain names to append to an unqualified hostname, such as example-host , during a DNS lookup query. Static IP address assignment configuration example { "ipam": { "type": "static", "addresses": [ { "address": "191.168.1.7/24" } ] } } 23.2.5.2. Dynamic IP address (DHCP) assignment configuration The following JSON describes the configuration for dynamic IP address address assignment with DHCP. Renewal of DHCP leases A pod obtains its original DHCP lease when it is created. The lease must be periodically renewed by a minimal DHCP server deployment running on the cluster. 
To trigger the deployment of the DHCP server, you must create a shim network attachment by editing the Cluster Network Operator configuration, as in the following example: Example shim network attachment definition apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: additionalNetworks: - name: dhcp-shim namespace: default type: Raw rawCNIConfig: |- { "name": "dhcp-shim", "cniVersion": "0.3.1", "type": "bridge", "ipam": { "type": "dhcp" } } # ... Table 23.12. ipam DHCP configuration object Field Type Description type string The IPAM address type. The value dhcp is required. Dynamic IP address (DHCP) assignment configuration example { "ipam": { "type": "dhcp" } } 23.2.5.3. Dynamic IP address assignment configuration with Whereabouts The Whereabouts CNI plugin allows the dynamic assignment of an IP address to an additional network without the use of a DHCP server. The following table describes the configuration for dynamic IP address assignment with Whereabouts: Table 23.13. ipam whereabouts configuration object Field Type Description type string The IPAM address type. The value whereabouts is required. range string An IP address and range in CIDR notation. IP addresses are assigned from within this range of addresses. exclude array Optional: A list of zero or more IP addresses and ranges in CIDR notation. IP addresses within an excluded address range are not assigned. Dynamic IP address assignment configuration example that uses Whereabouts { "ipam": { "type": "whereabouts", "range": "192.0.2.192/27", "exclude": [ "192.0.2.192/30", "192.0.2.196/32" ] } } 23.2.5.4. Creating a whereabouts-reconciler daemon set The Whereabouts reconciler is responsible for managing dynamic IP address assignments for the pods within a cluster by using the Whereabouts IP Address Management (IPAM) solution. It ensures that each pod gets a unique IP address from the specified IP address range. It also handles IP address releases when pods are deleted or scaled down. Note You can also use a NetworkAttachmentDefinition custom resource (CR) for dynamic IP address assignment. The whereabouts-reconciler daemon set is automatically created when you configure an additional network through the Cluster Network Operator. It is not automatically created when you configure an additional network from a YAML manifest. To trigger the deployment of the whereabouts-reconciler daemon set, you must manually create a whereabouts-shim network attachment by editing the Cluster Network Operator custom resource (CR) file. Use the following procedure to deploy the whereabouts-reconciler daemon set. Procedure Edit the Network.operator.openshift.io custom resource (CR) by running the following command: USD oc edit network.operator.openshift.io cluster Include the additionalNetworks section shown in this example YAML extract within the spec definition of the custom resource (CR): apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster # ... spec: additionalNetworks: - name: whereabouts-shim namespace: default rawCNIConfig: |- { "name": "whereabouts-shim", "cniVersion": "0.3.1", "type": "bridge", "ipam": { "type": "whereabouts" } } type: Raw # ... Save the file and exit the text editor. 
Verify that the whereabouts-reconciler daemon set deployed successfully by running the following command: USD oc get all -n openshift-multus | grep whereabouts-reconciler Example output pod/whereabouts-reconciler-jnp6g 1/1 Running 0 6s pod/whereabouts-reconciler-k76gg 1/1 Running 0 6s pod/whereabouts-reconciler-k86t9 1/1 Running 0 6s pod/whereabouts-reconciler-p4sxw 1/1 Running 0 6s pod/whereabouts-reconciler-rvfdv 1/1 Running 0 6s pod/whereabouts-reconciler-svzw9 1/1 Running 0 6s daemonset.apps/whereabouts-reconciler 6 6 6 6 6 kubernetes.io/os=linux 6s 23.2.5.5. Configuring the Whereabouts IP reconciler schedule The Whereabouts IPAM CNI plugin runs the IP reconciler daily. This process cleans up any stranded IP allocations that might result in exhausting IPs and therefore prevent new pods from getting an IP allocated to them. Use this procedure to change the frequency at which the IP reconciler runs. Prerequisites You installed the OpenShift CLI ( oc ). You have access to the cluster as a user with the cluster-admin role. You have deployed the whereabouts-reconciler daemon set, and the whereabouts-reconciler pods are up and running. Procedure Run the following command to create a ConfigMap object named whereabouts-config in the openshift-multus namespace with a specific cron expression for the IP reconciler: USD oc create configmap whereabouts-config -n openshift-multus --from-literal=reconciler_cron_expression="*/15 * * * *" This cron expression indicates the IP reconciler runs every 15 minutes. Adjust the expression based on your specific requirements. Note The whereabouts-reconciler daemon set can only consume a cron expression pattern that includes five asterisks. The sixth, which is used to denote seconds, is currently not supported. Retrieve information about resources related to the whereabouts-reconciler daemon set and pods within the openshift-multus namespace by running the following command: USD oc get all -n openshift-multus | grep whereabouts-reconciler Example output pod/whereabouts-reconciler-2p7hw 1/1 Running 0 4m14s pod/whereabouts-reconciler-76jk7 1/1 Running 0 4m14s pod/whereabouts-reconciler-94zw6 1/1 Running 0 4m14s pod/whereabouts-reconciler-mfh68 1/1 Running 0 4m14s pod/whereabouts-reconciler-pgshz 1/1 Running 0 4m14s pod/whereabouts-reconciler-xn5xz 1/1 Running 0 4m14s daemonset.apps/whereabouts-reconciler 6 6 6 6 6 kubernetes.io/os=linux 4m16s Run the following command to verify that the whereabouts-reconciler pod runs the IP reconciler with the configured interval: USD oc -n openshift-multus logs whereabouts-reconciler-2p7hw Example output 2024-02-02T16:33:54Z [debug] event not relevant: "/cron-schedule/..2024_02_02_16_33_54.1375928161": CREATE 2024-02-02T16:33:54Z [debug] event not relevant: "/cron-schedule/..2024_02_02_16_33_54.1375928161": CHMOD 2024-02-02T16:33:54Z [debug] event not relevant: "/cron-schedule/..data_tmp": RENAME 2024-02-02T16:33:54Z [verbose] using expression: */15 * * * * 2024-02-02T16:33:54Z [verbose] configuration updated to file "/cron-schedule/..data". 
New cron expression: */15 * * * * 2024-02-02T16:33:54Z [verbose] successfully updated CRON configuration id "00c2d1c9-631d-403f-bb86-73ad104a6817" - new cron expression: */15 * * * * 2024-02-02T16:33:54Z [debug] event not relevant: "/cron-schedule/config": CREATE 2024-02-02T16:33:54Z [debug] event not relevant: "/cron-schedule/..2024_02_02_16_26_17.3874177937": REMOVE 2024-02-02T16:45:00Z [verbose] starting reconciler run 2024-02-02T16:45:00Z [debug] NewReconcileLooper - inferred connection data 2024-02-02T16:45:00Z [debug] listing IP pools 2024-02-02T16:45:00Z [debug] no IP addresses to cleanup 2024-02-02T16:45:00Z [verbose] reconciler success 23.2.6. Creating an additional network attachment with the Cluster Network Operator The Cluster Network Operator (CNO) manages additional network definitions. When you specify an additional network to create, the CNO creates the NetworkAttachmentDefinition object automatically. Important Do not edit the NetworkAttachmentDefinition objects that the Cluster Network Operator manages. Doing so might disrupt network traffic on your additional network. Prerequisites Install the OpenShift CLI ( oc ). Log in as a user with cluster-admin privileges. Procedure Optional: Create the namespace for the additional networks: USD oc create namespace <namespace_name> To edit the CNO configuration, enter the following command: USD oc edit networks.operator.openshift.io cluster Modify the CR that you are creating by adding the configuration for the additional network that you are creating, as in the following example CR. apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: # ... additionalNetworks: - name: tertiary-net namespace: namespace2 type: Raw rawCNIConfig: |- { "cniVersion": "0.3.1", "name": "tertiary-net", "type": "ipvlan", "master": "eth1", "mode": "l2", "ipam": { "type": "static", "addresses": [ { "address": "192.168.1.23/24" } ] } } Save your changes and quit the text editor to commit your changes. Verification Confirm that the CNO created the NetworkAttachmentDefinition object by running the following command. There might be a delay before the CNO creates the object. USD oc get network-attachment-definitions -n <namespace> where: <namespace> Specifies the namespace for the network attachment that you added to the CNO configuration. Example output NAME AGE test-network-1 14m 23.2.7. Creating an additional network attachment by applying a YAML manifest Prerequisites Install the OpenShift CLI ( oc ). Log in as a user with cluster-admin privileges. Procedure Create a YAML file with your additional network configuration, such as in the following example: apiVersion: k8s.cni.cncf.io/v1 kind: NetworkAttachmentDefinition metadata: name: -net spec: config: |- { "cniVersion": "0.3.1", "name": "work-network", "type": "host-device", "device": "eth1", "ipam": { "type": "dhcp" } } To create the additional network, enter the following command: USD oc apply -f <file>.yaml where: <file> Specifies the name of the file contained the YAML manifest. 23.3. About virtual routing and forwarding 23.3.1. About virtual routing and forwarding Virtual routing and forwarding (VRF) devices combined with IP rules provide the ability to create virtual routing and forwarding domains. VRF reduces the number of permissions needed by CNF, and provides increased visibility of the network topology of secondary networks. 
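To make the concept concrete, the following sketch shows roughly what the CNI VRF plugin sets up at the ip(8) level inside a network namespace. It is illustrative only: in OpenShift Container Platform the plugin performs these steps for you, and the device name vrf-blue, table ID 10, and interface net1 are arbitrary example values rather than names taken from this documentation.
ip link add vrf-blue type vrf table 10   # create a VRF device bound to its own routing table
ip link set dev vrf-blue up              # bring the VRF device up
ip link set dev net1 master vrf-blue     # enslave the secondary interface so its traffic uses table 10
ip route show vrf vrf-blue               # routes for the isolated domain live only in the VRF routing table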
VRF is used to provide multi-tenancy functionality, for example, where each tenant has its own unique routing tables and requires different default gateways. Processes can bind a socket to the VRF device. Packets through the bound socket use the routing table associated with the VRF device. An important feature of VRF is that it impacts only OSI model layer 3 traffic and above, so L2 tools, such as LLDP, are not affected. This allows higher priority IP rules such as policy-based routing to take precedence over the VRF device rules directing specific traffic. 23.3.1.1. Benefits of secondary networks for pods for telecommunications operators In telecommunications use cases, each CNF can potentially be connected to multiple different networks sharing the same address space. These secondary networks can potentially conflict with the cluster's main network CIDR. Using the CNI VRF plugin, network functions can be connected to different customers' infrastructure using the same IP address, keeping different customers isolated. IP addresses can overlap with the OpenShift Container Platform IP space. The CNI VRF plugin also reduces the number of permissions needed by CNF and increases the visibility of network topologies of secondary networks. 23.4. Configuring multi-network policy As a cluster administrator, you can configure multi-network policy for additional networks. You can specify multi-network policy for SR-IOV and macvlan additional networks. Macvlan additional networks are fully supported. Other types of additional networks, such as ipvlan, are not supported. Important Support for configuring multi-network policies for SR-IOV additional networks is a Technology Preview feature and is only supported with kernel network interface cards (NICs). SR-IOV is not supported for Data Plane Development Kit (DPDK) applications. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . Note Configured network policies are ignored in IPv6 networks. 23.4.1. Differences between multi-network policy and network policy Although the MultiNetworkPolicy API implements the NetworkPolicy API, there are several important differences: You must use the MultiNetworkPolicy API: apiVersion: k8s.cni.cncf.io/v1beta1 kind: MultiNetworkPolicy You must use the multi-networkpolicy resource name when using the CLI to interact with multi-network policies. For example, you can view a multi-network policy object with the oc get multi-networkpolicy <name> command where <name> is the name of a multi-network policy. You must specify an annotation with the name of the network attachment definition that defines the macvlan or SR-IOV additional network: apiVersion: k8s.cni.cncf.io/v1beta1 kind: MultiNetworkPolicy metadata: annotations: k8s.v1.cni.cncf.io/policy-for: <network_name> where: <network_name> Specifies the name of a network attachment definition. 23.4.2. Enabling multi-network policy for the cluster As a cluster administrator, you can enable multi-network policy support on your cluster. Prerequisites Install the OpenShift CLI ( oc ). Log in to the cluster with a user with cluster-admin privileges.
Procedure Create the multinetwork-enable-patch.yaml file with the following YAML: apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: useMultiNetworkPolicy: true Configure the cluster to enable multi-network policy: USD oc patch network.operator.openshift.io cluster --type=merge --patch-file=multinetwork-enable-patch.yaml Example output network.operator.openshift.io/cluster patched 23.4.3. Working with multi-network policy As a cluster administrator, you can create, edit, view, and delete multi-network policies. 23.4.3.1. Prerequisites You have enabled multi-network policy support for your cluster. 23.4.3.2. Creating a multi-network policy using the CLI To define granular rules describing ingress or egress network traffic allowed for namespaces in your cluster, you can create a multi-network policy. Prerequisites Your cluster uses a network plugin that supports NetworkPolicy objects, such as the OVN-Kubernetes network plugin or the OpenShift SDN network plugin with mode: NetworkPolicy set. This mode is the default for OpenShift SDN. You installed the OpenShift CLI ( oc ). You are logged in to the cluster with a user with cluster-admin privileges. You are working in the namespace that the multi-network policy applies to. Procedure Create a policy rule: Create a <policy_name>.yaml file: USD touch <policy_name>.yaml where: <policy_name> Specifies the multi-network policy file name. Define a multi-network policy in the file that you just created, such as in the following examples: Deny ingress from all pods in all namespaces This is a fundamental policy, blocking all cross-pod networking other than cross-pod traffic allowed by the configuration of other Network Policies. apiVersion: k8s.cni.cncf.io/v1beta1 kind: MultiNetworkPolicy metadata: name: deny-by-default annotations: k8s.v1.cni.cncf.io/policy-for: <network_name> spec: podSelector: ingress: [] where: <network_name> Specifies the name of a network attachment definition. Allow ingress from all pods in the same namespace apiVersion: k8s.cni.cncf.io/v1beta1 kind: MultiNetworkPolicy metadata: name: allow-same-namespace annotations: k8s.v1.cni.cncf.io/policy-for: <network_name> spec: podSelector: ingress: - from: - podSelector: {} where: <network_name> Specifies the name of a network attachment definition. Allow ingress traffic to one pod from a particular namespace This policy allows traffic to pods labelled pod-a from pods running in namespace-y . apiVersion: k8s.cni.cncf.io/v1beta1 kind: MultiNetworkPolicy metadata: name: allow-traffic-pod annotations: k8s.v1.cni.cncf.io/policy-for: <network_name> spec: podSelector: matchLabels: pod: pod-a policyTypes: - Ingress ingress: - from: - namespaceSelector: matchLabels: kubernetes.io/metadata.name: namespace-y where: <network_name> Specifies the name of a network attachment definition. Restrict traffic to a service This policy when applied ensures every pod with both labels app=bookstore and role=api can only be accessed by pods with label app=bookstore . In this example the application could be a REST API server, marked with labels app=bookstore and role=api . This example addresses the following use cases: Restricting the traffic to a service to only the other microservices that need to use it. Restricting the connections to a database to only permit the application using it. 
apiVersion: k8s.cni.cncf.io/v1beta1 kind: MultiNetworkPolicy metadata: name: api-allow annotations: k8s.v1.cni.cncf.io/policy-for: <network_name> spec: podSelector: matchLabels: app: bookstore role: api ingress: - from: - podSelector: matchLabels: app: bookstore where: <network_name> Specifies the name of a network attachment definition. To create the multi-network policy object, enter the following command: USD oc apply -f <policy_name>.yaml -n <namespace> where: <policy_name> Specifies the multi-network policy file name. <namespace> Optional: Specifies the namespace if the object is defined in a different namespace than the current namespace. Example output multinetworkpolicy.k8s.cni.cncf.io/deny-by-default created Note If you log in to the web console with cluster-admin privileges, you have a choice of creating a network policy in any namespace in the cluster directly in YAML or from a form in the web console. 23.4.3.3. Editing a multi-network policy You can edit a multi-network policy in a namespace. Prerequisites Your cluster uses a network plugin that supports NetworkPolicy objects, such as the OVN-Kubernetes network plugin or the OpenShift SDN network plugin with mode: NetworkPolicy set. This mode is the default for OpenShift SDN. You installed the OpenShift CLI ( oc ). You are logged in to the cluster with a user with cluster-admin privileges. You are working in the namespace where the multi-network policy exists. Procedure Optional: To list the multi-network policy objects in a namespace, enter the following command: USD oc get multi-networkpolicy where: <namespace> Optional: Specifies the namespace if the object is defined in a different namespace than the current namespace. Edit the multi-network policy object. If you saved the multi-network policy definition in a file, edit the file and make any necessary changes, and then enter the following command. USD oc apply -n <namespace> -f <policy_file>.yaml where: <namespace> Optional: Specifies the namespace if the object is defined in a different namespace than the current namespace. <policy_file> Specifies the name of the file containing the network policy. If you need to update the multi-network policy object directly, enter the following command: USD oc edit multi-networkpolicy <policy_name> -n <namespace> where: <policy_name> Specifies the name of the network policy. <namespace> Optional: Specifies the namespace if the object is defined in a different namespace than the current namespace. Confirm that the multi-network policy object is updated. USD oc describe multi-networkpolicy <policy_name> -n <namespace> where: <policy_name> Specifies the name of the multi-network policy. <namespace> Optional: Specifies the namespace if the object is defined in a different namespace than the current namespace. Note If you log in to the web console with cluster-admin privileges, you have a choice of editing a network policy in any namespace in the cluster directly in YAML or from the policy in the web console through the Actions menu. 23.4.3.4. Viewing multi-network policies using the CLI You can examine the multi-network policies in a namespace. Prerequisites You installed the OpenShift CLI ( oc ). You are logged in to the cluster with a user with cluster-admin privileges. You are working in the namespace where the multi-network policy exists. 
Procedure List multi-network policies in a namespace: To view multi-network policy objects defined in a namespace, enter the following command: USD oc get multi-networkpolicy Optional: To examine a specific multi-network policy, enter the following command: USD oc describe multi-networkpolicy <policy_name> -n <namespace> where: <policy_name> Specifies the name of the multi-network policy to inspect. <namespace> Optional: Specifies the namespace if the object is defined in a different namespace than the current namespace. Note If you log in to the web console with cluster-admin privileges, you have a choice of viewing a network policy in any namespace in the cluster directly in YAML or from a form in the web console. 23.4.3.5. Deleting a multi-network policy using the CLI You can delete a multi-network policy in a namespace. Prerequisites Your cluster uses a network plugin that supports NetworkPolicy objects, such as the OVN-Kubernetes network plugin or the OpenShift SDN network plugin with mode: NetworkPolicy set. This mode is the default for OpenShift SDN. You installed the OpenShift CLI ( oc ). You are logged in to the cluster with a user with cluster-admin privileges. You are working in the namespace where the multi-network policy exists. Procedure To delete a multi-network policy object, enter the following command: USD oc delete multi-networkpolicy <policy_name> -n <namespace> where: <policy_name> Specifies the name of the multi-network policy. <namespace> Optional: Specifies the namespace if the object is defined in a different namespace than the current namespace. Example output multinetworkpolicy.k8s.cni.cncf.io/default-deny deleted Note If you log in to the web console with cluster-admin privileges, you have a choice of deleting a network policy in any namespace in the cluster directly in YAML or from the policy in the web console through the Actions menu. 23.4.3.6. Creating a default deny all multi-network policy This is a fundamental policy, blocking all cross-pod networking other than network traffic allowed by the configuration of other deployed network policies. This procedure enforces a default deny-by-default policy. Note If you log in with a user with the cluster-admin role, then you can create a network policy in any namespace in the cluster. Prerequisites Your cluster uses a network plugin that supports NetworkPolicy objects, such as the OVN-Kubernetes network plugin or the OpenShift SDN network plugin with mode: NetworkPolicy set. This mode is the default for OpenShift SDN. You installed the OpenShift CLI ( oc ). You are logged in to the cluster with a user with cluster-admin privileges. You are working in the namespace that the multi-network policy applies to. Procedure Create the following YAML that defines a deny-by-default policy to deny ingress from all pods in all namespaces. Save the YAML in the deny-by-default.yaml file: apiVersion: k8s.cni.cncf.io/v1beta1 kind: MultiNetworkPolicy metadata: name: deny-by-default namespace: default 1 annotations: k8s.v1.cni.cncf.io/policy-for: <network_name> 2 spec: podSelector: {} 3 ingress: [] 4 1 namespace: default deploys this policy to the default namespace. 2 network_name : specifies the name of a network attachment definition. 3 podSelector: is empty, this means it matches all the pods. Therefore, the policy applies to all pods in the default namespace. 4 There are no ingress rules specified. This causes incoming traffic to be dropped to all pods. 
Apply the policy by entering the following command: USD oc apply -f deny-by-default.yaml Example output multinetworkpolicy.k8s.cni.cncf.io/deny-by-default created 23.4.3.7. Creating a multi-network policy to allow traffic from external clients With the deny-by-default policy in place you can proceed to configure a policy that allows traffic from external clients to a pod with the label app=web . Note If you log in with a user with the cluster-admin role, then you can create a network policy in any namespace in the cluster. Follow this procedure to configure a policy that allows external service from the public Internet directly or by using a Load Balancer to access the pod. Traffic is only allowed to a pod with the label app=web . Prerequisites Your cluster uses a network plugin that supports NetworkPolicy objects, such as the OVN-Kubernetes network plugin or the OpenShift SDN network plugin with mode: NetworkPolicy set. This mode is the default for OpenShift SDN. You installed the OpenShift CLI ( oc ). You are logged in to the cluster with a user with cluster-admin privileges. You are working in the namespace that the multi-network policy applies to. Procedure Create a policy that allows traffic from the public Internet directly or by using a load balancer to access the pod. Save the YAML in the web-allow-external.yaml file: apiVersion: k8s.cni.cncf.io/v1beta1 kind: MultiNetworkPolicy metadata: name: web-allow-external namespace: default annotations: k8s.v1.cni.cncf.io/policy-for: <network_name> spec: policyTypes: - Ingress podSelector: matchLabels: app: web ingress: - {} Apply the policy by entering the following command: USD oc apply -f web-allow-external.yaml Example output multinetworkpolicy.k8s.cni.cncf.io/web-allow-external created This policy allows traffic from all resources, including external traffic as illustrated in the following diagram: 23.4.3.8. Creating a multi-network policy allowing traffic to an application from all namespaces Note If you log in with a user with the cluster-admin role, then you can create a network policy in any namespace in the cluster. Follow this procedure to configure a policy that allows traffic from all pods in all namespaces to a particular application. Prerequisites Your cluster uses a network plugin that supports NetworkPolicy objects, such as the OVN-Kubernetes network plugin or the OpenShift SDN network plugin with mode: NetworkPolicy set. This mode is the default for OpenShift SDN. You installed the OpenShift CLI ( oc ). You are logged in to the cluster with a user with cluster-admin privileges. You are working in the namespace that the multi-network policy applies to. Procedure Create a policy that allows traffic from all pods in all namespaces to a particular application. Save the YAML in the web-allow-all-namespaces.yaml file: apiVersion: k8s.cni.cncf.io/v1beta1 kind: MultiNetworkPolicy metadata: name: web-allow-all-namespaces namespace: default annotations: k8s.v1.cni.cncf.io/policy-for: <network_name> spec: podSelector: matchLabels: app: web 1 policyTypes: - Ingress ingress: - from: - namespaceSelector: {} 2 1 Applies the policy only to app:web pods in default namespace. 2 Selects all pods in all namespaces. Note By default, if you omit specifying a namespaceSelector it does not select any namespaces, which means the policy allows traffic only from the namespace the network policy is deployed to. 
Apply the policy by entering the following command: USD oc apply -f web-allow-all-namespaces.yaml Example output multinetworkpolicy.k8s.cni.cncf.io/web-allow-all-namespaces created Verification Start a web service in the default namespace by entering the following command: USD oc run web --namespace=default --image=nginx --labels="app=web" --expose --port=80 Run the following command to deploy an alpine image in the secondary namespace and to start a shell: USD oc run test-USDRANDOM --namespace=secondary --rm -i -t --image=alpine -- sh Run the following command in the shell and observe that the request is allowed: # wget -qO- --timeout=2 http://web.default Expected output <!DOCTYPE html> <html> <head> <title>Welcome to nginx!</title> <style> html { color-scheme: light dark; } body { width: 35em; margin: 0 auto; font-family: Tahoma, Verdana, Arial, sans-serif; } </style> </head> <body> <h1>Welcome to nginx!</h1> <p>If you see this page, the nginx web server is successfully installed and working. Further configuration is required.</p> <p>For online documentation and support please refer to <a href="http://nginx.org/">nginx.org</a>.<br/> Commercial support is available at <a href="http://nginx.com/">nginx.com</a>.</p> <p><em>Thank you for using nginx.</em></p> </body> </html> 23.4.3.9. Creating a multi-network policy allowing traffic to an application from a namespace Note If you log in with a user with the cluster-admin role, then you can create a network policy in any namespace in the cluster. Follow this procedure to configure a policy that allows traffic to a pod with the label app=web from a particular namespace. You might want to do this to: Restrict traffic to a production database only to namespaces where production workloads are deployed. Enable monitoring tools deployed to a particular namespace to scrape metrics from the current namespace. Prerequisites Your cluster uses a network plugin that supports NetworkPolicy objects, such as the OVN-Kubernetes network plugin or the OpenShift SDN network plugin with mode: NetworkPolicy set. This mode is the default for OpenShift SDN. You installed the OpenShift CLI ( oc ). You are logged in to the cluster with a user with cluster-admin privileges. You are working in the namespace that the multi-network policy applies to. Procedure Create a policy that allows traffic from all pods in a particular namespaces with a label purpose=production . Save the YAML in the web-allow-prod.yaml file: apiVersion: k8s.cni.cncf.io/v1beta1 kind: MultiNetworkPolicy metadata: name: web-allow-prod namespace: default annotations: k8s.v1.cni.cncf.io/policy-for: <network_name> spec: podSelector: matchLabels: app: web 1 policyTypes: - Ingress ingress: - from: - namespaceSelector: matchLabels: purpose: production 2 1 Applies the policy only to app:web pods in the default namespace. 2 Restricts traffic to only pods in namespaces that have the label purpose=production . 
Apply the policy by entering the following command: USD oc apply -f web-allow-prod.yaml Example output multinetworkpolicy.k8s.cni.cncf.io/web-allow-prod created Verification Start a web service in the default namespace by entering the following command: USD oc run web --namespace=default --image=nginx --labels="app=web" --expose --port=80 Run the following command to create the prod namespace: USD oc create namespace prod Run the following command to label the prod namespace: USD oc label namespace/prod purpose=production Run the following command to create the dev namespace: USD oc create namespace dev Run the following command to label the dev namespace: USD oc label namespace/dev purpose=testing Run the following command to deploy an alpine image in the dev namespace and to start a shell: USD oc run test-USDRANDOM --namespace=dev --rm -i -t --image=alpine -- sh Run the following command in the shell and observe that the request is blocked: # wget -qO- --timeout=2 http://web.default Expected output wget: download timed out Run the following command to deploy an alpine image in the prod namespace and start a shell: USD oc run test-USDRANDOM --namespace=prod --rm -i -t --image=alpine -- sh Run the following command in the shell and observe that the request is allowed: # wget -qO- --timeout=2 http://web.default Expected output <!DOCTYPE html> <html> <head> <title>Welcome to nginx!</title> <style> html { color-scheme: light dark; } body { width: 35em; margin: 0 auto; font-family: Tahoma, Verdana, Arial, sans-serif; } </style> </head> <body> <h1>Welcome to nginx!</h1> <p>If you see this page, the nginx web server is successfully installed and working. Further configuration is required.</p> <p>For online documentation and support please refer to <a href="http://nginx.org/">nginx.org</a>.<br/> Commercial support is available at <a href="http://nginx.com/">nginx.com</a>.</p> <p><em>Thank you for using nginx.</em></p> </body> </html> 23.4.4. Additional resources About network policy Understanding multiple networks Configuring a macvlan network Configuring an SR-IOV network device 23.5. Attaching a pod to an additional network As a cluster user you can attach a pod to an additional network. 23.5.1. Adding a pod to an additional network You can add a pod to an additional network. The pod continues to send normal cluster-related network traffic over the default network. When a pod is created additional networks are attached to it. However, if a pod already exists, you cannot attach additional networks to it. The pod must be in the same namespace as the additional network. Prerequisites Install the OpenShift CLI ( oc ). Log in to the cluster. Procedure Add an annotation to the Pod object. Only one of the following annotation formats can be used: To attach an additional network without any customization, add an annotation with the following format. Replace <network> with the name of the additional network to associate with the pod: metadata: annotations: k8s.v1.cni.cncf.io/networks: <network>[,<network>,...] 1 1 To specify more than one additional network, separate each network with a comma. Do not include whitespace between the comma. If you specify the same additional network multiple times, that pod will have multiple network interfaces attached to that network. 
To attach an additional network with customizations, add an annotation with the following format: metadata: annotations: k8s.v1.cni.cncf.io/networks: |- [ { "name": "<network>", 1 "namespace": "<namespace>", 2 "default-route": ["<default-route>"] 3 } ] 1 Specify the name of the additional network defined by a NetworkAttachmentDefinition object. 2 Specify the namespace where the NetworkAttachmentDefinition object is defined. 3 Optional: Specify an override for the default route, such as 192.168.17.1 . To create the pod, enter the following command. Replace <name> with the name of the pod. USD oc create -f <name>.yaml Optional: To Confirm that the annotation exists in the Pod CR, enter the following command, replacing <name> with the name of the pod. USD oc get pod <name> -o yaml In the following example, the example-pod pod is attached to the net1 additional network: USD oc get pod example-pod -o yaml apiVersion: v1 kind: Pod metadata: annotations: k8s.v1.cni.cncf.io/networks: macvlan-bridge k8s.v1.cni.cncf.io/network-status: |- 1 [{ "name": "openshift-sdn", "interface": "eth0", "ips": [ "10.128.2.14" ], "default": true, "dns": {} },{ "name": "macvlan-bridge", "interface": "net1", "ips": [ "20.2.2.100" ], "mac": "22:2f:60:a5:f8:00", "dns": {} }] name: example-pod namespace: default spec: ... status: ... 1 The k8s.v1.cni.cncf.io/network-status parameter is a JSON array of objects. Each object describes the status of an additional network attached to the pod. The annotation value is stored as a plain text value. 23.5.1.1. Specifying pod-specific addressing and routing options When attaching a pod to an additional network, you may want to specify further properties about that network in a particular pod. This allows you to change some aspects of routing, as well as specify static IP addresses and MAC addresses. To accomplish this, you can use the JSON formatted annotations. Prerequisites The pod must be in the same namespace as the additional network. Install the OpenShift CLI ( oc ). You must log in to the cluster. Procedure To add a pod to an additional network while specifying addressing and/or routing options, complete the following steps: Edit the Pod resource definition. If you are editing an existing Pod resource, run the following command to edit its definition in the default editor. Replace <name> with the name of the Pod resource to edit. USD oc edit pod <name> In the Pod resource definition, add the k8s.v1.cni.cncf.io/networks parameter to the pod metadata mapping. The k8s.v1.cni.cncf.io/networks accepts a JSON string of a list of objects that reference the name of NetworkAttachmentDefinition custom resource (CR) names in addition to specifying additional properties. metadata: annotations: k8s.v1.cni.cncf.io/networks: '[<network>[,<network>,...]]' 1 1 Replace <network> with a JSON object as shown in the following examples. The single quotes are required. In the following example the annotation specifies which network attachment will have the default route, using the default-route parameter. apiVersion: v1 kind: Pod metadata: name: example-pod annotations: k8s.v1.cni.cncf.io/networks: '[ { "name": "net1" }, { "name": "net2", 1 "default-route": ["192.0.2.1"] 2 }]' spec: containers: - name: example-pod command: ["/bin/bash", "-c", "sleep 2000000000000"] image: centos/tools 1 The name key is the name of the additional network to associate with the pod. 
2 The default-route key specifies a value of a gateway for traffic to be routed over if no other routing entry is present in the routing table. If more than one default-route key is specified, this will cause the pod to fail to become active. The default route will cause any traffic that is not specified in other routes to be routed to the gateway. Important Setting the default route to an interface other than the default network interface for OpenShift Container Platform may cause traffic that is anticipated for pod-to-pod traffic to be routed over another interface. To verify the routing properties of a pod, the oc command may be used to execute the ip command within a pod. USD oc exec -it <pod_name> -- ip route Note You may also reference the pod's k8s.v1.cni.cncf.io/network-status to see which additional network has been assigned the default route, by the presence of the default-route key in the JSON-formatted list of objects. To set a static IP address or MAC address for a pod you can use the JSON formatted annotations. This requires you create networks that specifically allow for this functionality. This can be specified in a rawCNIConfig for the CNO. Edit the CNO CR by running the following command: USD oc edit networks.operator.openshift.io cluster The following YAML describes the configuration parameters for the CNO: Cluster Network Operator YAML configuration name: <name> 1 namespace: <namespace> 2 rawCNIConfig: '{ 3 ... }' type: Raw 1 Specify a name for the additional network attachment that you are creating. The name must be unique within the specified namespace . 2 Specify the namespace to create the network attachment in. If you do not specify a value, then the default namespace is used. 3 Specify the CNI plugin configuration in JSON format, which is based on the following template. The following object describes the configuration parameters for utilizing static MAC address and IP address using the macvlan CNI plugin: macvlan CNI plugin JSON configuration object using static IP and MAC address { "cniVersion": "0.3.1", "name": "<name>", 1 "plugins": [{ 2 "type": "macvlan", "capabilities": { "ips": true }, 3 "master": "eth0", 4 "mode": "bridge", "ipam": { "type": "static" } }, { "capabilities": { "mac": true }, 5 "type": "tuning" }] } 1 Specifies the name for the additional network attachment to create. The name must be unique within the specified namespace . 2 Specifies an array of CNI plugin configurations. The first object specifies a macvlan plugin configuration and the second object specifies a tuning plugin configuration. 3 Specifies that a request is made to enable the static IP address functionality of the CNI plugin runtime configuration capabilities. 4 Specifies the interface that the macvlan plugin uses. 5 Specifies that a request is made to enable the static MAC address functionality of a CNI plugin. The above network attachment can be referenced in a JSON formatted annotation, along with keys to specify which static IP and MAC address will be assigned to a given pod. Edit the pod with: USD oc edit pod <name> macvlan CNI plugin JSON configuration object using static IP and MAC address apiVersion: v1 kind: Pod metadata: name: example-pod annotations: k8s.v1.cni.cncf.io/networks: '[ { "name": "<name>", 1 "ips": [ "192.0.2.205/24" ], 2 "mac": "CA:FE:C0:FF:EE:00" 3 } ]' 1 Use the <name> as provided when creating the rawCNIConfig above. 2 Provide an IP address including the subnet mask. 3 Provide the MAC address. 
Note Static IP addresses and MAC addresses do not have to be used at the same time, you may use them individually, or together. To verify the IP address and MAC properties of a pod with additional networks, use the oc command to execute the ip command within a pod. USD oc exec -it <pod_name> -- ip a 23.6. Removing a pod from an additional network As a cluster user you can remove a pod from an additional network. 23.6.1. Removing a pod from an additional network You can remove a pod from an additional network only by deleting the pod. Prerequisites An additional network is attached to the pod. Install the OpenShift CLI ( oc ). Log in to the cluster. Procedure To delete the pod, enter the following command: USD oc delete pod <name> -n <namespace> <name> is the name of the pod. <namespace> is the namespace that contains the pod. 23.7. Editing an additional network As a cluster administrator you can modify the configuration for an existing additional network. 23.7.1. Modifying an additional network attachment definition As a cluster administrator, you can make changes to an existing additional network. Any existing pods attached to the additional network will not be updated. Prerequisites You have configured an additional network for your cluster. Install the OpenShift CLI ( oc ). Log in as a user with cluster-admin privileges. Procedure To edit an additional network for your cluster, complete the following steps: Run the following command to edit the Cluster Network Operator (CNO) CR in your default text editor: USD oc edit networks.operator.openshift.io cluster In the additionalNetworks collection, update the additional network with your changes. Save your changes and quit the text editor to commit your changes. Optional: Confirm that the CNO updated the NetworkAttachmentDefinition object by running the following command. Replace <network-name> with the name of the additional network to display. There might be a delay before the CNO updates the NetworkAttachmentDefinition object to reflect your changes. USD oc get network-attachment-definitions <network-name> -o yaml For example, the following console output displays a NetworkAttachmentDefinition object that is named net1 : USD oc get network-attachment-definitions net1 -o go-template='{{printf "%s\n" .spec.config}}' { "cniVersion": "0.3.1", "type": "macvlan", "master": "ens5", "mode": "bridge", "ipam": {"type":"static","routes":[{"dst":"0.0.0.0/0","gw":"10.128.2.1"}],"addresses":[{"address":"10.128.2.100/23","gateway":"10.128.2.1"}],"dns":{"nameservers":["172.30.0.10"],"domain":"us-west-2.compute.internal","search":["us-west-2.compute.internal"]}} } 23.8. Removing an additional network As a cluster administrator you can remove an additional network attachment. 23.8.1. Removing an additional network attachment definition As a cluster administrator, you can remove an additional network from your OpenShift Container Platform cluster. The additional network is not removed from any pods it is attached to. Prerequisites Install the OpenShift CLI ( oc ). Log in as a user with cluster-admin privileges. Procedure To remove an additional network from your cluster, complete the following steps: Edit the Cluster Network Operator (CNO) in your default text editor by running the following command: USD oc edit networks.operator.openshift.io cluster Modify the CR by removing the configuration from the additionalNetworks collection for the network attachment definition you are removing. 
apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: additionalNetworks: [] 1 1 If you are removing the configuration mapping for the only additional network attachment definition in the additionalNetworks collection, you must specify an empty collection. Save your changes and quit the text editor to commit your changes. Optional: Confirm that the additional network CR was deleted by running the following command: USD oc get network-attachment-definition --all-namespaces 23.9. Assigning a secondary network to a VRF As a cluster administrator, you can configure an additional network for a virtual routing and forwarding (VRF) domain by using the CNI VRF plugin. The virtual network that this plugin creates is associated with the physical interface that you specify. Using a secondary network with a VRF instance has the following advantages: Workload isolation Isolate workload traffic by configuring a VRF instance for the additional network. Improved security Enable improved security through isolated network paths in the VRF domain. Multi-tenancy support Support multi-tenancy through network segmentation with a unique routing table in the VRF domain for each tenant. Note Applications that use VRFs must bind to a specific device. The common usage is to use the SO_BINDTODEVICE option for a socket. The SO_BINDTODEVICE option binds the socket to the device that is specified in the passed interface name, for example, eth1 . To use the SO_BINDTODEVICE option, the application must have CAP_NET_RAW capabilities. Using a VRF through the ip vrf exec command is not supported in OpenShift Container Platform pods. To use VRF, bind applications directly to the VRF interface. Additional resources About virtual routing and forwarding 23.9.1. Creating an additional network attachment with the CNI VRF plugin The Cluster Network Operator (CNO) manages additional network definitions. When you specify an additional network to create, the CNO creates the NetworkAttachmentDefinition custom resource (CR) automatically. Note Do not edit the NetworkAttachmentDefinition CRs that the Cluster Network Operator manages. Doing so might disrupt network traffic on your additional network. To create an additional network attachment with the CNI VRF plugin, perform the following procedure. Prerequisites Install the OpenShift Container Platform CLI (oc). Log in to the OpenShift cluster as a user with cluster-admin privileges. Procedure Create the Network custom resource (CR) for the additional network attachment and insert the rawCNIConfig configuration for the additional network, as in the following example CR. Save the YAML as the file additional-network-attachment.yaml . apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: additionalNetworks: - name: test-network-1 namespace: additional-network-1 type: Raw rawCNIConfig: '{ "cniVersion": "0.3.1", "name": "macvlan-vrf", "plugins": [ 1 { "type": "macvlan", "master": "eth1", "ipam": { "type": "static", "addresses": [ { "address": "191.168.1.23/24" } ] } }, { "type": "vrf", 2 "vrfname": "vrf-1", 3 "table": 1001 4 }] }' 1 plugins must be a list. The first item in the list must be the secondary network underpinning the VRF network. The second item in the list is the VRF plugin configuration. 2 type must be set to vrf . 3 vrfname is the name of the VRF that the interface is assigned to. If it does not exist in the pod, it is created. 4 Optional. table is the routing table ID. By default, the tableid parameter is used. 
If it is not specified, the CNI assigns a free routing table ID to the VRF. Note VRF functions correctly only when the resource is of type netdevice . Create the Network resource: USD oc create -f additional-network-attachment.yaml Confirm that the CNO created the NetworkAttachmentDefinition CR by running the following command. Replace <namespace> with the namespace that you specified when configuring the network attachment, for example, additional-network-1 . USD oc get network-attachment-definitions -n <namespace> Example output NAME AGE additional-network-1 14m Note There might be a delay before the CNO creates the CR. Verification Create a pod and assign it to the additional network with the VRF instance: Create a YAML file that defines the Pod resource: Example pod-additional-net.yaml file apiVersion: v1 kind: Pod metadata: name: pod-additional-net annotations: k8s.v1.cni.cncf.io/networks: '[ { "name": "test-network-1" 1 } ]' spec: containers: - name: example-pod-1 command: ["/bin/bash", "-c", "sleep 9000000"] image: centos:8 1 Specify the name of the additional network with the VRF instance. Create the Pod resource by running the following command: USD oc create -f pod-additional-net.yaml Example output pod/test-pod created Verify that the pod network attachment is connected to the VRF additional network. Start a remote session with the pod and run the following command: USD ip vrf show Example output Name Table ----------------------- vrf-1 1001 Confirm that the VRF interface is the controller for the additional interface: USD ip link Example output 5: net1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master red state UP mode | [
"openstack subnet set --dns-nameserver 0.0.0.0 <subnet_id>",
"apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: # additionalNetworks: 1 - name: <name> 2 namespace: <namespace> 3 rawCNIConfig: |- 4 { } type: Raw",
"apiVersion: k8s.cni.cncf.io/v1 kind: NetworkAttachmentDefinition metadata: name: <name> 1 spec: config: |- 2 { }",
"bridge vlan add vid VLAN_ID dev DEV",
"{ \"cniVersion\": \"0.3.1\", \"name\": \"bridge-net\", \"type\": \"bridge\", \"isGateway\": true, \"vlan\": 2, \"ipam\": { \"type\": \"dhcp\" } }",
"{ \"cniVersion\": \"0.3.1\", \"name\": \"hostdev-net\", \"type\": \"host-device\", \"device\": \"eth1\" }",
"{ \"name\": \"vlan-net\", \"cniVersion\": \"0.3.1\", \"type\": \"vlan\", \"master\": \"eth0\", \"mtu\": 1500, \"vlanId\": 5, \"linkInContainer\": false, \"ipam\": { \"type\": \"host-local\", \"subnet\": \"10.1.1.0/24\" }, \"dns\": { \"nameservers\": [ \"10.1.1.1\", \"8.8.8.8\" ] } }",
"{ \"cniVersion\": \"0.3.1\", \"name\": \"ipvlan-net\", \"type\": \"ipvlan\", \"master\": \"eth1\", \"mode\": \"l3\", \"ipam\": { \"type\": \"static\", \"addresses\": [ { \"address\": \"192.168.10.10/24\" } ] } }",
"{ \"cniVersion\": \"0.3.1\", \"name\": \"macvlan-net\", \"type\": \"macvlan\", \"master\": \"eth1\", \"mode\": \"bridge\", \"ipam\": { \"type\": \"dhcp\" } }",
"{ \"cniVersion\": \"0.3.1\", \"name\": \"l2-network\", \"type\": \"ovn-k8s-cni-overlay\", \"topology\":\"layer2\", \"subnets\": \"10.100.200.0/24\", \"mtu\": 1300, \"netAttachDefName\": \"ns1/l2-network\", \"excludeSubnets\": \"10.100.200.0/29\" }",
"apiVersion: v1 kind: Pod metadata: annotations: k8s.v1.cni.cncf.io/networks: l2-network name: tinypod namespace: ns1 spec: containers: - args: - pause image: k8s.gcr.io/e2e-test-images/agnhost:2.36 imagePullPolicy: IfNotPresent name: agnhost-container",
"apiVersion: v1 kind: Pod metadata: annotations: k8s.v1.cni.cncf.io/networks: '[ { \"name\": \"l2-network\", 1 \"mac\": \"02:03:04:05:06:07\", 2 \"interface\": \"myiface1\", 3 \"ips\": [ \"192.0.2.20/24\" ] 4 } ]' name: tinypod namespace: ns1 spec: containers: - args: - pause image: k8s.gcr.io/e2e-test-images/agnhost:2.36 imagePullPolicy: IfNotPresent name: agnhost-container",
"{ \"ipam\": { \"type\": \"static\", \"addresses\": [ { \"address\": \"191.168.1.7/24\" } ] } }",
"apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: additionalNetworks: - name: dhcp-shim namespace: default type: Raw rawCNIConfig: |- { \"name\": \"dhcp-shim\", \"cniVersion\": \"0.3.1\", \"type\": \"bridge\", \"ipam\": { \"type\": \"dhcp\" } } #",
"{ \"ipam\": { \"type\": \"dhcp\" } }",
"{ \"ipam\": { \"type\": \"whereabouts\", \"range\": \"192.0.2.192/27\", \"exclude\": [ \"192.0.2.192/30\", \"192.0.2.196/32\" ] } }",
"oc edit network.operator.openshift.io cluster",
"apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: additionalNetworks: - name: whereabouts-shim namespace: default rawCNIConfig: |- { \"name\": \"whereabouts-shim\", \"cniVersion\": \"0.3.1\", \"type\": \"bridge\", \"ipam\": { \"type\": \"whereabouts\" } } type: Raw",
"oc get all -n openshift-multus | grep whereabouts-reconciler",
"pod/whereabouts-reconciler-jnp6g 1/1 Running 0 6s pod/whereabouts-reconciler-k76gg 1/1 Running 0 6s pod/whereabouts-reconciler-k86t9 1/1 Running 0 6s pod/whereabouts-reconciler-p4sxw 1/1 Running 0 6s pod/whereabouts-reconciler-rvfdv 1/1 Running 0 6s pod/whereabouts-reconciler-svzw9 1/1 Running 0 6s daemonset.apps/whereabouts-reconciler 6 6 6 6 6 kubernetes.io/os=linux 6s",
"oc create configmap whereabouts-config -n openshift-multus --from-literal=reconciler_cron_expression=\"*/15 * * * *\"",
"oc get all -n openshift-multus | grep whereabouts-reconciler",
"pod/whereabouts-reconciler-2p7hw 1/1 Running 0 4m14s pod/whereabouts-reconciler-76jk7 1/1 Running 0 4m14s pod/whereabouts-reconciler-94zw6 1/1 Running 0 4m14s pod/whereabouts-reconciler-mfh68 1/1 Running 0 4m14s pod/whereabouts-reconciler-pgshz 1/1 Running 0 4m14s pod/whereabouts-reconciler-xn5xz 1/1 Running 0 4m14s daemonset.apps/whereabouts-reconciler 6 6 6 6 6 kubernetes.io/os=linux 4m16s",
"oc -n openshift-multus logs whereabouts-reconciler-2p7hw",
"2024-02-02T16:33:54Z [debug] event not relevant: \"/cron-schedule/..2024_02_02_16_33_54.1375928161\": CREATE 2024-02-02T16:33:54Z [debug] event not relevant: \"/cron-schedule/..2024_02_02_16_33_54.1375928161\": CHMOD 2024-02-02T16:33:54Z [debug] event not relevant: \"/cron-schedule/..data_tmp\": RENAME 2024-02-02T16:33:54Z [verbose] using expression: */15 * * * * 2024-02-02T16:33:54Z [verbose] configuration updated to file \"/cron-schedule/..data\". New cron expression: */15 * * * * 2024-02-02T16:33:54Z [verbose] successfully updated CRON configuration id \"00c2d1c9-631d-403f-bb86-73ad104a6817\" - new cron expression: */15 * * * * 2024-02-02T16:33:54Z [debug] event not relevant: \"/cron-schedule/config\": CREATE 2024-02-02T16:33:54Z [debug] event not relevant: \"/cron-schedule/..2024_02_02_16_26_17.3874177937\": REMOVE 2024-02-02T16:45:00Z [verbose] starting reconciler run 2024-02-02T16:45:00Z [debug] NewReconcileLooper - inferred connection data 2024-02-02T16:45:00Z [debug] listing IP pools 2024-02-02T16:45:00Z [debug] no IP addresses to cleanup 2024-02-02T16:45:00Z [verbose] reconciler success",
"oc create namespace <namespace_name>",
"oc edit networks.operator.openshift.io cluster",
"apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: # additionalNetworks: - name: tertiary-net namespace: namespace2 type: Raw rawCNIConfig: |- { \"cniVersion\": \"0.3.1\", \"name\": \"tertiary-net\", \"type\": \"ipvlan\", \"master\": \"eth1\", \"mode\": \"l2\", \"ipam\": { \"type\": \"static\", \"addresses\": [ { \"address\": \"192.168.1.23/24\" } ] } }",
"oc get network-attachment-definitions -n <namespace>",
"NAME AGE test-network-1 14m",
"apiVersion: k8s.cni.cncf.io/v1 kind: NetworkAttachmentDefinition metadata: name: next-net spec: config: |- { \"cniVersion\": \"0.3.1\", \"name\": \"work-network\", \"type\": \"host-device\", \"device\": \"eth1\", \"ipam\": { \"type\": \"dhcp\" } }",
"oc apply -f <file>.yaml",
"apiVersion: k8s.cni.cncf.io/v1beta1 kind: MultiNetworkPolicy",
"apiVersion: k8s.cni.cncf.io/v1beta1 kind: MultiNetworkPolicy metadata: annotations: k8s.v1.cni.cncf.io/policy-for: <network_name>",
"apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: useMultiNetworkPolicy: true",
"oc patch network.operator.openshift.io cluster --type=merge --patch-file=multinetwork-enable-patch.yaml",
"network.operator.openshift.io/cluster patched",
"touch <policy_name>.yaml",
"apiVersion: k8s.cni.cncf.io/v1beta1 kind: MultiNetworkPolicy metadata: name: deny-by-default annotations: k8s.v1.cni.cncf.io/policy-for: <network_name> spec: podSelector: ingress: []",
"apiVersion: k8s.cni.cncf.io/v1beta1 kind: MultiNetworkPolicy metadata: name: allow-same-namespace annotations: k8s.v1.cni.cncf.io/policy-for: <network_name> spec: podSelector: ingress: - from: - podSelector: {}",
"apiVersion: k8s.cni.cncf.io/v1beta1 kind: MultiNetworkPolicy metadata: name: allow-traffic-pod annotations: k8s.v1.cni.cncf.io/policy-for: <network_name> spec: podSelector: matchLabels: pod: pod-a policyTypes: - Ingress ingress: - from: - namespaceSelector: matchLabels: kubernetes.io/metadata.name: namespace-y",
"apiVersion: k8s.cni.cncf.io/v1beta1 kind: MultiNetworkPolicy metadata: name: api-allow annotations: k8s.v1.cni.cncf.io/policy-for: <network_name> spec: podSelector: matchLabels: app: bookstore role: api ingress: - from: - podSelector: matchLabels: app: bookstore",
"oc apply -f <policy_name>.yaml -n <namespace>",
"multinetworkpolicy.k8s.cni.cncf.io/deny-by-default created",
"oc get multi-networkpolicy",
"oc apply -n <namespace> -f <policy_file>.yaml",
"oc edit multi-networkpolicy <policy_name> -n <namespace>",
"oc describe multi-networkpolicy <policy_name> -n <namespace>",
"oc get multi-networkpolicy",
"oc describe multi-networkpolicy <policy_name> -n <namespace>",
"oc delete multi-networkpolicy <policy_name> -n <namespace>",
"multinetworkpolicy.k8s.cni.cncf.io/default-deny deleted",
"apiVersion: k8s.cni.cncf.io/v1beta1 kind: MultiNetworkPolicy metadata: name: deny-by-default namespace: default 1 annotations: k8s.v1.cni.cncf.io/policy-for: <network_name> 2 spec: podSelector: {} 3 ingress: [] 4",
"oc apply -f deny-by-default.yaml",
"multinetworkpolicy.k8s.cni.cncf.io/deny-by-default created",
"apiVersion: k8s.cni.cncf.io/v1beta1 kind: MultiNetworkPolicy metadata: name: web-allow-external namespace: default annotations: k8s.v1.cni.cncf.io/policy-for: <network_name> spec: policyTypes: - Ingress podSelector: matchLabels: app: web ingress: - {}",
"oc apply -f web-allow-external.yaml",
"multinetworkpolicy.k8s.cni.cncf.io/web-allow-external created",
"apiVersion: k8s.cni.cncf.io/v1beta1 kind: MultiNetworkPolicy metadata: name: web-allow-all-namespaces namespace: default annotations: k8s.v1.cni.cncf.io/policy-for: <network_name> spec: podSelector: matchLabels: app: web 1 policyTypes: - Ingress ingress: - from: - namespaceSelector: {} 2",
"oc apply -f web-allow-all-namespaces.yaml",
"multinetworkpolicy.k8s.cni.cncf.io/web-allow-all-namespaces created",
"oc run web --namespace=default --image=nginx --labels=\"app=web\" --expose --port=80",
"oc run test-USDRANDOM --namespace=secondary --rm -i -t --image=alpine -- sh",
"wget -qO- --timeout=2 http://web.default",
"<!DOCTYPE html> <html> <head> <title>Welcome to nginx!</title> <style> html { color-scheme: light dark; } body { width: 35em; margin: 0 auto; font-family: Tahoma, Verdana, Arial, sans-serif; } </style> </head> <body> <h1>Welcome to nginx!</h1> <p>If you see this page, the nginx web server is successfully installed and working. Further configuration is required.</p> <p>For online documentation and support please refer to <a href=\"http://nginx.org/\">nginx.org</a>.<br/> Commercial support is available at <a href=\"http://nginx.com/\">nginx.com</a>.</p> <p><em>Thank you for using nginx.</em></p> </body> </html>",
"apiVersion: k8s.cni.cncf.io/v1beta1 kind: MultiNetworkPolicy metadata: name: web-allow-prod namespace: default annotations: k8s.v1.cni.cncf.io/policy-for: <network_name> spec: podSelector: matchLabels: app: web 1 policyTypes: - Ingress ingress: - from: - namespaceSelector: matchLabels: purpose: production 2",
"oc apply -f web-allow-prod.yaml",
"multinetworkpolicy.k8s.cni.cncf.io/web-allow-prod created",
"oc run web --namespace=default --image=nginx --labels=\"app=web\" --expose --port=80",
"oc create namespace prod",
"oc label namespace/prod purpose=production",
"oc create namespace dev",
"oc label namespace/dev purpose=testing",
"oc run test-USDRANDOM --namespace=dev --rm -i -t --image=alpine -- sh",
"wget -qO- --timeout=2 http://web.default",
"wget: download timed out",
"oc run test-USDRANDOM --namespace=prod --rm -i -t --image=alpine -- sh",
"wget -qO- --timeout=2 http://web.default",
"<!DOCTYPE html> <html> <head> <title>Welcome to nginx!</title> <style> html { color-scheme: light dark; } body { width: 35em; margin: 0 auto; font-family: Tahoma, Verdana, Arial, sans-serif; } </style> </head> <body> <h1>Welcome to nginx!</h1> <p>If you see this page, the nginx web server is successfully installed and working. Further configuration is required.</p> <p>For online documentation and support please refer to <a href=\"http://nginx.org/\">nginx.org</a>.<br/> Commercial support is available at <a href=\"http://nginx.com/\">nginx.com</a>.</p> <p><em>Thank you for using nginx.</em></p> </body> </html>",
"metadata: annotations: k8s.v1.cni.cncf.io/networks: <network>[,<network>,...] 1",
"metadata: annotations: k8s.v1.cni.cncf.io/networks: |- [ { \"name\": \"<network>\", 1 \"namespace\": \"<namespace>\", 2 \"default-route\": [\"<default-route>\"] 3 } ]",
"oc create -f <name>.yaml",
"oc get pod <name> -o yaml",
"oc get pod example-pod -o yaml apiVersion: v1 kind: Pod metadata: annotations: k8s.v1.cni.cncf.io/networks: macvlan-bridge k8s.v1.cni.cncf.io/network-status: |- 1 [{ \"name\": \"openshift-sdn\", \"interface\": \"eth0\", \"ips\": [ \"10.128.2.14\" ], \"default\": true, \"dns\": {} },{ \"name\": \"macvlan-bridge\", \"interface\": \"net1\", \"ips\": [ \"20.2.2.100\" ], \"mac\": \"22:2f:60:a5:f8:00\", \"dns\": {} }] name: example-pod namespace: default spec: status:",
"oc edit pod <name>",
"metadata: annotations: k8s.v1.cni.cncf.io/networks: '[<network>[,<network>,...]]' 1",
"apiVersion: v1 kind: Pod metadata: name: example-pod annotations: k8s.v1.cni.cncf.io/networks: '[ { \"name\": \"net1\" }, { \"name\": \"net2\", 1 \"default-route\": [\"192.0.2.1\"] 2 }]' spec: containers: - name: example-pod command: [\"/bin/bash\", \"-c\", \"sleep 2000000000000\"] image: centos/tools",
"oc exec -it <pod_name> -- ip route",
"oc edit networks.operator.openshift.io cluster",
"name: <name> 1 namespace: <namespace> 2 rawCNIConfig: '{ 3 }' type: Raw",
"{ \"cniVersion\": \"0.3.1\", \"name\": \"<name>\", 1 \"plugins\": [{ 2 \"type\": \"macvlan\", \"capabilities\": { \"ips\": true }, 3 \"master\": \"eth0\", 4 \"mode\": \"bridge\", \"ipam\": { \"type\": \"static\" } }, { \"capabilities\": { \"mac\": true }, 5 \"type\": \"tuning\" }] }",
"oc edit pod <name>",
"apiVersion: v1 kind: Pod metadata: name: example-pod annotations: k8s.v1.cni.cncf.io/networks: '[ { \"name\": \"<name>\", 1 \"ips\": [ \"192.0.2.205/24\" ], 2 \"mac\": \"CA:FE:C0:FF:EE:00\" 3 } ]'",
"oc exec -it <pod_name> -- ip a",
"oc delete pod <name> -n <namespace>",
"oc edit networks.operator.openshift.io cluster",
"oc get network-attachment-definitions <network-name> -o yaml",
"oc get network-attachment-definitions net1 -o go-template='{{printf \"%s\\n\" .spec.config}}' { \"cniVersion\": \"0.3.1\", \"type\": \"macvlan\", \"master\": \"ens5\", \"mode\": \"bridge\", \"ipam\": {\"type\":\"static\",\"routes\":[{\"dst\":\"0.0.0.0/0\",\"gw\":\"10.128.2.1\"}],\"addresses\":[{\"address\":\"10.128.2.100/23\",\"gateway\":\"10.128.2.1\"}],\"dns\":{\"nameservers\":[\"172.30.0.10\"],\"domain\":\"us-west-2.compute.internal\",\"search\":[\"us-west-2.compute.internal\"]}} }",
"oc edit networks.operator.openshift.io cluster",
"apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: additionalNetworks: [] 1",
"oc get network-attachment-definition --all-namespaces",
"apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: additionalNetworks: - name: test-network-1 namespace: additional-network-1 type: Raw rawCNIConfig: '{ \"cniVersion\": \"0.3.1\", \"name\": \"macvlan-vrf\", \"plugins\": [ 1 { \"type\": \"macvlan\", \"master\": \"eth1\", \"ipam\": { \"type\": \"static\", \"addresses\": [ { \"address\": \"191.168.1.23/24\" } ] } }, { \"type\": \"vrf\", 2 \"vrfname\": \"vrf-1\", 3 \"table\": 1001 4 }] }'",
"oc create -f additional-network-attachment.yaml",
"oc get network-attachment-definitions -n <namespace>",
"NAME AGE additional-network-1 14m",
"apiVersion: v1 kind: Pod metadata: name: pod-additional-net annotations: k8s.v1.cni.cncf.io/networks: '[ { \"name\": \"test-network-1\" 1 } ]' spec: containers: - name: example-pod-1 command: [\"/bin/bash\", \"-c\", \"sleep 9000000\"] image: centos:8",
"oc create -f pod-additional-net.yaml",
"pod/test-pod created",
"ip vrf show",
"Name Table ----------------------- vrf-1 1001",
"ip link",
"5: net1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master red state UP mode"
]
| https://docs.redhat.com/en/documentation/openshift_container_platform/4.13/html/networking/multiple-networks |
Chapter 6. Job template examples and extensions | Chapter 6. Job template examples and extensions Use this section as a reference to help modify, customize, and extend your job templates to suit your requirements. 6.1. Customizing job templates When creating a job template, you can include an existing template in the template editor field. This way you can combine templates, or create more specific templates from the general ones. The following template combines default templates to install and start the nginx service on clients: <%= render_template 'Package Action - SSH Default', :action => 'install', :package => 'nginx' %> <%= render_template 'Service Action - SSH Default', :action => 'start', :service_name => 'nginx' %> The above template specifies parameter values for the rendered template directly. It is also possible to use the input() method to allow users to define input for the rendered template on job execution. For example, you can use the following syntax: <%= render_template 'Package Action - SSH Default', :action => 'install', :package => input("package") %> With the above template, you have to import the parameter definition from the rendered template. To do so, navigate to the Jobs tab, click Add Foreign Input Set , and select the rendered template from the Target template list. You can import all parameters or specify a comma separated list. 6.2. Default job template categories Job template category Description Packages Templates for performing package related actions. Install, update, and remove actions are included by default. Puppet Templates for executing Puppet runs on target hosts. Power Templates for performing power related actions. Restart and shutdown actions are included by default. Commands Templates for executing custom commands on remote hosts. Services Templates for performing service related actions. Start, stop, restart, and status actions are included by default. Katello Templates for performing content related actions. These templates are used mainly from different parts of the Satellite web UI (for example bulk actions UI for content hosts), but can be used separately to perform operations such as errata installation. 6.3. Example restorecon template This example shows how to create a template called Run Command - restorecon that restores the default SELinux context for all files in the selected directory on target hosts. Procedure In the Satellite web UI, navigate to Hosts > Templates > Job templates . Click New Job Template . Enter Run Command - restorecon in the Name field. Select Default to make the template available to all organizations. Add the following text to the template editor: restorecon -RvF <%= input("directory") %> The <%= input("directory") %> string is replaced by a user-defined directory during job invocation. On the Job tab, set Job category to Commands . Click Add Input to allow job customization. Enter directory to the Name field. The input name must match the value specified in the template editor. Click Required so that the command cannot be executed without the user specified parameter. Select User input from the Input type list. Enter a description to be shown during job invocation, for example Target directory for restorecon . Click Submit . For more information, see Executing a restorecon Template on Multiple Hosts in Managing hosts . 6.4. Rendering a restorecon template This example shows how to create a template derived from the Run command - restorecon template created in Example restorecon Template . 
This template does not require user input on job execution, it will restore the SELinux context in all files under the /home/ directory on target hosts. Create a new template as described in Setting up Job Templates , and specify the following string in the template editor: <%= render_template("Run Command - restorecon", :directory => "/home") %> 6.5. Executing a restorecon template on multiple hosts This example shows how to run a job based on the template created in Example restorecon Template on multiple hosts. The job restores the SELinux context in all files under the /home/ directory. Procedure In the Satellite web UI, navigate to Monitor > Jobs and click Run job . Select Commands as Job category and Run Command - restorecon as Job template and click . Select the hosts on which you want to run the job. If you do not select any hosts, the job will run on all hosts you can see in the current context. In the directory field, provide a directory, for example /home , and click . Optional: To configure advanced settings for the job, fill in the Advanced fields . To learn more about advanced settings, see Section 4.22, "Advanced settings in the job wizard" . When you are done entering the advanced settings or if it is not required, click . Schedule time for the job. To execute the job immediately, keep the pre-selected Immediate execution . To execute the job in future time, select Future execution . To execute the job on regular basis, select Recurring execution . Optional: If you selected future or recurring execution, select the Query type , otherwise click . Static query means that the job executes on the exact list of hosts that you provided. Dynamic query means that the list of hosts is evaluated just before the job is executed. If you entered the list of hosts based on some filter, the results can be different from when you first used that filter. Click after you have selected the query type. Optional: If you selected future or recurring execution, provide additional details: For Future execution , enter the Starts at date and time. You also have the option to select the Starts before date and time. If the job cannot start before that time, it will be canceled. For Recurring execution , select the start date and time, frequency, and condition for ending the recurring job. You can choose the recurrence to never end, end at a certain time, or end after a given number of repetitions. You can also add Purpose - a special label for tracking the job. There can only be one active job with a given purpose at a time. Click after you have entered the required information. Review job details. You have the option to return to any part of the job wizard and edit the information. Click Submit to schedule the job for execution. 6.6. Including power actions in templates This example shows how to set up a job template for performing power actions, such as reboot. This procedure prevents Satellite from interpreting the disconnect exception upon reboot as an error, and consequently, remote execution of the job works correctly. Create a new template as described in Setting up Job Templates , and specify the following string in the template editor: <%= render_template("Power Action - SSH Default", :action => "restart") %> | [
"<%= render_template 'Package Action - SSH Default', :action => 'install', :package => 'nginx' %> <%= render_template 'Service Action - SSH Default', :action => 'start', :service_name => 'nginx' %>",
"<%= render_template 'Package Action - SSH Default', :action => 'install', :package => input(\"package\") %>",
"restorecon -RvF <%= input(\"directory\") %>",
"<%= render_template(\"Run Command - restorecon\", :directory => \"/home\") %>",
"<%= render_template(\"Power Action - SSH Default\", :action => \"restart\") %>"
]
| https://docs.redhat.com/en/documentation/red_hat_satellite/6.15/html/managing_configurations_using_ansible_integration/job_template_examples_and_extensions_ansible |
Chapter 11. Known issues | Chapter 11. Known issues This part describes known issues in Red Hat Enterprise Linux 9.4. 11.1. Installer and image creation The auth and authconfig Kickstart commands require the AppStream repository The authselect-compat package is required by the auth and authconfig Kickstart commands during installation. Without this package, the installation fails if auth or authconfig are used. However, by design, the authselect-compat package is only available in the AppStream repository. To work around this problem, verify that the BaseOS and AppStream repositories are available to the installation program or use the authselect Kickstart command during installation. Bugzilla:1640697 [1] The reboot --kexec and inst.kexec commands do not provide a predictable system state Performing a RHEL installation with the reboot --kexec Kickstart command or the inst.kexec kernel boot parameters do not provide the same predictable system state as a full reboot. As a consequence, switching to the installed system without rebooting can produce unpredictable results. Note that the kexec feature is deprecated and will be removed in a future release of Red Hat Enterprise Linux. Bugzilla:1697896 [1] Unexpected SELinux policies on systems where Anaconda is running as an application When Anaconda is running as an application on an already installed system (for example to perform another installation to an image file using the -image anaconda option), the system is not prohibited to modify the SELinux types and attributes during installation. As a consequence, certain elements of SELinux policy might change on the system where Anaconda is running. To work around this problem, do not run Anaconda on the production system. Instead, run Anaconda in a temporary virtual machine to keep the SELinux policy unchanged on a production system. Running anaconda as part of the system installation process such as installing from boot.iso or dvd.iso is not affected by this issue. Bugzilla:2050140 Local Media installation source is not detected when booting the installation from a USB that is created using a third party tool When booting the RHEL installation from a USB that is created using a third party tool, the installation program fails to detect the Local Media installation source (only Red Hat CDN is detected). This issue occurs because the default boot option int.stage2= attempts to search for iso9660 image format. However, a third party tool might create an ISO image with a different format. As a workaround, use either of the following solution: When booting the installation, click the Tab key to edit the kernel command line, and change the boot option inst.stage2= to inst.repo= . To create a bootable USB device on Windows, use Fedora Media Writer. When using a third party tool such as Rufus to create a bootable USB device, first regenerate the RHEL ISO image on a Linux system, and then use the third party tool to create a bootable USB device. For more information on the steps involved in performing any of the specified workaround, see, Installation media is not auto-detected during the installation of RHEL 8.3 . Bugzilla:1877697 [1] The USB CD-ROM drive is not available as an installation source in Anaconda Installation fails when the USB CD-ROM drive is the source for it and the Kickstart ignoredisk --only-use= command is specified. In this case, Anaconda cannot find and use this source disk. 
To work around this problem, use the harddrive --partition=sdX --dir=/ command to install from USB CD-ROM drive. As a result, the installation does not fail. Jira:RHEL-4707 Hard drive partitioned installations with iso9660 filesystem fails You cannot install RHEL on systems where the hard drive is partitioned with the iso9660 filesystem. This is due to the updated installation code that is set to ignore any hard disk containing a iso9660 file system partition. This happens even when RHEL is installed without using a DVD. To work around this problem, add the following script in the Kickstart file to format the disc before the installation starts. Note: Before performing the workaround, backup the data available on the disk. The wipefs command formats all the existing data from the disk. As a result, installations work as expected without any errors. Jira:RHEL-4711 Anaconda fails to verify existence of an administrator user account While installing RHEL using a graphical user interface, Anaconda fails to verify if the administrator account has been created. As a consequence, users might install a system without any administrator user account. To work around this problem, ensure you configure an administrator user account or the root password is set and the root account is unlocked. As a result, users can perform administrative tasks on the installed system. Bugzilla:2047713 New XFS features prevent booting of PowerNV IBM POWER systems with firmware older than version 5.10 PowerNV IBM POWER systems use a Linux kernel for firmware, and use Petitboot as a replacement for GRUB. This results in the firmware kernel mounting /boot and Petitboot reading the GRUB config and booting RHEL. The RHEL 9 kernel introduces bigtime=1 and inobtcount=1 features to the XFS filesystem, which kernels with firmware older than version 5.10 do not understand. To work around this problem, you can use another filesystem for /boot , for example ext4. Bugzilla:1997832 [1] RHEL for Edge installer image fails to create mount points when installing an rpm-ostree payload When deploying rpm-ostree payloads, used for example in a RHEL for Edge installer image, the installation program does not properly create some mount points for custom partitions. As a consequence, the installation is aborted with the following error: To work around this issue: Use an automatic partitioning scheme and do not add any mount points manually. Manually assign mount points only inside /var directory. For example, /var/ my-mount-point ), and the following standard directories: / , /boot , /var . As a result, the installation process finishes successfully. Jira:RHEL-4741 NetworkManager fails to start after the installation when connected to a network but without DHCP or a static IP address configured Starting with RHEL 9.0, Anaconda activates network devices automatically when there is no specific ip= or Kickstart network configuration set. Anaconda creates a default persistent configuration file for each Ethernet device. The connection profile has the ONBOOT and autoconnect value set to true . As a consequence, during the start of the installed system, RHEL activates the network devices, and the networkManager-wait-online service fails. As a workaround, do one of the following: Delete all connections using the nmcli utility except one connection you want to use. For example: List all connection profiles: Delete the connection profiles that you do not require: Replace <connection_name> with the name of the connection you want to delete. 
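For example, assuming an unneeded profile named "Wired connection 1" (the profile name is illustrative), the sequence might look like the following sketch:
nmcli connection show
nmcli connection delete "Wired connection 1"
The first command lists all connection profiles, and the second command removes the profile that you do not require.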
Disable the auto connect network feature in Anaconda if no specific ip= or Kickstart network configuration is set. In the Anaconda GUI, navigate to Network & Hostname . Select a network device to disable. Click Configure . On the General tab, clear the Connect automatically with priority checkbox. Click Save . Bugzilla:2115783 [1] Kickstart installations fail to configure the network connection Anaconda performs the Kickstart network configuration only through the NetworkManager API. Anaconda processes the network configuration after the %pre Kickstart section. As a consequence, some tasks from the Kickstart %pre section are blocked. For example, downloading packages from the %pre section fails due to the unavailability of the network configuration. To work around this problem: Configure the network, for example using the nmcli tool, as a part of the %pre script. Use the installation program boot options to configure the network for the %pre script. As a result, it is possible to use the network for tasks in the %pre section and the Kickstart installation process completes. Bugzilla:2173992 Enabling the FIPS mode is not supported when building rpm-ostree images with RHEL image builder Currently, there is no support to enable the FIPS mode when building rpm-ostree images with RHEL image builder. Jira:RHEL-4655 Images built with the stig profile remediation fail to boot with a FIPS error FIPS mode is not supported by RHEL image builder. When using RHEL image builder customized with the xccdf_org.ssgproject.content_profile_stig profile remediation, the system fails to boot with the following error: Enabling the FIPS policy manually after the system image installation with the fips-mode-setup --enable command does not work, because the /boot directory is on a different partition. The system boots successfully if FIPS is disabled. Currently, there is no workaround available. Note You can manually enable FIPS after installing the image by using the fips-mode-setup --enable command. Jira:RHEL-4649 Driver disk menu fails to display user inputs on the console When you start RHEL installation using the inst.dd option on the kernel command line with a driver disk, the console fails to display the user input. Consequently, the application appears to stop responding to user input even though it still displays output, which is confusing for users. However, this behavior does not affect the functionality, and user input gets registered after pressing Enter . As a workaround, to see the expected results, ignore the absence of user inputs in the console and press Enter when you finish adding inputs. Jira:RHEL-4737 Kickstart installation fails due to missing packages with systemd service files in %packages section If the Kickstart file uses the services --enabled=... directive to enable systemd services and packages containing the specified service file are not included in the %packages section, the RHEL installation process fails with the following error: To work around this problem, include the package with the service file in Kickstart's %packages section. As a result, RHEL installation completes, enabling expected services during installation. Jira:RHEL-9633 [1] bootc-image-builder does not support building images from private registries Currently, you cannot build base disk images which come from private registries by using bootc-image-builder .
To work around this issue, copy the image from the private registry to your localhost, then build the image with the following arguments: --local localhost/<image name>:tag as the image For example, to build your image: Jira:RHEL-34054 Stale network link configuration files render your OS unbootable The RHEL installer creates stale /etc/systemd/network/ link configuration files during the installation. These files map interface names to MAC addresses and cause problems when network configurations change. Specifically, the outdated files interfere with the intended network settings. This leads to an unbootable system if the boot is from NVMe over TCP. To work around this problem, manually remove /etc/systemd/network/10-anaconda-ifname-nbft*.link files and regenerate the initramfs temporary root filesystem by running the dracut -f command. Jira:RHELDOCS-18924 RHEL installer does not automatically discover or use iSCSI devices as boot devices on aarch64 The absence of the iscsi_ibft kernel module in RHEL installers running on aarch64 prevents automatic discovery of iSCSI devices defined in firmware. These devices are not automatically visible in the installer nor selectable as boot devices when added manually by using the GUI. As a workaround, add the "inst.nonibftiscsiboot" parameter to the kernel command line when booting the installer and then manually attach iSCSI devices through the GUI. As a result, the installer can recognize the attached iSCSI devices as bootable and installation completes as expected. For more information, see KCS solution . Jira:RHEL-56135 Kickstart installation fails with an unknown disk error when 'ignoredisk' command precedes 'iscsi' command Installing RHEL by using the Kickstart method fails if the ignoredisk command is placed before the iscsi command. This issue occurs because the iscsi command attaches the specified iSCSI device during command parsing, while the ignoredisk command resolves device specifications simultaneously. If the ignoredisk command references an iSCSI device name before it is attached by the iscsi command, the installation fails with an "unknown disk" error. As a workaround, ensure that the iscsi command is placed before the ignoredisk command in the Kickstart file to reference the iSCSI disk and enable successful installation. Jira:RHEL-13837 The services Kickstart command fails to disable the firewalld service A bug in Anaconda prevents the services --disabled=firewalld command from disabling the firewalld service in Kickstart. To work around this problem, use the firewall --disabled command instead. As a result, the firewalld service is disabled properly. Jira:RHEL-82566 11.2. Security OpenSSL does not detect if a PKCS #11 token supports the creation of raw RSA or RSA-PSS signatures The TLS 1.3 protocol requires support for RSA-PSS signatures. If a PKCS #11 token does not support raw RSA or RSA-PSS signatures, server applications that use the OpenSSL library fail to work with an RSA key if the key is held by the PKCS #11 token. As a result, TLS communication fails in the described scenario. To work around this problem, configure servers and clients to use TLS version 1.2 as the highest TLS protocol version available. Bugzilla:1681178 [1] OpenSSL incorrectly handles PKCS #11 tokens that do not support raw RSA or RSA-PSS signatures The OpenSSL library does not detect key-related capabilities of PKCS #11 tokens.
Consequently, establishing a TLS connection fails when a signature is created with a token that does not support raw RSA or RSA-PSS signatures. To work around the problem, add the following lines after the .include line at the end of the crypto_policy section in the /etc/pki/tls/openssl.cnf file: As a result, a TLS connection can be established in the described scenario. Bugzilla:1685470 [1] With a specific syntax, scp empties files copied to themselves The scp utility changed from the Secure copy protocol (SCP) to the more secure SSH file transfer protocol (SFTP). Consequently, copying a file from a location to the same location erases the file content. The problem affects the following syntax: scp localhost:/myfile localhost:/myfile To work around this problem, do not copy files to a destination that is the same as the source location using this syntax. The problem has been fixed for the following syntaxes: scp /myfile localhost:/myfile scp localhost:~/myfile ~/myfile Bugzilla:2056884 The OSCAP Anaconda add-on does not fetch tailored profiles in the graphical installation The OSCAP Anaconda add-on does not provide an option to select or deselect tailoring of security profiles in the RHEL graphical installation. Starting from RHEL 8.8, the add-on does not take tailoring into account by default when installing from archives or RPM packages. Consequently, the installation displays the following error message instead of fetching an OSCAP tailored profile: To work around this problem, you must specify paths in the %addon org_fedora_oscap section of your Kickstart file, for example: As a result, you can use the graphical installation for OSCAP tailored profiles only with the corresponding Kickstart specifications. Jira:RHEL-1824 Ansible remediations require additional collections With the replacement of Ansible Engine by the ansible-core package, the list of Ansible modules provided with the RHEL subscription is reduced. As a consequence, running remediations that use Ansible content included within the scap-security-guide package requires collections from the rhc-worker-playbook package. For an Ansible remediation, perform the following steps: Install the required packages: Navigate to the /usr/share/scap-security-guide/ansible directory: Run the relevant Ansible Playbook using environment variables that define the path to the additional Ansible collections: # ANSIBLE_COLLECTIONS_PATH=/usr/share/rhc-worker-playbook/ansible/collections/ansible_collections/ ansible-playbook -c local -i localhost, rhel9-playbook- cis_server_l1 .yml Replace cis_server_l1 with the ID of the profile against which you want to remediate the system. As a result, the Ansible content is processed correctly. Note Support of the collections provided in rhc-worker-playbook is limited to enabling the Ansible content sourced in scap-security-guide . Jira:RHEL-1800 Keylime does not accept concatenated PEM certificates When Keylime receives a certificate chain as multiple certificates in the PEM format concatenated in a single file, the keylime-agent-rust Keylime component does not correctly use all the provided certificates during signature verification, resulting in a TLS handshake failure. As a consequence, the client components ( keylime_verifier and keylime_tenant ) cannot connect to the Keylime agent. To work around this problem, use just one certificate instead of multiple certificates. 
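As a minimal illustration of this workaround, the openssl x509 utility reads only the first certificate from its input, so you can extract a single certificate from a concatenated file; the file names below are placeholders:
openssl x509 -in concatenated-chain.pem -out single-cert.pem
You can then point the Keylime components at the resulting single-certificate file instead of the concatenated chain.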
Jira:RHELPLAN-157225 [1] Keylime refuses runtime policies whose digests start with a backslash The current script for generating runtime policies, create_runtime_policy.sh , uses SHA checksum functions, for example, sha256sum , to compute the file digest. However, when the input file name contains a backslash or \n , the checksum function adds a backslash before the digest in its output. In such cases, the generated policy file is malformed. When provided with the malformed policy file, the Keylime tenant produces the following or similar error message: me.tenant - ERROR - Response code 400: Runtime policy is malformatted . To work around the problem, remove the backslash from the malformed policy file manually by entering the following command: sed -i 's/^\\//g' <malformed_file_name> . Jira:RHEL-11867 [1] Keylime agent rejects requests from the verifier after update When the API version number of the Keylime agent ( keylime-agent-rust ) has been updated, the agent rejects requests that use a different version. As a consequence, if a Keylime agent is added to a verifier and then updated, the verifier tries to contact the agent using the old API version. The agent rejects this request and fails the attestation. To work around this problem, update the verifier ( keylime-verifier ) before updating the agent ( keylime-agent-rust ). As a result, when the agents are updated, the verifier detects the API change and updates its stored data accordingly. Jira:RHEL-1518 [1] Missing files in trustdb cause denials for fapolicyd When fapolicyd is installed with the Ansible DISA STIG profile, a race condition causes the trustdb database to be out of sync with the rpmdb database. As a consequence, missing files in trustdb cause denials on the system. To work around this problem, restart fapolicyd or run the Ansible DISA STIG profile again. Jira:RHEL-24345 [1] The fapolicyd utility incorrectly allows executing changed files Correctly, the IMA hash of a file should update after any change to the file, and fapolicyd should prevent execution of the changed file. However, this does not happen due to differences in IMA policy setup and in file hashing by the evctml utility. As a result, the IMA hash is not updated in the extended attribute of a changed file. Consequently, fapolicyd incorrectly allows the execution of the changed file. Jira:RHEL-520 [1] Default SELinux policy allows unconfined executables to make their stack executable The default state of the selinuxuser_execstack boolean in the SELinux policy is on, which means that unconfined executables can make their stack executable. Executables should not use this option, and it might indicate poorly coded executables or a possible attack. However, due to compatibility with other tools, packages, and third-party products, Red Hat cannot change the value of the boolean in the default policy. If your scenario does not depend on such compatibility aspects, you can turn the boolean off in your local policy by entering the command setsebool -P selinuxuser_execstack off . 
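For example, to check the boolean before and after the change:
getsebool selinuxuser_execstack
setsebool -P selinuxuser_execstack off
getsebool selinuxuser_execstack
After the change, the last command reports selinuxuser_execstack --> off.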
Bugzilla:2064274 SSH timeout rules in STIG profiles configure incorrect options An update of OpenSSH affected the rules in the following Defense Information Systems Agency Security Technical Implementation Guide (DISA STIG) profiles: DISA STIG for RHEL 9 ( xccdf_org.ssgproject.content_profile_stig ) DISA STIG with GUI for RHEL 9 ( xccdf_org.ssgproject.content_profile_stig_gui ) In each of these profiles, the following two rules are affected: When applied to SSH servers, each of these rules configures an option ( ClientAliveCountMax and ClientAliveInterval ) that no longer behaves as previously. As a consequence, OpenSSH no longer disconnects idle SSH users when it reaches the timeout configured by these rules. As a workaround, these rules have been temporarily removed from the DISA STIG for RHEL 9 and DISA STIG with GUI for RHEL 9 profiles until a solution is developed. Bugzilla:2038978 GnuPG incorrectly allows using SHA-1 signatures even if disallowed by crypto-policies The GNU Privacy Guard (GnuPG) cryptographic software can create and verify signatures that use the SHA-1 algorithm regardless of the settings defined by the system-wide cryptographic policies. Consequently, you can use SHA-1 for cryptographic purposes in the DEFAULT cryptographic policy, which is not consistent with the system-wide deprecation of this insecure algorithm for signatures. To work around this problem, do not use GnuPG options that involve SHA-1. As a result, you will prevent GnuPG from lowering the default system security by using the insecure SHA-1 signatures. Bugzilla:2070722 OpenSCAP memory-consumption problems On systems with limited memory, the OpenSCAP scanner might stop prematurely or it might not generate the results files. To work around this problem, you can customize the scanning profile to deselect rules that involve recursion over the entire / file system: rpm_verify_hashes rpm_verify_permissions rpm_verify_ownership file_permissions_unauthorized_world_writable no_files_unowned_by_user dir_perms_world_writable_system_owned file_permissions_unauthorized_suid file_permissions_unauthorized_sgid file_permissions_ungroupowned dir_perms_world_writable_sticky_bits For more details and more workarounds, see the related Knowledgebase article . Bugzilla:2161499 Remediating service-related rules during Kickstart installations might fail During a Kickstart installation, the OpenSCAP utility sometimes incorrectly shows that a service enable or disable state remediation is not needed. Consequently, OpenSCAP might set the services on the installed system to a noncompliant state. As a workaround, you can scan and remediate the system after the Kickstart installation. This will fix the service-related issues. Jira:RHELPLAN-44202 [1] Interoperability of FIPS:OSPP hosts impacted due to CNSA 1.0 The OSPP subpolicy has been aligned with Commercial National Security Algorithm (CNSA) 1.0. This affects the interoperability of hosts that use the FIPS:OSPP policy-subpolicy combination, with the following major aspects: Minimum RSA key size is mandated at 3072 bits. Algorithm negotiations no longer support AES-128 ciphers, the secp256r1 elliptic curve, and the FFDHE-2048 group. Jira:RHEL-2735 [1] Missing rules in the SELinux policy block permissions to SQL databases Missing permission rules from the SELinux policy block connections to SQL databases. 
Consequently, the FIDO Device Onboard (FDO) services fdo-manufacturing-server.service , fdo-owner-onboarding-server.service , and fdo-rendezvous-server.service cannot connect to FDO databases, such as PostgreSQL and SQLite. Therefore, the system cannot start the FDO by using the supported databases for credentials and other parameters, such as storing ownership vouchers. You can work around this problem by performing the following steps: Create a new file named local_fdo_update.cil and enter the missing SELinux policy rules: Install the policy module package: As a consequence, FDO can connect to the PostgreSQL database and also fix problems related to SQLite permissions over /var/lib/fdo/ , where the SQLite database files are expected to be located. Jira:RHEL-28814 OpenSSH no longer logs timeout before authentication OpenSSH does not record a timeout before authentication for USDIP port USDPORT to the log. This might be important because the Fail2Ban intrusion prevention daemon and similar systems use these log records in its mdre-ddos regular expression and no longer ban the IPs of clients that attempt this type of attack. There is currently no known workaround for this problem. Jira:RHEL-45727 OpenSSH in RHEL 9.0-9.3 is not compatible with OpenSSL 3.2.2 The openssh packages provided by RHEL 9.0, 9.1, 9.2, and 9.3 strictly check for the OpenSSL version. Consequently, if you upgrade the openssl packages to version 3.2.2 and higher and you keep the openssh packages in version 8.7p1-34.el9_3.3 or earlier, the sshd service fails to start with an OpenSSL version mismatch error message. To work around this problem, upgrade the openssh packages to version 8.7p1-38.el9 and later. See the sshd not working, OpenSSL version mismatch solution (Red Hat Knowledgebase) for more information. Jira:RHELDOCS-19626 11.3. RHEL for Edge The open-vm-tools package is not available in the edge-vsphere image Currently, the open-vm-tools package is not installed by default in the edge-vsphere image. To work around this issue, include the package in the blueprint customization. When using the edge-vsphere image type, add the open-vm-tools in the blueprint for the RHEL for Edge Container image or the RHEL for Edge Commit image. Jira:RHELDOCS-16574 [1] 11.4. Software management The Installation process sometimes becomes unresponsive When you install RHEL, the installation process sometimes becomes unresponsive. The /tmp/packaging.log file displays the following message at the end: To work around this problem, restart the installation process. Bugzilla:2073510 Running createrepo_c on local repositories generates duplicate repodata files When you run the createrepo_c command on local repositories, it generates duplicate copies of repodata files, one of the copies is compressed and one is not. There is no workaround available, however, you can safely ignore the duplicate files. The createrepo_c command generates duplicate copies because of requirements and differences in other tools relying on repositories created by using createrepo_c . Bugzilla:2056318 11.5. Shells and command-line tools Renaming network interfaces using ifcfg files fails On RHEL 9, the initscripts package is not installed by default. Consequently, renaming network interfaces using ifcfg files fails. To solve this problem, Red Hat recommends that you use udev rules or link files to rename interfaces. For further details, see Consistent network interface device naming and the systemd.link(5) man page. 
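As a sketch of the link-file approach, a file such as /etc/systemd/network/70-custom-ifname.link pins an interface name to a MAC address; the file name, MAC address, and interface name below are placeholders:
[Match]
MACAddress=00:11:22:33:44:55

[Link]
Name=lan0
The new name takes effect the next time the device is initialized, typically after a reboot.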
If you cannot use one of the recommended solutions, install the initscripts package. Bugzilla:2018112 [1] The chkconfig package is not installed by default in RHEL 9 The chkconfig package, which updates and queries runlevel information for system services, is not installed by default in RHEL 9. To manage services, use the systemctl commands or install the chkconfig package manually. For more information about systemd , see Introduction to systemd . For instructions on how to use the systemctl utility, see Managing system services with systemctl . Bugzilla:2053598 [1] The initscripts package is not installed by default By default, the initscripts package is not installed. As a consequence, the ifup and ifdown utilities are not available. As an alternative, use the nmcli connection up and nmcli connection down commands to enable and disable connections. If the suggested alternative does not work for you, report the problem and install the NetworkManager-initscripts-updown package, which provides a NetworkManager solution for the ifup and ifdown utilities. Bugzilla:2082303 Setting the console keymap requires the libxkbcommon library on your minimal install In RHEL 9, certain systemd library dependencies have been converted from dynamic linking to dynamic loading, so that your system opens and uses the libraries at runtime when they are available. With this change, a functionality that depends on such libraries is not available unless you install the necessary library. This also affects setting the keyboard layout on systems with a minimal install. As a result, the localectl --no-convert set-x11-keymap gb command fails. To work around this problem, install the libxkbcommon library: Jira:RHEL-6105 The %vmeff metric from the sysstat package displays incorrect values The sysstat package provides the %vmeff metric to measure the page reclaim efficiency. The values of the %vmeff column returned by the sar -B command are incorrect because sysstat does not parse all relevant /proc/vmstat values provided by later kernel versions. To work around this problem, you can calculate the %vmeff value manually from the /proc/vmstat file. For details, see Why the sar(1) tool reports %vmeff values beyond 100 % in RHEL 8 and RHEL 9? Jira:RHEL-12009 The Service Location Protocol (SLP) is vulnerable to an attack through UDP The OpenSLP provides a dynamic configuration mechanism for applications in local area networks, such as printers and file servers. However, SLP is vulnerable to a reflective denial of service amplification attack through UDP on systems connected to the internet. SLP allows an unauthenticated attacker to register new services without limits set by the SLP implementation. By using UDP and spoofing the source address, an attacker can request the service list, creating a Denial of service on the spoofed address. To prevent external attackers from accessing the SLP service, disable SLP on all systems running on untrusted networks, such as those directly connected to the internet. Alternatively, to work around this problem, configure firewalls to block or filter traffic on UDP and TCP port 427. Jira:RHEL-6995 [1] The ReaR rescue image on UEFI systems with Secure Boot enabled fails to boot with the default settings ReaR image creation by using the rear mkrescue or rear mkbackup command fails with the following message: The missing files are part of the grub2-efi-x64-modules package. If you install this package, the rescue image is created successfully without any errors. 
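For example, assuming the standard RHEL repositories are enabled on the system, you can install the package before running ReaR:

dnf install grub2-efi-x64-modules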
When UEFI Secure Boot is enabled, the rescue image is not bootable because it uses a boot loader that is not signed. To work around this problem, add the following variables to the ReaR configuration file (/etc/rear/local.conf or /etc/rear/site.conf): With the suggested workaround, the image can be produced successfully even on systems without the grub2-efi-x64-modules package, and it is bootable on systems with Secure Boot enabled. In addition, during the system recovery, the bootloader of the recovered system is set to the EFI shim bootloader. For more information about UEFI , Secure Boot , and shim bootloader , see the UEFI: what happens when booting the system Knowledge Base article. Jira:RHELDOCS-18064 [1] The %util column produced by sar and iostat utilities is invalid When you collect system usage statistics by using the sar or iostat utilities, the %util column produced by sar or iostat might contain invalid data. Jira:RHEL-26275 [1] The lsb-release binary is not available in RHEL 9 The information in /etc/os-release was previously available by calling the lsb-release binary. This binary was included in the redhat-lsb package , which was removed in RHEL 9. Now, you can display information about the operating system, such as the distribution, version, code name, and associated metadata, by reading the /etc/os-release file. This file is provided by Red Hat and any changes to it will be overwritten with each update of the redhat-release package. The format of the file is KEY=VALUE , and you can safely source the data for a shell script. Jira:RHELDOCS-16427 [1] 11.6. Infrastructure services Both bind and unbound disable validation of SHA-1-based signatures The bind and unbound components disable validation support of all RSA/SHA1 (algorithm number 5) and RSASHA1-NSEC3-SHA1 (algorithm number 7) signatures, and the SHA-1 usage for signatures is restricted in the DEFAULT system-wide cryptographic policy. As a result, certain DNSSEC records signed with the SHA-1, RSA/SHA1, and RSASHA1-NSEC3-SHA1 digest algorithms fail to verify in Red Hat Enterprise Linux 9 and the affected domain names become vulnerable. To work around this problem, upgrade to a different signature algorithm, such as RSA/SHA-256 or elliptic curve keys. For more information and a list of top-level domains that are affected and vulnerable, see the DNSSEC records signed with RSASHA1 fail to verify solution. Bugzilla:2070495 named fails to start if the same writable zone file is used in multiple zones BIND does not allow the same writable zone file in multiple zones. Consequently, if a configuration includes multiple zones which share a path to a file that can be modified by the named service, named fails to start. To work around this problem, use the in-view clause to share one zone between multiple views and make sure to use different paths for different zones. For example, include the view names in the path. Note that writable zone files are typically used in zones with allowed dynamic updates, secondary zones, or zones maintained by DNSSEC. Bugzilla:1984982 libotr is not compliant with FIPS The libotr library and toolkit for off-the-record (OTR) messaging provides end-to-end encryption for instant messaging conversations. However, the libotr library does not conform to the Federal Information Processing Standards (FIPS) due to its use of the gcry_pk_sign() and gcry_pk_verify() functions. As a result, you cannot use the libotr library in FIPS mode. Bugzilla:2086562 11.7.
Networking kTLS does not support offloading of TLS 1.3 to NICs Kernel Transport Layer Security (kTLS) does not support offloading of TLS 1.3 to NICs. Consequently, software encryption is used with TLS 1.3 even when the NICs support TLS offload. To work around this problem, disable TLS 1.3 if offload is required. As a result, you can offload only TLS 1.2. When TLS 1.3 is in use, there is lower performance, since TLS 1.3 cannot be offloaded. Bugzilla:2000616 [1] Failure to update the session key causes the connection to break The Kernel Transport Layer Security (kTLS) protocol does not support updating the session key, which is used by the symmetric cipher. Consequently, the user cannot update the key, which causes a connection break. To work around this problem, disable kTLS. As a result, with the workaround, it is possible to successfully update the session key. Bugzilla:2013650 [1] 11.8. Kernel Customer applications with dependencies on kernel page size might need updating when moving from 4k to 64k page size kernel RHEL is compatible with both 4k and 64k page size kernels. Customer applications with dependencies on a 4k kernel page size might require updating when moving from 4k to 64k page size kernels. Known instances of this include jemalloc and dependent applications. The jemalloc memory allocator library is sensitive to the page size used in the system's runtime environment. The library can be built to be compatible with 4k and 64k page size kernels, for example, when configured with --with-lg-page=16 or env JEMALLOC_SYS_WITH_LG_PAGE=16 (for the jemallocator Rust crate). Consequently, a mismatch can occur between the page size of the runtime environment and the page size that was present when compiling binaries that depend on jemalloc . As a result, using a jemalloc -based application triggers the following error: To avoid this problem, use one of the following approaches: Use the appropriate build configuration or environment options to create 4k and 64k page size compatible binaries. Build any user space packages that use jemalloc after booting into the final 64k kernel and runtime environment. For example, you can build the fd-find tool, which also uses jemalloc , with the cargo Rust package manager. In the final 64k environment, trigger a new build of all dependencies to resolve the mismatch in the page size by entering the cargo command: Bugzilla:2167783 [1] Upgrading to the latest real-time kernel with dnf does not install multiple kernel versions in parallel Installing the latest real-time kernel with the dnf package manager requires resolving package dependencies to retain the new and current kernel versions simultaneously. By default, dnf removes the older kernel-rt package during the upgrade. As a workaround, add the current kernel-rt package to the installonlypkgs option in the /etc/yum.conf configuration file, for example, installonlypkgs=kernel-rt . The installonlypkgs option appends kernel-rt to the default list used by dnf . Packages listed in the installonlypkgs directive are not removed automatically, which makes it possible to install multiple kernel versions simultaneously. Note that having multiple kernels installed is a way to have a fallback option when working with a new kernel version. Bugzilla:2181571 [1] The Delay Accounting functionality does not display the SWAPIN and IO% statistics columns by default Unlike in earlier versions, the Delay Accounting functionality is disabled by default.
Consequently, the iotop application does not show the SWAPIN and IO% statistics columns and displays the following warning: The Delay Accounting functionality, using the taskstats interface, provides the delay statistics for all tasks or threads that belong to a thread group. Delays in task execution occur when tasks wait for a kernel resource to become available, for example, a task waiting for a free CPU to run on. The statistics help in setting a task's CPU priority, I/O priority, and rss limit values appropriately. As a workaround, you can enable the delayacct boot option either at run time or at boot. To enable delayacct at run time, enter: Note that this command enables the feature system-wide, but only for the tasks that you start after running this command. To enable delayacct permanently at boot, use one of the following procedures: Edit the /etc/sysctl.conf file to override the default parameters: Add the following entry to the /etc/sysctl.conf file: For more information, see How to set sysctl variables on Red Hat Enterprise Linux . Reboot the system for changes to take effect. Add the delayacct option to the kernel command line. For more information, see Configuring kernel command-line parameters . As a result, the iotop application displays the SWAPIN and IO% statistics columns. Bugzilla:2132480 [1] Hardware certification of the real-time kernel on systems with large core-counts might require passing the skew_tick=1 boot parameter Large or moderate-sized systems with numerous sockets and large core-counts can experience latency spikes due to lock contentions on xtime_lock , which is used in the timekeeping system. As a consequence, latency spikes and delays in hardware certifications might occur on multiprocessing systems. As a workaround, you can offset the timer tick per CPU to start at a different time by adding the skew_tick=1 boot parameter. To avoid lock conflicts, enable skew_tick=1 : Enable the skew_tick=1 parameter with grubby . Reboot for changes to take effect. Verify the new settings by displaying the kernel parameters you pass during boot. Note that enabling skew_tick=1 causes a significant increase in power consumption and, therefore, it must be enabled only if you are running latency-sensitive real-time workloads. Jira:RHEL-9318 [1] The kdump mechanism fails to capture the vmcore file on LUKS-encrypted targets When running kdump on systems with Linux Unified Key Setup (LUKS) encrypted partitions, systems require a certain amount of available memory. When the available memory is less than the required amount of memory, the systemd-cryptsetup service fails to mount the partition. Consequently, the second kernel fails to capture the crash dump file on the LUKS-encrypted targets. As a workaround, query the Recommended crashkernel value and gradually increase the memory size to an appropriate value. The Recommended crashkernel value can serve as a reference to set the required memory size. Print the estimated crashkernel value. Configure the amount of required memory by increasing the crashkernel value. Reboot the system for changes to take effect. As a result, kdump works correctly on systems with LUKS-encrypted partitions. Jira:RHEL-11196 [1] The kdump service fails to build the initrd file on IBM Z systems On 64-bit IBM Z systems, the kdump service fails to load the initial RAM disk ( initrd ) when znet-related configuration information, such as s390-subchannels, resides in an inactive NetworkManager connection profile.
Consequently, the kdump mechanism fails with the following error: As a workaround, use one of the following solutions: Configure a network bond or bridge by re-using the connection profile that has the znet configuration information: Copy the znet configuration information from the inactive connection profile to the active connection profile: Run the nmcli command to query the NetworkManager connection profiles: Update the active profile with configuration information from the inactive connection: Restart the kdump service for changes to take effect: Bugzilla:2064708 The iwl7260-firmware breaks Wi-Fi on Intel Wi-Fi 6 AX200, AX210, and Lenovo ThinkPad P1 Gen 4 After updating the iwl7260-firmware or iwl7260-wifi driver to the version provided by RHEL 9.1 and later, the hardware gets into an incorrect internal state and reports its state incorrectly. Consequently, Intel Wi-Fi 6 cards might not work and display the error message: An unconfirmed workaround is to power the system off and back on again. Do not reboot. Bugzilla:2129288 [1] weak-modules from kmod fails to work with module inter-dependencies The weak-modules script provided by the kmod package determines which modules are kABI-compatible with installed kernels. However, while checking modules' kernel compatibility, weak-modules processes the modules' symbol dependencies from the higher to the lower release of the kernel for which they were built. As a consequence, modules with inter-dependencies built against different kernel releases might be interpreted as non-compatible, and therefore the weak-modules script fails to work in this scenario. To work around the problem, build or put the extra modules against the latest stock kernel before you install the new kernel. Bugzilla:2103605 [1] The Intel(R) i40e adapter permanently fails on IBM Power10 When the i40e adapter encounters an I/O error on IBM Power10 systems, the Enhanced I/O Error Handling (EEH) kernel services trigger the network driver's reset and recovery. However, EEH repeatedly reports I/O errors until the i40e driver reaches the predefined EEH limit and stops responding. As a consequence, EEH causes the device to fail permanently. Jira:RHEL-15404 [1] dkms provides an incorrect warning on program failure with correctly compiled drivers on 64-bit ARM CPUs The Dynamic Kernel Module Support ( dkms ) utility does not recognize that the kernel headers for 64-bit ARM CPUs work for both the kernels with 4 kB and 64 kB page sizes. As a result, when the kernel update is performed and the kernel-64k-devel package is not installed, dkms provides an incorrect warning about why the program failed for correctly compiled drivers. To work around this problem, install the kernel-headers package, which contains header files for both types of ARM CPU architectures and is not specific to dkms and its requirements. Jira:RHEL-25967 [1] 11.9. File systems and storage Device Mapper Multipath is not supported with NVMe/TCP Using Device Mapper Multipath with the nvme-tcp driver can result in Call Trace warnings and system instability. To work around this problem, NVMe/TCP users must enable native NVMe multipathing and not use the device-mapper-multipath tools with NVMe. By default, native NVMe multipathing is enabled in RHEL 9. For more information, see Enabling multipathing on NVMe devices . Bugzilla:2033080 [1] The blk-availability systemd service deactivates complex device stacks In systemd , the default block deactivation code does not always handle complex stacks of virtual block devices correctly.
In some configurations, virtual devices might not be removed during the shutdown, which causes error messages to be logged. To work around this problem, deactivate complex block device stacks by executing the following command: As a result, complex virtual device stacks are correctly deactivated during shutdown and do not produce error messages. Bugzilla:2011699 [1] Disabling quota accounting is no longer possible for an XFS filesystem mounted with quotas enabled Starting with RHEL 9.2, it is no longer possible to disable quota accounting on an XFS filesystem which has been mounted with quotas enabled. To work around this issue, disable quota accounting by remounting the filesystem, with the quota option removed. Bugzilla:2160619 [1] udev rule change for NVMe devices There is a udev rule change for NVMe devices that adds the OPTIONS="string_escape=replace" parameter. This leads to a disk by-id naming change for some vendors if the serial number of your device has leading whitespace. Bugzilla:2185048 NVMe/FC devices cannot be reliably used in a Kickstart file NVMe/FC devices can be unavailable during parsing or execution of pre-scripts of the Kickstart file, which can cause the Kickstart installation to fail. To work around this issue, update the boot argument to inst.wait_for_disks=30 . This option causes a delay of 30 seconds, and should provide enough time for the NVMe/FC device to connect. With this workaround along with the NVMe/FC devices connecting in time, the Kickstart installation proceeds without issues. Jira:RHEL-8164 [1] Kernel panic while using the qedi driver While using the qedi iSCSI driver, the kernel panics after the OS boots. To work around this issue, disable the kfence runtime memory error detector feature by adding kfence.sample_interval=0 to the kernel boot command line. Jira:RHEL-8466 [1] ARM-based systems fail to update with a 64k page size kernel when vdo is installed While installing the vdo package, RHEL installs the kmod-kvdo package and a kernel with 4k page size as dependencies. As a consequence, updates from RHEL 9.3 to 9.x fail because kmod-kvdo conflicts with the 64k kernel. To work around this issue, remove the vdo package and its dependencies before attempting to update. Jira:RHEL-8354 lldpad is automatically enabled even for qedf adapters When using a QLogic Corp. FastLinQ QL45000 Series 10/25/40/50GbE FCoE Controller, the lldpad daemon is automatically enabled on systems running Red Hat Virtualization. As a consequence, I/O operations are aborted with an error, for example, [qedf_eh_abort:xxxx]:1: Aborting io_req=ff5d85a9dcf3xxxx . To work around this problem, disable the Link Layer Discovery Protocol (LLDP) and then enable it for interfaces; this can be set on the vdsm configuration level. For more information, see https://access.redhat.com/solutions/6963195 . Jira:RHEL-8104 [1] System fails to boot when iommu is enabled When the Input-Output Memory Management Unit (IOMMU) is enabled on AMD platforms while the BNX2I adapter is in use, the system fails to boot with Direct Memory Access Remapping (DMAR) timeout errors. To work around this problem, disable the IOMMU before booting by using the kernel command-line option, iommu=off . As a result, the system boots without any errors. Jira:RHEL-25730 [1] 11.10.
Dynamic programming languages, web and database servers Git fails to clone or fetch from repositories with potentially unsafe ownership To prevent remote code execution and mitigate CVE-2024-32004 , stricter ownership checks have been introduced in Git for cloning local repositories. Since the update introduced in the RHSA-2024:4083 advisory, Git treats local repositories with potentially unsafe ownership as dubious. As a consequence, if you attempt to clone from a repository locally hosted through git-daemon and you are not the owner of the repository, Git returns a security alert about dubious ownership and fails to clone or fetch from the repository. To work around this problem, explicitly mark the repository as safe by executing the following command: Jira:RHELDOCS-18435 [1] python3.11-lxml does not provide the lxml.isoschematron submodule The python3.11-lxml package is distributed without the lxml.isoschematron submodule because it is not under an open source license. The submodule implements ISO Schematron support. As an alternative, pre-ISO-Schematron validation is available in the lxml.etree.Schematron class. The remaining content of the python3.11-lxml package is unaffected. Bugzilla:2157708 The --ssl-fips-mode option in MySQL and MariaDB does not change FIPS mode The --ssl-fips-mode option in MySQL and MariaDB in RHEL works differently than in upstream. In RHEL 9, if you use --ssl-fips-mode as an argument for the mysqld or mariadbd daemon, or if you use ssl-fips-mode in the MySQL or MariaDB server configuration files, --ssl-fips-mode does not change FIPS mode for these database servers. Instead: If you set --ssl-fips-mode to ON , the mysqld or mariadbd server daemon does not start. If you set --ssl-fips-mode to OFF on a FIPS-enabled system, the mysqld or mariadbd server daemons still run in FIPS mode. This is expected because FIPS mode should be enabled or disabled for the whole RHEL system, not for specific components. Therefore, do not use the --ssl-fips-mode option in MySQL or MariaDB in RHEL. Instead, ensure FIPS mode is enabled on the whole RHEL system: Preferably, install RHEL with FIPS mode enabled. Enabling FIPS mode during the installation ensures that the system generates all keys with FIPS-approved algorithms and continuous monitoring tests in place. For information about installing RHEL in FIPS mode, see Installing the system in FIPS mode . Alternatively, you can switch FIPS mode for the entire RHEL system by following the procedure in Switching the system to FIPS mode . Bugzilla:1991500 11.11. Identity Management MIT Kerberos does not support ECC certificates for PKINIT MIT Kerberos does not implement the RFC5349 request for comments document, which describes the design of elliptic-curve cryptography (ECC) support in Public Key Cryptography for initial authentication (PKINIT). Consequently, the MIT krb5-pkinit package, used by RHEL, does not support ECC certificates. For more information, see Elliptic Curve Cryptography (ECC) Support for Public Key Cryptography for Initial Authentication in Kerberos (PKINIT) . Jira:RHEL-4902 The DEFAULT:SHA1 subpolicy has to be set on RHEL 9 clients for PKINIT to work against AD KDCs The SHA-1 digest algorithm has been deprecated in RHEL 9, and CMS messages for Public Key Cryptography for initial authentication (PKINIT) are now signed with the stronger SHA-256 algorithm. However, the Active Directory (AD) Kerberos Distribution Center (KDC) still uses the SHA-1 digest algorithm to sign CMS messages. 
As a result, RHEL 9 Kerberos clients fail to authenticate users by using PKINIT against an AD KDC. To work around the problem, enable support for the SHA-1 algorithm on your RHEL 9 systems with the following command: Bugzilla:2060798 The PKINIT authentication of a user fails if a RHEL 9 Kerberos agent communicates with a non-RHEL-9 and non-AD Kerberos agent If a RHEL 9 Kerberos agent, either a client or Kerberos Distribution Center (KDC), interacts with a non-RHEL-9 Kerberos agent that is not an Active Directory (AD) agent, the PKINIT authentication of the user fails. To work around the problem, perform one of the following actions: Set the RHEL 9 agent's crypto-policy to DEFAULT:SHA1 to allow the verification of SHA-1 signatures: Update the non-RHEL-9 and non-AD agent to ensure it does not sign CMS data using the SHA-1 algorithm. For this, update your Kerberos client or KDC packages to the versions that use SHA-256 instead of SHA-1: CentOS 9 Stream: krb5-1.19.1-15 RHEL 8.7: krb5-1.18.2-17 RHEL 7.9: krb5-1.15.1-53 Fedora Rawhide/36: krb5-1.19.2-7 Fedora 35/34: krb5-1.19.2-3 As a result, the PKINIT authentication of the user works correctly. Note that for other operating systems, it is the krb5-1.20 release that ensures that the agent signs CMS data with SHA-256 instead of SHA-1. See also The DEFAULT:SHA1 subpolicy has to be set on RHEL 9 clients for PKINIT to work against AD KDCs . Jira:RHEL-4875 Heimdal client fails to authenticate a user using PKINIT against RHEL 9 KDC By default, a Heimdal Kerberos client initiates the PKINIT authentication of an IdM user by using Modular Exponential (MODP) Diffie-Hellman Group 2 for Internet Key Exchange (IKE). However, the MIT Kerberos Distribution Center (KDC) on RHEL 9 only supports MODP Group 14 and 16. Consequently, the pre-authentication request fails with the krb5_get_init_creds: PREAUTH_FAILED error on the Heimdal client and the Key parameters not accepted error on the RHEL MIT KDC. To work around this problem, ensure that the Heimdal client uses MODP Group 14. Set the pkinit_dh_min_bits parameter in the libdefaults section of the client configuration file to 1759: As a result, the Heimdal client completes the PKINIT pre-authentication against the RHEL MIT KDC. Jira:RHEL-4889 IdM in FIPS mode does not support using the NTLMSSP protocol to establish a two-way cross-forest trust Establishing a two-way cross-forest trust between Active Directory (AD) and Identity Management (IdM) with FIPS mode enabled fails because the New Technology LAN Manager Security Support Provider (NTLMSSP) authentication is not FIPS-compliant. IdM in FIPS mode does not accept the RC4 NTLM hash that the AD domain controller uses when attempting to authenticate. Jira:RHEL-12154 [1] Users without SIDs cannot log in to IdM after an upgrade After upgrading your IdM replica to RHEL 9.2, the IdM Kerberos Distribution Center (KDC) might fail to issue ticket-granting tickets (TGTs) to users who do not have Security Identifiers (SIDs) assigned to their accounts. Consequently, the users cannot log in to their accounts. To work around the problem, generate SIDs by running the following command as an IdM administrator on another IdM replica in the topology: Afterward, if users still cannot log in, examine the Directory Server error log. You might have to adjust ID ranges to include user POSIX identities. See the When upgrading to RHEL9, IDM users are not able to login anymore Knowledgebase solution for more information.
Jira:RHELPLAN-157939 [1] Migrated IdM users might be unable to log in due to mismatching domain SIDs If you have used the ipa migrate-ds script to migrate users from one IdM deployment to another, those users might have problems using IdM services because their previously existing Security Identifiers (SIDs) do not have the domain SID of the current IdM environment. For example, those users can retrieve a Kerberos ticket with the kinit utility, but they cannot log in. To work around this problem, see the following Knowledgebase article: Migrated IdM users unable to log in due to mismatching domain SIDs . Jira:RHELPLAN-109613 [1] MIT krb5 user fails to obtain an AD TGT because of incompatible encryption types used to generate the user PAC In MIT krb5 1.20 and later packages, a Privilege Attribute Certificate (PAC) is included in all Kerberos tickets by default. The MIT Kerberos Distribution Center (KDC) selects the strongest encryption type available to generate the KDC checksum in the PAC, which currently are the AES HMAC-SHA2 encryption types defined in RFC8009. However, Active Directory (AD) does not support this RFC. Consequently, in an AD-MIT cross-realm setup, an MIT krb5 user fails to obtain an AD ticket-granting ticket (TGT) because the cross-realm TGT generated by the MIT KDC contains an incompatible KDC checksum type in the PAC. To work around the problem, set the disable_pac parameter to true for the MIT realm in the [realms] section of the /var/kerberos/krb5kdc/kdc.conf configuration file. As a result, the MIT KDC generates tickets without a PAC, which means that AD skips the failing checksum verification and an MIT krb5 user can obtain an AD TGT. Bugzilla:2016312 Potential risk when using the default value for the ldap_id_use_start_tls option When using ldap:// without TLS for identity lookups, it can pose a risk of an attack, particularly a man-in-the-middle (MITM) attack, which could allow an attacker to impersonate a user by altering, for example, the UID or GID of an object returned in an LDAP search. Currently, the SSSD configuration option to enforce TLS, ldap_id_use_start_tls , defaults to false . Ensure that your setup operates in a trusted environment and decide if it is safe to use unencrypted communication for id_provider = ldap . Note id_provider = ad and id_provider = ipa are not affected as they use encrypted connections protected by SASL and GSSAPI. If it is not safe to use unencrypted communication, enforce TLS by setting the ldap_id_use_start_tls option to true in the /etc/sssd/sssd.conf file. The default behavior is planned to be changed in a future release of RHEL. Jira:RHELPLAN-155168 [1] Adding a RHEL 9 replica in FIPS mode to an IdM deployment in FIPS mode that was initialized with RHEL 8.6 or earlier fails The default RHEL 9 FIPS cryptographic policy aiming to comply with FIPS 140-3 does not allow the use of the AES HMAC-SHA1 encryption types' key derivation function as defined by RFC3961, section 5.1. This constraint is a blocker when adding a RHEL 9 Identity Management (IdM) replica in FIPS mode to a RHEL 8 IdM environment in FIPS mode in which the first server was installed on a RHEL 8.6 system or earlier. This is because there are no common encryption types between RHEL 9 and earlier RHEL versions, which commonly use the AES HMAC-SHA1 encryption types but do not use the AES HMAC-SHA2 encryption types.
You can view the encryption type of your IdM master key by entering the following command on the server: For more information, see the AD Domain Users unable to login in to the FIPS-compliant environment KCS solution. Jira:RHEL-4888 SSSD registers the DNS names properly Previously, if the DNS was set up incorrectly, SSSD always failed the first attempt to register the DNS name. To work around the problem, this update provides a new parameter dns_resolver_use_search_list . Set dns_resolver_use_search_list = false to avoid using the DNS search list. Bugzilla:1608496 [1] Installing a RHEL 7 IdM client with a RHEL 9.2 and later IdM server in FIPS mode fails due to EMS enforcement The TLS Extended Master Secret (EMS) extension (RFC 7627) is now mandatory for TLS 1.2 connections on FIPS-enabled RHEL 9.2 and later systems. This is in accordance with FIPS-140-3 requirements. However, the openssl version available in RHEL 7.9 and lower does not support EMS. In consequence, installing a RHEL 7 Identity Management (IdM) client with a FIPS-enabled IdM server running on RHEL 9.2 and later fails. If upgrading the host to RHEL 8 before installing an IdM client on it is not an option, work around the problem by removing the requirement for EMS usage on the RHEL 9 server by applying a NO-ENFORCE-EMS subpolicy on top of the FIPS crypto policy: Note that this removal goes against the FIPS 140-3 requirements. As a result, you can establish and accept TLS 1.2 connections that do not use EMS, and the installation of a RHEL 7 IdM client succeeds. Jira:RHEL-4955 The online backup and the online automembership rebuild tasks can acquire two locks resulting in a deadlock If the online backup and the online automembership rebuild tasks attempt to acquire the same two locks in the opposite order, it can lead to an unrecoverable deadlock that requires you to stop and restart the server. To work around this problem, do not launch the online backup and the online automembership rebuild tasks in parallel. Jira:RHELDOCS-18065 [1] SSSD retrieves incomplete list of members if the group size exceeds 1500 members During the integration of SSSD with Active Directory, SSSD retrieves incomplete group member lists when the group size exceeds 1500 members. This issue occurs because Active Directory's MaxValRange policy, which restricts the number of members retrievable in a single query, is set to 1500 by default. To work around this problem, change the MaxValRange setting in Active Directory to accommodate larger group sizes. Jira:RHELDOCS-19603 11.12. Desktop VNC is not running after upgrading to RHEL 9 After upgrading from RHEL 8 to RHEL 9, the VNC server fails to start, even if it was previously enabled. To work around the problem, manually enable the vncserver service after the system upgrade: As a result, VNC is now enabled and starts after every system boot as expected. Bugzilla:2060308 User Creation screen is unresponsive When installing RHEL using a graphical user interface, the User Creation screen is unresponsive. As a consequence, creating users during installation is more difficult. To work around this problem, use one of the following solutions to create users: Run the installation in VNC mode and resize the VNC window. Create users after completing the installation process. Jira:RHEL-11924 [1] WebKitGTK fails to display web pages on IBM Z The WebKitGTK web browser engine fails when trying to display web pages on the IBM Z architecture. The web page remains blank and the WebKitGTK process ends unexpectedly. 
As a consequence, you cannot use certain features of applications that use WebKitGTK to display web pages, such as the following: The Evolution mail client The GNOME Online Accounts settings The GNOME Help application Jira:RHEL-4157 11.13. Graphics infrastructures NVIDIA drivers might revert to X.org Under certain conditions, the proprietary NVIDIA drivers disable the Wayland display protocol and revert to the X.org display server: If the version of the NVIDIA driver is lower than 470. If the system is a laptop that uses hybrid graphics. If you have not enabled the required NVIDIA driver options. Additionally, Wayland is enabled but the desktop session uses X.org by default if the version of the NVIDIA driver is lower than 510. Jira:RHELPLAN-119001 [1] Night Light is not available on Wayland with NVIDIA When the proprietary NVIDIA drivers are enabled on your system, the Night Light feature of GNOME is not available in Wayland sessions. The NVIDIA drivers do not currently support Night Light . Jira:RHELPLAN-119852 [1] X.org configuration utilities do not work under Wayland X.org utilities for manipulating the screen do not work in the Wayland session. Notably, the xrandr utility does not work under Wayland due to its different approach to handling resolutions, rotations, and layout. Jira:RHELPLAN-121049 [1] 11.14. The web console VNC console in the RHEL web console does not work correctly on ARM64 Currently, when you import a virtual machine (VM) in the RHEL web console on ARM64 architecture and then you try to interact with it in the VNC console, the console does not react to your input. Additionally, when you create a VM in the web console on ARM64 architecture, the VNC console does not display the last lines of your input. Jira:RHEL-31993 [1] 11.15. Red Hat Enterprise Linux system roles If firewalld.service is masked, using the firewall RHEL system role fails If firewalld.service is masked on a RHEL system, the firewall RHEL system role fails. To work around this problem, unmask the firewalld.service : Bugzilla:2123859 Unable to register systems with environment names The rhc system role fails to register the system when specifying environment names in rhc_environment . As a workaround, use environment IDs instead of environment names while registering. Jira:RHEL-1172 Running Microsoft SQL Server 2022 in high-availability mode as an SELinux-confined application does not work Microsoft SQL Server 2022 on RHEL 9.4 and later supports running as an SELinux-confined application. However, due to a limitation in Microsoft SQL Server, running the service as an SELinux-confined application does not work in high-availability mode. To work around this problem, you can run Microsoft SQL Server as an unconfined application if you require the service to be highly available. Note that this limitation also impacts installing Microsoft SQL Server when you use the mssql RHEL system role to install this service. Jira:RHELDOCS-17719 [1] Configuring the imuxsock input basics type causes a problem Configuring the imuxsock input basics type through the logging RHEL system role and consequently the use_imuxsock option causes a problem in the resulting configuration on the managed nodes. The role sets the name parameter; however, the imuxsock input type does not support the name parameter. As a result, the rsyslog logging utility prints the parameter 'name' not known - typo in config file? error.
Jira:RHELDOCS-18329 [1] For RHEL 9 UEFI managed nodes, the bootloader_password variable of the bootloader RHEL system role does not work Previously, the bootloader_password variable incorrectly placed the password information in the /boot/efi/EFI/redhat/user.cfg file. The proper location was the /boot/grub2/user.cfg file. Consequently, when you rebooted the managed node to modify any boot loader entry, GRUB2 did not prompt you for a password. To work around this problem, you can manually move the user.cfg file from the incorrect /boot/efi/EFI/redhat/ directory to the correct /boot/grub2/ directory to achieve the expected behavior. Jira:RHEL-45705 11.16. Virtualization Installing a virtual machine over https or ssh in some cases fails Currently, the virt-install utility fails when attempting to install a guest operating system (OS) from an ISO source over an https or ssh connection - for example using virt-install --cdrom https://example/path/to/image.iso . Instead of creating a virtual machine (VM), the described operation ends unexpectedly with an internal error: process exited while connecting to monitor message. Similarly, using the RHEL 9 web console to install a guest operating system fails and displays an Unknown driver 'https' error if you use an https or ssh URL, or the Download OS function. To work around this problem, install qemu-kvm-block-curl and qemu-kvm-block-ssh on the host to enable https and ssh protocol support. Alternatively, use a different connection protocol or a different installation source. Bugzilla:2014229 Using NVIDIA drivers in virtual machines disables Wayland Currently, NVIDIA drivers are not compatible with the Wayland graphical session. As a consequence, RHEL guest operating systems that use NVIDIA drivers automatically disable Wayland and load an Xorg session instead. This primarily occurs in the following scenarios: When you pass through an NVIDIA GPU device to a RHEL virtual machine (VM) When you assign an NVIDIA vGPU mediated device to a RHEL VM Jira:RHELPLAN-117234 [1] The Milan VM CPU type is sometimes not available on AMD Milan systems On certain AMD Milan systems, the Enhanced REP MOVSB ( erms ) and Fast Short REP MOVSB ( fsrm ) feature flags are disabled in the BIOS by default. Consequently, the Milan CPU type might not be available on these systems. In addition, VM live migration between Milan hosts with different feature flag settings might fail. To work around these problems, manually turn on erms and fsrm in the BIOS of your host. Bugzilla:2077767 [1] A hostdev interface with failover settings cannot be hot-plugged after being hot-unplugged After removing a hostdev network interface with failover configuration from a running virtual machine (VM), the interface currently cannot be re-attached to the same running VM. Jira:RHEL-7337 Live post-copy migration of VMs with failover VFs fails Currently, attempting to post-copy migrate a running virtual machine (VM) fails if the VM uses a device with the virtual function (VF) failover capability enabled. To work around the problem, use the standard migration type, rather than post-copy migration. Jira:RHEL-7335 Host network cannot ping VMs with VFs during live migration When live migrating a virtual machine (VM) with a configured virtual function (VF), such as a VM that uses virtual SR-IOV software, the network of the VM is not visible to other devices and the VM cannot be reached by commands such as ping . After the migration is finished, however, the problem no longer occurs.
Jira:RHEL-7336 Disabling AVX causes VMs to become unbootable On a host machine that uses a CPU with Advanced Vector Extensions (AVX) support, attempting to boot a VM with AVX explicitly disabled currently fails, and instead triggers a kernel panic in the VM. Bugzilla:2005173 [1] Windows VM fails to get IP address after network interface reset Sometimes, Windows virtual machines fail to get an IP address after an automatic network interface reset. As a consequence, the VM fails to connect to the network. To work around this problem, disable and re-enable the network adapter driver in the Windows Device Manager. Jira:RHEL-11366 Windows Server 2016 VMs sometimes stop working after hot-plugging a vCPU Currently, assigning a vCPU to a running virtual machine (VM) with a Windows Server 2016 guest operating system might cause a variety of problems, such as the VM stopping unexpectedly, becoming unresponsive, or rebooting. Bugzilla:1915715 Redundant error messages on VMs with NVIDIA passthrough devices When using an Intel host machine with a RHEL 9.2 and later operating system, virtual machines (VMs) with a passed through NVIDIA GPU device frequently log the following error message: However, this error message does not impact the functionality of the VM and can be ignored. For details, see the Red Hat Knowledgebase . Bugzilla:2149989 [1] Restarting the OVS service on a host might block network connectivity on its running VMs When the Open vSwitch (OVS) service restarts or crashes on a host, virtual machines (VMs) that are running on this host cannot recover the state of the networking device. As a consequence, VMs might be completely unable to receive packets. This problem only affects systems that use the packed virtqueue format in their virtio networking stack. To work around this problem, use the packed=off parameter in the virtio networking device definition to disable packed virtqueue. With packed virtqueue disabled, the state of the networking device can, in some situations, be recovered from RAM. Jira:RHEL-333 Recovering an interrupted post-copy VM migration might fail If a post-copy migration of a virtual machine (VM) is interrupted and then immediately resumed on the same incoming port, the migration might fail with the following error: Address already in use To work around this problem, wait at least 10 seconds before resuming the post-copy migration or switch to another port for migration recovery. Jira:RHEL-7096 NUMA node mapping not working correctly on AMD EPYC CPUs QEMU does not handle NUMA node mapping on AMD EPYC CPUs correctly. As a result, the performance of virtual machines (VMs) with these CPUs might be negatively impacted if using a NUMA node configuration. In addition, the VMs display a warning similar to the following during boot: To work around this issue, do not use AMD EPYC CPUs for NUMA node configurations. Bugzilla:2176010 NFS failure during VM migration causes migration failure and source VM coredump Currently, if the NFS service or server is shut down during virtual machine (VM) migration, the source VM's QEMU is unable to reconnect to the NFS server when it starts running again. As a result, the migration fails and a coredump is initiated on the source VM. Currently, there is no workaround available.
Bugzilla:2058982 PCIe ATS devices do not work on Windows VMs When you configure a PCIe Address Translation Services (ATS) device in the XML configuration of a virtual machine (VM) with a Windows guest operating system, the guest does not enable the ATS device after booting the VM. This is because Windows currently does not support ATS on virtio devices. For more information, see the Red Hat Knowledgebase . Bugzilla:2073872 virsh blkiotune --weight command fails to set the correct cgroup I/O controller value Currently, using the virsh blkiotune --weight command to set the VM weight does not work as expected. The command fails to set the correct io.bfq.weight value in the cgroup I/O controller interface file. There is no workaround at this time. Bugzilla:1970830 Starting a VM with an NVIDIA A16 GPU sometimes causes the host GPU to stop working Currently, if you start a VM that uses an NVIDIA A16 GPU passthrough device, the NVIDIA A16 GPU physical device on the host system in some cases stops working. To work around the problem, reboot the hypervisor and set the reset_method for the GPU device to bus : For details, see the Red Hat Knowledgebase . Jira:RHEL-7212 [1] Windows VMs might become unresponsive due to storage errors On virtual machines (VMs) that use Windows guest operating systems, the system in some cases becomes unresponsive when under high I/O load. When this happens, the system logs a viostor Reset to device, \Device\RaidPort3, was issued error. Jira:RHEL-1609 [1] Windows 10 VMs with certain PCI devices might become unresponsive on boot Currently, a virtual machine (VM) that uses a Windows 10 guest operating system might become unresponsive during boot if a virtio-win-scsi PCI device with a local disk back end is attached to the VM. To work around the problem, boot the VM with the multi_queue option enabled. Jira:RHEL-1084 [1] Windows 11 VMs with a memory balloon device set might close unexpectedly during reboot Currently, rebooting virtual machines (VMs) that use a Windows 11 guest operating system and a memory balloon device in some cases fails with a DRIVER POWER STATE FAILURE blue-screen error. Jira:RHEL-935 [1] Resuming a postcopy VM migration fails in some cases Currently, when performing a postcopy migration of a virtual machine (VM), if a proxy network failure occurs during the RECOVER phase of the migration, the VM becomes unresponsive and the migration cannot be resumed. Instead, the recovery command displays the following error: Jira:RHEL-7115 The virtio balloon driver sometimes does not work on Windows 10 VMs Under certain circumstances, the virtio-balloon driver does not work correctly on virtual machines (VMs) that use a Windows 10 guest operating system. As a consequence, such VMs might not use their assigned memory efficiently. Jira:RHEL-12118 The virtio file system has suboptimal performance in Windows VMs Currently, when a virtio file system (virtiofs) is configured on a virtual machine (VM) that uses a Windows guest operating system, the performance of virtiofs in the VM is significantly worse than in VMs that use Linux guests. Jira:RHEL-1212 [1] Hot-unplugging a storage device on Windows VMs might fail On virtual machines (VMs) that use a Windows guest operating system, removing a storage device when the VM is running (also known as a device hot-unplug) in some cases fails. As a consequence, the storage device remains attached to the VM and the disk manager service might become unresponsive.
Jira:RHEL-869 Hot plugging CPUs to a Windows VM might cause a system failure When hot plugging the maximum number of CPUs to a Windows virtual machine (VM) with huge pages enabled, the guest operating system might crash with the following Stop error : Jira:RHEL-1220 Updating virtio drivers on Windows VMs might fail When updating the KVM paravirtualized ( virtio ) drivers on a Windows virtual machine (VM), the update might cause the mouse to stop working and the newly installed drivers might not be signed. This problem occurs when updating the virtio drivers by installing from the virtio-win-guest-tools package, which is a part of the virtio-win.iso file. To work around this problem, update the virtio drivers by using Windows Device Manager. Jira:RHEL-574 [1] TX queue size cannot be changed in VMs that use vhost-kernel Currently, you cannot set up TX queue size on KVM virtual machines (VMs) that use vhost-kernel as a back end for the virtio network driver. As a consequence, you can use only the default value of 256 for the TX queue, which might prevent you from optimizing your VM network throughput. Jira:RHEL-1138 [1] Virtual machines incorrectly report an AMD SRSO vulnerability RHEL 9.4 virtual machines (VMs) running on a RHEL 9 host with the AMD Zen 3 and 4 CPU architecture incorrectly report a vulnerability to a Speculative Return Stack Overflow (SRSO) attack: The problem is caused by a missing cpuid flag and the vulnerability is in fact fully mitigated in VMs under the following conditions: You have the updated linux-firmware package on the host as described here: cve-2023-20569 . The host kernel has the mitigation enabled, which is the default behavior. If the mitigation is enabled, Safe RET is displayed in the lscpu command output on the host. Jira:RHEL-26152 [1] Virtual machines with a large number of vCPUs and virtual disks might fail Currently, assigning a large number of vCPUs and virtual disks to a RHEL virtual machine (VM) might cause the VM to fail to boot. To work around this problem, use Small Computer System Interface (SCSI) virtual storage devices instead of block devices if possible. For more details, see: Creating SCSI-based storage pools with vHBA devices by using the CLI If you need to use virtual block devices, you can also try to reduce the number of interrupt vectors by starting the VM with a -global virtio-blk-pci.vectors= <number-of-vectors> QEMU option. Try to find a sufficiently low number of interrupt vectors that allows the VM to boot successfully. Jira:RHEL-32990 [1] Link status shows as up on the VM even when the status of the e1000e or igb model interface is down Before booting the VM, set the status of the Ethernet link to down for the e1000 or igb model network interface. Despite this, after the VM boots, the network interface keeps the up status, because when you set the status of the Ethernet link to down and then stop and re-start the VM, it is automatically set back to up . Consequently, the correct state of the network interface is not maintained. As a workaround, set the network interface status to down inside the VM by using the command: Alternatively, you can try to remove and add this network interface again while the VM is running. Jira:RHEL-21867 Using NBD to migrate a VM storage over a TLS connection does not work correctly Currently, when migrating a virtual machine (VM) and its storage device by using the Network Block Device (NBD) protocol over a TLS connection, a data race in the TLS handshake might make the migration appear to be successful.
However, it causes the QEMU process on the destination VM to become unresponsive to further interactions. If you can trust your network, you can work around this problem by using plaintext rather than TLS connections for the NBD protocol, which is used during the VM storage migration. Jira:RHEL-33440 Kdump fails on virtual machines with AMD SEV-SNP Currently, kdump fails on RHEL 9 virtual machines (VMs) that use the AMD Secure Encrypted Virtualization (SEV) with the Secure Nested Paging (SNP) feature. Jira:RHEL-10019 [1] VMs incorrectly report the vulnerable status for spec_rstack_overflow parameter on the AMD EPYC model When you boot a host, it does not detect any vulnerabilities in the spec_rstack_overflow parameter. After querying the parameter for logs, it displays the message: After booting a VM on the same host, the VM detects a vulnerability in the spec_rstack_overflow parameter. And when you query the parameter for logs, it displays the message: However, this is a false warning message, and you can ignore the status of the /sys/devices/system/cpu/vulnerabilities/spec_rstack_overflow file inside the VM. Jira:RHEL-17614 [1] 11.17. RHEL in cloud environments Cloning or restoring RHEL 9 virtual machines that use LVM on Nutanix AHV causes non-root partitions to disappear When running a RHEL 9 guest operating system on a virtual machine (VM) hosted on the Nutanix AHV hypervisor, restoring the VM from a snapshot or cloning the VM currently causes non-root partitions in the VM to disappear if the guest is using Logical Volume Management (LVM). As a consequence, the following problems occur: After restoring the VM from a snapshot, the VM cannot boot, and instead enters emergency mode. A VM created by cloning cannot boot, and instead enters emergency mode. To work around these problems, do the following in emergency mode of the VM: Remove the LVM system devices file: rm /etc/lvm/devices/system.devices Re-create LVM device settings: vgimportdevices -a Reboot the VM This makes it possible for the cloned or restored VM to boot up correctly. Alternatively, to prevent the issue from occurring, do the following before cloning a VM or creating a VM snapshot: Uncomment the use_devicesfile = 0 line in the /etc/lvm/lvm.conf file Reboot the VM Bugzilla:2059545 [1] Customizing RHEL 9 guests on ESXi sometimes causes networking problems Currently, customizing a RHEL 9 guest operating system in the VMware ESXi hypervisor does not work correctly with NetworkManager key files. As a consequence, if the guest is using such a key file, it will have incorrect network settings, such as the IP address or the gateway. For details and workaround instructions, see the VMware Knowledge Base . Bugzilla:2037657 [1] RHEL instances on Azure fail to boot if provisioned by cloud-init and configured with an NFSv3 mount entry Currently, booting a RHEL virtual machine (VM) on the Microsoft Azure cloud platform fails if the VM was provisioned by the cloud-init tool and the guest operating system of the VM has an NFSv3 mount entry in the /etc/fstab file. Bugzilla:2081114 [1] Setting static IP in a RHEL virtual machine on a VMware host does not work Currently, when using RHEL as a guest operating system of a virtual machine (VM) on a VMware host, the DatasourceOVF function does not work correctly. As a consequence, if you use the cloud-init utility to set the VM's network to static IP and then reboot the VM, the VM's network will be changed to DHCP. To work around this issue, see the VMware Knowledge Base . 
Jira:RHEL-12122 Large VMs might fail to boot into the debug kernel when the kmemleak option is enabled When attempting to boot a RHEL 9 virtual machine (VM) into the debug kernel, booting might fail with the following error if the machine kernel is using the kmemleak=on argument. This problem affects mainly large VMs because they spend more time in the boot sequence. To work around the problem, edit the /etc/fstab file on the machine and add extra timeout options to the /boot and /boot/efi mount points. For example: Jira:RHELDOCS-16979 [1] 11.18. Supportability Timeout when running sos report on IBM Power Systems, Little Endian When running the sos report command on IBM Power Systems, Little Endian with hundreds or thousands of CPUs, the processor plugin reaches its default timeout of 300 seconds when collecting huge content of the /sys/devices/system/cpu directory. As a workaround, increase the plugin's timeout accordingly: For one-time setting, run: For a permanent change, edit the [plugin_options] section of the /etc/sos/sos.conf file: The example value is set to 1800. The particular timeout value highly depends on a specific system. To set the plugin's timeout appropriately, you can first estimate the time needed to collect the one plugin with no timeout by running the following command: Bugzilla:1869561 [1] 11.19. Containers Running systemd within an older container image does not work Running systemd within an older container image, for example, centos:7 , does not work: To work around this problem, use the following commands: Jira:RHELPLAN-96940 [1] Root filesystem is not expanded by default When you use a base container image that does not include cloud-init to create an AMI or QCOW2 image by using bootc-image-builder , the root filesystem size is not expanded dynamically on boot to the full size of the provisioned virtual disk. To work around this issue, apply one of the following available options: Include cloud-init in the image. Include custom logic in the container image to expand the root filesystem, for example, as shown in the sketch after this issue. Include custom logic to use the additional space for secondary filesystems, for example, /var/lib/containers . Note By default, the physical root storage is mounted at the /sysroot partition. Jira:RHEL-33208
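A minimal sketch of such custom logic is shown below. It assumes that cloud-utils-growpart is included in the image, that the root filesystem is XFS, and that the script runs once on first boot, for example from a one-shot systemd unit; the device names are placeholders and this is not the documented bootc mechanism:

#!/usr/bin/bash
# Hypothetical first-boot script: grow the partition that backs the physical root
# and then grow the filesystem on it.
set -eu
rootdev=$(findmnt -n -o SOURCE /sysroot)                          # e.g. /dev/vda4
disk=/dev/$(lsblk -n -o PKNAME "$rootdev")                        # e.g. /dev/vda
partnum=$(cat "/sys/class/block/$(basename "$rootdev")/partition") # e.g. 4
growpart "$disk" "$partnum" || true      # no-op if the partition already fills the disk
mount -o remount,rw /sysroot             # /sysroot is mounted read-only by default
xfs_growfs /sysroot                      # use resize2fs for an ext4 root instead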
"%pre wipefs -a /dev/sda %end",
"The command 'mount --bind /mnt/sysimage/data /mnt/sysroot/data' exited with the code 32.",
"nmcli connection show",
"nmcli connection delete <connection_name>",
"Warning: /boot//.vmlinuz-<kernel version>.x86_64.hmac does not exist FATAL: FIPS integrity test failed Refusing to continue",
"Error enabling service <name_of_the_service>",
"sudo podman run --rm -it --privileged --pull=newer --security-opt label=type:unconfined_t -v ./config.toml:/config.toml -v ./output:/output -v /var/lib/containers/storage:/var/lib/containers/storage registry.redhat.io/rhel9/bootc-image-builder:latest --type qcow2 --local quay.io/<namespace>/<image>:<tag>",
"SignatureAlgorithms = RSA+SHA256:RSA+SHA512:RSA+SHA384:ECDSA+SHA256:ECDSA+SHA512:ECDSA+SHA384 MaxProtocol = TLSv1.2",
"There was an unexpected problem with the supplied content.",
"xccdf-path = /usr/share/xml/scap/sc_tailoring/ds-combined.xml tailoring-path = /usr/share/xml/scap/sc_tailoring/tailoring-xccdf.xml",
"dnf install -y ansible-core scap-security-guide rhc-worker-playbook",
"cd /usr/share/scap-security-guide/ansible",
"ANSIBLE_COLLECTIONS_PATH=/usr/share/rhc-worker-playbook/ansible/collections/ansible_collections/ ansible-playbook -c local -i localhost, rhel9-playbook- cis_server_l1 .yml",
"Title: Set SSH Client Alive Count Max to zero CCE Identifier: CCE-90271-8 Rule ID: xccdf_org.ssgproject.content_rule_sshd_set_keepalive_0 Title: Set SSH Idle Timeout Interval CCE Identifier: CCE-90811-1 Rule ID: xccdf_org.ssgproject.content_rule_sshd_set_idle_timeout",
"(allow fdo_t etc_t (file (write))) (allow fdo_t fdo_conf_t (file (append create rename setattr unlink write ))) (allow fdo_t fdo_var_lib_t (dir (add_name remove_name write ))) (allow fdo_t fdo_var_lib_t (file (create setattr unlink write ))) (allow fdo_t krb5_keytab_t (dir (search))) (allow fdo_t postgresql_port_t (tcp_socket (name_connect))) (allow fdo_t sssd_t (unix_stream_socket (connectto))) (allow fdo_t sssd_var_run_t (sock_file (write)))",
"semodule -i local_fdo_update.cil",
"10:20:56,416 DDEBUG dnf: RPM transaction over.",
"dnf install libxkbcommon",
"grub2-mkstandalone may fail to make a bootable EFI image of GRUB2 (no /usr/*/grub*/x86_64-efi/moddep.lst file) (...) grub2-mkstandalone: error: /usr/lib/grub/x86_64-efi/modinfo.sh doesn't exist. Please specify --target or --directory.",
"UEFI_BOOTLOADER=/boot/efi/EFI/redhat/grubx64.efi SECURE_BOOT_BOOTLOADER=/boot/efi/EFI/redhat/shimx64.efi",
"<jemalloc>: Unsupported system page size",
"cargo install fd-find --force",
"CONFIG_TASK_DELAY_ACCT not enabled in kernel, cannot determine SWAPIN and IO%",
"echo 1 > /proc/sys/kernel/task_delayacct",
"kernel.task_delayacct = 1",
"grubby --update-kernel=ALL --args=\"skew_tick=1\"",
"cat /proc/cmdline",
"kdumpctl estimate",
"grubby --args=crashkernel=652M --update-kernel=ALL",
"reboot",
"dracut: Failed to set up znet kdump: mkdumprd: failed to make kdump initrd",
"nmcli connection modify enc600 master bond0 slave-type bond",
"nmcli connection show NAME UUID TYPE Device bridge-br0 ed391a43-bdea-4170-b8a2 bridge br0 bridge-slave-enc600 caf7f770-1e55-4126-a2f4 ethernet enc600 enc600 bc293b8d-ef1e-45f6-bad1 ethernet --",
"#!/bin/bash inactive_connection=enc600 active_connection=bridge-slave-enc600 for name in nettype subchannels options; do field=802-3-ethernet.s390-USDname val=USD(nmcli --get-values \"USDfield\"connection show \"USDinactive_connection\") nmcli connection modify \"USDactive_connection\" \"USDfield\" USDval\" done",
"kdumpctl restart",
"kernel: iwlwifi 0000:09:00.0: Failed to start RT ucode: -110 kernel: iwlwifi 0000:09:00.0: WRT: Collecting data: ini trigger 13 fired (delay=0ms) kernel: iwlwifi 0000:09:00.0: Failed to run INIT ucode: -110",
"systemctl enable --now blk-availability.service",
"git config --global --add safe.directory /path/to/repository",
"update-crypto-policies --set DEFAULT:SHA1",
"update-crypto-policies --set DEFAULT:SHA1",
"[libdefaults] pkinit_dh_min_bits = 1759",
"ipa config-mod --enable-sid --add-sids",
"kadmin.local getprinc K/M | grep -E '^Key:'",
"update-crypto-policies --set FIPS:NO-ENFORCE-EMS",
"systemctl enable --now vncserver@: port-number",
"systemctl unmask firewalld.service",
"Spurious APIC interrupt (vector 0xFF) on CPU#2, should never happen.",
"sched: CPU #4's llc-sibling CPU #3 is not on the same node! [node: 1 != 0]. Ignoring dependency. WARNING: CPU: 4 PID: 0 at arch/x86/kernel/smpboot.c:415 topology_sane.isra.0+0x6b/0x80",
"echo bus > /sys/bus/pci/devices/<DEVICE-PCI-ADDRESS>/reset_method cat /sys/bus/pci/devices/<DEVICE-PCI-ADDRESS>/reset_method bus",
"error: Requested operation is not valid: QEMU reports migration is still running",
"PROCESSOR_START_TIMEOUT",
"lscpu | grep rstack Vulnerability Spec rstack overflow: Vulnerable: Safe RET, no microcode",
"ip link set dev eth0 down",
"cat /sys/devices/system/cpu/vulnerabilities/spec_rstack_overflow Mitigation: Safe RET",
"cat /sys/devices/system/cpu/vulnerabilities/spec_rstack_overflow Vulnerable: Safe RET, no microcode",
"Cannot open access to console, the root account is locked. See sulogin(8) man page for more details. Press Enter to continue.",
"UUID=e43ead51-b364-419e-92fc-b1f363f19e49 /boot xfs defaults, x-systemd.device-timeout=600,x-systemd.mount-timeout=600 0 0 UUID=7B77-95E7 /boot/efi vfat defaults,uid=0,gid=0,umask=077,shortname=winnt, x-systemd.device-timeout=600,x-systemd.mount-timeout=600 0 2",
"sos report -k processor.timeout=1800",
"Specify any plugin options and their values here. These options take the form plugin_name.option_name = value #rpm.rpmva = off processor.timeout = 1800",
"time sos report -o processor -k processor.timeout=0 --batch --build",
"podman run --rm -ti centos:7 /usr/lib/systemd/systemd Storing signatures Failed to mount cgroup at /sys/fs/cgroup/systemd: Operation not permitted [!!!!!!] Failed to mount API filesystems, freezing.",
"mkdir /sys/fs/cgroup/systemd mount none -t cgroup -o none,name=systemd /sys/fs/cgroup/systemd podman run --runtime /usr/bin/crun --annotation=run.oci.systemd.force_cgroup_v1=/sys/fs/cgroup --rm -ti centos:7 /usr/lib/systemd/systemd",
"/usr/bin/growpart /dev/vda 4 unshare -m bin/sh -c 'mount -o remount,rw /sysroot && xfs_growfs /sysroot'"
]
| https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/9.4_release_notes/known-issues |
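A minimal sketch of the "custom logic" workaround for the bootc-image-builder root filesystem issue above: a one-shot systemd unit baked into the container image that grows the root partition and filesystem on first boot. The device /dev/vda, partition number 4, the XFS root mounted at /sysroot, and the script and unit names are assumptions taken from the example workaround commands; adjust them to your own disk layout.

    #!/bin/bash
    # /usr/local/bin/expand-rootfs.sh (illustrative path)
    # Grow the assumed root partition (partition 4 of /dev/vda) to fill the provisioned disk.
    /usr/bin/growpart /dev/vda 4
    # Remount the root read-write in a private mount namespace and grow the XFS filesystem.
    unshare -m /bin/sh -c 'mount -o remount,rw /sysroot && xfs_growfs /sysroot'

    # /usr/lib/systemd/system/expand-rootfs.service (illustrative path)
    [Unit]
    Description=Expand the root filesystem on first boot
    ConditionFirstBoot=yes

    [Service]
    Type=oneshot
    ExecStart=/usr/local/bin/expand-rootfs.sh

    [Install]
    WantedBy=multi-user.target

Enable the unit in the Containerfile with systemctl enable expand-rootfs.service. ConditionFirstBoot=yes limits the unit to the first boot of a fresh machine ID; if your deployment method preserves /etc/machine-id, replace it with a stamp-file check.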
Providing Feedback on Red Hat Documentation | Providing Feedback on Red Hat Documentation We appreciate your input on our documentation. Please let us know how we could make it better. You can submit feedback by filing a ticket in Bugzilla: Navigate to the Bugzilla website. In the Component field, use Documentation . In the Description field, enter your suggestion for improvement. Include a link to the relevant parts of the documentation. Click Submit Bug . | null | https://docs.redhat.com/en/documentation/red_hat_satellite/6.11/html/hammer_cheat_sheet/providing-feedback-on-red-hat-documentation_hammer-cheat-sheet |
Chapter 1. About the Assisted Installer | Chapter 1. About the Assisted Installer The Assisted Installer for Red Hat OpenShift Container Platform is a user-friendly installation solution offered on the Red Hat Hybrid Cloud Console . The Assisted Installer supports various deployment platforms with a focus on bare metal, Nutanix, vSphere, and Oracle Cloud Infrastructure. The Assisted Installer also supports various CPU architectures, including x86_64, s390x (IBM Z(R)), arm64, and ppc64le (IBM Power(R)). You can install OpenShift Container Platform on premises in a connected environment, with an optional HTTP/S proxy, for the following platforms: Highly available OpenShift Container Platform or single-node OpenShift cluster OpenShift Container Platform on bare metal or vSphere with full platform integration, or other virtualization platforms without integration Optionally, OpenShift Virtualization and Red Hat OpenShift Data Foundation 1.1. Features The Assisted Installer provides installation functionality as a service. This software-as-a-service (SaaS) approach has the following features: Web interface You can install your cluster by using the Hybrid Cloud Console instead of creating installation configuration files manually. No bootstrap node You do not need a bootstrap node because the bootstrapping process runs on a node within the cluster. Streamlined installation workflow You do not need in-depth knowledge of OpenShift Container Platform to deploy a cluster. The Assisted Installer provides reasonable default configurations. You do not need to run the OpenShift Container Platform installer locally. You have access to the latest Assisted Installer for the latest tested z-stream releases. Advanced networking options The Assisted Installer supports IPv4 and dual stack networking with OVN only, NMState-based static IP addressing, and an HTTP/S proxy. OVN is the default Container Network Interface (CNI) for OpenShift Container Platform 4.12 and later. SDN is supported up to OpenShift Container Platform 4.14. SDN supports IPv4 only. Preinstallation validation Before installing, the Assisted Installer checks the following configurations: Network connectivity Network bandwidth Connectivity to the registry Upstream DNS resolution of the domain name Time synchronization between cluster nodes Cluster node hardware Installation configuration parameters REST API You can automate the installation process by using the Assisted Installer REST API. 1.2. Customizing your installation by using Operators You can customize your deployment by selecting one or more Operators, either during the installation or afterward. Operators are used to package, deploy, and manage services and applications. This section presents the supported Assisted Installer Operators, together with their prerequisites and limitations. Important The additional requirements specified below apply to each Operator individually. If you select more than one Operator, or if the Assisted Installer automatically selects an Operator due to dependencies, the total required resources is the sum of the resource requirements for each Operator. For instructions on installing and modifying the Assisted Installer Operators, see the following sections: Installing Operators by using the web console . Installing Operators by using the API . Modifying Operators by using the API . 1.2.1. OpenShift Virtualization You can deploy OpenShift Virtualization to perform the following tasks: Create and manage Linux and Windows virtual machines (VMs). 
Run pod and VM workloads alongside each other in a cluster. Connect to VMs through a variety of consoles and CLI tools. Import and clone existing VMs. Manage network interface controllers and storage drives attached to VMs. Live migrate VMs between nodes. Prerequisites Requires enabled CPU virtualization support in the firmware on all nodes. Each worker node requires an additional 360 MiB of memory and 2 CPU cores. Each control plane node requires an additional 150 MiB of memory and 4 CPU cores. Requires Red Hat OpenShift Data Foundation (recommended for creating additional on-premise clusters), Logical Volume Manager Storage, or another persistent storage service. Important Deploying OpenShift Virtualization without Red Hat OpenShift Data Foundation results in the following scenarios: Multi-node cluster: No storage is configured. You must configure storage after the OpenShift Data Foundation configuration. Single-node OpenShift: Logical Volume Manager Storage (LVM Storage) is installed. You must review the prerequisites to ensure that your environment has sufficient additional resources for OpenShift Virtualization. Additional resources OpenShift Virtualization product overview . Getting started with OpenShift Virtualization . 1.2.2. Migration Toolkit for Virtualization When creating a new OpenShift cluster in the Assisted Installer, you can enable the Migration Toolkit for Virtualization (MTV) Operator. The Migration Toolkit for Virtualization Operator allows you to migrate virtual machines at scale to Red Hat OpenShift Virtualization from the following source providers: VMware vSphere Red Hat Virtualization (RHV) Red Hat OpenShift Virtualization OpenStack You can migrate to a local or a remote OpenShift Virtualization cluster. When you select the Migration Toolkit for Virtualization Operator, the Assisted Installer automatically activates the OpenShift Virtualization Operator. For a Single-node OpenShift installation, the Assisted Installer also activates the LVM Storage Operator. Prerequisites Requires OpenShift Container Platform version 4.14 or later. Requires an x86_64 CPU architecture. Requires an additional 1024 MiB of memory and 1 CPU core for each control plane node and worker node. Requires the additional resources specified for the OpenShift Virtualization Operator, installed together with OpenShift Virtualization. For details, see the prerequisites in the 'OpenShift Virtualization Operator' section. Post-installation steps After completing the installation, the Migration menu appears in the navigation pane of the Red Hat OpenShift web console. The Migration menu provides access to the Migration Toolkit for Virtualization. Use the toolkit to create and execute a migration plan with the relevant source and destination providers. For details, see either of the following chapters in the Migration Toolkit for Virtualization Guide: Migrating virtual machines by using the OpenShift Container Platform web console . Migrating virtual machines from the command line . 1.2.3. Multicluster engine for Kubernetes You can deploy the multicluster engine for Kubernetes to perform the following tasks in a large, multi-cluster environment: Provision and manage additional Kubernetes clusters from your initial cluster. Use hosted control planes to reduce management costs and optimize cluster deployment by decoupling the control and data planes. Use GitOps Zero Touch Provisioning to manage remote edge sites at scale. 
You can deploy the multicluster engine with OpenShift Data Foundation on all OpenShift Container Platform clusters. Prerequisites Each worker node requires an additional 16384 MiB of memory and 4 CPU cores. Each control plane node requires an additional 16384 MiB of memory and 4 CPU cores. Requires OpenShift Data Foundation (recommended for creating additional on-premise clusters), LVM Storage, or another persistent storage service. Important Deploying multicluster engine without OpenShift Data Foundation results in the following scenarios: Multi-node cluster: No storage is configured. You must configure storage after the installation process. Single-node OpenShift: LVM Storage is installed. You must review the prerequisites to ensure that your environment has sufficient additional resources for the multicluster engine. Prerequisites About the multicluster engine Operator . Red Hat OpenShift Cluster Manager documentation 1.2.4. Logical Volume Manager Storage You can use LVM Storage to dynamically provision block storage on a limited resources cluster. Prerequisites Requires at least 1 non-boot drive per host. Requires 100 MiB of additional RAM. Requires 1 additional CPU core for each non-boot drive. Additional resources Persistent storage using Logical Volume Manager Storage . Logical Volume Manager Storage documentation 1.2.5. Red Hat OpenShift Data Foundation You can use OpenShift Data Foundation for file, block, and object storage. This storage option is recommended for all OpenShift Container Platform clusters. OpenShift Data Foundation requires a separate subscription. Prerequisites There are at least 3 compute (workers) nodes, each with 19 additional GiB of memory and 8 additional CPU cores. There are at least 2 drives per compute node. For each drive, there is an additional 5 GB of RAM. You comply to the additional requirements specified here: Planning your deployment . Additional resources OpenShift Data Foundation datasheet . OpenShift Data Foundation documentation . 1.2.6. OpenShift Artificial Intelligence (AI) Red Hat(R) OpenShift(R) Artificial Intelligence (AI) is a flexible, scalable artificial intelligence (AI) and machine learning (ML) platform that enables enterprises to create and deliver AI-enabled applications at scale across hybrid cloud environments. Red Hat(R) OpenShift(R) AI enables the following functionality: Data acquisition and preparation. Model training and fine-tuning. Model serving and model monitoring. Hardware acceleration. The OpenShift AI Operator enables you to install Red Hat(R) OpenShift(R) AI on your OpenShift Container Platform cluster. From OpenShift Container Platform version 4.17 and later, you can use the Assisted Installer to deploy the OpenShift AI Operator to your cluster during the installation. For the developer preview, installing the OpenShift AI Operator automatically installs the following Operators: Red Hat OpenShift Data Foundation (in this section) Node Feature Discovery Operator Nvidia GPU Operator OpenShift Container Platform Pipelines Operator OpenShift Container Platform Service Mesh Operator OpenShift Container Platform Serverless Operator Authorino (Kubernetes) Important The integration of the OpenShift AI Operator into the Assisted Installer is a Developer Preview feature only. Developer Preview features are not supported by Red Hat in any way and are not functionally complete or production-ready. Do not use Developer Preview features for production or business-critical workloads. 
Developer Preview features provide early access to upcoming product features in advance of their possible inclusion in a Red Hat product offering, enabling customers to test functionality and provide feedback during the development process. These features might not have any documentation, are subject to change or removal at any time, and testing is limited. Red Hat might provide ways to submit feedback on Developer Preview features without an associated SLA. Prerequisites You are installing OpenShift Container Platform version 4.17 or later. For the OpenShift AI Operator, you meet the following minimum requirements: There are at least 2 compute (worker) nodes, each with 32 additional GiB of memory and 8 additional CPU cores. There is at least 1 supported GPU. Currently, only NVIDIA GPUs are supported. Nodes that have NVIDIA GPUs installed have Secure Boot disabled. For the dependent OpenShift Data Foundation Operator, you meet the minimum additional requirements specified for that Operator in this section. You meet the additional requirements specified here: Requirements for OpenShift AI . Additional resources Red Hat(R) OpenShift(R) AI 1.2.7. Additional resources Working with Operators in OpenShift Container Platform . Introduction to hosted control planes . Configure and deploy OpenShift Container Platform clusters at the network edge . 1.3. OpenShift Container Platform host architecture: control plane and compute nodes The OpenShift Container Platform architecture allows you to select a standard Kubernetes role for each of the discovered hosts. These roles define the function of the host within the cluster. The roles can be one of the standard Kubernetes types: control plane (master) or compute (worker) . 1.3.1. About assigning roles to hosts During the installation process, you can select a role for a host or configure the Assisted Installer to assign it for you. The options are as follows: Control plane (master) node - The control plane nodes run the services that are required to control the cluster, including the API server. The control plane schedules workloads, maintains cluster state, and ensures stability. Control plane nodes are also known as master nodes. Compute (worker) node - The compute nodes are responsible for executing workloads for cluster users. Compute nodes advertise their capacity, so that the control plane scheduler can identify suitable compute nodes for running pods and containers. Compute nodes are also known as worker nodes. Auto-assign - This option allows the Assisted Installer to automatically select a role for each of the hosts, based on detected hardware and network latency. You can change the role at any time before installation starts. To assign a role to a host, see either of the following sections: Configuring hosts (Web console), step 4 Assigning roles to hosts (Web console and API) 1.3.2. About specifying the number of control plane nodes for your cluster Using a higher number of control plane (master) nodes boosts fault tolerance and availability, minimizing downtime during failures. All versions of OpenShift Container Platform support one or three control plane nodes, where one control plane node is a Single-node OpenShift cluster. From OpenShift Container Platform version 4.18 and higher, the Assisted Installer also supports four or five control plane nodes on a bare metal or user-managed networking platform with an x86_64 architecture. An implementation can support any number of compute nodes. 
To define the required number of control plane nodes, see either of the following sections: Setting the cluster details (web console), step 12 Registering a new cluster (API), step 2 1.3.3. About scheduling workloads on control plane nodes Scheduling workloads to run on control plane nodes improves efficiency and maximizes resource utilization. You can enable this option during installation setup or as a postinstallation step. Use the following guidelines to determine when to use this feature: Single-node OpenShift or small clusters (up to four nodes): The system schedules workloads on control plane nodes by default. This setting cannot be changed. Medium clusters (five to ten nodes): Scheduling workloads to run on control plane nodes in addition to worker nodes is the recommended configuration. Large clusters (more than ten nodes): Configuring control plane nodes as schedulable is not recommended. For instructions on configuring control plane nodes as schedulable during the installation setup, see the following sections: Adding hosts to the cluster (web console), step 2 . Scheduling workloads to run on control plane nodes (API) . For instructions on configuring schedulable control plane nodes following an installation, see Configuring control plane nodes as schedulable in the OpenShift Container Platform documentation. Important When you configure control plane nodes to be schedulable for workloads, an additional subscription is required for each control plane node that functions as a compute (worker) node. 1.3.4. Role-related configuration validations The Assisted Installer monitors the number of hosts as one of the conditions for proceeding through the cluster installation stages. The logic for determining when a cluster has a sufficient number of installed hosts to proceed is as follows: The number of control plane (master) nodes to install must match the number of control plane nodes that the user requests. For compute (worker) nodes, the requirement depends on the number of compute nodes that the user requests: If the user requests fewer than two compute nodes, the Assisted Installer accepts any number of installed compute nodes, because the control plane nodes remain schedulable for workloads. If the user requests two or more compute nodes, the Assisted Installer installs at least two compute nodes, ensuring that the control plane nodes are not schedulable for workloads. For details, see "About scheduling workloads on control plane nodes" in this section. This logic ensures that the cluster reaches a stable and expected state before continuing with the installation process. 1.3.5. Additional resources For detailed information on control plane and compute nodes, see OpenShift Container Platform architecture . 1.4. API support policy Assisted Installer APIs are supported for a minimum of three months from the announcement of deprecation. | null | https://docs.redhat.com/en/documentation/openshift_container_platform_installation/4.15/html/installing_openshift_container_platform_with_the_assisted_installer/about-ai
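A worked example of the requirement summing described above, using only the per-Operator figures quoted in this chapter: OpenShift Virtualization adds 360 MiB of memory and 2 CPU cores per worker node and 150 MiB and 4 cores per control plane node, and the Migration Toolkit for Virtualization adds 1024 MiB and 1 core per node of either type. Selecting both Operators therefore adds, per node:

    Worker node:        360 MiB + 1024 MiB = 1384 MiB memory,  2 + 1 = 3 CPU cores
    Control plane node: 150 MiB + 1024 MiB = 1174 MiB memory,  4 + 1 = 5 CPU cores

These additions come on top of the base host requirements for OpenShift Container Platform itself.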
4.23. Intel Modular | 4.23. Intel Modular Table 4.24, "Intel Modular" lists the fence device parameters used by fence_intelmodular , the fence agent for Intel Modular. Table 4.24. Intel Modular luci Field cluster.conf Attribute Description Name name A name for the Intel Modular device connected to the cluster. IP Address or Hostname ipaddr The IP address or host name assigned to the device. UDP/TCP Port (optional) udpport The UDP/TCP port to use for connection with the device; the default value is 161. Login login The login name used to access the device. Password passwd The password used to authenticate the connection to the device. Password Script (optional) passwd_script The script that supplies a password for access to the fence device. Using this supersedes the Password parameter. SNMP Version snmp_version The SNMP version to use (1, 2c, 3); the default value is 1. SNMP Community community The SNMP community string; the default value is private . SNMP Security Level snmp_sec_level The SNMP security level (noAuthNoPriv, authNoPriv, authPriv). SNMP Authentication Protocol snmp_auth_prot The SNMP authentication protocol (MD5, SHA). SNMP Privacy Protocol snmp_priv_prot The SNMP privacy protocol (DES, AES). SNMP Privacy Protocol Password snmp_priv_passwd The SNMP privacy protocol password. SNMP Privacy Protocol Script snmp_priv_passwd_script The script that supplies a password for SNMP privacy protocol. Using this supersedes the SNMP privacy protocol password parameter. Power Wait (seconds) power_wait Number of seconds to wait after issuing a power off or power on command. Power Timeout (seconds) power_timeout Number of seconds to continue testing for a status change after issuing a power off or power on command. The default value is 20. Shell Timeout (seconds) shell_timeout Number of seconds to wait for a command prompt after issuing a command. The default value is 3. Login Timeout (seconds) login_timeout Number of seconds to wait for a command prompt after login. The default value is 5. Times to Retry Power On Operation retry_on Number of attempts to retry a power on operation. The default value is 1. Port (Outlet) Number port Physical plug number or name of virtual machine. Delay (optional) delay The number of seconds to wait before fencing is started. The default value is 0. Figure 4.18, "Intel Modular" shows the configuration screen for adding an Intel Modular fence device. Figure 4.18. Intel Modular The following command creates a fence device instance for an Intel Modular device: The following is the cluster.conf entry for the fence_intelmodular device: | [
"ccs -f cluster.conf --addfencedev intelmodular1 agent=fence_intelmodular community=private ipaddr=192.168.0.1 login=root passwd=password123 snmp_priv_passwd=snmpasswd123 power_wait=60 udpport=161",
"<fencedevices> <fencedevice agent=\"fence_intelmodular\" community=\"private\" ipaddr=\"192.168.0.1\" login=\"root\" name=\"intelmodular1\" passwd=\"password123\" power_wait=\"60\" snmp_priv_passwd=\"snmpasswd123\" udpport=\"161\"/> </fencedevices>"
]
| https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/fence_configuration_guide/s1-software-fence-intelmodular-CA |
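The record above creates the fence device itself; each cluster node still needs a fence method and a fence instance that reference the device. The following is a sketch of that step with the ccs utility, where the method name Modular, the node name node01.example.com, and port=1 are placeholders; confirm the exact option syntax against the ccs(8) man page for your release.

    ccs -f cluster.conf --addmethod Modular node01.example.com
    ccs -f cluster.conf --addfenceinst intelmodular1 node01.example.com Modular port=1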
Chapter 6. Configuring Your Maven Repositories | Chapter 6. Configuring Your Maven Repositories 6.1. About The Provided Maven Repositories A set of repositories containing artifacts required to build applications is provided with this release. Maven must be configured to use these repositories and the Maven Central Repository in order to provide correct build functionality. Two interchangeable sets of repositories ensuring the same functionality are provided. The first set is available for download and is stored in a local file system. The second set is hosted online for use as remote repositories. If you provided the location of Maven's settings.xml file during installation, Maven is already configured to use the online repositories. Important Maven repositories are also subject to patching. After a patch is released, it is applied to the remote repositories. Both original and patched artifacts reside there; only the versions of the artifacts are incremented. It is the user's responsibility to pick the new version of the patched artifact in their dependency management. For more information, see https://access.redhat.com/site/maven-repository . | null | https://docs.redhat.com/en/documentation/red_hat_jboss_data_virtualization/6.4/html/installation_guide/chap-maven_repositories
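For orientation, the following is a sketch of what the settings.xml profile for the hosted repositories typically looks like. The profile and repository IDs are arbitrary, and the URL shown is the public Red Hat GA Maven repository; confirm the exact repository URLs for your product version against the Maven repository page referenced above.

    <settings>
      <profiles>
        <profile>
          <id>red-hat-ga</id>
          <repositories>
            <repository>
              <id>redhat-ga-repository</id>
              <url>https://maven.repository.redhat.com/ga/</url>
              <releases><enabled>true</enabled></releases>
              <snapshots><enabled>false</enabled></snapshots>
            </repository>
          </repositories>
          <pluginRepositories>
            <pluginRepository>
              <id>redhat-ga-plugin-repository</id>
              <url>https://maven.repository.redhat.com/ga/</url>
              <releases><enabled>true</enabled></releases>
              <snapshots><enabled>false</enabled></snapshots>
            </pluginRepository>
          </pluginRepositories>
        </profile>
      </profiles>
      <activeProfiles>
        <activeProfile>red-hat-ga</activeProfile>
      </activeProfiles>
    </settings>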
Chapter 13. Monitoring bare-metal events with the Bare Metal Event Relay | Chapter 13. Monitoring bare-metal events with the Bare Metal Event Relay Important Bare Metal Event Relay is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . 13.1. About bare-metal events Use the Bare Metal Event Relay to subscribe applications that run in your OpenShift Container Platform cluster to events that are generated on the underlying bare-metal host. The Redfish service publishes events on a node and transmits them on an advanced message queue to subscribed applications. Bare-metal events are based on the open Redfish standard that is developed under the guidance of the Distributed Management Task Force (DMTF). Redfish provides a secure industry-standard protocol with a REST API. The protocol is used for the management of distributed, converged or software-defined resources and infrastructure. Hardware-related events published through Redfish includes: Breaches of temperature limits Server status Fan status Begin using bare-metal events by deploying the Bare Metal Event Relay Operator and subscribing your application to the service. The Bare Metal Event Relay Operator installs and manages the lifecycle of the Redfish bare-metal event service. Note The Bare Metal Event Relay works only with Redfish-capable devices on single-node clusters provisioned on bare-metal infrastructure. 13.2. How bare-metal events work The Bare Metal Event Relay enables applications running on bare-metal clusters to respond quickly to Redfish hardware changes and failures such as breaches of temperature thresholds, fan failure, disk loss, power outages, and memory failure. These hardware events are delivered over a reliable low-latency transport channel based on Advanced Message Queuing Protocol (AMQP). The latency of the messaging service is between 10 to 20 milliseconds. The Bare Metal Event Relay provides a publish-subscribe service for the hardware events, where multiple applications can use REST APIs to subscribe and consume the events. The Bare Metal Event Relay supports hardware that complies with Redfish OpenAPI v1.8 or higher. 13.2.1. Bare Metal Event Relay data flow The following figure illustrates an example of bare-metal events data flow: Figure 13.1. Bare Metal Event Relay data flow 13.2.1.1. Operator-managed pod The Operator uses custom resources to manage the pod containing the Bare Metal Event Relay and its components using the HardwareEvent CR. 13.2.1.2. Bare Metal Event Relay At startup, the Bare Metal Event Relay queries the Redfish API and downloads all the message registries, including custom registries. The Bare Metal Event Relay then begins to receive subscribed events from the Redfish hardware. The Bare Metal Event Relay enables applications running on bare-metal clusters to respond quickly to Redfish hardware changes and failures such as breaches of temperature thresholds, fan failure, disk loss, power outages, and memory failure. The events are reported using the HardwareEvent CR. 13.2.1.3. 
Cloud native event Cloud native events (CNE) is a REST API specification for defining the format of event data. 13.2.1.4. CNCF CloudEvents CloudEvents is a vendor-neutral specification developed by the Cloud Native Computing Foundation (CNCF) for defining the format of event data. 13.2.1.5. AMQP dispatch router The dispatch router is responsible for the message delivery service between publisher and subscriber. AMQP 1.0 qpid is an open standard that supports reliable, high-performance, fully-symmetrical messaging over the internet. 13.2.1.6. Cloud event proxy sidecar The cloud event proxy sidecar container image is based on the ORAN API specification and provides a publish-subscribe event framework for hardware events. 13.2.2. Redfish message parsing service In addition to handling Redfish events, the Bare Metal Event Relay provides message parsing for events without a Message property. The proxy downloads all the Redfish message registries including vendor specific registries from the hardware when it starts. If an event does not contain a Message property, the proxy uses the Redfish message registries to construct the Message and Resolution properties and add them to the event before passing the event to the cloud events framework. This service allows Redfish events to have smaller message size and lower transmission latency. 13.2.3. Installing the Bare Metal Event Relay using the CLI As a cluster administrator, you can install the Bare Metal Event Relay Operator by using the CLI. Prerequisites A cluster that is installed on bare-metal hardware with nodes that have a RedFish-enabled Baseboard Management Controller (BMC). Install the OpenShift CLI ( oc ). Log in as a user with cluster-admin privileges. Procedure Create a namespace for the Bare Metal Event Relay. Save the following YAML in the bare-metal-events-namespace.yaml file: apiVersion: v1 kind: Namespace metadata: name: openshift-bare-metal-events labels: name: openshift-bare-metal-events openshift.io/cluster-monitoring: "true" Create the Namespace CR: USD oc create -f bare-metal-events-namespace.yaml Create an Operator group for the Bare Metal Event Relay Operator. Save the following YAML in the bare-metal-events-operatorgroup.yaml file: apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: bare-metal-event-relay-group namespace: openshift-bare-metal-events spec: targetNamespaces: - openshift-bare-metal-events Create the OperatorGroup CR: USD oc create -f bare-metal-events-operatorgroup.yaml Subscribe to the Bare Metal Event Relay. Save the following YAML in the bare-metal-events-sub.yaml file: apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: bare-metal-event-relay-subscription namespace: openshift-bare-metal-events spec: channel: "stable" name: bare-metal-event-relay source: redhat-operators sourceNamespace: openshift-marketplace Create the Subscription CR: USD oc create -f bare-metal-events-sub.yaml Verification To verify that the Bare Metal Event Relay Operator is installed, run the following command: USD oc get csv -n openshift-bare-metal-events -o custom-columns=Name:.metadata.name,Phase:.status.phase Example output Name Phase bare-metal-event-relay.4.11.0-xxxxxxxxxxxx Succeeded 13.2.4. Installing the Bare Metal Event Relay using the web console As a cluster administrator, you can install the Bare Metal Event Relay Operator using the web console. 
Prerequisites A cluster that is installed on bare-metal hardware with nodes that have a RedFish-enabled Baseboard Management Controller (BMC). Log in as a user with cluster-admin privileges. Procedure Install the Bare Metal Event Relay using the OpenShift Container Platform web console: In the OpenShift Container Platform web console, click Operators OperatorHub . Choose Bare Metal Event Relay from the list of available Operators, and then click Install . On the Install Operator page, select or create a Namespace , select openshift-bare-metal-events , and then click Install . Verification Optional: You can verify that the Operator installed successfully by performing the following check: Switch to the Operators Installed Operators page. Ensure that Bare Metal Event Relay is listed in the project with a Status of InstallSucceeded . Note During installation an Operator might display a Failed status. If the installation later succeeds with an InstallSucceeded message, you can ignore the Failed message. If the Operator does not appear as installed, to troubleshoot further: Go to the Operators Installed Operators page and inspect the Operator Subscriptions and Install Plans tabs for any failure or errors under Status . Go to the Workloads Pods page and check the logs for pods in the project namespace. 13.3. Installing the AMQ messaging bus To pass Redfish bare-metal event notifications between publisher and subscriber on a node, you must install and configure an AMQ messaging bus to run locally on the node. You do this by installing the AMQ Interconnect Operator for use in the cluster. Prerequisites Install the OpenShift Container Platform CLI ( oc ). Log in as a user with cluster-admin privileges. Procedure Install the AMQ Interconnect Operator to its own amq-interconnect namespace. See Installing the AMQ Interconnect Operator . Verification Verify that the AMQ Interconnect Operator is available and the required pods are running: USD oc get pods -n amq-interconnect Example output NAME READY STATUS RESTARTS AGE amq-interconnect-645db76c76-k8ghs 1/1 Running 0 23h interconnect-operator-5cb5fc7cc-4v7qm 1/1 Running 0 23h Verify that the required bare-metal-event-relay bare-metal event producer pod is running in the openshift-bare-metal-events namespace: USD oc get pods -n openshift-bare-metal-events Example output NAME READY STATUS RESTARTS AGE hw-event-proxy-operator-controller-manager-74d5649b7c-dzgtl 2/2 Running 0 25s 13.4. Subscribing to Redfish BMC bare-metal events for a cluster node As a cluster administrator, you can subscribe to Redfish BMC events generated on a node in your cluster by creating a BMCEventSubscription custom resource (CR) for the node, creating a HardwareEvent CR for the event, and a Secret CR for the BMC. 13.4.1. Subscribing to bare-metal events You can configure the baseboard management controller (BMC) to send bare-metal events to subscribed applications running in an OpenShift Container Platform cluster. Example Redfish bare-metal events include an increase in device temperature, or removal of a device. You subscribe applications to bare-metal events using a REST API. Important You can only create a BMCEventSubscription custom resource (CR) for physical hardware that supports Redfish and has a vendor interface set to redfish or idrac-redfish . Note Use the BMCEventSubscription CR to subscribe to predefined Redfish events. The Redfish standard does not provide an option to create specific alerts and thresholds. 
For example, to receive an alert event when an enclosure's temperature exceeds 40deg Celsius, you must manually configure the event according to the vendor's recommendations. Perform the following procedure to subscribe to bare-metal events for the node using a BMCEventSubscription CR. Prerequisites Install the OpenShift CLI ( oc ). Log in as a user with cluster-admin privileges. Get the user name and password for the BMC. Deploy a bare-metal node with a Redfish-enabled Baseboard Management Controller (BMC) in your cluster, and enable Redfish events on the BMC. Note Enabling Redfish events on specific hardware is outside the scope of this information. For more information about enabling Redfish events for your specific hardware, consult the BMC manufacturer documentation. Procedure Confirm that the node hardware has the Redfish EventService enabled by running the following curl command: curl https://<bmc_ip_address>/redfish/v1/EventService --insecure -H 'Content-Type: application/json' -u "<bmc_username>:<password>" where: bmc_ip_address is the IP address of the BMC where the Redfish events are generated. Example output { "@odata.context": "/redfish/v1/USDmetadata#EventService.EventService", "@odata.id": "/redfish/v1/EventService", "@odata.type": "#EventService.v1_0_2.EventService", "Actions": { "#EventService.SubmitTestEvent": { "[email protected]": ["StatusChange", "ResourceUpdated", "ResourceAdded", "ResourceRemoved", "Alert"], "target": "/redfish/v1/EventService/Actions/EventService.SubmitTestEvent" } }, "DeliveryRetryAttempts": 3, "DeliveryRetryIntervalSeconds": 30, "Description": "Event Service represents the properties for the service", "EventTypesForSubscription": ["StatusChange", "ResourceUpdated", "ResourceAdded", "ResourceRemoved", "Alert"], "[email protected]": 5, "Id": "EventService", "Name": "Event Service", "ServiceEnabled": true, "Status": { "Health": "OK", "HealthRollup": "OK", "State": "Enabled" }, "Subscriptions": { "@odata.id": "/redfish/v1/EventService/Subscriptions" } } Get the Bare Metal Event Relay service route for the cluster by running the following command: USD oc get route -n openshift-bare-metal-events Example output NAME HOST/PORT PATH SERVICES PORT TERMINATION WILDCARD hw-event-proxy hw-event-proxy-openshift-bare-metal-events.apps.compute-1.example.com hw-event-proxy-service 9087 edge None Create a BMCEventSubscription resource to subscribe to the Redfish events: Save the following YAML in the bmc_sub.yaml file: apiVersion: metal3.io/v1alpha1 kind: BMCEventSubscription metadata: name: sub-01 namespace: openshift-machine-api spec: hostName: <hostname> 1 destination: <proxy_service_url> 2 context: '' 1 Specifies the name or UUID of the worker node where the Redfish events are generated. 2 Specifies the bare-metal event proxy service, for example, https://hw-event-proxy-openshift-bare-metal-events.apps.compute-1.example.com/webhook . Create the BMCEventSubscription CR: USD oc create -f bmc_sub.yaml Optional: To delete the BMC event subscription, run the following command: USD oc delete -f bmc_sub.yaml Optional: To manually create a Redfish event subscription without creating a BMCEventSubscription CR, run the following curl command, specifying the BMC username and password. 
USD curl -i -k -X POST -H "Content-Type: application/json" -d '{"Destination": "https://<proxy_service_url>", "Protocol" : "Redfish", "EventTypes": ["Alert"], "Context": "root"}' -u <bmc_username>:<password> 'https://<bmc_ip_address>/redfish/v1/EventService/Subscriptions' -v where: proxy_service_url is the bare-metal event proxy service, for example, https://hw-event-proxy-openshift-bare-metal-events.apps.compute-1.example.com/webhook . bmc_ip_address is the IP address of the BMC where the Redfish events are generated. Example output HTTP/1.1 201 Created Server: AMI MegaRAC Redfish Service Location: /redfish/v1/EventService/Subscriptions/1 Allow: GET, POST Access-Control-Allow-Origin: * Access-Control-Expose-Headers: X-Auth-Token Access-Control-Allow-Headers: X-Auth-Token Access-Control-Allow-Credentials: true Cache-Control: no-cache, must-revalidate Link: <http://redfish.dmtf.org/schemas/v1/EventDestination.v1_6_0.json>; rel=describedby Link: <http://redfish.dmtf.org/schemas/v1/EventDestination.v1_6_0.json> Link: </redfish/v1/EventService/Subscriptions>; path= ETag: "1651135676" Content-Type: application/json; charset=UTF-8 OData-Version: 4.0 Content-Length: 614 Date: Thu, 28 Apr 2022 08:47:57 GMT 13.4.2. Querying Redfish bare-metal event subscriptions with curl Some hardware vendors limit the amount of Redfish hardware event subscriptions. You can query the number of Redfish event subscriptions by using curl . Prerequisites Get the user name and password for the BMC. Deploy a bare-metal node with a Redfish-enabled Baseboard Management Controller (BMC) in your cluster, and enable Redfish hardware events on the BMC. Procedure Check the current subscriptions for the BMC by running the following curl command: USD curl --globoff -H "Content-Type: application/json" -k -X GET --user <bmc_username>:<password> https://<bmc_ip_address>/redfish/v1/EventService/Subscriptions where: bmc_ip_address is the IP address of the BMC where the Redfish events are generated. Example output % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 100 435 100 435 0 0 399 0 0:00:01 0:00:01 --:--:-- 399 { "@odata.context": "/redfish/v1/USDmetadata#EventDestinationCollection.EventDestinationCollection", "@odata.etag": "" 1651137375 "", "@odata.id": "/redfish/v1/EventService/Subscriptions", "@odata.type": "#EventDestinationCollection.EventDestinationCollection", "Description": "Collection for Event Subscriptions", "Members": [ { "@odata.id": "/redfish/v1/EventService/Subscriptions/1" }], "[email protected]": 1, "Name": "Event Subscriptions Collection" } In this example, a single subscription is configured: /redfish/v1/EventService/Subscriptions/1 . Optional: To remove the /redfish/v1/EventService/Subscriptions/1 subscription with curl , run the following command, specifying the BMC username and password: USD curl --globoff -L -w "%{http_code} %{url_effective}\n" -k -u <bmc_username>:<password >-H "Content-Type: application/json" -d '{}' -X DELETE https://<bmc_ip_address>/redfish/v1/EventService/Subscriptions/1 where: bmc_ip_address is the IP address of the BMC where the Redfish events are generated. 13.4.3. Creating the bare-metal event and Secret CRs To start using bare-metal events, create the HardwareEvent custom resource (CR) for the host where the Redfish hardware is present. Hardware events and faults are reported in the hw-event-proxy logs. Prerequisites Install the OpenShift CLI ( oc ). Log in as a user with cluster-admin privileges. 
Install the Bare Metal Event Relay. Create a BMCEventSubscription CR for the BMC Redfish hardware. Note Multiple HardwareEvent resources are not permitted. Procedure Create the HardwareEvent custom resource (CR): Save the following YAML in the hw-event.yaml file: apiVersion: "event.redhat-cne.org/v1alpha1" kind: "HardwareEvent" metadata: name: "hardware-event" spec: nodeSelector: node-role.kubernetes.io/hw-event: "" 1 transportHost: "amqp://amq-router-service-name.amq-namespace.svc.cluster.local" 2 logLevel: "debug" 3 msgParserTimeout: "10" 4 1 Required. Use the nodeSelector field to target nodes with the specified label, for example, node-role.kubernetes.io/hw-event: "" . 2 Required. AMQP host that delivers the events at the transport layer using the AMQP protocol. 3 Optional. The default value is debug . Sets the log level in hw-event-proxy logs. The following log levels are available: fatal , error , warning , info , debug , trace . 4 Optional. Sets the timeout value in milliseconds for the Message Parser. If a message parsing request is not responded to within the timeout duration, the original hardware event message is passed to the cloud native event framework. The default value is 10. Create the HardwareEvent CR: USD oc create -f hardware-event.yaml Create a BMC username and password Secret CR that enables the hardware events proxy to access the Redfish message registry for the bare-metal host. Save the following YAML in the hw-event-bmc-secret.yaml file: apiVersion: v1 kind: Secret metadata: name: redfish-basic-auth type: Opaque stringData: 1 username: <bmc_username> password: <bmc_password> # BMC host DNS or IP address hostaddr: <bmc_host_ip_address> 1 Enter plain text values for the various items under stringData . Create the Secret CR: USD oc create -f hw-event-bmc-secret.yaml 13.5. Subscribing applications to bare-metal events REST API reference Use the bare-metal events REST API to subscribe an application to the bare-metal events that are generated on the parent node. Subscribe applications to Redfish events by using the resource address /cluster/node/<node_name>/redfish/event , where <node_name> is the cluster node running the application. Deploy your cloud-event-consumer application container and cloud-event-proxy sidecar container in a separate application pod. The cloud-event-consumer application subscribes to the cloud-event-proxy container in the application pod. Use the following API endpoints to subscribe the cloud-event-consumer application to Redfish events posted by the cloud-event-proxy container at http://localhost:8089/api/ocloudNotifications/v1/ in the application pod: /api/ocloudNotifications/v1/subscriptions POST : Creates a new subscription GET : Retrieves a list of subscriptions /api/ocloudNotifications/v1/subscriptions/<subscription_id> PUT : Creates a new status ping request for the specified subscription ID /api/ocloudNotifications/v1/health GET : Returns the health status of ocloudNotifications API Note 9089 is the default port for the cloud-event-consumer container deployed in the application pod. You can configure a different port for your application as required. api/ocloudNotifications/v1/subscriptions HTTP method GET api/ocloudNotifications/v1/subscriptions Description Returns a list of subscriptions. If subscriptions exist, a 200 OK status code is returned along with the list of subscriptions. 
Example API response [ { "id": "ca11ab76-86f9-428c-8d3a-666c24e34d32", "endpointUri": "http://localhost:9089/api/ocloudNotifications/v1/dummy", "uriLocation": "http://localhost:8089/api/ocloudNotifications/v1/subscriptions/ca11ab76-86f9-428c-8d3a-666c24e34d32", "resource": "/cluster/node/openshift-worker-0.openshift.example.com/redfish/event" } ] HTTP method POST api/ocloudNotifications/v1/subscriptions Description Creates a new subscription. If a subscription is successfully created, or if it already exists, a 201 Created status code is returned. Table 13.1. Query parameters Parameter Type subscription data Example payload { "uriLocation": "http://localhost:8089/api/ocloudNotifications/v1/subscriptions", "resource": "/cluster/node/openshift-worker-0.openshift.example.com/redfish/event" } api/ocloudNotifications/v1/subscriptions/<subscription_id> HTTP method GET api/ocloudNotifications/v1/subscriptions/<subscription_id> Description Returns details for the subscription with ID <subscription_id> Table 13.2. Query parameters Parameter Type <subscription_id> string Example API response { "id":"ca11ab76-86f9-428c-8d3a-666c24e34d32", "endpointUri":"http://localhost:9089/api/ocloudNotifications/v1/dummy", "uriLocation":"http://localhost:8089/api/ocloudNotifications/v1/subscriptions/ca11ab76-86f9-428c-8d3a-666c24e34d32", "resource":"/cluster/node/openshift-worker-0.openshift.example.com/redfish/event" } api/ocloudNotifications/v1/health/ HTTP method GET api/ocloudNotifications/v1/health/ Description Returns the health status for the ocloudNotifications REST API. Example API response OK | [
"apiVersion: v1 kind: Namespace metadata: name: openshift-bare-metal-events labels: name: openshift-bare-metal-events openshift.io/cluster-monitoring: \"true\"",
"oc create -f bare-metal-events-namespace.yaml",
"apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: bare-metal-event-relay-group namespace: openshift-bare-metal-events spec: targetNamespaces: - openshift-bare-metal-events",
"oc create -f bare-metal-events-operatorgroup.yaml",
"apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: bare-metal-event-relay-subscription namespace: openshift-bare-metal-events spec: channel: \"stable\" name: bare-metal-event-relay source: redhat-operators sourceNamespace: openshift-marketplace",
"oc create -f bare-metal-events-sub.yaml",
"oc get csv -n openshift-bare-metal-events -o custom-columns=Name:.metadata.name,Phase:.status.phase",
"Name Phase bare-metal-event-relay.4.11.0-xxxxxxxxxxxx Succeeded",
"oc get pods -n amq-interconnect",
"NAME READY STATUS RESTARTS AGE amq-interconnect-645db76c76-k8ghs 1/1 Running 0 23h interconnect-operator-5cb5fc7cc-4v7qm 1/1 Running 0 23h",
"oc get pods -n openshift-bare-metal-events",
"NAME READY STATUS RESTARTS AGE hw-event-proxy-operator-controller-manager-74d5649b7c-dzgtl 2/2 Running 0 25s",
"curl https://<bmc_ip_address>/redfish/v1/EventService --insecure -H 'Content-Type: application/json' -u \"<bmc_username>:<password>\"",
"{ \"@odata.context\": \"/redfish/v1/USDmetadata#EventService.EventService\", \"@odata.id\": \"/redfish/v1/EventService\", \"@odata.type\": \"#EventService.v1_0_2.EventService\", \"Actions\": { \"#EventService.SubmitTestEvent\": { \"[email protected]\": [\"StatusChange\", \"ResourceUpdated\", \"ResourceAdded\", \"ResourceRemoved\", \"Alert\"], \"target\": \"/redfish/v1/EventService/Actions/EventService.SubmitTestEvent\" } }, \"DeliveryRetryAttempts\": 3, \"DeliveryRetryIntervalSeconds\": 30, \"Description\": \"Event Service represents the properties for the service\", \"EventTypesForSubscription\": [\"StatusChange\", \"ResourceUpdated\", \"ResourceAdded\", \"ResourceRemoved\", \"Alert\"], \"[email protected]\": 5, \"Id\": \"EventService\", \"Name\": \"Event Service\", \"ServiceEnabled\": true, \"Status\": { \"Health\": \"OK\", \"HealthRollup\": \"OK\", \"State\": \"Enabled\" }, \"Subscriptions\": { \"@odata.id\": \"/redfish/v1/EventService/Subscriptions\" } }",
"oc get route -n openshift-bare-metal-events",
"NAME HOST/PORT PATH SERVICES PORT TERMINATION WILDCARD hw-event-proxy hw-event-proxy-openshift-bare-metal-events.apps.compute-1.example.com hw-event-proxy-service 9087 edge None",
"apiVersion: metal3.io/v1alpha1 kind: BMCEventSubscription metadata: name: sub-01 namespace: openshift-machine-api spec: hostName: <hostname> 1 destination: <proxy_service_url> 2 context: ''",
"oc create -f bmc_sub.yaml",
"oc delete -f bmc_sub.yaml",
"curl -i -k -X POST -H \"Content-Type: application/json\" -d '{\"Destination\": \"https://<proxy_service_url>\", \"Protocol\" : \"Redfish\", \"EventTypes\": [\"Alert\"], \"Context\": \"root\"}' -u <bmc_username>:<password> 'https://<bmc_ip_address>/redfish/v1/EventService/Subscriptions' -v",
"HTTP/1.1 201 Created Server: AMI MegaRAC Redfish Service Location: /redfish/v1/EventService/Subscriptions/1 Allow: GET, POST Access-Control-Allow-Origin: * Access-Control-Expose-Headers: X-Auth-Token Access-Control-Allow-Headers: X-Auth-Token Access-Control-Allow-Credentials: true Cache-Control: no-cache, must-revalidate Link: <http://redfish.dmtf.org/schemas/v1/EventDestination.v1_6_0.json>; rel=describedby Link: <http://redfish.dmtf.org/schemas/v1/EventDestination.v1_6_0.json> Link: </redfish/v1/EventService/Subscriptions>; path= ETag: \"1651135676\" Content-Type: application/json; charset=UTF-8 OData-Version: 4.0 Content-Length: 614 Date: Thu, 28 Apr 2022 08:47:57 GMT",
"curl --globoff -H \"Content-Type: application/json\" -k -X GET --user <bmc_username>:<password> https://<bmc_ip_address>/redfish/v1/EventService/Subscriptions",
"% Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 100 435 100 435 0 0 399 0 0:00:01 0:00:01 --:--:-- 399 { \"@odata.context\": \"/redfish/v1/USDmetadata#EventDestinationCollection.EventDestinationCollection\", \"@odata.etag\": \"\" 1651137375 \"\", \"@odata.id\": \"/redfish/v1/EventService/Subscriptions\", \"@odata.type\": \"#EventDestinationCollection.EventDestinationCollection\", \"Description\": \"Collection for Event Subscriptions\", \"Members\": [ { \"@odata.id\": \"/redfish/v1/EventService/Subscriptions/1\" }], \"[email protected]\": 1, \"Name\": \"Event Subscriptions Collection\" }",
"curl --globoff -L -w \"%{http_code} %{url_effective}\\n\" -k -u <bmc_username>:<password >-H \"Content-Type: application/json\" -d '{}' -X DELETE https://<bmc_ip_address>/redfish/v1/EventService/Subscriptions/1",
"apiVersion: \"event.redhat-cne.org/v1alpha1\" kind: \"HardwareEvent\" metadata: name: \"hardware-event\" spec: nodeSelector: node-role.kubernetes.io/hw-event: \"\" 1 transportHost: \"amqp://amq-router-service-name.amq-namespace.svc.cluster.local\" 2 logLevel: \"debug\" 3 msgParserTimeout: \"10\" 4",
"oc create -f hardware-event.yaml",
"apiVersion: v1 kind: Secret metadata: name: redfish-basic-auth type: Opaque stringData: 1 username: <bmc_username> password: <bmc_password> # BMC host DNS or IP address hostaddr: <bmc_host_ip_address>",
"oc create -f hw-event-bmc-secret.yaml",
"[ { \"id\": \"ca11ab76-86f9-428c-8d3a-666c24e34d32\", \"endpointUri\": \"http://localhost:9089/api/ocloudNotifications/v1/dummy\", \"uriLocation\": \"http://localhost:8089/api/ocloudNotifications/v1/subscriptions/ca11ab76-86f9-428c-8d3a-666c24e34d32\", \"resource\": \"/cluster/node/openshift-worker-0.openshift.example.com/redfish/event\" } ]",
"{ \"uriLocation\": \"http://localhost:8089/api/ocloudNotifications/v1/subscriptions\", \"resource\": \"/cluster/node/openshift-worker-0.openshift.example.com/redfish/event\" }",
"{ \"id\":\"ca11ab76-86f9-428c-8d3a-666c24e34d32\", \"endpointUri\":\"http://localhost:9089/api/ocloudNotifications/v1/dummy\", \"uriLocation\":\"http://localhost:8089/api/ocloudNotifications/v1/subscriptions/ca11ab76-86f9-428c-8d3a-666c24e34d32\", \"resource\":\"/cluster/node/openshift-worker-0.openshift.example.com/redfish/event\" }",
"OK"
]
| https://docs.redhat.com/en/documentation/openshift_container_platform/4.11/html/scalability_and_performance/using-rfhe |
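Once the subscription resources shown above have been created, they can be checked back with a plain resource listing. This is a minimal sketch, assuming the CRD's plural name follows the usual lowercase convention for the BMCEventSubscription kind defined earlier:

# List the Redfish event subscriptions managed through the cluster
oc get bmceventsubscriptions -n openshift-machine-api

# Show the full status of the subscription created from bmc_sub.yaml
oc describe bmceventsubscription sub-01 -n openshift-machine-api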
Migrating to Red Hat build of Apache Camel for Spring Boot | Migrating to Red Hat build of Apache Camel for Spring Boot Red Hat build of Apache Camel 4.8 Migrating to Red Hat build of Apache Camel for Spring Boot Red Hat build of Apache Camel Documentation Team [email protected] Red Hat build of Apache Camel Support Team http://access.redhat.com/support | null | https://docs.redhat.com/en/documentation/red_hat_build_of_apache_camel/4.8/html/migrating_to_red_hat_build_of_apache_camel_for_spring_boot/index |
Making open source more inclusive | Making open source more inclusive Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright's message . | null | https://docs.redhat.com/en/documentation/red_hat_build_of_cryostat/2/html/release_notes_for_the_red_hat_build_of_cryostat_2.2/making-open-source-more-inclusive |
Chapter 4. Configuring the all-in-one Red Hat OpenStack Platform environment | Chapter 4. Configuring the all-in-one Red Hat OpenStack Platform environment You must create the following configuration files manually before you can deploy the all-in-one RHOSP environment: USDHOME/containers-prepare-parameters.yaml USDHOME/standalone_parameters.yaml If you want to customize the all-in-one environment for development or testing, edit the following configuration files: /usr/share/openstack-tripleo-heat-templates/environments/standalone/standalone-tripleo.yaml /usr/share/openstack-tripleo-heat-templates/roles/Standalone.yaml 4.1. Generating YAML files for the all-in-one Red Hat OpenStack Platform (RHOSP) environment To generate the containers-prepare-parameters.yaml and standalone_parameters.yaml files, complete the following steps: Generate the containers-prepare-parameters.yaml file that contains the default ContainerImagePrepare parameters: Edit the containers-prepare-parameters.yaml file and include your Red Hat credentials in the ContainerImageRegistryCredentials parameter so that the deployment process can authenticate with registry.redhat.io and pull container images successfully: Note To avoid entering your password in plain text, create a Red Hat Service Account. For more information, see Red Hat Container Registry Authentication : Set the ContainerImageRegistryLogin parameter to true in the containers-prepare-parameters.yaml : If you want to use the all-in-one host as the container registry, omit this parameter and include --local-push-destination in the openstack tripleo container image prepare command. For more information, see Preparing container images . Create the USDHOME/standalone_parameters.yaml file and configure basic parameters for your all-in-one RHOSP environment, including network configuration and some deployment options. In this example, network interface eth1 is the interface on the management network that you use to deploy RHOSP. eth1 has the IP address 192.168.25.2: You must configure the DnsServers parameter with your DNS address. You can find this address in the /etc/resolv.conf file: If you use only a single network interface, you must define the default route: If you have an internal time source, or if your environment blocks access to external time sources, use the NtpServer parameter to define the time source that you want to use: If you want to use the all-in-one RHOSP installation in a virtual environment, you must define the virtualization type with the NovaComputeLibvirtType parameter: The Load-balancing service (octavia) does not require that you configure SSH. However, if you want SSH access to the load-balancing instances (amphorae), add the OctaviaAmphoraSshKeyFile parameter with a value of the absolute path to your public key file for the stack user: OctaviaAmphoraSshKeyFile: "/home/stack/.ssh/id_rsa.pub" | [
"[stack@all-in-one]USD sudo openstack tripleo container image prepare default --output-env-file USDHOME/containers-prepare-parameters.yaml",
"parameter_defaults: ContainerImagePrepare: ContainerImageRegistryCredentials: registry.redhat.io: <USERNAME>: \"<PASSWORD>\"",
"parameter_defaults: ContainerImagePrepare: ContainerImageRegistryCredentials: registry.redhat.io: <USERNAME>: \"<PASSWORD>\" ContainerImageRegistryLogin: true",
"[stack@all-in-one]USD export IP=192.168.25.2 [stack@all-in-one]USD export NETMASK=24 [stack@all-in-one]USD export INTERFACE=eth1 [stack@all-in-one]USD export DNS1=1.1.1.1 [stack@all-in-one]USD export DNS2=8.8.8.8 [stack@all-in-one]USD cat <<EOF > USDHOME/standalone_parameters.yaml parameter_defaults: CloudName: USDIP CloudDomain: <DOMAIN_NAME> ControlPlaneStaticRoutes: [] Debug: true DeploymentUser: USDUSER DnsServers: - USDDNS1 - USDDNS2 NeutronPublicInterface: USDINTERFACE NeutronDnsDomain: localdomain NeutronBridgeMappings: datacentre:br-ctlplane NeutronPhysicalBridge: br-ctlplane StandaloneEnableRoutedNetworks: false StandaloneHomeDir: USDHOME StandaloneLocalMtu: 1500 EOF",
"[stack@all-in-one]USD cat /etc/resolv.conf 192.168.122.1",
"ControlPlaneStaticRoutes: - ip_netmask: 0.0.0.0/0 next_hop: USDGATEWAY default: true",
"parameter_defaults: NtpServer: clock.example.com",
"parameter_defaults: NovaComputeLibvirtType: qemu"
]
| https://docs.redhat.com/en/documentation/red_hat_openstack_platform/16.2/html/standalone_deployment_guide/configuring-the-all-in-one-openstack-installation |
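For orientation, the two generated files are later passed to the standalone deploy command together with the Heat templates named at the start of this chapter. The invocation below is only a hedged sketch; the exact option set is an assumption, and the authoritative command is the one given in the deployment procedure of this guide. The IP, NETMASK, and HOME variables are the ones exported in the earlier steps.

sudo openstack tripleo deploy \
  --templates \
  --local-ip=$IP/$NETMASK \
  -e /usr/share/openstack-tripleo-heat-templates/environments/standalone/standalone-tripleo.yaml \
  -r /usr/share/openstack-tripleo-heat-templates/roles/Standalone.yaml \
  -e $HOME/containers-prepare-parameters.yaml \
  -e $HOME/standalone_parameters.yaml \
  --output-dir $HOME \
  --standalone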
Making open source more inclusive | Making open source more inclusive Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright's message . | null | https://docs.redhat.com/en/documentation/red_hat_build_of_openjdk/21/html/release_notes_for_red_hat_build_of_openjdk_21.0.2/making-open-source-more-inclusive |
7.6. Understanding Audit Log Files | 7.6. Understanding Audit Log Files By default, the Audit system stores log entries in the /var/log/audit/audit.log file; if log rotation is enabled, rotated audit.log files are stored in the same directory. The following Audit rule logs every attempt to read or modify the /etc/ssh/sshd_config file: If the auditd daemon is running, for example, using the following command creates a new event in the Audit log file: This event in the audit.log file looks as follows: The above event consists of four records, which share the same time stamp and serial number. Records always start with the type= keyword. Each record consists of several name = value pairs separated by a white space or a comma. A detailed analysis of the above event follows: First Record type=SYSCALL The type field contains the type of the record. In this example, the SYSCALL value specifies that this record was triggered by a system call to the kernel. For a list of all possible type values and their explanations, see Audit Record Types . msg=audit(1364481363.243:24287): The msg field records: a time stamp and a unique ID of the record in the form audit( time_stamp : ID ) . Multiple records can share the same time stamp and ID if they were generated as part of the same Audit event. The time stamp is using the Unix time format - seconds since 00:00:00 UTC on 1 January 1970. various event-specific name = value pairs provided by the kernel or user space applications. arch=c000003e The arch field contains information about the CPU architecture of the system. The value, c000003e , is encoded in hexadecimal notation. When searching Audit records with the ausearch command, use the -i or --interpret option to automatically convert hexadecimal values into their human-readable equivalents. The c000003e value is interpreted as x86_64 . syscall=2 The syscall field records the type of the system call that was sent to the kernel. The value, 2 , can be matched with its human-readable equivalent in the /usr/include/asm/unistd_64.h file. In this case, 2 is the open system call. Note that the ausyscall utility allows you to convert system call numbers to their human-readable equivalents. Use the ausyscall --dump command to display a listing of all system calls along with their numbers. For more information, see the ausyscall (8) man page. success=no The success field records whether the system call recorded in that particular event succeeded or failed. In this case, the call did not succeed. exit=-13 The exit field contains a value that specifies the exit code returned by the system call. This value varies for different system call. You can interpret the value to its human-readable equivalent with the following command: Note that the example assumes that your Audit log contains an event that failed with exit code -13 . a0=7fffd19c5592 , a1=0 , a2=7fffd19c5592 , a3=a The a0 to a3 fields record the first four arguments, encoded in hexadecimal notation, of the system call in this event. These arguments depend on the system call that is used; they can be interpreted by the ausearch utility. items=1 The items field contains the number of PATH auxiliary records that follow the syscall record. ppid=2686 The ppid field records the Parent Process ID (PPID). In this case, 2686 was the PPID of the parent process such as bash . pid=3538 The pid field records the Process ID (PID). In this case, 3538 was the PID of the cat process. auid=1000 The auid field records the Audit user ID, that is the loginuid. 
This ID is assigned to a user upon login and is inherited by every process even when the user's identity changes, for example, by switching user accounts with the su - john command. uid=1000 The uid field records the user ID of the user who started the analyzed process. The user ID can be interpreted into user names with the following command: ausearch -i --uid UID . gid=1000 The gid field records the group ID of the user who started the analyzed process. euid=1000 The euid field records the effective user ID of the user who started the analyzed process. suid=1000 The suid field records the set user ID of the user who started the analyzed process. fsuid=1000 The fsuid field records the file system user ID of the user who started the analyzed process. egid=1000 The egid field records the effective group ID of the user who started the analyzed process. sgid=1000 The sgid field records the set group ID of the user who started the analyzed process. fsgid=1000 The fsgid field records the file system group ID of the user who started the analyzed process. tty=pts0 The tty field records the terminal from which the analyzed process was invoked. ses=1 The ses field records the session ID of the session from which the analyzed process was invoked. comm="cat" The comm field records the command-line name of the command that was used to invoke the analyzed process. In this case, the cat command was used to trigger this Audit event. exe="/bin/cat" The exe field records the path to the executable that was used to invoke the analyzed process. subj=unconfined_u:unconfined_r:unconfined_t:s0-s0:c0.c1023 The subj field records the SELinux context with which the analyzed process was labeled at the time of execution. key="sshd_config" The key field records the administrator-defined string associated with the rule that generated this event in the Audit log. Second Record type=CWD In the second record, the type field value is CWD - current working directory. This type is used to record the working directory from which the process that invoked the system call specified in the first record was executed. The purpose of this record is to record the current process's location in case a relative path winds up being captured in the associated PATH record. This way the absolute path can be reconstructed. msg=audit(1364481363.243:24287) The msg field holds the same time stamp and ID value as the value in the first record. The time stamp is using the Unix time format - seconds since 00:00:00 UTC on 1 January 1970. cwd="/home/ user_name " The cwd field contains the path to the directory in which the system call was invoked. Third Record type=PATH In the third record, the type field value is PATH . An Audit event contains a PATH -type record for every path that is passed to the system call as an argument. In this Audit event, only one path ( /etc/ssh/sshd_config ) was used as an argument. msg=audit(1364481363.243:24287): The msg field holds the same time stamp and ID value as the value in the first and second record. item=0 The item field indicates which item, of the total number of items referenced in the SYSCALL type record, the current record is. This number is zero-based; a value of 0 means it is the first item. name="/etc/ssh/sshd_config" The name field records the path of the file or directory that was passed to the system call as an argument. In this case, it was the /etc/ssh/sshd_config file. inode=409248 The inode field contains the inode number associated with the file or directory recorded in this event. 
The following command displays the file or directory that is associated with the 409248 inode number: dev=fd:00 The dev field specifies the minor and major ID of the device that contains the file or directory recorded in this event. In this case, the value represents the /dev/fd/0 device. mode=0100600 The mode field records the file or directory permissions, encoded in numerical notation as returned by the stat command in the st_mode field. See the stat(2) man page for more information. In this case, 0100600 can be interpreted as -rw------- , meaning that only the root user has read and write permissions to the /etc/ssh/sshd_config file. ouid=0 The ouid field records the object owner's user ID. ogid=0 The ogid field records the object owner's group ID. rdev=00:00 The rdev field contains a recorded device identifier for special files only. In this case, it is not used as the recorded file is a regular file. obj=system_u:object_r:etc_t:s0 The obj field records the SELinux context with which the recorded file or directory was labeled at the time of execution. objtype=NORMAL The objtype field records the intent of each path record's operation in the context of a given syscall. cap_fp=none The cap_fp field records data related to the setting of a permitted file system-based capability of the file or directory object. cap_fi=none The cap_fi field records data related to the setting of an inherited file system-based capability of the file or directory object. cap_fe=0 The cap_fe field records the setting of the effective bit of the file system-based capability of the file or directory object. cap_fver=0 The cap_fver field records the version of the file system-based capability of the file or directory object. Fourth Record type=PROCTITLE The type field contains the type of the record. In this example, the PROCTITLE value specifies that this record gives the full command-line that triggered this Audit event, triggered by a system call to the kernel. proctitle=636174002F6574632F7373682F737368645F636F6E666967 The proctitle field records the full command-line of the command that was used to invoke the analyzed process. The field is encoded in hexadecimal notation to not allow the user to influence the Audit log parser. The text decodes to the command that triggered this Audit event. When searching Audit records with the ausearch command, use the -i or --interpret option to automatically convert hexadecimal values into their human-readable equivalents. The 636174002F6574632F7373682F737368645F636F6E666967 value is interpreted as cat /etc/ssh/sshd_config . The Audit event analyzed above contains only a subset of all possible fields that an event can contain. For a list of all event fields and their explanation, see Audit Event Fields . For a list of all event types and their explanation, see Audit Record Types . Example 7.6. Additional audit.log Events The following Audit event records a successful start of the auditd daemon. The ver field shows the version of the Audit daemon that was started. The following Audit event records a failed attempt of user with UID of 1000 to log in as the root user. | [
"-w /etc/ssh/sshd_config -p warx -k sshd_config",
"~]USD cat /etc/ssh/sshd_config",
"type=SYSCALL msg=audit(1364481363.243:24287): arch=c000003e syscall=2 success=no exit=-13 a0=7fffd19c5592 a1=0 a2=7fffd19c4b50 a3=a items=1 ppid=2686 pid=3538 auid=1000 uid=1000 gid=1000 euid=1000 suid=1000 fsuid=1000 egid=1000 sgid=1000 fsgid=1000 tty=pts0 ses=1 comm=\"cat\" exe=\"/bin/cat\" subj=unconfined_u:unconfined_r:unconfined_t:s0-s0:c0.c1023 key=\"sshd_config\" type=CWD msg=audit(1364481363.243:24287): cwd=\"/home/shadowman\" type=PATH msg=audit(1364481363.243:24287): item=0 name=\"/etc/ssh/sshd_config\" inode=409248 dev=fd:00 mode=0100600 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:etc_t:s0 objtype=NORMAL cap_fp=none cap_fi=none cap_fe=0 cap_fver=0 type=PROCTITLE msg=audit(1364481363.243:24287) : proctitle=636174002F6574632F7373682F737368645F636F6E666967",
"~]# ausearch --interpret --exit -13",
"~]# find / -inum 409248 -print /etc/ssh/sshd_config",
"type=DAEMON_START msg=audit(1363713609.192:5426): auditd start, ver=2.2 format=raw kernel=2.6.32-358.2.1.el6.x86_64 auid=1000 pid=4979 subj=unconfined_u:system_r:auditd_t:s0 res=success",
"type=USER_AUTH msg=audit(1364475353.159:24270): user pid=3280 uid=1000 auid=1000 ses=1 subj=unconfined_u:unconfined_r:unconfined_t:s0-s0:c0.c1023 msg='op=PAM:authentication acct=\"root\" exe=\"/bin/su\" hostname=? addr=? terminal=pts/0 res=failed'"
]
| https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/security_guide/sec-understanding_audit_log_files |
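The hex-encoded proctitle field discussed above can also be decoded by hand when ausearch is not convenient. A small illustration using standard utilities (xxd ships with vim-common; any hex decoder works):

# Decode the proctitle value from the example event back into a command line;
# the embedded NUL byte separates the program name from its argument
echo 636174002F6574632F7373682F737368645F636F6E666967 | xxd -r -p | tr '\0' ' ' && echo

# Or let ausearch interpret the whole event instead, selecting it by the rule key
ausearch -k sshd_config -i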
Making open source more inclusive | Making open source more inclusive Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright's message . | null | https://docs.redhat.com/en/documentation/red_hat_data_grid/8.4/html/data_grid_code_tutorials/making-open-source-more-inclusive_datagrid |
7.72. gnome-terminal | 7.72. gnome-terminal 7.72.1. RHBA-2012:1311 - gnome-terminal bug fix update Updated gnome-terminal packages that fix one bug are now available for Red Hat Enterprise Linux 6. Gnome-terminal is a terminal emulator for GNOME. It supports translucent backgrounds, opening multiple terminals in a single window (tabs) and clickable URLs. Bug Fix BZ#819796 Prior to this update, gnome-terminal was not completely localized into Asamese. With this update, the Assamese locale has been updated. All gnome-terminal users are advised to upgrade to these updated packages, which fix this bug. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.4_technical_notes/gnome-terminal |
Chapter 51. Entity Support | Chapter 51. Entity Support Abstract The Apache CXF runtime supports a limited number of mappings between MIME types and Java objects out of the box. Developers can extend the mappings by implementing custom readers and writers. The custom readers and writers are registered with the runtime at start-up. Overview The runtime relies on JAX-RS MessageBodyReader and MessageBodyWriter implementations to serialize and de-serialize data between the HTTP messages and their Java representations. The readers and writers can restrict the MIME types they are capable of processing. The runtime provides readers and writers for a number of common mappings. If an application requires more advanced mappings, a developer can provide custom implementations of the MessageBodyReader interface and/or the MessageBodyWriter interface. Custom readers and writers are registered with the runtime when the application is started. Natively supported types Table 51.1, "Natively supported entity mappings" lists the entity mappings provided by Apache CXF out of the box. Table 51.1. Natively supported entity mappings Java Type MIME Type primitive types text/plain java.lang.Number text/plain byte[] */* java.lang.String */* java.io.InputStream */* java.io.Reader */* java.io.File */* javax.activation.DataSource */* javax.xml.transform.Source text/xml , application/xml , application/\*+xml javax.xml.bind.JAXBElement text/xml , application/xml , application/\*+xml JAXB annotated objects text/xml , application/xml , application/\*+xml javax.ws.rs.core.MultivaluedMap<String, String> application/x-www-form-urlencoded [a] javax.ws.rs.core.StreamingOutput */* [b] [a] This mapping is used for handling HTML form data. [b] This mapping is only supported for returning data to a consumer. Custom readers Custom entity readers are responsible for mapping incoming HTTP requests into a Java type that a service's implementation can manipulate. They implement the javax.ws.rs.ext.MessageBodyReader interface. The interface, shown in Example 51.1, "Message reader interface" , has two methods that need implementing: Example 51.1. Message reader interface isReadable() The isReadable() method determines if the reader is capable of reading the data stream and creating the proper type of entity representation. If the reader can create the proper type of entity the method returns true . Table 51.2, "Parameters used to determine if a reader can produce an entity" describes the isReadable() method's parameters. Table 51.2. Parameters used to determine if a reader can produce an entity Parameter Type Description type Class<T> Specifies the actual Java class of the object used to store the entity. genericType Type Specifies the Java type of the object used to store the entity. For example, if the message body is to be converted into a method parameter, the value will be the type of the method parameter as returned by the Method.getGenericParameterTypes() method. annotations Annotation[] Specifies the list of annotations on the declaration of the object created to store the entity. For example if the message body is to be converted into a method parameter, this will be the annotations on that parameter returned by the Method.getParameterAnnotations() method. mediaType MediaType Specifies the MIME type of the HTTP entity. readFrom() The readFrom() method reads the HTTP entity and converts it into the desired Java object. If the reading is successful the method returns the created Java object containing the entity.
If an error occurs when reading the input stream the method should throw an IOException exception. If an error occurs that requires an HTTP error response, a WebApplicationException with the HTTP response should be thrown. Table 51.3, "Parameters used to read an entity" describes the readFrom() method's parameters. Table 51.3. Parameters used to read an entity Parameter Type Description type Class<T> Specifies the actual Java class of the object used to store the entity. genericType Type Specifies the Java type of the object used to store the entity. For example, if the message body is to be converted into a method parameter, the value will be the type of the method parameter as returned by the Method.getGenericParameterTypes() method. annotations Annotation[] Specifies the list of annotations on the declaration of the object created to store the entity. For example if the message body is to be converted into a method parameter, this will be the annotations on that parameter returned by the Method.getParameterAnnotations() method. mediaType MediaType Specifies the MIME type of the HTTP entity. httpHeaders MultivaluedMap<String, String> Specifies the HTTP message headers associated with the entity. entityStream InputStream Specifies the input stream containing the HTTP entity. Important This method should not close the input stream. Before a MessageBodyReader implementation can be used as an entity reader, it must be decorated with the javax.ws.rs.ext.Provider annotation. The @Provider annotation alerts the runtime that the supplied implementation provides additional functionality. The implementation must also be registered with the runtime as described in the section called "Registering readers and writers" . By default a custom entity provider handles all MIME types. You can limit the MIME types that a custom entity reader will handle using the javax.ws.rs.Consumes annotation. The @Consumes annotation specifies a comma separated list of MIME types that the custom entity provider reads. If an entity is not of a specified MIME type, the entity provider will not be selected as a possible reader. Example 51.2, "XML source entity reader" shows an entity reader that consumes XML entities and stores them in a Source object. Example 51.2. XML source entity reader Custom writers Custom entity writers are responsible for mapping Java types into HTTP entities. They implement the javax.ws.rs.ext.MessageBodyWriter interface. The interface, shown in Example 51.3, "Message writer interface" , has three methods that need implementing: Example 51.3. Message writer interface isWriteable() The isWriteable() method determines if the entity writer can map the Java type to the proper entity type. If the writer can do the mapping, the method returns true . Table 51.4, "Parameters used to read an entity" describes the isWriteable() method's parameters. Table 51.4. Parameters used to read an entity Parameter Type Description type Class<T> Specifies the Java class of the object being written. genericType Type Specifies the Java type of object to be written, obtained either by reflection of a resource method return type or via inspection of the returned instance. The GenericEntity class, described in Section 48.4, "Returning entities with generic type information" , provides support for controlling this value. annotations Annotation[] Specifies the list of annotations on the method returning the entity. mediaType MediaType Specifies the MIME type of the HTTP entity.
getSize() The getSize() method is called before the writeTo() . It returns the length, in bytes, of the entity being written. If a positive value is returned the value is written into the HTTP message's Content-Length header. Table 51.5, "Parameters used to read an entity" describes the getSize() method's parameters. Table 51.5. Parameters used to read an entity Parameter Type Description t generic Specifies the instance being written. type Class<T> Specifies the Java class of the object being written. genericType Type Specifies the Java type of object to be written, obtained either by reflection of a resource method return type or via inspection of the returned instance. The GenericEntity class, described in Section 48.4, "Returning entities with generic type information" , provides support for controlling this value. annotations Annotation[] Specifies the list of annotations on the method returning the entity. mediaType MediaType Specifies the MIME type of the HTTP entity. writeTo() The writeTo() method converts a Java object into the desired entity type and writes the entity to the output stream. If an error occurs when writing the entity to the output stream the method should throw an IOException exception. If an error occurs that requires an HTTP error response, a WebApplicationException with the HTTP response should be thrown. Table 51.6, "Parameters used to read an entity" describes the writeTo() method's parameters. Table 51.6. Parameters used to read an entity Parameter Type Description t generic Specifies the instance being written. type Class<T> Specifies the Java class of the object being written. genericType Type Specifies the Java type of object to be written, obtained either by reflection of a resource method return type or via inspection of the returned instance. The GenericEntity class, described in Section 48.4, "Returning entities with generic type information" , provides support for controlling this value. annotations Annotation[] Specifies the list of annotations on the method returning the entity. mediaType MediaType Specifies the MIME type of the HTTP entity. httpHeaders MultivaluedMap<String, Object> Specifies the HTTP response headers associated with the entity. entityStream OutputStream Specifies the output stream into which the entity is written. Before a MessageBodyWriter implementation can be used as an entity writer, it must be decorated with the javax.ws.rs.ext.Provider annotation. The @Provider annotation alerts the runtime that the supplied implementation provides additional functionality. The implementation must also be registered with the runtime as described in the section called "Registering readers and writers" . By default a custom entity provider handles all MIME types. You can limit the MIME types that a custom entity writer will handle using the javax.ws.rs.Produces annotation. The @Produces annotation specifies a comma separated list of MIME types that the custom entity provider generates. If an entity is not of a specified MIME type, the entity provider will not be selected as a possible writer. Example 51.4, "XML source entity writer" shows an entity writer that takes Source objects and produces XML entities. Example 51.4. XML source entity writer Registering readers and writers Before a JAX-RS application can use any custom entity providers, the custom providers must be registered with the runtime.
Providers are registered with the runtime using either the jaxrs:providers element in the application's configuration file or using the JAXRSServerFactoryBean class. The jaxrs:providers element is a child of the jaxrs:server element and contains a list of bean elements. Each bean element defines one entity provider. Example 51.5, "Registering entity providers with the runtime" show a JAX-RS server configured to use a set of custom entity providers. Example 51.5. Registering entity providers with the runtime The JAXRSServerFactoryBean class is a Apache CXF extension that provides access to the configuration APIs. It has a setProvider() method that allows you to add instantiated entity providers to an application. Example 51.6, "Programmatically registering an entity provider" shows code for registering an entity provider programmatically. Example 51.6. Programmatically registering an entity provider | [
"package javax.ws.rs.ext; public interface MessageBodyReader<T> { public boolean isReadable(java.lang.Class<?> type, java.lang.reflect.Type genericType, java.lang.annotation.Annotation[] annotations, javax.ws.rs.core.MediaType mediaType); public T readFrom(java.lang.Class<T> type, java.lang.reflect.Type genericType, java.lang.annotation.Annotation[] annotations, javax.ws.rs.core.MediaType mediaType, javax.ws.rs.core.MultivaluedMap<String, String> httpHeaders, java.io.InputStream entityStream) throws java.io.IOException, WebApplicationException; }",
"import java.io.IOException; import java.io.InputStream; import java.lang.annotation.Annotation; import java.lang.reflect.Type; import javax.ws.rs.Consumes; import javax.ws.rs.WebApplicationException; import javax.ws.rs.core.MediaType; import javax.ws.rs.core.MultivaluedMap; import javax.ws.rs.ext.MessageBodyReader; import javax.ws.rs.ext.Provider; import javax.xml.parsers.DocumentBuilder; import javax.xml.parsers.DocumentBuilderFactory; import javax.xml.transform.Source; import javax.xml.transform.dom.DOMSource; import javax.xml.transform.stream.StreamSource; import org.w3c.dom.Document; import org.apache.cxf.jaxrs.ext.xml.XMLSource; @Provider @Consumes({\"application/xml\", \"application/*+xml\", \"text/xml\", \"text/html\" }) public class SourceProvider implements MessageBodyReader<Object> { public boolean isReadable(Class<?> type, Type genericType, Annotation[] annotations, MediaType mt) { return Source.class.isAssignableFrom(type) || XMLSource.class.isAssignableFrom(type); } public Object readFrom(Class<Object> source, Type genericType, Annotation[] annotations, MediaType mediaType, MultivaluedMap<String, String> httpHeaders, InputStream is) throws IOException { if (DOMSource.class.isAssignableFrom(source)) { Document doc = null; DocumentBuilderFactory factory = DocumentBuilderFactory.newInstance(); DocumentBuilder builder; try { builder = factory.newDocumentBuilder(); doc = builder.parse(is); } catch (Exception e) { IOException ioex = new IOException(\"Problem creating a Source object\"); ioex.setStackTrace(e.getStackTrace()); throw ioex; } return new DOMSource(doc); } else if (StreamSource.class.isAssignableFrom(source) || Source.class.isAssignableFrom(source)) { return new StreamSource(is); } else if (XMLSource.class.isAssignableFrom(source)) { return new XMLSource(is); } throw new IOException(\"Unrecognized source\"); } }",
"package javax.ws.rs.ext; public interface MessageBodyWriter<T> { public boolean isWriteable(java.lang.Class<?> type, java.lang.reflect.Type genericType, java.lang.annotation.Annotation[] annotations, javax.ws.rs.core.MediaType mediaType); public long getSize(T t, java.lang.Class<?> type, java.lang.reflect.Type genericType, java.lang.annotation.Annotation[] annotations, javax.ws.rs.core.MediaType mediaType); public void writeTo(T t, java.lang.Class<?> type, java.lang.reflect.Type genericType, java.lang.annotation.Annotation[] annotations, javax.ws.rs.core.MediaType mediaType, javax.ws.rs.core.MultivaluedMap<String, Object> httpHeaders, java.io.OutputStream entityStream) throws java.io.IOException, WebApplicationException; }",
"import java.io.IOException; import java.io.OutputStream; import java.lang.annotation.Annotation; import java.lang.reflect.Type; import javax.ws.rs.Produces; import javax.ws.rs.WebApplicationException; import javax.ws.rs.core.MediaType; import javax.ws.rs.core.MultivaluedMap; import javax.ws.rs.ext.MessageBodyWriter; import javax.ws.rs.ext.Provider; import javax.xml.transform.Source; import javax.xml.transform.Transformer; import javax.xml.transform.TransformerException; import javax.xml.transform.TransformerFactory; import javax.xml.transform.stream.StreamResult; import org.w3c.dom.Document; import org.apache.cxf.jaxrs.ext.xml.XMLSource; @Provider @Produces({\"application/xml\", \"application/*+xml\", \"text/xml\" }) public class SourceProvider implements MessageBodyWriter<Source> { public boolean isWriteable(Class<?> type, Type genericType, Annotation[] annotations, MediaType mt) { return Source.class.isAssignableFrom(type); } public void writeTo(Source source, Class<?> clazz, Type genericType, Annotation[] annotations, MediaType mediatype, MultivaluedMap<String, Object> httpHeaders, OutputStream os) throws IOException { StreamResult result = new StreamResult(os); TransformerFactory tf = TransformerFactory.newInstance(); try { Transformer t = tf.newTransformer(); t.transform(source, result); } catch (TransformerException te) { te.printStackTrace(); throw new WebApplicationException(te); } } public long getSize(Source source, Class<?> type, Type genericType, Annotation[] annotations, MediaType mt) { return -1; } }",
"<beans ...> <jaxrs:server id=\"customerService\" address=\"/\"> <jaxrs:providers> <bean id=\"isProvider\" class=\"com.bar.providers.InputStreamProvider\"/> <bean id=\"longProvider\" class=\"com.bar.providers.LongProvider\"/> </jaxrs:providers> </jaxrs:server> </beans>",
"import org.apache.cxf.jaxrs.JAXRSServerFactoryBean; JAXRSServerFactoryBean sf = new JAXRSServerFactoryBean(); SourceProvider provider = new SourceProvider(); sf.setProvider(provider);"
]
| https://docs.redhat.com/en/documentation/red_hat_fuse/7.13/html/apache_cxf_development_guide/restentitytypes |
5.2.12. /proc/iomem | 5.2.12. /proc/iomem This file shows you the current map of the system's memory for each physical device: The first column displays the memory registers used by each of the different types of memory. The second column lists the kind of memory located within those registers and displays which memory registers are used by the kernel within the system RAM or, if the network interface card has multiple Ethernet ports, the memory registers assigned for each port. | [
"00000000-0009fbff : System RAM 0009fc00-0009ffff : reserved 000a0000-000bffff : Video RAM area 000c0000-000c7fff : Video ROM 000f0000-000fffff : System ROM 00100000-07ffffff : System RAM 00100000-00291ba8 : Kernel code 00291ba9-002e09cb : Kernel data e0000000-e3ffffff : VIA Technologies, Inc. VT82C597 [Apollo VP3] e4000000-e7ffffff : PCI Bus #01 e4000000-e4003fff : Matrox Graphics, Inc. MGA G200 AGP e5000000-e57fffff : Matrox Graphics, Inc. MGA G200 AGP e8000000-e8ffffff : PCI Bus #01 e8000000-e8ffffff : Matrox Graphics, Inc. MGA G200 AGP ea000000-ea00007f : Digital Equipment Corporation DECchip 21140 [FasterNet] ea000000-ea00007f : tulip ffff0000-ffffffff : reserved"
]
| https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/reference_guide/s2-proc-iomem |
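Because /proc/iomem is plain text, the map described above can be filtered with ordinary text tools. For example, to list only the ranges registered as system RAM or the region holding the kernel code (the exact ranges differ from machine to machine):

grep "System RAM" /proc/iomem
grep "Kernel code" /proc/iomem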
Chapter 5. Upgrading the Migration Toolkit for Containers | Chapter 5. Upgrading the Migration Toolkit for Containers You can upgrade the Migration Toolkit for Containers (MTC) on OpenShift Container Platform 4.12 by using Operator Lifecycle Manager. You can upgrade MTC on OpenShift Container Platform 4.5, and earlier versions, by reinstalling the legacy Migration Toolkit for Containers Operator. Important If you are upgrading from MTC version 1.3, you must perform an additional procedure to update the MigPlan custom resource (CR). 5.1. Upgrading the Migration Toolkit for Containers on OpenShift Container Platform 4.12 You can upgrade the Migration Toolkit for Containers (MTC) on OpenShift Container Platform 4.12 by using the Operator Lifecycle Manager. Important When upgrading the MTC by using the Operator Lifecycle Manager, you must use a supported migration path. Migration paths Migrating from OpenShift Container Platform 3 to OpenShift Container Platform 4 requires a legacy MTC Operator and MTC 1.7.x. Migrating from MTC 1.7.x to MTC 1.8.x is not supported. You must use MTC 1.7.x to migrate anything with a source of OpenShift Container Platform 4.9 or earlier. MTC 1.7.x must be used on both source and destination. MTC 1.8.x only supports migrations from OpenShift Container Platform 4.10 or later to OpenShift Container Platform 4.10 or later. For migrations only involving cluster versions 4.10 and later, either 1.7.x or 1.8.x may be used. However, it must be the same MTC version on both source & destination. Migration from source MTC 1.7.x to destination MTC 1.8.x is unsupported. Migration from source MTC 1.8.x to destination MTC 1.7.x is unsupported. Migration from source MTC 1.7.x to destination MTC 1.7.x is supported. Migration from source MTC 1.8.x to destination MTC 1.8.x is supported. Prerequisites You must be logged in as a user with cluster-admin privileges. Procedure In the OpenShift Container Platform console, navigate to Operators Installed Operators . Operators that have a pending upgrade display an Upgrade available status. Click Migration Toolkit for Containers Operator . Click the Subscription tab. Any upgrades requiring approval are displayed next to Upgrade Status . For example, it might display 1 requires approval . Click 1 requires approval , then click Preview Install Plan . Review the resources that are listed as available for upgrade and click Approve . Navigate back to the Operators Installed Operators page to monitor the progress of the upgrade. When complete, the status changes to Succeeded and Up to date . Click Workloads Pods to verify that the MTC pods are running. 5.2. Upgrading the Migration Toolkit for Containers to 1.8.0 To upgrade the Migration Toolkit for Containers to 1.8.0, complete the following steps.
Procedure Determine subscription names and current channels to work with for upgrading by using one of the following methods: Determine the subscription names and channels by running the following command: USD oc -n openshift-migration get sub Example output NAME PACKAGE SOURCE CHANNEL mtc-operator mtc-operator mtc-operator-catalog release-v1.7 redhat-oadp-operator-stable-1.0-mtc-operator-catalog-openshift-marketplace redhat-oadp-operator mtc-operator-catalog stable-1.0 Or return the subscription names and channels in JSON by running the following command: USD oc -n openshift-migration get sub -o json | jq -r '.items[] | { name: .metadata.name, package: .spec.name, channel: .spec.channel }' Example output { "name": "mtc-operator", "package": "mtc-operator", "channel": "release-v1.7" } { "name": "redhat-oadp-operator-stable-1.0-mtc-operator-catalog-openshift-marketplace", "package": "redhat-oadp-operator", "channel": "stable-1.0" } For each subscription, patch to move from the MTC 1.7 channel to the MTC 1.8 channel by running the following command: USD oc -n openshift-migration patch subscription mtc-operator --type merge --patch '{"spec": {"channel": "release-v1.8"}}' Example output subscription.operators.coreos.com/mtc-operator patched 5.2.1. Upgrading OADP 1.0 to 1.2 for Migration Toolkit for Containers 1.8.0 To upgrade OADP 1.0 to 1.2 for Migration Toolkit for Containers 1.8.0, complete the following steps. Procedure For each subscription, patch the OADP operator from OADP 1.0 to OADP 1.2 by running the following command: USD oc -n openshift-migration patch subscription redhat-oadp-operator-stable-1.0-mtc-operator-catalog-openshift-marketplace --type merge --patch '{"spec": {"channel":"stable-1.2"}}' Note Sections indicating the user-specific returned NAME values that are used for the installation of MTC & OADP, respectively. Example output subscription.operators.coreos.com/redhat-oadp-operator-stable-1.0-mtc-operator-catalog-openshift-marketplace patched Note The returned value will be similar to redhat-oadp-operator-stable-1.0-mtc-operator-catalog-openshift-marketplace , which is used in this example. If the installPlanApproval parameter is set to Automatic , the Operator Lifecycle Manager (OLM) begins the upgrade process. If the installPlanApproval parameter is set to Manual , you must approve each installPlan before the OLM begins the upgrades. Verification Verify that the OLM has completed the upgrades of OADP and MTC by running the following command: USD oc -n openshift-migration get subscriptions.operators.coreos.com mtc-operator -o json | jq '.status | (."state"=="AtLatestKnown")' When a value of true is returned, verify the channel used for each subscription by running the following command: USD oc -n openshift-migration get sub -o json | jq -r '.items[] | {name: .metadata.name, channel: .spec.channel }' Example output { "name": "mtc-operator", "channel": "release-v1.8" } { "name": "redhat-oadp-operator-stable-1.0-mtc-operator-catalog-openshift-marketplace", "channel": "stable-1.2" } USD oc -n openshift-migration get csv Example output NAME DISPLAY VERSION REPLACES PHASE mtc-operator.v1.8.0 Migration Toolkit for Containers Operator 1.8.0 mtc-operator.v1.7.13 Succeeded oadp-operator.v1.2.2 OADP Operator 1.2.2 oadp-operator.v1.0.13 Succeeded 5.3. 
Upgrading the Migration Toolkit for Containers on OpenShift Container Platform versions 4.2 to 4.5 You can upgrade Migration Toolkit for Containers (MTC) on OpenShift Container Platform versions 4.2 to 4.5 by manually installing the legacy Migration Toolkit for Containers Operator. Prerequisites You must be logged in as a user with cluster-admin privileges. You must have access to registry.redhat.io . You must have podman installed. Procedure Log in to registry.redhat.io with your Red Hat Customer Portal credentials by entering the following command: USD podman login registry.redhat.io Download the operator.yml file by entering the following command: USD podman cp USD(podman create \ registry.redhat.io/rhmtc/openshift-migration-legacy-rhel8-operator:v1.8):/operator.yml ./ Replace the Migration Toolkit for Containers Operator by entering the following command: USD oc replace --force -f operator.yml Scale the migration-operator deployment to 0 to stop the deployment by entering the following command: USD oc scale -n openshift-migration --replicas=0 deployment/migration-operator Scale the migration-operator deployment to 1 to start the deployment and apply the changes by entering the following command: USD oc scale -n openshift-migration --replicas=1 deployment/migration-operator Verify that the migration-operator was upgraded by entering the following command: USD oc -o yaml -n openshift-migration get deployment/migration-operator | grep image: | awk -F ":" '{ print USDNF }' Download the controller.yml file by entering the following command: USD podman cp USD(podman create \ registry.redhat.io/rhmtc/openshift-migration-legacy-rhel8-operator:v1.8):/controller.yml ./ Create the migration-controller object by entering the following command: USD oc create -f controller.yml Verify that the MTC pods are running by entering the following command: USD oc get pods -n openshift-migration 5.4. Upgrading MTC 1.3 to 1.8 If you are upgrading Migration Toolkit for Containers (MTC) version 1.3.x to 1.8, you must update the MigPlan custom resource (CR) manifest on the cluster on which the MigrationController pod is running. Because the indirectImageMigration and indirectVolumeMigration parameters do not exist in MTC 1.3, their default value in version 1.4 is false , which means that direct image migration and direct volume migration are enabled. Because the direct migration requirements are not fulfilled, the migration plan cannot reach a Ready state unless these parameter values are changed to true . Important Migrating from OpenShift Container Platform 3 to OpenShift Container Platform 4 requires a legacy MTC Operator and MTC 1.7.x. Upgrading MTC 1.7.x to 1.8.x requires manually updating the OADP channel from stable-1.0 to stable-1.2 in order to successfully complete the upgrade from 1.7.x to 1.8.x. Prerequisites You must be logged in as a user with cluster-admin privileges. Procedure Log in to the cluster on which the MigrationController pod is running. Get the MigPlan CR manifest: USD oc get migplan <migplan> -o yaml -n openshift-migration Update the following parameter values and save the file as migplan.yaml : ... spec: indirectImageMigration: true indirectVolumeMigration: true Replace the MigPlan CR manifest to apply the changes: USD oc replace -f migplan.yaml -n openshift-migration Get the updated MigPlan CR manifest to verify the changes: USD oc get migplan <migplan> -o yaml -n openshift-migration | [
"oc -n openshift-migration get sub",
"NAME PACKAGE SOURCE CHANNEL mtc-operator mtc-operator mtc-operator-catalog release-v1.7 redhat-oadp-operator-stable-1.0-mtc-operator-catalog-openshift-marketplace redhat-oadp-operator mtc-operator-catalog stable-1.0",
"oc -n openshift-migration get sub -o json | jq -r '.items[] | { name: .metadata.name, package: .spec.name, channel: .spec.channel }'",
"{ \"name\": \"mtc-operator\", \"package\": \"mtc-operator\", \"channel\": \"release-v1.7\" } { \"name\": \"redhat-oadp-operator-stable-1.0-mtc-operator-catalog-openshift-marketplace\", \"package\": \"redhat-oadp-operator\", \"channel\": \"stable-1.0\" }",
"oc -n openshift-migration patch subscription mtc-operator --type merge --patch '{\"spec\": {\"channel\": \"release-v1.8\"}}'",
"subscription.operators.coreos.com/mtc-operator patched",
"oc -n openshift-migration patch subscription redhat-oadp-operator-stable-1.0-mtc-operator-catalog-openshift-marketplace --type merge --patch '{\"spec\": {\"channel\":\"stable-1.2\"}}'",
"subscription.operators.coreos.com/redhat-oadp-operator-stable-1.0-mtc-operator-catalog-openshift-marketplace patched",
"oc -n openshift-migration get subscriptions.operators.coreos.com mtc-operator -o json | jq '.status | (.\"state\"==\"AtLatestKnown\")'",
"oc -n openshift-migration get sub -o json | jq -r '.items[] | {name: .metadata.name, channel: .spec.channel }'",
"{ \"name\": \"mtc-operator\", \"channel\": \"release-v1.8\" } { \"name\": \"redhat-oadp-operator-stable-1.0-mtc-operator-catalog-openshift-marketplace\", \"channel\": \"stable-1.2\" }",
"Confirm that the `mtc-operator.v1.8.0` and `oadp-operator.v1.2.x` packages are installed by running the following command:",
"oc -n openshift-migration get csv",
"NAME DISPLAY VERSION REPLACES PHASE mtc-operator.v1.8.0 Migration Toolkit for Containers Operator 1.8.0 mtc-operator.v1.7.13 Succeeded oadp-operator.v1.2.2 OADP Operator 1.2.2 oadp-operator.v1.0.13 Succeeded",
"podman login registry.redhat.io",
"podman cp USD(podman create registry.redhat.io/rhmtc/openshift-migration-legacy-rhel8-operator:v1.8):/operator.yml ./",
"oc replace --force -f operator.yml",
"oc scale -n openshift-migration --replicas=0 deployment/migration-operator",
"oc scale -n openshift-migration --replicas=1 deployment/migration-operator",
"oc -o yaml -n openshift-migration get deployment/migration-operator | grep image: | awk -F \":\" '{ print USDNF }'",
"podman cp USD(podman create registry.redhat.io/rhmtc/openshift-migration-legacy-rhel8-operator:v1.8):/controller.yml ./",
"oc create -f controller.yml",
"oc get pods -n openshift-migration",
"oc get migplan <migplan> -o yaml -n openshift-migration",
"spec: indirectImageMigration: true indirectVolumeMigration: true",
"oc replace -f migplan.yaml -n openshift-migration",
"oc get migplan <migplan> -o yaml -n openshift-migration"
]
| https://docs.redhat.com/en/documentation/openshift_container_platform/4.12/html/migration_toolkit_for_containers/upgrading-mtc |
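The note above about installPlanApproval set to Manual implies an extra approval step that the procedure does not spell out. A hedged sketch of doing it from the CLI; the install plan name is environment-specific and must be taken from the output of the first command:

# List install plans waiting in the MTC namespace
oc -n openshift-migration get installplan

# Approve a specific plan so that OLM continues the upgrade
oc -n openshift-migration patch installplan <install_plan_name> --type merge --patch '{"spec":{"approved":true}}'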
Chapter 5. Tools for administration of Red Hat Satellite | Chapter 5. Tools for administration of Red Hat Satellite You can use multiple tools to manage Red Hat Satellite. 5.1. Satellite web UI overview You can manage and monitor your Satellite infrastructure from a browser with the Satellite web UI. For example, you can use the following navigation features in the Satellite web UI: Navigation feature Description Organization dropdown Choose the organization you want to manage. Location dropdown Choose the location you want to manage. Monitor Provides summary dashboards and reports. Content Provides content management tools. This includes content views, activation keys, and lifecycle environments. Hosts Provides host inventory and provisioning configuration tools. Configure Provides general configuration tools and data, including host groups and Ansible content. Infrastructure Provides tools on configuring how Satellite interacts with the environment. Provides event notifications to keep administrators informed of important environment changes. Administer Provides advanced configuration for settings such as users, role-based access control (RBAC), and general settings. 5.2. Hammer CLI overview You can configure and manage your Satellite Server with CLI commands by using Hammer. Using Hammer has the following benefits: Create shell scripts based on Hammer commands for basic task automation. Redirect output from Hammer to other tools. Use the --debug option with Hammer to test responses to API calls before applying the API calls in a script. For example: hammer --debug organization list . To issue Hammer commands, a user must have access to your Satellite Server. Note To ensure a user-friendly and intuitive experience, the Satellite web UI takes priority when developing new functionality. Therefore, some features that are available in the Satellite web UI might not yet be available for Hammer. In the background, each Hammer command first establishes a binding to the API, then sends a request. This can have performance implications when executing a large number of Hammer commands in sequence. In contrast, scripts that use API commands communicate directly with the Satellite API and they establish the binding only once. Additional resources See Using the Hammer CLI tool for details on using Hammer CLI. 5.3. Satellite API overview You can write custom scripts and external applications that access the Satellite API over HTTP with the Representational State Transfer (REST) API provided by Satellite Server. Use the REST API to integrate with enterprise IT systems and third-party applications, perform automated maintenance or error checking tasks, and automate repetitive tasks with scripts. Using the REST API has the following benefits: Configure any programming language, framework, or system with support for HTTP protocol to use the API. Create client applications that require minimal knowledge of the Satellite infrastructure because users discover many details at runtime. Adopt the resource-based REST model for intuitively managing a virtualization platform. Scripts based on API commands communicate directly with the Satellite API, which makes them faster than scripts based on Hammer commands or Ansible Playbooks relying on modules within redhat.satellite. Important API commands differ between versions of Satellite. When you prepare to upgrade Satellite Server, update all the scripts that contain Satellite API commands. 
Additional resources See Using the Satellite REST API for details on using the Satellite API. 5.4. Remote execution in Red Hat Satellite With remote execution, you can run jobs on hosts from Capsules by using shell scripts or Ansible roles and playbooks. Use remote execution for the following benefits in Satellite: Run jobs on multiple hosts at once. Use variables in your commands for more granular control over the jobs you run. Use host facts and parameters to populate the variable values. Specify custom values for templates when you run the command. Communication for remote execution occurs through Capsule Server, which means that Satellite Server does not require direct access to the target host, and can scale to manage many hosts. To use remote execution, you must define a job template. A job template is a command that you want to apply to remote hosts. You can execute a job template multiple times. Satellite uses ERB syntax job templates. For more information, see Template Writing Reference in Managing hosts . By default, Satellite includes several job templates for shell scripts and Ansible. For more information, see Setting up Job Templates in Managing hosts . Additional resources See Executing a Remote Job in Managing hosts . See Configuring and Setting Up Remote Jobs in Managing configurations by using Ansible integration . 5.5. Managing Satellite with Ansible collections Satellite Ansible Collections is a set of Ansible modules that interact with the Satellite API. You can manage and automate many aspects of Satellite with Satellite Ansible collections. Additional resources See Managing configurations by using Ansible integration . See Administering Red Hat Satellite . 5.6. Kickstart workflow You can automate the installation process of a Satellite Server or Capsule Server by creating a Kickstart file that contains all the information that is required for the installation. When you run a Red Hat Satellite Kickstart script, the script performs the following actions: It specifies the installation location of a Satellite Server or a Capsule Server. It installs the predefined packages. It installs Subscription Manager. It uses Activation Keys to subscribe the hosts to Red Hat Satellite. It installs Puppet, and configures a puppet.conf file to indicate the Red Hat Satellite or Capsule instance. It enables Puppet to run and request a certificate. It runs user defined snippets. Additional resources For more information about Kickstart, see Performing an automated installation using Kickstart in Performing an advanced RHEL 8 installation . | null | https://docs.redhat.com/en/documentation/red_hat_satellite/6.16/html/overview_concepts_and_deployment_considerations/tools-for-administration-of-satellite_planning |
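As a minimal illustration of the scripting approach described in the Satellite API overview, a single authenticated call can list hosts and pipe the JSON through jq. The hostname and credentials below are placeholders, and the endpoint path should be checked against the API reference for your Satellite version:

# List the first few hosts registered to Satellite; replace the URL and credentials
# with your own, and replace -k with your Satellite CA certificate in production
curl -s -k -u admin:changeme "https://satellite.example.com/api/v2/hosts?per_page=5" | jq '.results[].name'

The Hammer equivalent is a single command, for example hammer host list, which performs the same request through the CLI bindings.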
5.106. imsettings | 5.106. imsettings 5.106.1. RHBA-2012:0768 - imsettings bug fix update Updated imsettings packages that fix one bug are now available for Red Hat Enterprise Linux 6. IMSettings provides command line tools and a library to configure and control input-methods settings. Users normally access it through the "im-chooser" GUI tool. Bug Fix BZ# 713433 Prior to this update, the IMSettings daemon unexpectedly invalidated the pointer after obtaining a new pointer. This update modifies IMSettings so that the code is updated after all transactions are finished. All users of imsettings are advised to upgrade to these updated packages, which fix this bug. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.3_technical_notes/imsettings |
Chapter 8. Message Persistence and Paging | Chapter 8. Message Persistence and Paging AMQ Broker 7 provides persistence through either a message journal or a JDBC store. The method by which the broker stores messages and pages them to disk is different than AMQ 6, and the configuration properties you use to configure message persistence are changed. 8.1. Message Persistence Changes AMQ Broker 7 uses a different type of message journal than AMQ 6, and it does not use a journal index. AMQ 6 used KahaDB for a message store, and it maintained a message journal index to track the position of each message inside the journal. This index enabled the broker to pull paged messages from its journal in batches and place them in its cache. By default, AMQ Broker 7 uses an in-memory message journal from which the broker can dispatch messages. Therefore, AMQ Broker 7 does not use a message journal index. If a broker instance runs out of memory, messages are paged as they arrive at the broker, but before they are queued. These message page files are stored on disk sequentially in the same order in which they arrived. Then, when memory is freed on the broker, the messages are moved from the page file to the journal on the broker. Because the journal is read sequentially, there is no need to keep an index of messages in the journal. In addition, AMQ Broker 7 also offers a different JDBC-based message journal option that was not available in AMQ 6. The AMQ Broker 7 message journal supports the following shared file systems: NFSv4 GFS2 Related Information For more information about the default in-memory message journal, see About Journal-based Persistence in Configuring AMQ Broker . For more information about the new JDBC-based persistence option, see Configuring JDBC Persistence in Configuring AMQ Broker . 8.2. How Message Persistence is Configured You use the BROKER_INSTANCE_DIR /etc/broker.xml configuration file to configure the broker instance's message journal. The broker.xml configuration file contains the following default message journal configuration properties: <core> <name>0.0.0.0</name> <persistence-enabled>true</persistence-enabled> <journal-type>ASYNCIO</journal-type> <paging-directory>./data/paging</paging-directory> <bindings-directory>./data/bindings</bindings-directory> <journal-directory>./data/journal</journal-directory> <large-messages-directory>./data/large-messages</large-messages-directory> <journal-datasync>true</journal-datasync> <journal-min-files>2</journal-min-files> <journal-pool-files>-1</journal-pool-files> <journal-buffer-timeout>744000</journal-buffer-timeout> <disk-scan-period>5000</disk-scan-period> <max-disk-usage>90</max-disk-usage> <global-max-size>104857600</global-max-size> ... </core> To configure the message journal, you can change the default values for any of the journal configuration properties. You can also add additional configuration properties. 8.3. Message Persistence Configuration Property Changes AMQ 6 and AMQ Broker 7 both offer a number of configuration properties to control how the broker persists messages. This section compares the configuration properties in the AMQ 6 KahaDB journal to the equivalent properties in the AMQ Broker 7 in-memory message journal. For complete details on each message persistence configuration property for the in-memory message journal, see the following: The Bindings Journal in Configuring AMQ Broker Messaging Journal Configuration Elements in Configuring AMQ Broker 8.3.1. 
Journal Size and Management The following table compares the journal size and management configuration properties in AMQ 6 to the equivalent properties in AMQ Broker 7: To set... In AMQ 6 In AMQ Broker 7 The time interval between cleaning up data logs that are no longer used cleanupInterval The default is 30000 ms. No equivalent. In AMQ Broker 7, journal files that exceed the pool size are no longer used. The number of message store GC cycles that must be completed without cleaning up other files before compaction is triggered compactAcksAfterNoGC No equivalent. In AMQ Broker 7, compaction is not related to particular record types. Whether compaction should be run when the message store is still growing, or if it should only occur when it has stopped growing compactAcksIgnoresStoreGrowth The default is false . No equivalent. The minimum number of journal files that can be stored on the broker before it will compact them No equivalent. <journal-compact-min-files> The default is 10. If you set this value to 0, compaction will be deactivated. The threshold to reach before compaction starts No equivalent. <journal-compact-percentage> The default is 30%. When less than this percentage is considered to be live data, compaction will start. The path to the top-level folder that holds the message store's data files directory AMQ Broker 7 has a separate directory for each type of journal: <journal-directory> - The default is /data/journal . <bindings-directory> - The default is /data/bindings . <paging-directory> - The default is /data/paging . <large-message-directory> - The default is /data/large-messages . Whether the bindings directory should be created automatically if it does not already exist No equivalent. <create-bindings-dir> The default is true . Whether the journal directory should be created automatically if it does not already exist No equivalent. <create-journal-dir> The default is true . Whether the message store should periodically compact older journal log files that contain only message acknowledgements enableAckCompaction No equivalent. The maximum size of the data log files journalMaxFileLength The default is 32 MB. <journal-file-size> The default is 10485760 bytes (10 MiB). The policy that the broker should use to preallocate the journal files when a new journal file is needed preallocationStrategy The default is sparse_file . No equivalent. By default, preallocated journal files are typically filled with zeroes, but it can vary depending on the file system. The policy the broker should use to preallocate the journal files preallocationScope The default is entire_journal . AMQ Broker 7 automatically preallocates the journal files specified by <journal-min-files> when the broker instance is started. The journal type (either NIO or AIO) No equivalent. <journal-type> You can choose either NIO (Java NIO journal), or ASYNCIO (Linux asynchronous I/O journal). The minimum number of files that the journal should maintain No equivalent. <journal-min-files> The number of journal files the broker should keep when reclaiming files No equivalent. <journal-pool-files> The default is -1, which means the broker instance will never delete files on the journal once created. 8.3.2. Write Boundaries The following table compares the write boundary configuration properties in AMQ 6 to the equivalent properties in AMQ Broker 7: To set... In AMQ 6 In AMQ Broker 7 The time interval between writing the metadata cache to disk checkpointInterval The default is 5000 ms. No equivalent. 
Whether the message store should dispatch queue messages to clients concurrently with message storage concurrentStoreAndDispatchQueues The default is true . No equivalent. Whether the message store should dispatch topic messages to interested clients concurrently with message storage concurrentStoreAndDispatchTopics The default is false . No equivalent. Whether a disk sync should be performed after each non-transactional journal write enableJournalDiskSyncs The default is true . <journal-sync-transactional> Flushes transaction data to disk whenever a transaction boundary is reached (commit, prepare, and rollback). The default is true . <journal-sync-nontransactional> Flushes non-transactional message data to disk (sends and acknowledgements). The default is true . When to flush the entire journal buffer No equivalent. <journal-buffer-timeout> The default for NIO is 3,333,333 nanoseconds, and the default for AIO is 500,000 nanoseconds. The amount of data to buffer between journal disk writes journalMaxWriteBatchSize The default is 4000 bytes. No equivalent. The size of the task queue used to buffer the journal's write requests maxAsyncJobs The default is 10000. <journal-max-io> This property controls the maximum number of write requests that can be in the I/O queue at any given point. The default for NIO is 1, and the default for AIO is 500. Whether to use fdatasync on journal writes No equivalent. <journal-datasync> The default is true . 8.3.3. Index Configuration AMQ 6 has a number of configuration properties for configuring the journal index. Because AMQ Broker 7 does not use journal indexes, you do not need to configure any of these properties for your broker instance. 8.3.4. Journal Archival AMQ 6 has several configuration properties for controlling which files are archived and where the archives are stored. In AMQ Broker 7, however, when old journal files are no longer needed, the broker reuses them instead of archiving them. Therefore, you do not need to configure any journal archival properties for your broker instance. 8.3.5. Journal Recovery AMQ 6 has several configuration properties for controlling how the broker checks for corrupted journal files and what to do when it encounters a missing journal file. In AMQ Broker 7, however, you do not need to configure any journal recovery properties for your broker instance. Journal files have a different format in AMQ Broker 7, which should prevent a corrupted entry in the journal from corrupting the entire journal file. Even if the journal is partially damaged, the broker should still be able to extract data from the undamaged entries. | [
"<core> <name>0.0.0.0</name> <persistence-enabled>true</persistence-enabled> <journal-type>ASYNCIO</journal-type> <paging-directory>./data/paging</paging-directory> <bindings-directory>./data/bindings</bindings-directory> <journal-directory>./data/journal</journal-directory> <large-messages-directory>./data/large-messages</large-messages-directory> <journal-datasync>true</journal-datasync> <journal-min-files>2</journal-min-files> <journal-pool-files>-1</journal-pool-files> <journal-buffer-timeout>744000</journal-buffer-timeout> <disk-scan-period>5000</disk-scan-period> <max-disk-usage>90</max-disk-usage> <global-max-size>104857600</global-max-size> </core>"
]
| https://docs.redhat.com/en/documentation/red_hat_amq/2021.q2/html/migrating_to_red_hat_amq_7/message_persistence |
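The journal compaction and sizing behavior compared in the tables above is tuned by overriding the corresponding elements in the <core> section of BROKER_INSTANCE_DIR /etc/broker.xml . The following snippet is a minimal sketch rather than a recommended configuration; the values are illustrative assumptions that should be adjusted for your workload, and the broker instance must be restarted for the changes to take effect:
<core>
    <!-- wait until at least 15 journal files exist before compacting -->
    <journal-compact-min-files>15</journal-compact-min-files>
    <!-- start compaction when less than 25% of the journal is live data -->
    <journal-compact-percentage>25</journal-compact-percentage>
    <!-- use 20 MiB journal files instead of the 10 MiB default -->
    <journal-file-size>20971520</journal-file-size>
    <!-- preallocate four files at startup and never delete pooled files -->
    <journal-min-files>4</journal-min-files>
    <journal-pool-files>-1</journal-pool-files>
</core>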
Chapter 3. Using source-to-image for OpenShift | Chapter 3. Using source-to-image for OpenShift You can use the source-to-image (S2I) for OpenShift image to run your custom Java applications on OpenShift. 3.1. Building and deploying Java applications with source-to-image for OpenShift To build and deploy a Java application from source on OpenShift by using the source-to-image (S2I) for OpenShift image, use the OpenShift S2I process. Procedure Log in to the OpenShift instance by running the following command and providing your credentials: Create a new project: Create a new application using the S2I for OpenShift image: The <source-location> is the URL of a GitHub repository or the path to a local folder. For example: Get the service name: Expose the service as a route, so that you can use the server from your browser: Get the route: Access the application in your browser by using the URL. Use the value of the HOST/PORT field from the command's output. Additional resources For a more detailed example, see Running flat classpath JAR on source-to-image for OpenShift . 3.2. Building and deploying Java applications from binary artifacts You can deploy your existing Java applications on OpenShift by using the binary source capability. The procedure uses the undertow-servlet quickstart to build a Java application on your local machine and then copies the resulting binary artifacts into OpenShift by using the S2I binary source capability. Prerequisites Enable the Red Hat JBoss Enterprise Maven Repository on your local machine. Get the JAR application archive and build the application locally. Clone the undertow-servlet source code: Build the application: Prepare the directory structure on the local file system. Application archives placed in the deployments/ subdirectory of the main binary build directory are copied to the standard deployments folder of the image being built on OpenShift. Structure the directory hierarchy so that it contains the web application data for the application to deploy. Create a main directory for the binary build on the local file system and a deployments/ subdirectory within it. Copy the built JAR archive to the deployments/ subdirectory: Procedure Log in to the OpenShift instance by running the following command and providing your credentials: Create a new project: Create a new binary build, and specify the image stream and the application's name: Start the binary build. Instruct the oc executable to use the main directory of the binary build you created in the previous step as the directory containing binary input for the OpenShift build: Create a new OpenShift application based on the build: Expose the service as a route. Get the route: Access the application in your browser by using the URL (the value of the HOST/PORT field from the command output). The full sequence of commands for this workflow is consolidated in a sketch after the command listing for this chapter. Additional resources Use the binary source capability to deploy existing Java applications on OpenShift. For more information on how to configure the Maven repository, see Use the Maven Repository .
"oc login",
"oc new-project <project-name>",
"oc new-app <source-location>",
"oc new-app --context-dir=getting-started --name=quarkus-quickstart 'registry.access.redhat.com/ubi8/openjdk-11~https://github.com/quarkusio/quarkus-quickstarts.git#2.12.1.Final'",
"oc get svc",
"oc expose svc/ --port=8080",
"oc get route",
"git clone https://github.com/jboss-openshift/openshift-quickstarts.git",
"cd openshift-quickstarts/undertow-servlet/",
"mvn clean package [INFO] Scanning for projects [INFO] [INFO] ------------------------------------------------------------------------ [INFO] Building Undertow Servlet Example 1.0.0.Final [INFO] ------------------------------------------------------------------------ [INFO] ------------------------------------------------------------------------ [INFO] BUILD SUCCESS [INFO] ------------------------------------------------------------------------ [INFO] Total time: 1.986 s [INFO] Finished at: 2017-06-27T16:43:07+02:00 [INFO] Final Memory: 19M/281M [INFO] ------------------------------------------------------------------------",
"undertow-servlet]USD ls dependency-reduced-pom.xml pom.xml README src target",
"mkdir -p ocp/deployments",
"cp target/undertow-servlet.jar ocp/deployments/",
"oc login",
"oc new-project jdk-bin-demo",
"oc new-build --binary=true --name=jdk-us-app --image-stream=java:11 --> Found image c1f5b31 (2 months old) in image stream \"openshift/java:11\" under tag \"latest\" for \"java:11\" Java Applications ----------------- Platform for building and running plain Java applications (fat-jar and flat classpath) --> Creating resources with label build=jdk-us-app imagestream \"jdk-us-app\" created buildconfig \"jdk-us-app\" created --> Success Application is not exposed. You can expose services to the outside world by executing one or more of the commands below: 'oc expose svc/jdk-us-app'",
"oc start-build jdk-us-app --from-dir=./ocp --follow Uploading directory \"ocp\" as binary input for the build build \"jdk-us-app-1\" started Receiving source from STDIN as archive ================================================================== Starting S2I Java Build .. S2I source build with plain binaries detected Copying binaries from /tmp/src/deployments to /deployments ... done Pushing image 172.30.197.203:5000/jdk-bin-demo/jdk-us-app:latest Pushed 0/6 layers, 2% complete Pushed 1/6 layers, 24% complete Pushed 2/6 layers, 36% complete Pushed 3/6 layers, 54% complete Pushed 4/6 layers, 71% complete Pushed 5/6 layers, 95% complete Pushed 6/6 layers, 100% complete Push successful",
"oc new-app jdk-us-app --> Found image 66f4e0b (About a minute old) in image stream \"jdk-bin-demo/jdk-us-app\" under tag \"latest\" for \"jdk-us-app\" jdk-bin-demo/jdk-us-app-1:c1dbfb7a ---------------------------------- Platform for building and running plain Java applications (fat-jar and flat classpath) Tags: builder, java * This image will be deployed in deployment config \"jdk-us-app\" * Ports 8080/tcp, 8443/tcp, 8778/tcp will be load balanced by service \"jdk-us-app\" * Other containers can access this service through the hostname \"jdk-us-app\" --> Creating resources deploymentconfig \"jdk-us-app\" created service \"jdk-us-app\" created --> Success Run 'oc status' to view your app.",
"oc expose svc/jdk-us-app route \"jdk-us-app\" exposed",
"oc get route"
]
| https://docs.redhat.com/en/documentation/red_hat_build_of_openjdk/8/html/using_source-to-image_for_openshift_with_red_hat_build_of_openjdk_8/using-java-s2i-openshift |
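The build-from-source and binary-deployment procedures above can be strung together into a single shell session. The sketch below is illustrative only: the project name, application name, and JAR path are assumptions taken from the undertow-servlet example shown in the command listing, and the java:11 image stream tag should be replaced by whichever Java builder image stream is available in your cluster.
# Build the application locally and stage the JAR for a binary build
mvn clean package
mkdir -p ocp/deployments
cp target/undertow-servlet.jar ocp/deployments/
# Log in, create a project, and run the binary build
oc login
oc new-project jdk-bin-demo
oc new-build --binary=true --name=jdk-us-app --image-stream=java:11
oc start-build jdk-us-app --from-dir=./ocp --follow
# Deploy the resulting image and expose it as a route
oc new-app jdk-us-app
oc expose svc/jdk-us-app
oc get route jdk-us-app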
Installing | Installing Red Hat Advanced Cluster Security for Kubernetes 4.5 Installing Red Hat Advanced Cluster Security for Kubernetes Red Hat OpenShift Documentation Team | [
"/sys/kernel/btf/vmlinux /boot/vmlinux-<kernel-version> /lib/modules/<kernel-version>/vmlinux-<kernel-version> /lib/modules/<kernel-version>/build/vmlinux /usr/lib/modules/<kernel-version>/kernel/vmlinux /usr/lib/debug/boot/vmlinux-<kernel-version> /usr/lib/debug/boot/vmlinux-<kernel-version>.debug /usr/lib/debug/lib/modules/<kernel-version>/vmlinux",
"spec: central: declarativeConfiguration: configMaps: - name: \"<declarative-configs>\" 1 secrets: - name: \"<sensitive-declarative-configs>\" 2",
"CREATE USER stackrox WITH PASSWORD <password>;",
"CREATE DATABASE stackrox;",
"\\connect stackrox",
"CREATE SCHEMA stackrox;",
"REVOKE CREATE ON SCHEMA public FROM PUBLIC; REVOKE USAGE ON SCHEMA public FROM PUBLIC; REVOKE ALL ON DATABASE stackrox FROM PUBLIC;",
"CREATE ROLE readwrite;",
"GRANT CONNECT ON DATABASE stackrox TO readwrite;",
"GRANT USAGE ON SCHEMA stackrox TO readwrite; GRANT USAGE, CREATE ON SCHEMA stackrox TO readwrite; GRANT SELECT, INSERT, UPDATE, DELETE ON ALL TABLES IN SCHEMA stackrox TO readwrite; ALTER DEFAULT PRIVILEGES IN SCHEMA stackrox GRANT SELECT, INSERT, UPDATE, DELETE ON TABLES TO readwrite; GRANT USAGE ON ALL SEQUENCES IN SCHEMA stackrox TO readwrite; ALTER DEFAULT PRIVILEGES IN SCHEMA stackrox GRANT USAGE ON SEQUENCES TO readwrite;",
"GRANT readwrite TO stackrox;",
"oc create secret generic external-db-password \\ 1 --from-file=password=<password.txt> 2",
"spec: central: declarativeConfiguration: configMaps: - name: <declarative-configs> 1 secrets: - name: <sensitive-declarative-configs> 2",
"spec: tls: additionalCAs: - name: db-ca content: | <certificate>",
"oc -n stackrox get secret central-htpasswd -o go-template='{{index .data \"password\" | base64decode}}'",
"oc -n stackrox get route central -o jsonpath=\"{.status.ingress[0].host}\"",
"helm repo add rhacs https://mirror.openshift.com/pub/rhacs/charts/",
"helm search repo -l rhacs/",
"helm install -n stackrox --create-namespace stackrox-central-services rhacs/central-services --set imagePullSecrets.username=<username> \\ 1 --set imagePullSecrets.password=<password> \\ 2 --set central.exposure.route.enabled=true",
"helm install -n stackrox --create-namespace stackrox-central-services rhacs/central-services --set imagePullSecrets.username=<username> \\ 1 --set imagePullSecrets.password=<password> \\ 2 --set central.exposure.loadBalancer.enabled=true",
"helm install -n stackrox --create-namespace stackrox-central-services rhacs/central-services --set imagePullSecrets.username=<username> \\ 1 --set imagePullSecrets.password=<password> 2",
"env: proxyConfig: | url: http://proxy.name:port username: username password: password excludes: - some.domain",
"env: proxyConfig: | url: http://proxy.name:port username: username password: password excludes: - some.domain",
"htpasswd: | admin:<bcrypt-hash>",
"central: declarativeConfiguration: mounts: configMaps: - declarative-configs secrets: - sensitive-declarative-configs",
"helm install -n stackrox --create-namespace stackrox-central-services rhacs/central-services -f <path_to_values_public.yaml> -f <path_to_values_private.yaml> 1",
"helm upgrade -n stackrox stackrox-central-services rhacs/central-services --reuse-values \\ 1 -f <path_to_init_bundle_file -f <path_to_values_public.yaml> -f <path_to_values_private.yaml>",
"arch=\"USD(uname -m | sed \"s/x86_64//\")\"; arch=\"USD{arch:+-USDarch}\"",
"curl -L -f -o roxctl \"https://mirror.openshift.com/pub/rhacs/assets/4.5.6/bin/Linux/roxctlUSD{arch}\"",
"chmod +x roxctl",
"echo USDPATH",
"roxctl version",
"arch=\"USD(uname -m | sed \"s/x86_64//\")\"; arch=\"USD{arch:+-USDarch}\"",
"curl -L -f -o roxctl \"https://mirror.openshift.com/pub/rhacs/assets/4.5.6/bin/Darwin/roxctlUSD{arch}\"",
"xattr -c roxctl",
"chmod +x roxctl",
"echo USDPATH",
"roxctl version",
"curl -f -O https://mirror.openshift.com/pub/rhacs/assets/4.5.6/bin/Windows/roxctl.exe",
"roxctl version",
"roxctl central generate interactive",
"Enter path to the backup bundle from which to restore keys and certificates (optional): Enter read templates from local filesystem (default: \"false\"): Enter path to helm templates on your local filesystem (default: \"/path\"): Enter PEM cert bundle file (optional): 1 Enter Create PodSecurityPolicy resources (for pre-v1.25 Kubernetes) (default: \"true\"): 2 Enter administrator password (default: autogenerated): Enter orchestrator (k8s, openshift): Enter default container images settings (development_build, stackrox.io, rhacs, opensource); it controls repositories from where to download the images, image names and tags format (default: \"development_build\"): Enter the directory to output the deployment bundle to (default: \"central-bundle\"): Enter the OpenShift major version (3 or 4) to deploy on (default: \"0\"): Enter whether to enable telemetry (default: \"false\"): Enter central-db image to use (if unset, a default will be used according to --image-defaults): Enter Istio version when deploying into an Istio-enabled cluster (leave empty when not running Istio) (optional): Enter the method of exposing Central (route, lb, np, none) (default: \"none\"): 3 Enter main image to use (if unset, a default will be used according to --image-defaults): Enter whether to run StackRox in offline mode, which avoids reaching out to the Internet (default: \"false\"): Enter list of secrets to add as declarative configuration mounts in central (default: \"[]\"): 4 Enter list of config maps to add as declarative configuration mounts in central (default: \"[]\"): 5 Enter the deployment tool to use (kubectl, helm, helm-values) (default: \"kubectl\"): Enter scanner-db image to use (if unset, a default will be used according to --image-defaults): Enter scanner image to use (if unset, a default will be used according to --image-defaults): Enter Central volume type (hostpath, pvc): 6 Enter external volume name for Central (default: \"stackrox-db\"): Enter external volume size in Gi for Central (default: \"100\"): Enter storage class name for Central (optional if you have a default StorageClass configured): Enter external volume name for Central DB (default: \"central-db\"): Enter external volume size in Gi for Central DB (default: \"100\"): Enter storage class name for Central DB (optional if you have a default StorageClass configured):",
"sudo chcon -Rt svirt_sandbox_file_t <full_volume_path>",
"./central-bundle/central/scripts/setup.sh",
"oc create -R -f central-bundle/central",
"oc get pod -n stackrox -w",
"cat central-bundle/password",
"overlays: - apiVersion: v1 1 kind: ConfigMap 2 name: my-configmap 3 patches: - path: .data 4 value: | 5 key1: data2 key2: data2",
"apiVersion: platform.stackrox.io kind: Central metadata: name: central spec: # overlays: - apiVersion: v1 kind: ServiceAccount name: central patches: - path: metadata.annotations.eks\\.amazonaws\\.com/role-arn value: \"\\\"arn:aws:iam:1234:role\\\"\"",
"apiVersion: platform.stackrox.io kind: Central metadata: name: central spec: # overlays: - apiVersion: apps/v1 kind: Deployment name: central patches: - path: spec.template.spec.containers[name:central].env[-1] value: | name: MY_ENV_VAR value: value",
"apiVersion: platform.stackrox.io kind: Central metadata: name: central spec: # overlays: - apiVersion: networking.k8s.io/v1 kind: NetworkPolicy name: allow-ext-to-central patches: - path: spec.ingress[-1] value: | ports: - port: 999 protocol: TCP",
"apiVersion: platform.stackrox.io kind: Central metadata: name: central spec: # overlays: - apiVersion: v1 kind: ConfigMap name: central-endpoints patches: - path: data value: | endpoints.yaml: | disableDefault: false",
"apiVersion: platform.stackrox.io kind: Central metadata: name: central spec: # overlays: - apiVersion: apps/v1 kind: Deployment name: central patches: - path: spec.template.spec.containers[-1] value: | name: nginx image: nginx ports: - containerPort: 8000 name: http protocol: TCP",
"export ROX_API_TOKEN=<api_token>",
"export ROX_CENTRAL_ADDRESS=<address>:<port_number>",
"roxctl -e \"USDROX_CENTRAL_ADDRESS\" central init-bundles generate --output <cluster_init_bundle_name> cluster_init_bundle.yaml",
"roxctl -e \"USDROX_CENTRAL_ADDRESS\" central init-bundles generate --output-secrets <cluster_init_bundle_name> cluster_init_bundle.yaml",
"oc create -f <init_bundle>.yaml \\ 1 -n <stackrox> 2",
"helm repo add rhacs https://mirror.openshift.com/pub/rhacs/charts/",
"helm search repo -l rhacs/",
"helm install -n stackrox --create-namespace stackrox-secured-cluster-services rhacs/secured-cluster-services -f <path_to_cluster_init_bundle.yaml> \\ 1 -f <path_to_pull_secret.yaml> \\ 2 --set clusterName=<name_of_the_secured_cluster> --set centralEndpoint=<endpoint_of_central_service> 3 --set scanner.disable=false 4",
"customize: envVars: ENV_VAR1: \"value1\" ENV_VAR2: \"value2\"",
"helm install -n stackrox --create-namespace stackrox-secured-cluster-services rhacs/secured-cluster-services -f <name_of_cluster_init_bundle.yaml> -f <path_to_values_public.yaml> -f <path_to_values_private.yaml> \\ 1 --set imagePullSecrets.username=<username> \\ 2 --set imagePullSecrets.password=<password> 3",
"helm install ... -f <(echo \"USDINIT_BUNDLE_YAML_SECRET\") 1",
"helm upgrade -n stackrox stackrox-secured-cluster-services rhacs/secured-cluster-services --reuse-values \\ 1 -f <path_to_values_public.yaml> -f <path_to_values_private.yaml>",
"arch=\"USD(uname -m | sed \"s/x86_64//\")\"; arch=\"USD{arch:+-USDarch}\"",
"curl -L -f -o roxctl \"https://mirror.openshift.com/pub/rhacs/assets/4.5.6/bin/Linux/roxctlUSD{arch}\"",
"chmod +x roxctl",
"echo USDPATH",
"roxctl version",
"arch=\"USD(uname -m | sed \"s/x86_64//\")\"; arch=\"USD{arch:+-USDarch}\"",
"curl -L -f -o roxctl \"https://mirror.openshift.com/pub/rhacs/assets/4.5.6/bin/Darwin/roxctlUSD{arch}\"",
"xattr -c roxctl",
"chmod +x roxctl",
"echo USDPATH",
"roxctl version",
"curl -f -O https://mirror.openshift.com/pub/rhacs/assets/4.5.6/bin/Windows/roxctl.exe",
"roxctl version",
"unzip -d sensor sensor-<cluster_name>.zip",
"./sensor/sensor.sh",
"roxctl sensor generate openshift --openshift-version <ocp_version> --name <cluster_name> --central \"USDROX_ENDPOINT\" 1",
"unzip -d sensor sensor-<cluster_name>.zip",
"./sensor/sensor.sh",
"oc get pod -n stackrox -w",
"kubectl get pod -n stackrox -w",
"overlays: - apiVersion: v1 1 kind: ConfigMap 2 name: my-configmap 3 patches: - path: .data 4 value: | 5 key1: data2 key2: data2",
"apiVersion: platform.stackrox.io kind: Central metadata: name: central spec: # overlays: - apiVersion: v1 kind: ServiceAccount name: central patches: - path: metadata.annotations.eks\\.amazonaws\\.com/role-arn value: \"\\\"arn:aws:iam:1234:role\\\"\"",
"apiVersion: platform.stackrox.io kind: Central metadata: name: central spec: # overlays: - apiVersion: apps/v1 kind: Deployment name: central patches: - path: spec.template.spec.containers[name:central].env[-1] value: | name: MY_ENV_VAR value: value",
"apiVersion: platform.stackrox.io kind: Central metadata: name: central spec: # overlays: - apiVersion: networking.k8s.io/v1 kind: NetworkPolicy name: allow-ext-to-central patches: - path: spec.ingress[-1] value: | ports: - port: 999 protocol: TCP",
"apiVersion: platform.stackrox.io kind: Central metadata: name: central spec: # overlays: - apiVersion: v1 kind: ConfigMap name: central-endpoints patches: - path: data value: | endpoints.yaml: | disableDefault: false",
"apiVersion: platform.stackrox.io kind: Central metadata: name: central spec: # overlays: - apiVersion: apps/v1 kind: Deployment name: central patches: - path: spec.template.spec.containers[-1] value: | name: nginx image: nginx ports: - containerPort: 8000 name: http protocol: TCP",
"oc get route central -n stackrox",
"oc get service central-loadbalancer -n stackrox",
"oc port-forward svc/central 18443:443 -n stackrox",
"oc new-project test",
"oc run shell --labels=app=shellshock,team=test-team --image=quay.io/stackrox-io/docs:example-vulnerables-cve-2014-6271 -n test oc run samba --labels=app=rce --image=quay.io/stackrox-io/docs:example-vulnerables-cve-2017-7494 -n test",
"helm repo add rhacs https://mirror.openshift.com/pub/rhacs/charts/",
"helm search repo -l rhacs/",
"helm install -n stackrox --create-namespace stackrox-central-services rhacs/central-services --set imagePullSecrets.username=<username> \\ 1 --set imagePullSecrets.password=<password> \\ 2 --set central.exposure.route.enabled=true",
"helm install -n stackrox --create-namespace stackrox-central-services rhacs/central-services --set imagePullSecrets.username=<username> \\ 1 --set imagePullSecrets.password=<password> \\ 2 --set central.exposure.loadBalancer.enabled=true",
"helm install -n stackrox --create-namespace stackrox-central-services rhacs/central-services --set imagePullSecrets.username=<username> \\ 1 --set imagePullSecrets.password=<password> 2",
"env: proxyConfig: | url: http://proxy.name:port username: username password: password excludes: - some.domain",
"env: proxyConfig: | url: http://proxy.name:port username: username password: password excludes: - some.domain",
"htpasswd: | admin:<bcrypt-hash>",
"central: declarativeConfiguration: mounts: configMaps: - declarative-configs secrets: - sensitive-declarative-configs",
"helm install -n stackrox --create-namespace stackrox-central-services rhacs/central-services -f <path_to_values_public.yaml> -f <path_to_values_private.yaml> 1",
"helm upgrade -n stackrox stackrox-central-services rhacs/central-services --reuse-values \\ 1 -f <path_to_init_bundle_file -f <path_to_values_public.yaml> -f <path_to_values_private.yaml>",
"arch=\"USD(uname -m | sed \"s/x86_64//\")\"; arch=\"USD{arch:+-USDarch}\"",
"curl -L -f -o roxctl \"https://mirror.openshift.com/pub/rhacs/assets/4.5.6/bin/Linux/roxctlUSD{arch}\"",
"chmod +x roxctl",
"echo USDPATH",
"roxctl version",
"arch=\"USD(uname -m | sed \"s/x86_64//\")\"; arch=\"USD{arch:+-USDarch}\"",
"curl -L -f -o roxctl \"https://mirror.openshift.com/pub/rhacs/assets/4.5.6/bin/Darwin/roxctlUSD{arch}\"",
"xattr -c roxctl",
"chmod +x roxctl",
"echo USDPATH",
"roxctl version",
"curl -f -O https://mirror.openshift.com/pub/rhacs/assets/4.5.6/bin/Windows/roxctl.exe",
"roxctl version",
"roxctl central generate interactive",
"Enter path to the backup bundle from which to restore keys and certificates (optional): Enter read templates from local filesystem (default: \"false\"): Enter path to helm templates on your local filesystem (default: \"/path\"): Enter PEM cert bundle file (optional): 1 Enter Create PodSecurityPolicy resources (for pre-v1.25 Kubernetes) (default: \"true\"): 2 Enter administrator password (default: autogenerated): Enter orchestrator (k8s, openshift): Enter default container images settings (development_build, stackrox.io, rhacs, opensource); it controls repositories from where to download the images, image names and tags format (default: \"development_build\"): Enter the directory to output the deployment bundle to (default: \"central-bundle\"): Enter the OpenShift major version (3 or 4) to deploy on (default: \"0\"): Enter whether to enable telemetry (default: \"false\"): Enter central-db image to use (if unset, a default will be used according to --image-defaults): Enter Istio version when deploying into an Istio-enabled cluster (leave empty when not running Istio) (optional): Enter the method of exposing Central (route, lb, np, none) (default: \"none\"): 3 Enter main image to use (if unset, a default will be used according to --image-defaults): Enter whether to run StackRox in offline mode, which avoids reaching out to the Internet (default: \"false\"): Enter list of secrets to add as declarative configuration mounts in central (default: \"[]\"): 4 Enter list of config maps to add as declarative configuration mounts in central (default: \"[]\"): 5 Enter the deployment tool to use (kubectl, helm, helm-values) (default: \"kubectl\"): Enter scanner-db image to use (if unset, a default will be used according to --image-defaults): Enter scanner image to use (if unset, a default will be used according to --image-defaults): Enter Central volume type (hostpath, pvc): 6 Enter external volume name for Central (default: \"stackrox-db\"): Enter external volume size in Gi for Central (default: \"100\"): Enter storage class name for Central (optional if you have a default StorageClass configured): Enter external volume name for Central DB (default: \"central-db\"): Enter external volume size in Gi for Central DB (default: \"100\"): Enter storage class name for Central DB (optional if you have a default StorageClass configured):",
"sudo chcon -Rt svirt_sandbox_file_t <full_volume_path>",
"./central-bundle/central/scripts/setup.sh",
"oc create -R -f central-bundle/central",
"oc get pod -n stackrox -w",
"cat central-bundle/password",
"export ROX_API_TOKEN=<api_token>",
"export ROX_CENTRAL_ADDRESS=<address>:<port_number>",
"roxctl -e \"USDROX_CENTRAL_ADDRESS\" central init-bundles generate --output <cluster_init_bundle_name> cluster_init_bundle.yaml",
"roxctl -e \"USDROX_CENTRAL_ADDRESS\" central init-bundles generate --output-secrets <cluster_init_bundle_name> cluster_init_bundle.yaml",
"oc create -f <init_bundle>.yaml \\ 1 -n <stackrox> 2",
"kubectl create namespace stackrox 1 kubectl create -f <init_bundle>.yaml \\ 2 -n <stackrox> 3",
"helm repo add rhacs https://mirror.openshift.com/pub/rhacs/charts/",
"helm search repo -l rhacs/",
"customize: envVars: ENV_VAR1: \"value1\" ENV_VAR2: \"value2\"",
"helm install -n stackrox --create-namespace stackrox-secured-cluster-services rhacs/secured-cluster-services -f <name_of_cluster_init_bundle.yaml> -f <path_to_values_public.yaml> -f <path_to_values_private.yaml> \\ 1 --set imagePullSecrets.username=<username> \\ 2 --set imagePullSecrets.password=<password> 3",
"helm install ... -f <(echo \"USDINIT_BUNDLE_YAML_SECRET\") 1",
"helm upgrade -n stackrox stackrox-secured-cluster-services rhacs/secured-cluster-services --reuse-values \\ 1 -f <path_to_values_public.yaml> -f <path_to_values_private.yaml>",
"arch=\"USD(uname -m | sed \"s/x86_64//\")\"; arch=\"USD{arch:+-USDarch}\"",
"curl -L -f -o roxctl \"https://mirror.openshift.com/pub/rhacs/assets/4.5.6/bin/Linux/roxctlUSD{arch}\"",
"chmod +x roxctl",
"echo USDPATH",
"roxctl version",
"arch=\"USD(uname -m | sed \"s/x86_64//\")\"; arch=\"USD{arch:+-USDarch}\"",
"curl -L -f -o roxctl \"https://mirror.openshift.com/pub/rhacs/assets/4.5.6/bin/Darwin/roxctlUSD{arch}\"",
"xattr -c roxctl",
"chmod +x roxctl",
"echo USDPATH",
"roxctl version",
"curl -f -O https://mirror.openshift.com/pub/rhacs/assets/4.5.6/bin/Windows/roxctl.exe",
"roxctl version",
"unzip -d sensor sensor-<cluster_name>.zip",
"./sensor/sensor.sh",
"roxctl sensor generate openshift --openshift-version <ocp_version> --name <cluster_name> --central \"USDROX_ENDPOINT\" 1",
"unzip -d sensor sensor-<cluster_name>.zip",
"./sensor/sensor.sh",
"kubectl get pod -n stackrox -w",
"kubectl get service central-loadbalancer -n stackrox",
"kubectl port-forward svc/central 18443:443 -n stackrox",
"kubectl create namespace test",
"kubectl run shell --labels=app=shellshock,team=test-team --image=quay.io/stackrox-io/docs:example-vulnerables-cve-2014-6271 -n test kubectl run samba --labels=app=rce --image=quay.io/stackrox-io/docs:example-vulnerables-cve-2017-7494 -n test",
"oc delete namespace stackrox",
"kubectl delete namespace stackrox",
"oc get clusterrole,clusterrolebinding,role,rolebinding,psp -o name | grep stackrox | xargs oc delete --wait",
"oc delete scc -l \"app.kubernetes.io/name=stackrox\"",
"oc delete ValidatingWebhookConfiguration stackrox",
"kubectl get clusterrole,clusterrolebinding,role,rolebinding,psp -o name | grep stackrox | xargs kubectl delete --wait",
"kubectl delete ValidatingWebhookConfiguration stackrox",
"for namespace in USD(oc get ns | tail -n +2 | awk '{print USD1}'); do oc label namespace USDnamespace namespace.metadata.stackrox.io/id-; oc label namespace USDnamespace namespace.metadata.stackrox.io/name-; oc annotate namespace USDnamespace modified-by.stackrox.io/namespace-label-patcher-; done",
"for namespace in USD(kubectl get ns | tail -n +2 | awk '{print USD1}'); do kubectl label namespace USDnamespace namespace.metadata.stackrox.io/id-; kubectl label namespace USDnamespace namespace.metadata.stackrox.io/name-; kubectl annotate namespace USDnamespace modified-by.stackrox.io/namespace-label-patcher-; done"
]
| https://docs.redhat.com/en/documentation/red_hat_advanced_cluster_security_for_kubernetes/4.5/html-single/installing/index |
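The secured-cluster commands in the listing above follow a fixed order: generate an init bundle from Central, apply it on the cluster to be secured, and only then install the secured-cluster-services Helm chart. The condensed sketch below is an assumption-laden illustration, not a complete installation: my-cluster, the Central address, and the bundle file name are placeholders, and additional chart options shown in the listing (such as image pull secrets) are omitted.
export ROX_API_TOKEN=<api_token>
export ROX_CENTRAL_ADDRESS=<address>:<port_number>
# Create the project and apply the init bundle generated from Central
oc new-project stackrox
roxctl -e "$ROX_CENTRAL_ADDRESS" central init-bundles generate my-cluster --output cluster_init_bundle.yaml
oc create -f cluster_init_bundle.yaml -n stackrox
# Install the secured-cluster-services chart using that bundle
helm repo add rhacs https://mirror.openshift.com/pub/rhacs/charts/
helm install -n stackrox stackrox-secured-cluster-services rhacs/secured-cluster-services \
    -f cluster_init_bundle.yaml \
    --set clusterName=my-cluster \
    --set centralEndpoint="$ROX_CENTRAL_ADDRESS"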
APIs | APIs Red Hat Advanced Cluster Management for Kubernetes 2.11 APIs | null | https://docs.redhat.com/en/documentation/red_hat_advanced_cluster_management_for_kubernetes/2.11/html/apis/index |
7.228. vsftpd | 7.228.1. RHBA-2015:1408 - vsftpd bug fix update Updated vsftpd packages that fix two bugs are now available for Red Hat Enterprise Linux 6. The vsftpd packages include a Very Secure File Transfer Protocol (FTP) daemon, which is used to serve files over a network. Bug Fixes BZ# 1063401 Prior to this update, the "local_max_rate" option did not work as expected. As a consequence, the transmission speed was significantly lower than expected. This update extends the types of the variables used for calculating and accumulating the amount of transferred data and postpones the start of evaluation until after the tenth evaluation. BZ# 1092877 Previously, the vsftpd server could not handle the use of "pam_exec.so" in the "pam.d" configuration file. Consequently, the vsftpd server considered new processes created by the "pam_exec.so" module to be its own and therefore attempted to catch them. When the processes were caught by "pam_exec.so", the vsftpd server became unresponsive. A patch has been applied to fix this bug, and the vsftpd server no longer hangs in the described situation. Users of vsftpd are advised to upgrade to these updated packages, which fix these bugs. The vsftpd daemon must be restarted for this update to take effect. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.7_technical_notes/package-vsftpd
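The "local_max_rate" option mentioned in the first fix is set in /etc/vsftpd/vsftpd.conf and is expressed in bytes per second. The lines below are a hypothetical illustration of capping local users at roughly 1 MiB/s and then restarting the daemon, as the advisory requires; the value shown is an assumption, not a recommendation:
# /etc/vsftpd/vsftpd.conf -- limit local authenticated users to about 1 MiB/s
local_max_rate=1048576
# restart the daemon so the updated package and setting take effect
service vsftpd restart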
Chapter 5. Reference | Chapter 5. Reference 5.1. Version details The following table lists versions of technologies used in this image. Table 5.1. Technology versions used in this image Technology Version Red Hat build of OpenJDK 11 Jolokia 1.6.2 Maven 3.6 5.2. Information environment variables The following information environment variables are designed to convey information about the image. Do not modify these variables. Table 5.2. Information environment variables Variable Name Value HOME /home/jboss JAVA_HOME /usr/lib/jvm/java-11 JAVA_VENDOR openjdk JAVA_VERSION 11 JOLOKIA_VERSION 1.6.2 LD_PRELOAD libnss_wrapper.so MAVEN_VERSION 3.6 NSS_WRAPPER_GROUP /etc/group NSS_WRAPPER_PASSWD /home/jboss/passwd 5.3. Configuration environment variables Configuration environment variables are designed to conveniently adjust the image without requiring a rebuild, and should be set by the user as desired. Table 5.3. Configuration environment variables Variable name Description Default value Example value AB_JOLOKIA_CONFIG If set uses this file (including path) as Jolokia JVM agent properties (as described in the Jolokia reference manual ). If not set, the /opt/jolokia/etc/jolokia.properties will be created using the settings as defined in the manual. Otherwise the rest of the settings in this document are ignored. - /opt/jolokia/custom.properties AB_JOLOKIA_DISCOVERY_ENABLED Enable Jolokia discovery. false true AB_JOLOKIA_HOST Host address to bind to. 0.0.0.0 127.0.0.1 AB_JOLOKIA_ID Agent ID to use, which is the container id. USDHOSTNAME openjdk-app-1-xqlsj AB_JOLOKIA_OFF If set disables activation of Joloka (that is, echos an empty value). Jolokia is enabled true AB_JOLOKIA_OPTS Additional options to be appended to the agent configuration. They should be specified in the format key=value,key=value,... . - backlog=20 AB_JOLOKIA_PASSWORD Password for basic authentication. By default authentication is switched off. - mypassword AB_JOLOKIA_PORT Port to listen to. 8778 5432 AB_JOLOKIA_USER User for basic authentication. jolokia myusername AB_PROMETHEUS_ENABLE Enable the use of the Prometheus agent. - True AB_PROMETHEUS_JMX_EXPORTER_PORT Port to use for the Prometheus JMX Exporter. - 9799 CONTAINER_CORE_LIMIT A calculated core limit as described in the CFS Bandwidth Control . - 2 CONTAINER_MAX_MEMORY Memory limit assigned to the container. - 1024 GC_ADAPTIVE_SIZE_POLICY_WEIGHT The weighting given to the current garbage collector time versus garbage collector times. - 90 GC_CONTAINER_OPTIONS Specify Java GC to use. The value of this variable should contain the necessary JRE command-line options to specify the required GC, which will override the default value. -XX:+UseParallelOldGC -XX:+UseG1GC GC_MAX_HEAP_FREE_RATIO Maximum percentage of heap free after GC to avoid shrinkage. - 40 GC_MAX_METASPACE_SIZE The maximum metaspace size. - 100 GC_METASPACE_SIZE The initial metaspace size. - 20 GC_MIN_HEAP_FREE_RATIO Minimum percentage of heap free after GC to avoid expansion. - 20 GC_TIME_RATIO Specifies the ratio of the time spent outside the garbage collection (for example, the time spent for application execution) to the time spent in the garbage collection. - 4 HTTPS_PROXY The location of the HTTPS proxy. This takes precedence over http_proxy and HTTP_PROXY , and will be used for both Maven builds and Java runtime. - [email protected]:8080 HTTP_PROXY The location of the HTTP proxy. This will be used for both Maven builds and Java runtime. 
- 127.0.0.1:8080 JAVA_APP_DIR The directory where the application resides. All paths in your application are relative to this directory. - myapplication/ JAVA_ARGS Arguments passed to the java application. - - JAVA_CLASSPATH The classpath to use. If not given, the startup script checks for a file JAVA_APP_DIR/classpath and uses its content literally as classpath. If this file does not exists all jars in the app dir are added ( classes:JAVA_APP_DIR/ ). - - JAVA_DEBUG If set remote debugging will be switched on. false true JAVA_DEBUG_PORT Port used for remote debugging. 5005 8787 JAVA_DIAGNOSTICS Set this to print some diagnostics information to standard output during the command is running. false true JAVA_INITIAL_MEM_RATIO It is used when no -Xms option is given in JAVA_OPTS . This is used to calculate a default initial heap memory based on the maximum heap memory. If used in a container without any memory constraints for the container then this option has no effect. If there is a memory constraint then -Xms is set to a ratio of the -Xmx memory as set here. The default is 25 which means 25% of the -Xmx is used as the initial heap size. You can skip this mechanism by setting this value to 0 in which case no -Xms option is added. 25 25 JAVA_LIB_DIR Directory holding the Java jar files as well as an optional classpath file which holds the classpath. Either as a single-line classpath (colon separated) or with jar files listed line by line. If not set JAVA_LIB_DIR is set to the value of JAVA_APP_DIR . JAVA_APP_DIR - JAVA_MAIN_CLASS A main class to use as argument for java . When this environment variable is given, all jar files in JAVA_APP_DIR are added to the classpath as well as JAVA_LIB_DIR . - com.example.MainClass JAVA_MAX_INITIAL_MEM It is used when no -Xms option is given in JAVA_OPTS . This is used to calculate the maximum value of the initial heap memory. If used in a container without any memory constraints for the container then this option has no effect. If there is a memory constraint then -Xms is limited to the value set here. The default is 4096 which means the calculated value of -Xms will never be greater than 4096. The value of this variable is expressed in MB. 4096 4096 JAVA_MAX_MEM_RATIO It is used when no -Xmx option is given in JAVA_OPTS . This is used to calculate a default maximum heap memory based on a containers restriction. If used in a container without any memory constraints for the container then this option has no effect. If there is a memory constraint then -Xmx is set to a ratio of the container available memory as set here. The default is 50 which means 50% of the available memory is used as an upper boundary. You can skip this mechanism by setting this value to 0 in which case no -Xmx option is added. 50 - JAVA_OPTS JVM options passed to the java command. - -verbose:class JAVA_OPTS_APPEND User-specified Java options to be appended to generated options in JAVA_OPTS. - -Dsome.property=foo LOGGING_SCRIPT_DEBUG Set to true to enable script debugging. Deprecates SCRIPT_DEBUG . true True MAVEN_ARGS Arguments to use when calling Maven, replacing the default package hawt-app:build -DskipTests -e . Ensure that you run the hawt-app:build goal (when not already bound to the package execution phase), otherwise the startup scripts will not work. package hawt-app:build -DskipTests -e -e -Popenshift -DskipTests -Dcom.redhat.xpaas.repo.redhatga package MAVEN_ARGS_APPEND Additional Maven arguments. 
- -X -am -pl MAVEN_CLEAR_REPO If set then the Maven repository is removed after the artifact is built. This is useful for for reducing the size of the created application image small, but prevents incremental builds. Will be overridden by S2I_ENABLE_INCREMENTAL_BUILDS . false - MAVEN_LOCAL_REPO Directory to use as the local Maven repository. - /home/jboss/.m2/repository MAVEN_MIRRORS If set, multi-mirror support is enabled, and other MAVEN_MIRROR_* variables will be prefixed. For example, DEV_ONE_MAVEN_MIRROR_URL and QE_TWO_MAVEN_MIRROR_URL . - dev-one,qe-two MAVEN_MIRROR_URL The base URL of a mirror used for retrieving artifacts. - http://10.0.0.1:8080/repository/internal/ MAVEN_REPOS If set, multi-repo support is enabled, and other MAVEN_REPO_* variables will be prefixed. For example, DEV_ONE_MAVEN_REPO_URL and QE_TWO_MAVEN_REPO_URL . - dev-one,qe-two MAVEN_S2I_ARTIFACT_DIRS Relative paths of source directories to scan for build output, which will be copied to USDDEPLOY_DIR . target target MAVEN_S2I_GOALS Space-separated list of goals to be executed with Maven build. For example, mvn USDMAVEN_S2I_GOALS . package package install MAVEN_SETTINGS_XML Location of custom Maven settings.xml file to use. - /home/jboss/.m2/settings.xml NO_PROXY A comma-separated lists of hosts, IP addresses or domains that can be accessed directly. This will be used for both Maven builds and Java runtime. - foo.example.com,bar.example.com S2I_ARTIFACTS_DIR Location mount for artifacts persisted with save-artifacts script, which are used with incremental builds. This should not be overridden by end users. - USD{S2I_DESTINATION_DIR}/artifacts} S2I_DESTINATION_DIR Root directory for S2I mount, as specified by the io.openshift.s2i.destination label. This should not be overridden by end users. - /tmp S2I_ENABLE_INCREMENTAL_BUILDS Do not remove source and intermediate build files so they can be saved for use with future builds. true true S2I_IMAGE_SOURCE_MOUNTS Comma-separated list of relative paths in source directory that should be included in the image. List may include wildcards, which are expanded using find. By default, the contents of mounted directories are processed similarly to source folders, where the contents of USDS2I_SOURCE_CONFIGURATION_DIR , USDS2I_SOURCE_DATA_DIR , and USDS2I_SOURCE_DEPLOYMENTS_DIR are copied to their respective target directories. Alternatively, if an install.sh file is located in the root of the mount point, it is executed instead. Deprecates CUSTOM_INSTALL_DIRECTORIES . - extras/*` S2I_SOURCE_CONFIGURATION_DIR Relative path to directory containing application configuration files to be copied over to the product configuration directory, see S2I_TARGET_CONFIGURATION_DIR . configuration configuration S2I_SOURCE_DATA_DIR Relative path to directory containing application data files to be copied over to the product data directory, see S2I_TARGET_DATA_DIR . data data S2I_SOURCE_DEPLOYMENTS_DIR Relative path to directory containing binary files to be copied over to the product deployment directory, see S2I_TARGET_DEPLOYMENTS_DIR . deployments deployments S2I_SOURCE_DIR Location of mount for source code to be built. This should not be overridden by end users. - USD{S2I_DESTINATION_DIR}/src} S2I_TARGET_CONFIGURATION_DIR Absolute path to which files located in USDS2I_SOURCE_DIR USDS2I_SOURCE_CONFIGURATION_DIR are copied. - /opt/eap/standalone/configuration S2I_TARGET_DATA_DIR Absolute path to which files located in USDS2I_SOURCE_DIR/USDS2I_SOURCE_DATA_DIR are copied. 
- /opt/eap/standalone/data S2I_TARGET_DEPLOYMENTS_DIR Absolute path to which files located in USDS2I_SOURCE_DIR/USDS2I_SOURCE_DEPLOYMENTS_DIR are copied. Additionally, this is the directory to which build output is copied. - /deployments http_proxy The location of the HTTP proxy. This takes precedence over HTTP_PROXY and is use for both Maven builds and Java runtime. - http://127.0.0.1:8080 https_proxy The location of the HTTPS proxy. This takes precedence over HTTPS_PROXY , http_proxy , and HTTP_PROXY , is use for both Maven builds and Java runtime. - myuser:[email protected]:8080 no_proxy A comma-separated lists of hosts, IP addresses or domains that can be accessed directly. This takes precedence over NO_PROXY and is use for both Maven builds and Java runtime. - *.example.com prefix_MAVEN_MIRROR_ID ID to be used for the specified mirror. If omitted, a unique ID is generated. - internal-mirror prefix_MAVEN_MIRROR_OF Repository IDs mirrored by this entry. external:* - prefix_MAVEN_MIRROR_URL The URL of the mirror. - http://10.0.0.1:8080/repository/internal prefix_MAVEN_REPO_DIRECTORY_PERMISSIONS Maven repository directory permissions. - 775 prefix_MAVEN_REPO_FILE_PERMISSIONS Maven repository file permissions. - 664 prefix_MAVEN_REPO_HOST Maven repository host (if not using fully defined URL, it will fall back to service). - repo.example.com prefix_MAVEN_REPO_ID Maven repository id. - my-repo-id prefix_MAVEN_REPO_LAYOUT Maven repository layout. - default prefix_MAVEN_REPO_NAME Maven repository name. - my-repo-name prefix_MAVEN_REPO_PASSPHRASE Maven repository passphrase. - maven1! prefix_MAVEN_REPO_PASSWORD Maven repository password. - maven1! prefix_MAVEN_REPO_PATH Maven repository path (if not using fully defined URL, it will fall back to service). - /maven2/ prefix_MAVEN_REPO_PORT Maven repository port (if not using fully defined URL, it will fall back to service). - 8080 prefix_MAVEN_REPO_PRIVATE_KEY Maven repository private key. - USD{user.home}/.ssh/id_dsa prefix_MAVEN_REPO_PROTOCOL Maven repository protocol (if not using fully defined URL, it will fall back to service). - http prefix_MAVEN_REPO_RELEASES_CHECKSUM_POLICY Maven repository releases checksum policy. - warn prefix_MAVEN_REPO_RELEASES_ENABLED Maven repository releases enabled. - true prefix_MAVEN_REPO_RELEASES_UPDATE_POLICY Maven repository releases update policy. - always prefix_MAVEN_REPO_SERVICE Maven repository service to look up if prefix_MAVEN_REPO_URL not specified. - buscentr-myapp prefix_MAVEN_REPO_SNAPSHOTS_CHECKSUM_POLICY Maven repository snapshots checksum policy. - warn prefix_MAVEN_REPO_SNAPSHOTS_ENABLED Maven repository snapshots enabled. - true prefix_MAVEN_REPO_SNAPSHOTS_UPDATE_POLICY Maven repository snapshots update policy. - always prefix_MAVEN_REPO_URL Maven repository URL (fully defined). - http://repo.example.com:8080/maven2/ prefix_MAVEN_REPO_USERNAME Maven repository username. - mavenUser 5.3.1. Configuration environment variables with default values The following configuration Environment variables have default values specified that can be overridden. Table 5.4. Configuration environment variables with default values Variable name Description Defaul value AB_JOLOKIA_AUTH_OPENSHIFT Switch on client authentication for OpenShift TLS communication. The value of this parameter can be a relative distinguished name which must be contained in a presented client's certificate. Enabling this parameter will automatically switch Jolokia into HTTPS communication mode. 
The default CA cert is set to /var/run/secrets/kubernetes.io/serviceaccount/ca.crt . true AB_JOLOKIA_HTTPS Switch on secure communication with HTTPS. By default self-signed server certificates are generated if no serverCert configuration is given in AB_JOLOKIA_OPTS . true AB_JOLOKIA_PASSWORD_RANDOM Determines if a random AB_JOLOKIA_PASSWORD should be generated. Set to true to generate a random password. The generated value will be written to /opt/jolokia/etc/jolokia.pw . true AB_PROMETHEUS_JMX_EXPORTER_CONFIG Path to the configuration to use for the Prometheus JMX exporter. /opt/jboss/container/prometheus/etc/jmx-exporter-config.yaml S2I_SOURCE_DEPLOYMENTS_FILTER Space-separated list of filters to be applied when copying deployments. Defaults to * . * 5.4. Exposed ports The following table lists the exposed ports. Port Number Description 8080 HTTP 8443 HTTPS 8778 Jolokia Monitoring 5.5. Maven settings Default Maven settings with Maven arguments The default value of the MAVEN_ARGS environment variable contains the -Dcom.redhat.xpaas.repo.redhatga property. This property activates a profile with the https://maven.repository.redhat.com/ga/ repository within the default jboss-settings.xml file, which resides in the S2I for OpenShift image. When specifying a custom value for the MAVEN_ARGS environment variable, if a custom source_dir/configuration/settings.xml file is not specified, the default jboss-settings.xml in the image is used. To specify which Maven repository will be used within the default jboss-settings.xml, there are two properties: The -Dcom.redhat.xpaas.repo.redhatga property, to use the https://maven.repository.redhat.com/ga/ repository. The -Dcom.redhat.xpaas.repo.jbossorg property, to use the https://repository.jboss.org/nexus/content/groups/public/ repository. Provide custom Maven settings To specify a custom settings.xml file along with Maven arguments, create the source_dir/configuration directory and place the settings.xml file inside. The resulting path should be similar to: source_dir/configuration/settings.xml . Revised on 2024-05-09 16:48:40 UTC | null | https://docs.redhat.com/en/documentation/red_hat_build_of_openjdk/11/html/using_source-to-image_for_openshift_with_red_hat_build_of_openjdk_11/reference-s2i-openshift
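Most of the configuration variables in the tables above are consumed either at build time (the MAVEN_* family) or at run time (the JAVA_* and AB_JOLOKIA_* families). The sketch below is one possible way of wiring a few of them into an existing build configuration and deployment with the oc client; the object name my-app is an assumption, and the values are examples rather than recommendations:
# Build time: point the S2I build at an internal Maven mirror, then rebuild
oc set env bc/my-app MAVEN_MIRROR_URL=http://nexus.example.com:8081/repository/maven-public/
oc start-build my-app --follow
# Run time: append JVM options and cap the heap at 60% of the container memory
# (use dc/my-app instead if the application runs as a DeploymentConfig)
oc set env deployment/my-app JAVA_OPTS_APPEND=-Dsome.property=foo JAVA_MAX_MEM_RATIO=60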
4.166. lsof | 4.166.1. RHEA-2011:1753 - lsof enhancement update An updated lsof package that adds one enhancement is now available for Red Hat Enterprise Linux 6. The lsof package provides the LiSt Open Files (LSOF) tool, which lists information about files that are open by processes running on a Linux/UNIX system. Enhancement BZ# 671480 This enhancement update adds the new option +|-e s to lsof, which exempts the file system with the path name "s" from being subjected to kernel function calls that might block. Note that only the first +|-e argument is processed and the rest is ignored. All users of lsof are advised to upgrade to this updated package, which adds this enhancement. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.2_technical_notes/lsof
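The new +|-e option is mainly useful when a particular mount point, such as an unresponsive NFS share, would otherwise make lsof hang inside blocking kernel calls. A hypothetical invocation is shown below; the path is an assumption used purely for illustration, and as noted above only the first +|-e argument is honored in this build:
# list open files while exempting a potentially unresponsive NFS mount
lsof -e /nfs/projects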
Chapter 3. Keeping Your System Up-to-Date | Chapter 3. Keeping Your System Up-to-Date This chapter describes the process of keeping your system up-to-date, which involves planning and configuring the way security updates are installed, applying changes introduced by newly updated packages, and using the Red Hat Customer Portal for keeping track of security advisories. 3.1. Maintaining Installed Software As security vulnerabilities are discovered, the affected software must be updated in order to limit any potential security risks. If the software is a part of a package within a Red Hat Enterprise Linux distribution that is currently supported, Red Hat is committed to releasing updated packages that fix the vulnerabilities as soon as possible. Often, announcements about a given security exploit are accompanied by a patch (or source code) that fixes the problem. This patch is then applied to the Red Hat Enterprise Linux package and tested and released as an erratum update. However, if an announcement does not include a patch, Red Hat developers first work with the maintainer of the software to fix the problem. Once the problem is fixed, the package is tested and released as an erratum update. If an erratum update is released for software used on your system, it is highly recommended that you update the affected packages as soon as possible to minimize the amount of time the system is potentially vulnerable. 3.1.1. Planning and Configuring Security Updates All software contains bugs. Often, these bugs can result in a vulnerability that can expose your system to malicious users. Packages that have not been updated are a common cause of computer intrusions. Implement a plan for installing security patches in a timely manner to quickly eliminate discovered vulnerabilities, so they cannot be exploited. Test security updates when they become available and schedule them for installation. Additional controls need to be used to protect the system during the time between the release of the update and its installation on the system. These controls depend on the exact vulnerability, but may include additional firewall rules, the use of external firewalls, or changes in software settings. Bugs in supported packages are fixed using the errata mechanism. An erratum consists of one or more RPM packages accompanied by a brief explanation of the problem that the particular erratum deals with. All errata are distributed to customers with active subscriptions through the Red Hat Subscription Management service. Errata that address security issues are called Red Hat Security Advisories . For more information on working with security errata, see Section 3.2.1, "Viewing Security Advisories on the Customer Portal" . For detailed information about the Red Hat Subscription Management service, including instructions on how to migrate from RHN Classic , see the documentation related to this service: Red Hat Subscription Management . 3.1.1.1. Using the Security Features of Yum The Yum package manager includes several security-related features that can be used to search, list, display, and install security errata. These features also make it possible to use Yum to install nothing but security updates. To check for security-related updates available for your system, enter the following command as root : yum check-update --security Note that the above command runs in a non-interactive mode, so it can be used in scripts for automated checking of whether there are any updates available.
The command returns an exit value of 100 when there are any security updates available and 0 when there are not. On encountering an error, it returns 1 . Analogously, use the following command to only install security-related updates: Use the updateinfo subcommand to display or act upon information provided by repositories about available updates. The updateinfo subcommand itself accepts a number of commands, some of which pertain to security-related uses. See Table 3.1, "Security-related commands usable with yum updateinfo" for an overview of these commands. Table 3.1. Security-related commands usable with yum updateinfo Command Description advisory [ advisories ] Displays information about one or more advisories. Replace advisories with an advisory number or numbers. cves Displays the subset of information that pertains to CVE ( Common Vulnerabilities and Exposures ). security or sec Displays all security-related information. severity [ severity_level ] or sev [ severity_level ] Displays information about security-relevant packages of the supplied severity_level . 3.1.2. Updating and Installing Packages When updating software on a system, it is important to download the update from a trusted source. An attacker can easily rebuild a package with the same version number as the one that is supposed to fix the problem but with a different security exploit and release it on the Internet. If this happens, using security measures, such as verifying files against the original RPM , does not detect the exploit. Thus, it is very important to only download RPMs from trusted sources, such as from Red Hat, and to check the package signatures to verify their integrity. See the Yum chapter of the Red Hat Enterprise Linux 7 System Administrator's Guide for detailed information on how to use the Yum package manager. 3.1.2.1. Verifying Signed Packages All Red Hat Enterprise Linux packages are signed with the Red Hat GPG key. GPG stands for GNU Privacy Guard , or GnuPG , a free software package used for ensuring the authenticity of distributed files. If the verification of a package signature fails, the package may be altered and therefore cannot be trusted. The Yum package manager allows for an automatic verification of all packages it installs or upgrades. This feature is enabled by default. To configure this option on your system, make sure the gpgcheck configuration directive is set to 1 in the /etc/yum.conf configuration file. Use the following command to manually verify package files on your filesystem: rpmkeys --checksig package_file.rpm See the Product Signing (GPG) Keys article on the Red Hat Customer Portal for additional information about Red Hat package-signing practices. 3.1.2.2. Installing Signed Packages To install verified packages (see Section 3.1.2.1, "Verifying Signed Packages" for information on how to verify packages) from your filesystem, use the yum install command as the root user as follows: yum install package_file.rpm Use a shell glob to install several packages at once. For example, the following commands installs all .rpm packages in the current directory: yum install *.rpm Important Before installing any security errata, be sure to read any special instructions contained in the erratum report and execute them accordingly. See Section 3.1.3, "Applying Changes Introduced by Installed Updates" for general instructions about applying changes made by errata updates. 3.1.3. 
Applying Changes Introduced by Installed Updates After downloading and installing security errata and updates, it is important to halt the usage of the old software and begin using the new software. How this is done depends on the type of software that has been updated. The following list itemizes the general categories of software and provides instructions for using updated versions after a package upgrade. Note In general, rebooting the system is the surest way to ensure that the latest version of a software package is used; however, this option is not always required, nor is it always available to the system administrator. Applications User-space applications are any programs that can be initiated by the user. Typically, such applications are used only when the user, a script, or an automated task utility launches them. Once such a user-space application is updated, halt any instances of the application on the system, and launch the program again to use the updated version. Kernel The kernel is the core software component for the Red Hat Enterprise Linux 7 operating system. It manages access to memory, the processor, and peripherals, and it schedules all tasks. Because of its central role, the kernel cannot be restarted without also rebooting the computer. Therefore, an updated version of the kernel cannot be used until the system is rebooted. KVM When the qemu-kvm and libvirt packages are updated, it is necessary to stop all guest virtual machines, reload relevant virtualization modules (or reboot the host system), and restart the virtual machines. Use the lsmod command to determine which modules from the following are loaded: kvm , kvm-intel , or kvm-amd . Then use the modprobe -r command to remove and subsequently the modprobe -a command to reload the affected modules. For example: Shared Libraries Shared libraries are units of code, such as glibc , that are used by a number of applications and services. Applications utilizing a shared library typically load the shared code when the application is initialized, so any applications using an updated library must be halted and relaunched. To determine which running applications link against a particular library, use the lsof command: lsof library For example, to determine which running applications link against the libwrap.so.0 library, type: This command returns a list of all the running programs that use TCP wrappers for host-access control. Therefore, any program listed must be halted and relaunched when the tcp_wrappers package is updated. systemd Services systemd services are persistent server programs usually launched during the boot process. Examples of systemd services include sshd or vsftpd . Because these programs usually persist in memory as long as a machine is running, each updated systemd service must be halted and relaunched after its package is upgraded. This can be done as the root user using the systemctl command: systemctl restart service_name Replace service_name with the name of the service you want to restart, such as sshd . Other Software Follow the instructions outlined by the resources linked below to correctly update the following applications. Red Hat Directory Server - See the Release Notes for the version of the Red Hat Directory Server in question at https://access.redhat.com/documentation/en-US/Red_Hat_Directory_Server/ .
Red Hat Enterprise Virtualization Manager - See the Installation Guide for the version of the Red Hat Enterprise Virtualization in question at https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Virtualization/ . | [
"~]# yum check-update --security Loaded plugins: langpacks, product-id, subscription-manager rhel-7-workstation-rpms/x86_64 | 3.4 kB 00:00:00 No packages needed for security; 0 packages available",
"~]# yum update --security",
"~]# lsmod | grep kvm kvm_intel 143031 0 kvm 460181 1 kvm_intel ~]# modprobe -r kvm-intel ~]# modprobe -r kvm ~]# modprobe -a kvm kvm-intel",
"~]# lsof /lib64/libwrap.so.0 COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME pulseaudi 12363 test mem REG 253,0 42520 34121785 /usr/lib64/libwrap.so.0.7.6 gnome-set 12365 test mem REG 253,0 42520 34121785 /usr/lib64/libwrap.so.0.7.6 gnome-she 12454 test mem REG 253,0 42520 34121785 /usr/lib64/libwrap.so.0.7.6"
]
| https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/security_guide/chap-Keeping_Your_System_Up-to-Date |
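A practical note on the exit codes above: because yum check-update --security returns 100 when security errata are pending, 0 when there are none, and 1 on error, the check can be wired into a cron job or similar scheduler. The following is a minimal sketch, not a Red Hat-supplied script; applying the errata automatically with yum -y update --security is a local policy assumption, and the logger tag is illustrative.

#!/bin/bash
# Automated security-errata check based on the documented exit codes.
yum -q check-update --security
rc=$?
if [ "$rc" -eq 100 ]; then
    logger -t security-errata "Security updates available; installing"
    yum -y update --security          # apply only security-related updates
elif [ "$rc" -eq 1 ]; then
    logger -t security-errata "yum check-update --security failed"
    exit 1
fi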
Chapter 3. Creating and building an application using the web console | Chapter 3. Creating and building an application using the web console 3.1. Before you begin Review Accessing the web console . You must be able to access a running instance of OpenShift Container Platform. If you do not have access, contact your cluster administrator. 3.2. Logging in to the web console You can log in to the OpenShift Container Platform web console to access and manage your cluster. Prerequisites You must have access to an OpenShift Container Platform cluster. Procedure Log in to the OpenShift Container Platform web console using your login credentials. You are redirected to the Projects page. For non-administrative users, the default view is the Developer perspective. For cluster administrators, the default view is the Administrator perspective. If you do not have cluster-admin privileges, you will not see the Administrator perspective in your web console. The web console provides two perspectives: the Administrator perspective and Developer perspective. The Developer perspective provides workflows specific to the developer use cases. Figure 3.1. Perspective switcher Use the perspective switcher to switch to the Developer perspective. The Topology view with options to create an application is displayed. 3.3. Creating a new project A project enables a community of users to organize and manage their content in isolation. Projects are OpenShift Container Platform extensions to Kubernetes namespaces. Projects have additional features that enable user self-provisioning. Users must receive access to projects from administrators. Cluster administrators can allow developers to create their own projects. In most cases, users automatically have access to their own projects. Each project has its own set of objects, policies, constraints, and service accounts. Prerequisites You are logged in to the OpenShift Container Platform web console. You are in the Developer perspective. You have the appropriate roles and permissions in a project to create applications and other workloads in OpenShift Container Platform. Procedure In the +Add view, select Project Create Project . In the Name field, enter user-getting-started . Optional: In the Display name field, enter Getting Started with OpenShift . Note Display name and Description fields are optional. Click Create . You have created your first project on OpenShift Container Platform. Additional resources Default cluster roles Viewing a project using the web console Providing access permissions to your project using the Developer perspective Deleting a project using the web console 3.4. Granting view permissions OpenShift Container Platform automatically creates a few special service accounts in every project. The default service account takes responsibility for running the pods. OpenShift Container Platform uses and injects this service account into every pod that launches. The following procedure creates a RoleBinding object for the default ServiceAccount object. The service account communicates with the OpenShift Container Platform API to learn about pods, services, and resources within the project. Prerequisites You are logged in to the OpenShift Container Platform web console. You have a deployed image. You are in the Administrator perspective. Procedure Navigate to User Management and then click RoleBindings . Click Create binding . Select Namespace role binding (RoleBinding) . In the Name field, enter sa-user-account . 
In the Namespace field, search for and select user-getting-started . In the Role name field, search for view and select view . In the Subject field, select ServiceAccount . In the Subject namespace field, search for and select user-getting-started . In the Subject name field, enter default . Click Create . Additional resources Understanding authentication RBAC overview 3.5. Deploying your first image The simplest way to deploy an application in OpenShift Container Platform is to run an existing container image. The following procedure deploys a front end component of an application called national-parks-app . The web application displays an interactive map. The map displays the location of major national parks across the world. Prerequisites You are logged in to the OpenShift Container Platform web console. You are in the Developer perspective. You have the appropriate roles and permissions in a project to create applications and other workloads in OpenShift Container Platform. Procedure From the +Add view in the Developer perspective, click Container images to open a dialog. In the Image Name field, enter the following: quay.io/openshiftroadshow/parksmap:latest Ensure that you have the current values for the following: Application: national-parks-app Name: parksmap Select Deployment as the Resource . Select Create route to the application . In the Advanced Options section, click Labels and add labels to better identify this deployment later. Labels help identify and filter components in the web console and in the command line. Add the following labels: app=national-parks-app component=parksmap role=frontend Click Create . You are redirected to the Topology page where you can see the parksmap deployment in the national-parks-app application. Additional resources Creating applications using the Developer perspective Viewing a project using the web console Viewing the topology of your application Deleting a project using the web console 3.5.1. Examining the pod OpenShift Container Platform leverages the Kubernetes concept of a pod, which is one or more containers deployed together on one host, and the smallest compute unit that can be defined, deployed, and managed. Pods are the rough equivalent of a machine instance, physical or virtual, to a container. The Overview panel enables you to access many features of the parksmap deployment. The Details and Resources tabs enable you to scale application pods, check build status, services, and routes. Prerequisites You are logged in to the OpenShift Container Platform web console. You are in the Developer perspective. You have a deployed image. Procedure Click D parksmap in the Topology view to open the Overview panel. Figure 3.2. Parksmap deployment The Overview panel includes tabs for Details , Resources , and Observe . The Details tab might be displayed by default. Table 3.1. Overview panel tab definitions Tab Definition Details Enables you to scale your application and view pod configuration such as labels, annotations, and the status of the application. Resources Displays the resources that are associated with the deployment. Pods are the basic units of OpenShift Container Platform applications. You can see how many pods are being used, what their status is, and you can view the logs. Services that are created for your pod and assigned ports are listed under the Services heading. Routes enable external access to the pods and a URL is used to access them. Observe View various Events and Metrics information as it relates to your pod.
Additional resources Interacting with applications and components Scaling application pods and checking builds and routes Labels and annotations used for the Topology view 3.5.2. Scaling the application In Kubernetes, a Deployment object defines how an application deploys. In most cases, users use Pod , Service , ReplicaSets , and Deployment resources together. In most cases, OpenShift Container Platform creates the resources for you. When you deploy the national-parks-app image, a deployment resource is created. In this example, only one Pod is deployed. The following procedure scales the national-parks-image to use two instances. Prerequisites You are logged in to the OpenShift Container Platform web console. You are in the Developer perspective. You have a deployed image. Procedure In the Topology view, click the national-parks-app application. Click the Details tab. Use the up arrow to scale the pod to two instances. Figure 3.3. Scaling application Note Application scaling can happen quickly because OpenShift Container Platform is launching a new instance of an existing image. Use the down arrow to scale the pod down to one instance. Additional resources Recommended practices for scaling the cluster Understanding horizontal pod autoscalers About the Vertical Pod Autoscaler Operator 3.6. Deploying a Python application The following procedure deploys a back-end service for the parksmap application. The Python application performs 2D geo-spatial queries against a MongoDB database to locate and return map coordinates of all national parks in the world. The deployed back-end service is named nationalparks . Prerequisites You are logged in to the OpenShift Container Platform web console. You are in the Developer perspective. You have a deployed image. Procedure From the +Add view in the Developer perspective, click Import from Git to open a dialog. Enter the following URL in the Git Repo URL field: https://github.com/openshift-roadshow/nationalparks-py.git A builder image is automatically detected. Note If the detected builder image is Dockerfile, select Edit Import Strategy . Select Builder Image and then click Python . Scroll to the General section. Ensure that you have the current values for the following: Application: national-parks-app Name: nationalparks Select Deployment as the Resource . Select Create route to the application . In the Advanced Options section, click Labels and add labels to better identify this deployment later. Labels help identify and filter components in the web console and in the command line. Add the following labels: app=national-parks-app component=nationalparks role=backend type=parksmap-backend Click Create . From the Topology view, select the nationalparks application. Note Click the Resources tab. In the Builds section, you can see your build running. Additional resources Adding services to your application Importing a codebase from Git to create an application Viewing the topology of your application Providing access permissions to your project using the Developer perspective Deleting a project using the web console 3.7. Connecting to a database Deploy and connect a MongoDB database where the national-parks-app application stores location information. Once you mark the national-parks-app application as a backend for the map visualization tool, parksmap deployment uses the OpenShift Container Platform discovery mechanism to display the map automatically. Prerequisites You are logged in to the OpenShift Container Platform web console.
You are in the Developer perspective. You have a deployed image. Procedure From the +Add view in the Developer perspective, click Container images to open a dialog. In the Image Name field, enter quay.io/centos7/mongodb-36-centos7 . In the Runtime icon field, search for mongodb . Scroll down to the General section. Ensure that you have the current values for the following: Application: national-parks-app Name: mongodb-nationalparks Select Deployment as the Resource . Unselect the checkbox to Create route to the application . In the Advanced Options section, click Deployment to add environment variables to add the following environment variables: Table 3.2. Environment variable names and values Name Value MONGODB_USER mongodb MONGODB_PASSWORD mongodb MONGODB_DATABASE mongodb MONGODB_ADMIN_PASSWORD mongodb Click Create . Additional resources Adding services to your application Viewing a project using the web console Viewing the topology of your application Providing access permissions to your project using the Developer perspective Deleting a project using the web console 3.7.1. Creating a secret The Secret object provides a mechanism to hold sensitive information such as passwords, OpenShift Container Platform client configuration files, private source repository credentials, and so on. Secrets decouple sensitive content from the pods. You can mount secrets into containers using a volume plugin or the system can use secrets to perform actions on behalf of a pod. The following procedure adds the secret nationalparks-mongodb-parameters and mounts it to the nationalparks workload. Prerequisites You are logged in to the OpenShift Container Platform web console. You are in the Developer perspective. You have a deployed image. Procedure From the Developer perspective, navigate to Secrets on the left hand navigation and click Secrets . Click Create Key/value secret . In the Secret name field, enter nationalparks-mongodb-parameters . Enter the following values for Key and Value : Table 3.3. Secret keys and values Key Value MONGODB_USER mongodb DATABASE_SERVICE_NAME mongodb-nationalparks MONGODB_PASSWORD mongodb MONGODB_DATABASE mongodb MONGODB_ADMIN_PASSWORD mongodb Click Create . Click Add Secret to workload . From the drop down menu, select nationalparks as the workload to add. Click Save . This change in configuration triggers a new rollout of the nationalparks deployment with the environment variables properly injected. Additional resources Understanding secrets 3.7.2. Loading data and displaying the national parks map You deployed the parksmap and nationalparks applications and then deployed the mongodb-nationalparks database. However, no data has been loaded into the database. Before loading the data, add the proper labels to the mongodb-nationalparks and nationalparks deployment. Prerequisites You are logged in to the OpenShift Container Platform web console. You are in the Developer perspective. You have a deployed image. Procedure From the Topology view, navigate to nationalparks deployment and click Resources and retrieve your route information. Copy and paste the URL into your web browser and add the following at the end of the URL: /ws/data/load Example output Items inserted in database: 2893 From the Topology view, navigate to parksmap deployment and click Resources and retrieve your route information. Copy and paste the URL into your web browser to view your national parks across the world map. Figure 3.4. 
National parks across the world Additional resources Providing access permissions to your project using the Developer perspective Labels and annotations used for the Topology view | [
"/ws/data/load",
"Items inserted in database: 2893"
]
| https://docs.redhat.com/en/documentation/openshift_container_platform/4.10/html/getting_started/openshift-web-console |
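The steps above are performed in the web console; the same objects can also be created with the oc command-line client. The following sketch assumes you are logged in with oc to the same cluster and the user-getting-started project; treating these commands as equivalents of the documented console steps (rather than the procedure itself) is an assumption.

# Create the key/value secret used by the nationalparks workload
oc create secret generic nationalparks-mongodb-parameters -n user-getting-started \
  --from-literal=MONGODB_USER=mongodb \
  --from-literal=DATABASE_SERVICE_NAME=mongodb-nationalparks \
  --from-literal=MONGODB_PASSWORD=mongodb \
  --from-literal=MONGODB_DATABASE=mongodb \
  --from-literal=MONGODB_ADMIN_PASSWORD=mongodb

# Inject the secret into the nationalparks deployment as environment variables
oc set env deployment/nationalparks -n user-getting-started \
  --from=secret/nationalparks-mongodb-parameters

# Scale the parksmap deployment, mirroring the up and down arrows in the Details tab
oc scale deployment/parksmap -n user-getting-started --replicas=2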
28.5. Troubleshooting NVDIMM | 28.5. Troubleshooting NVDIMM 28.5.1. Monitoring NVDIMM Health Using S.M.A.R.T. Some NVDIMMs support Self-Monitoring, Analysis and Reporting Technology (S.M.A.R.T.) interfaces for retrieving health information. Monitor NVDIMM health regularly to prevent data loss. If S.M.A.R.T. reports problems with the health status of an NVDIMM, replace it as described in Section 28.5.2, "Detecting and Replacing a Broken NVDIMM" . Prerequisites On some systems, the acpi_ipmi driver must be loaded to retrieve health information using the following command: Procedure To access the health information, use the following command: 28.5.2. Detecting and Replacing a Broken NVDIMM If you find error messages related to NVDIMM reported in your system log or by S.M.A.R.T., it might mean an NVDIMM device is failing. In that case, it is necessary to: Detect which NVDIMM device is failing, Back up data stored on it, and Physically replace the device. Procedure 28.3. Detecting and Replacing a Broken NVDIMM To detect the broken DIMM, use the following command: The badblocks field shows which NVDIMM is broken. Note its name in the dev field. In the following example, the NVDIMM named nmem0 is broken: Example 28.1. Health Status of NVDIMM Devices Use the following command to find the phys_id attribute of the broken NVDIMM: From the example, you know that nmem0 is the broken NVDIMM. Therefore, find the phys_id attribute of nmem0 . In the following example, the phys_id is 0x10 : Example 28.2. The phys_id Attributes of NVDIMMs Use the following command to find the memory slot of the broken NVDIMM: In the output, find the entry where the Handle identifier matches the phys_id attribute of the broken NVDIMM. The Locator field lists the memory slot used by the broken NVDIMM. In the following example, the nmem0 device matches the 0x0010 identifier and uses the DIMM-XXX-YYYY memory slot: Example 28.3. NVDIMM Memory Slot Listing Back up all data in the namespaces on the NVDIMM. If you do not back up the data before replacing the NVDIMM, the data will be lost when you remove the NVDIMM from your system. Warning In some cases, such as when the NVDIMM is completely broken, the backup might fail. To prevent this, regularly monitor your NVDIMM devices using S.M.A.R.T. as described in Section 28.5.1, "Monitoring NVDIMM Health Using S.M.A.R.T." and replace failing NVDIMMs before they break. Use the following command to list the namespaces on the NVDIMM: In the following example, the nmem0 device contains the namespace0.0 and namespace0.2 namespaces, which you need to back up: Example 28.4. NVDIMM Namespaces Listing Replace the broken NVDIMM physically. | [
"modprobe acpi_ipmi",
"ndctl list --dimms --health { \"dev\":\" nmem0 \", \"id\":\" 802c-01-1513-b3009166 \", \"handle\": 1 , \"phys_id\": 22 , \"health\": { \"health_state\":\" ok \", \"temperature_celsius\": 25.000000 , \"spares_percentage\": 99 , \"alarm_temperature\": false , \"alarm_spares\": false , \"temperature_threshold\": 50.000000 , \"spares_threshold\": 20 , \"life_used_percentage\": 1 , \"shutdown_state\":\" clean \" } }",
"ndctl list --dimms --regions --health --media-errors --human",
"ndctl list --dimms --regions --health --media-errors --human \"regions\":[ { \"dev\":\"region0\", \"size\":\"250.00 GiB (268.44 GB)\", \"available_size\":0, \"type\":\"pmem\", \"numa_node\":0, \"iset_id\":\"0xXXXXXXXXXXXXXXXX\", \"mappings\":[ { \"dimm\":\"nmem1\", \"offset\":\"0x10000000\", \"length\":\"0x1f40000000\", \"position\":1 }, { \"dimm\":\"nmem0\", \"offset\":\"0x10000000\", \"length\":\"0x1f40000000\", \"position\":0 } ], \"badblock_count\":1, \"badblocks\":[ { \"offset\":65536, \"length\":1, \"dimms\":[ \"nmem0\" ] } ] , \"persistence_domain\":\"memory_controller\" } ] }",
"ndctl list --dimms --human",
"ndctl list --dimms --human [ { \"dev\":\"nmem1\", \"id\":\"XXXX-XX-XXXX-XXXXXXXX\", \"handle\":\"0x120\", \"phys_id\":\"0x1c\" }, { \"dev\":\"nmem0\" , \"id\":\"XXXX-XX-XXXX-XXXXXXXX\", \"handle\":\"0x20\", \"phys_id\":\"0x10\" , \"flag_failed_flush\":true, \"flag_smart_event\":true } ]",
"dmidecode",
"dmidecode Handle 0x0010, DMI type 17, 40 bytes Memory Device Array Handle: 0x0004 Error Information Handle: Not Provided Total Width: 72 bits Data Width: 64 bits Size: 125 GB Form Factor: DIMM Set: 1 Locator: DIMM-XXX-YYYY Bank Locator: Bank0 Type: Other Type Detail: Non-Volatile Registered (Buffered)",
"ndctl list --namespaces --dimm= DIMM-ID-number",
"ndctl list --namespaces --dimm=0 [ { \"dev\":\"namespace0.2\" , \"mode\":\"sector\", \"size\":67042312192, \"uuid\":\"XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX\", \"raw_uuid\":\"XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX\", \"sector_size\":4096, \"blockdev\":\"pmem0.2s\", \"numa_node\":0 }, { \"dev\":\"namespace0.0\" , \"mode\":\"sector\", \"size\":67042312192, \"uuid\":\"XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX\", \"raw_uuid\":\"XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX\", \"sector_size\":4096, \"blockdev\":\"pmem0s\", \"numa_node\":0 } ]"
]
| https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/storage_administration_guide/troubleshooting-nvdimm |
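The health and bad-block checks above can be combined into a small script for regular monitoring. This is a sketch only: it reuses the ndctl commands shown in this section, but the jq dependency, the handling of single-object versus array output, and the logger alert are assumptions, not part of the documented procedure.

#!/bin/bash
# Flag any NVDIMM whose S.M.A.R.T. health state is not "ok".
modprobe acpi_ipmi 2>/dev/null   # required on some systems, as noted above
ndctl list --dimms --health \
  | jq -r '[.] | flatten | .[] | select(.health.health_state != "ok") | .dev' \
  | while read -r dimm; do
        logger -t nvdimm-health "NVDIMM ${dimm} reports a non-ok health state"
    done
# Also list regions with recorded media errors (badblocks) for review
ndctl list --dimms --regions --health --media-errors --human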
Chapter 2. Prerequisites | Chapter 2. Prerequisites The Assisted Installer validates the following prerequisites to ensure successful installation. If you use a firewall, you must configure it so that Assisted Installer can access the resources it requires to function. 2.1. Supported CPU architectures The Assisted Installer is supported on the following CPU architectures: x86_64 arm64 ppc64le (IBM Power(R)) s390x (IBM Z(R)) 2.2. Supported drive types This section lists the installation drive types that you can and cannot use when installing Red Hat OpenShift Container Platform with the Assisted Installer. Supported drive types The table below shows the installation drive types supported for the different OpenShift Container Platform versions and CPU architectures. Drive types RHOCP Version Supported CPU Architectures Comments HDD All All A hard disk drive. SSD All All An SSD or NVMe drive. Multipath All All A Linux multipath device that can aggregate paths for Fibre Channel (FC), iSCSI, or other protocols. Currently, the Assisted Installer only supports Fibre Channel multipaths. FC (Fibre Channel) All s390x, x86_64 Indicates a single path Fibre Channel (FC). Supported only for s390x. For other architectures, use multipath to enhance availability and performance. iSCSI 4.15 and later x86_64 The system supports iSCSI boot volumes through iPXE boot. Multipath for iSCSI is not currently supported in OpenShift Container Platform installations using Assisted Installer. A minimal ISO image is mandatory on iSCSI boot volumes. Using a full ISO image will result in an error. iSCSI boot requires two machine network interfaces; one for the iSCSI traffic and the other for the OpenShift Container Platform cluster installation. A static IP address is not supported when using iSCSI boot volumes. RAID 4.14 and later All A software RAID drive. The RAID should be configured via BIOS/UEFI. If this option is unavailable, you can configure OpenShift Container Platform to mirror the drives. For details, see Encrypting and mirroring disks during installation . ECK All s390x IBM drive. ECKD (ESE) All s390x IBM drive. FBA All s390x IBM drive. Unsupported drive types The table below shows the installation drive types that are not supported. Drive types Comments Unknown The system could not detect the drive type. FDD A floppy disk drive. ODD An optical disk drive (e.g., CD-ROM). Virtual A loopback device. LVM A Linux Logical Volume Management drive. 2.3. Resource requirements This section describes the resource requirements for different clusters and installation options. The multicluster engine for Kubernetes requires additional resources. If you deploy the multicluster engine with storage, such as OpenShift Data Foundation or LVM Storage, you must also assign additional resources to each node. 2.3.1. Multi-node cluster resource requirements The resource requirements of a multi-node cluster depend on the installation options. Multi-node cluster basic installation Control plane nodes: 4 CPU cores 16 GB RAM 100 GB storage Note The disks must be reasonably fast, with an etcd wal_fsync_duration_seconds p99 duration that is less than 10 ms. For more information, see the Red Hat Knowledgebase solution How to Use 'fio' to Check Etcd Disk Performance in OCP . Compute nodes: 2 CPU cores 8 GB RAM 100 GB storage Multi-node cluster + multicluster engine Additional 4 CPU cores Additional 16 GB RAM Note If you deploy multicluster engine without OpenShift Data Foundation, no storage is configured. 
You configure the storage after the installation. Multi-node cluster + multicluster engine + OpenShift Data Foundation or LVM Storage Additional 75 GB storage 2.3.2. Single-node OpenShift resource requirements The resource requirements for single-node OpenShift depend on the installation options. Single-node OpenShift basic installation 8 CPU cores 16 GB RAM 100 GB storage Single-node OpenShift + multicluster engine Additional 8 CPU cores Additional 32 GB RAM Note If you deploy multicluster engine without OpenShift Data Foundation, LVM Storage is enabled. Single-node OpenShift + multicluster engine + OpenShift Data Foundation or LVM Storage Additional 95 GB storage 2.4. Networking requirements For hosts of type VMware , set clusterSet disk.EnableUUID to TRUE , even when the platform is not vSphere. 2.4.1. General networking requirements The network must meet the following requirements: You have configured a DHCP server or static IP addressing. You have opened port 6443 to allow the API URL access to the cluster using the oc CLI tool when outside the firewall. You have opened port 443 to allow console access outside the firewall. Port 443 is also used for all ingress traffic. You have configured DNS to connect to the cluster API or ingress endpoints from outside the cluster. Optional: You have created a DNS Pointer record (PTR) for each node in the cluster if using static IP addressing. Note You must create a DNS PTR record to boot with a preset hostname if the hostname will not come from another source ( /etc/hosts or DHCP). Otherwise, the Assisted Installer's automatic node renaming feature will rename the nodes to their network interface MAC address. 2.4.2. External DNS Installing multi-node cluster with user-managed networking requires external DNS. External DNS is not required to install multi-node clusters with cluster-managed networking or Single-node OpenShift with Assisted Installer. Configure external DNS after installation to connect to the cluster from an external source. External DNS requires the creation of the following record types: A/AAAA record for api.<cluster_name>.<base_domain>. A/AAAA record with a wildcard for *.apps.<cluster_name>.<base_domain>. A/AAAA record for each node in the cluster. Important Do not create a wildcard, such as *.<cluster_name>.<base_domain>, or the installation will not proceed. A/AAAA record settings at top-level domain registrars can take significant time to update. Ensure the newly created DNS Records are resolving before installation to prevent installation delays. For DNS record examples, see Example DNS configuration . The OpenShift Container Platform cluster's network must also meet the following requirements: Connectivity between all cluster nodes Connectivity for each node to the internet Access to an NTP server for time synchronization between the cluster nodes 2.4.2.1. Example DNS configuration The following DNS configuration provides A and PTR record configuration examples that meet the DNS requirements for deploying OpenShift Container Platform using the Assisted Installer. The examples are not meant to recommend one DNS solution over another. In the examples, the cluster name is ocp4 and the base domain is example.com . 2.4.2.2. Example DNS A record configuration The following example is a BIND zone file that shows sample A records for name resolution in a cluster installed using the Assisted Installer. Example DNS zone database USDTTL 1W @ IN SOA ns1.example.com. 
root ( 2019070700 ; serial 3H ; refresh (3 hours) 30M ; retry (30 minutes) 2W ; expiry (2 weeks) 1W ) ; minimum (1 week) IN NS ns1.example.com. IN MX 10 smtp.example.com. ; ; ns1.example.com. IN A 192.168.1.1 smtp.example.com. IN A 192.168.1.5 ; helper.example.com. IN A 192.168.1.5 ; api.ocp4.example.com. IN A 192.168.1.5 1 api-int.ocp4.example.com. IN A 192.168.1.5 2 ; *.apps.ocp4.example.com. IN A 192.168.1.5 3 ; control-plane0.ocp4.example.com. IN A 192.168.1.97 4 control-plane1.ocp4.example.com. IN A 192.168.1.98 control-plane2.ocp4.example.com. IN A 192.168.1.99 ; worker0.ocp4.example.com. IN A 192.168.1.11 5 worker1.ocp4.example.com. IN A 192.168.1.7 ; ;EOF 1 Provides name resolution for the Kubernetes API. The record refers to the IP address of the API load balancer. 2 Provides name resolution for the Kubernetes API. The record refers to the IP address of the API load balancer and is used for internal cluster communications. 3 Provides name resolution for the wildcard routes. The record refers to the IP address of the application ingress load balancer. The application ingress load balancer targets the machines that run the Ingress Controller pods. The Ingress Controller pods run on the worker machines by default. Note In the example, the same load balancer is used for the Kubernetes API and application ingress traffic. In production scenarios, you can deploy the API and application ingress load balancers separately so that you can scale the load balancer infrastructure for each in isolation. 4 Provides name resolution for the control plane machines. 5 Provides name resolution for the worker machines. 2.4.2.3. Example DNS PTR record configuration The following example is a BIND zone file that shows sample PTR records for reverse name resolution in a cluster installed using the Assisted Installer. Example DNS zone database for reverse records USDUSDTTL 1W @ IN SOA ns1.example.com. root ( 2019070700 ; serial 3H ; refresh (3 hours) 30M ; retry (30 minutes) 2W ; expiry (2 weeks) 1W ) ; minimum (1 week) IN NS ns1.example.com. ; 5.1.168.192.in-addr.arpa. IN PTR api.ocp4.example.com. 1 5.1.168.192.in-addr.arpa. IN PTR api-int.ocp4.example.com. 2 ; 97.1.168.192.in-addr.arpa. IN PTR control-plane0.ocp4.example.com. 3 98.1.168.192.in-addr.arpa. IN PTR control-plane1.ocp4.example.com. 99.1.168.192.in-addr.arpa. IN PTR control-plane2.ocp4.example.com. ; 11.1.168.192.in-addr.arpa. IN PTR worker0.ocp4.example.com. 4 7.1.168.192.in-addr.arpa. IN PTR worker1.ocp4.example.com. ; ;EOF 1 Provides reverse DNS resolution for the Kubernetes API. The PTR record refers to the record name of the API load balancer. 2 Provides reverse DNS resolution for the Kubernetes API. The PTR record refers to the record name of the API load balancer and is used for internal cluster communications. 3 Provides reverse DNS resolution for the control plane machines. 4 Provides reverse DNS resolution for the worker machines. Note A PTR record is not required for the OpenShift Container Platform application wildcard. 2.4.3. Networking requirements for IBM Z In IBM Z(R) environments, advanced networking technologies like Original Storage Architecture (OSA), HiperSockets, and Remote Direct Memory Access (RDMA) over Converged Ethernet (RoCE) require specific configurations that deviate from the standard settings used in Assisted Installer deployments. These overrides are necessary to accommodate their unique requirements and ensure a successful and efficient deployment on IBM Z(R). 
The following table lists the network devices that are supported for the network configuration override functionality: Network device z/VM KVM LPAR Classic LPAR Dynamic Partition Manager (DPM) Original Storage Architecture (OSA) virtual switch Not supported - Not supported Not supported Direct attached OSA Supported Only through a Linux bridge Supported Not supported RDMA over Converged Ethernet (RoCE) Not supported Only through a Linux bridge Not supported Not supported HiperSockets Supported Only through a Linux bridge Supported Not supported Linux bridge Not supported Supported Not supported Not supported 2.4.3.1. Configuring network overrides in IBM Z You can specify a static IP address on IBM Z(R) machines that uses Logical Partition (LPAR) and z/VM. This is specially useful when the network devices do not have a static MAC address assigned to them. If you have an existing .parm file, edit it to include the following entry: ai.ip_cfg_override=1 This parameter allows the file to add the network settings to the CoreOS installer. Example of the .parm file rd.neednet=1 cio_ignore=all,!condev console=ttysclp0 coreos.live.rootfs_url=<coreos_url> 1 ip=<ip>::<gateway>:<netmask>:<hostname>::none nameserver=<dns> rd.znet=qeth,<network_adaptor_range>,layer2=1 rd.<disk_type>=<adapter> 2 rd.zfcp=<adapter>,<wwpn>,<lun> random.trust_cpu=on 3 zfcp.allow_lun_scan=0 ai.ip_cfg_override=1 4 ignition.firstboot ignition.platform.id=metal random.trust_cpu=on 1 For the coreos.live.rootfs_url artifact, specify the matching rootfs artifact for the kernel and initramfs that you are booting. Only HTTP and HTTPS protocols are supported. 2 For installations on direct access storage devices (DASD) type disks, use rd. to specify the DASD where Red Hat Enterprise Linux (RHEL) is to be installed. For installations on Fibre Channel Protocol (FCP) disks, use rd.zfcp=<adapter>,<wwpn>,<lun> to specify the FCP disk where RHEL is to be installed. 3 Specify values for adapter , wwpn , and lun as in the following example: rd.zfcp=0.0.8002,0x500507630400d1e3,0x4000404600000000 . 4 Specify this parameter when using an OSA network adapter or HiperSockets. Note The override parameter overrides the host's network configuration settings. 2.5. Preflight validations The Assisted Installer ensures the cluster meets the prerequisites before installation, because it eliminates complex postinstallation troubleshooting, thereby saving significant amounts of time and effort. Before installing software on the nodes, the Assisted Installer conducts the following validations: Ensures network connectivity Ensures sufficient network bandwidth Ensures connectivity to the registry Ensures that any upstream DNS can resolve the required domain name Ensures time synchronization between cluster nodes Verifies that the cluster nodes meet the minimum hardware requirements Validates the installation configuration parameters If the Assisted Installer does not successfully validate the foregoing requirements, installation will not proceed. | [
"USDTTL 1W @ IN SOA ns1.example.com. root ( 2019070700 ; serial 3H ; refresh (3 hours) 30M ; retry (30 minutes) 2W ; expiry (2 weeks) 1W ) ; minimum (1 week) IN NS ns1.example.com. IN MX 10 smtp.example.com. ; ; ns1.example.com. IN A 192.168.1.1 smtp.example.com. IN A 192.168.1.5 ; helper.example.com. IN A 192.168.1.5 ; api.ocp4.example.com. IN A 192.168.1.5 1 api-int.ocp4.example.com. IN A 192.168.1.5 2 ; *.apps.ocp4.example.com. IN A 192.168.1.5 3 ; control-plane0.ocp4.example.com. IN A 192.168.1.97 4 control-plane1.ocp4.example.com. IN A 192.168.1.98 control-plane2.ocp4.example.com. IN A 192.168.1.99 ; worker0.ocp4.example.com. IN A 192.168.1.11 5 worker1.ocp4.example.com. IN A 192.168.1.7 ; ;EOF",
"USDUSDTTL 1W @ IN SOA ns1.example.com. root ( 2019070700 ; serial 3H ; refresh (3 hours) 30M ; retry (30 minutes) 2W ; expiry (2 weeks) 1W ) ; minimum (1 week) IN NS ns1.example.com. ; 5.1.168.192.in-addr.arpa. IN PTR api.ocp4.example.com. 1 5.1.168.192.in-addr.arpa. IN PTR api-int.ocp4.example.com. 2 ; 97.1.168.192.in-addr.arpa. IN PTR control-plane0.ocp4.example.com. 3 98.1.168.192.in-addr.arpa. IN PTR control-plane1.ocp4.example.com. 99.1.168.192.in-addr.arpa. IN PTR control-plane2.ocp4.example.com. ; 11.1.168.192.in-addr.arpa. IN PTR worker0.ocp4.example.com. 4 7.1.168.192.in-addr.arpa. IN PTR worker1.ocp4.example.com. ; ;EOF",
"ai.ip_cfg_override=1",
"rd.neednet=1 cio_ignore=all,!condev console=ttysclp0 coreos.live.rootfs_url=<coreos_url> 1 ip=<ip>::<gateway>:<netmask>:<hostname>::none nameserver=<dns> rd.znet=qeth,<network_adaptor_range>,layer2=1 rd.<disk_type>=<adapter> 2 rd.zfcp=<adapter>,<wwpn>,<lun> random.trust_cpu=on 3 zfcp.allow_lun_scan=0 ai.ip_cfg_override=1 4 ignition.firstboot ignition.platform.id=metal random.trust_cpu=on"
]
| https://docs.redhat.com/en/documentation/openshift_container_platform_installation/4.16/html/installing_openshift_container_platform_with_the_assisted_installer/prerequisites |
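Because missing or stale DNS records are a common cause of delayed installations, it can help to verify the records from the provisioning host before booting the discovery image. The sketch below uses the example cluster name ocp4 and base domain example.com from this section; dig, the arbitrary test.apps host used to exercise the wildcard record, and the specific node names are assumptions.

#!/bin/bash
# Pre-flight check that the required A/AAAA records resolve.
cluster=ocp4
domain=example.com
for name in "api.${cluster}.${domain}" \
            "api-int.${cluster}.${domain}" \
            "test.apps.${cluster}.${domain}" \
            "control-plane0.${cluster}.${domain}" \
            "worker0.${cluster}.${domain}"; do
    if ! dig +short "${name}" | grep -q .; then
        echo "DNS record does not resolve: ${name}" >&2
    fi
done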
Preface | Preface For OpenShift Data Foundation, node replacement can be performed proactively for an operational node and reactively for a failed node for the following deployments: For Amazon Web Services (AWS) User-provisioned infrastructure Installer-provisioned infrastructure For VMware User-provisioned infrastructure Installer-provisioned infrastructure For Red Hat Virtualization Installer-provisioned infrastructure For Microsoft Azure Installer-provisioned infrastructure For local storage devices Bare metal VMware Red Hat Virtualization IBM Power For replacing your storage nodes in external mode, see Red Hat Ceph Storage documentation . | null | https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.9/html/replacing_nodes/preface-replacing-nodes |
28.5. Configuring Centralized Crash Collection | 28.5. Configuring Centralized Crash Collection You can set up ABRT so that crash reports are collected from multiple systems and sent to a dedicated system for further processing. This is useful when an administrator does not want to log into hundreds of systems and manually check for crashes found by ABRT . In order to use this method, you need to install the libreport-plugin-reportuploader plug-in ( yum install libreport-plugin-reportuploader ). See the following sections on how to configure systems to use ABRT's centralized crash collection. 28.5.1. Configuration Steps Required on a Dedicated System Complete the following steps on a dedicated (server) system: Create a directory to which you want the crash reports to be uploaded to. Usually, /var/spool/abrt-upload/ is used (the rest of the document assumes you are using this directory). Make sure this directory is writable by the abrt user. Note When the abrt-desktop package is installed, it creates a new system user and a group, both named abrt . This user is used by the abrtd daemon, for example, as the owner:group of /var/spool/abrt/* directories. In the /etc/abrt/abrt.conf configuration file, set the WatchCrashdumpArchiveDir directive to the following: Choose your preferred upload mechanism; for example, FTP or SCP . For more information on how to configure FTP , see Section 21.2, "FTP" . For more information on how to configure SCP , see Section 14.4.2, "Using the scp Utility" . It is advisable to check whether your upload method works. For example, if you use FTP , upload a file using an interactive FTP client: Check whether testfile appeared in the correct directory on the server system. The MaxCrashReportsSize directive (in the /etc/abrt/abrt.conf configuration file) needs to be set to a larger value if the expected volume of crash data is larger than the default 1000 MB. Consider whether you would like to generate a backtrace of C/C++ crashes. You can disable backtrace generation on the server if you do not want to generate backtraces at all, or if you decide to create them locally on the machine where a problem occurred. In the standard ABRT installation, a backtrace of a C/C++ crash is generated using the following rule in the /etc/libreport/events.d/ccpp_events.conf configuration file: EVENT=analyze_LocalGDB analyzer=CCpp abrt-action-analyze-core.py --core=coredump -o build_ids && abrt-action-install-debuginfo-to-abrt-cache --size_mb=4096 && abrt-action-generate-backtrace && abrt-action-analyze-backtrace You can ensure that this rule is not applied for uploaded problem data by adding the remote!=1 condition to the rule. Decide whether you want to collect package information (the package and the component elements) in the problem data. See Section 28.5.3, "Saving Package Information" to find out whether you need to collect package information in your centralized crash collection configuration and how to configure it properly. | [
"WatchCrashdumpArchiveDir = /var/spool/abrt-upload/",
"~]USD ftp ftp> open servername Name: username Password: password ftp> cd /var/spool/abrt-upload 250 Operation successful ftp> put testfile ftp> quit",
"EVENT=analyze_LocalGDB analyzer=CCpp abrt-action-analyze-core.py --core=coredump -o build_ids && abrt-action-install-debuginfo-to-abrt-cache --size_mb=4096 && abrt-action-generate-backtrace && abrt-action-analyze-backtrace"
]
| https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/deployment_guide/sect-abrt-centralized_crash_collection |
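For the backtrace step above, the text notes that adding the remote!=1 condition keeps the rule from being applied to uploaded problem data. Assuming the stock rule shown in this section, the adjusted entry in /etc/libreport/events.d/ccpp_events.conf would look roughly as follows; the placement of the condition next to analyzer=CCpp follows the libreport key=value condition syntax and should be treated as a sketch rather than verified configuration.

EVENT=analyze_LocalGDB analyzer=CCpp remote!=1 abrt-action-analyze-core.py --core=coredump -o build_ids && abrt-action-install-debuginfo-to-abrt-cache --size_mb=4096 && abrt-action-generate-backtrace && abrt-action-analyze-backtrace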
Chapter 4. PolicyKit | Chapter 4. PolicyKit The PolicyKit utility is a framework that provides an authorization API used by privileged programs (also called mechanisms ) offering services to unprivileged programs (also called subjects ). The following are details on the changes PolicyKit , or its system name polkit , has undergone. 4.1. Policy Configuration As far as the new features are concerned, authorization rules are now defined in JavaScript .rules files. This means that the same files are used for defining both the rules and the administrator status. Previously, this information was stored in two different file types - *.pkla and *.conf , which used key/value pairs to define additional local authorizations. These new .rules files are stored in two locations; whereas polkit rules for local customization are stored in the /etc/polkit-1/rules.d/ directory, the third party packages are stored in /usr/share/polkit-1/rules.d/ . The existing .conf and .pkla configuration files have been preserved and exist side by side with .rules files. polkit has been upgraded for Red Hat Enterprise Linux 7 with the compatibility issue in mind. The logic in precedence in rules has changed. polkitd now reads .rules files in lexicographic order from the /etc/polkit-1/rules.d and /usr/share/polkit-1/rules.d directories. If two files are named identically, files in /etc are processed before files in /usr . In addition, existing rules are applied by the /etc/polkit-1/rules.d/49-polkit-pkla-compat.rules file. They can therefore be overridden by .rules files in either /usr or /etc with a name that comes before 49-polkit-pkla-compat in lexicographic order. The simplest way to ensure that your old rules are not overridden is to begin the name of all other .rules files with a number higher than 49. Here is an example of a .rules file. It creates a rule that allows mounting a file system on a system device for the storage group. The rule is stored in the /etc/polkit-1/rules.d/10-enable-mount.rules file: Example 4.1. Allow Mounting a File system on a System device polkit.addRule(function(action, subject) { if (action.id == "org.freedesktop.udisks2.filesystem-mount-system" && subject.isInGroup("storage")) { return polkit.Result.YES; } }); For more information, see: polkit (8) - The man page for the description of the JavaScript rules and the precedence rules. pkla-admin-identities (8) and pkla-check-authorization (8) - The man pages for documentation of the .conf and .pkla file formats, respectively. | [
"polkit.addRule(function(action, subject) { if (action.id == \"org.freedesktop.udisks2.filesystem-mount-system\" && subject.isInGroup(\"storage\")) { return polkit.Result.YES; } });"
]
| https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/desktop_migration_and_administration_guide/policykit |
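Since the same .rules files now also define administrator status, a second short example may be useful alongside the mount rule above. The file name (chosen with a prefix higher than 49 so it is not overridden by 49-polkit-pkla-compat.rules, as discussed above) and the choice of the wheel group are illustrative assumptions. Saved as /etc/polkit-1/rules.d/60-admin-identity.rules:

// Treat members of the "wheel" group as polkit administrators.
polkit.addAdminRule(function(action, subject) {
    return ["unix-group:wheel"];
});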
probe::nfs.fop.lock | probe::nfs.fop.lock Name probe::nfs.fop.lock - NFS client file lock operation Synopsis nfs.fop.lock Values fl_start starting offset of locked region ino inode number fl_flag lock flags i_mode file type and access rights dev device identifier fl_end ending offset of locked region fl_type lock type cmd cmd arguments | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/systemtap_tapset_reference/api-nfs-fop-lock |
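A short SystemTap script can confirm what this probe reports. The sketch below only uses the variables listed in the synopsis; the printf format and running stap as root are assumptions.

# Print each NFS client file lock operation as it occurs
stap -e 'probe nfs.fop.lock {
    printf("dev=%d ino=%d type=%d start=%d end=%d\n",
           dev, ino, fl_type, fl_start, fl_end)
}'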
Chapter 2. Console monitoring and alerting | Chapter 2. Console monitoring and alerting Red Hat Quay provides support for monitoring instances that were deployed by using the Red Hat Quay Operator, from inside the OpenShift Container Platform console. The new monitoring features include a Grafana dashboard, access to individual metrics, and alerting to notify for frequently restarting Quay pods. Note To enable the monitoring features, you must select All namespaces on the cluster as the installation mode when installing the Red Hat Quay Operator. 2.1. Dashboard On the OpenShift Container Platform console, click Monitoring Dashboards and search for the dashboard of your desired Red Hat Quay registry instance: The dashboard shows various statistics including the following: The number of Organizations , Repositories , Users , and Robot accounts CPU Usage Max memory usage Rates of pulls and pushes, and authentication requests API request rate Latencies 2.2. Metrics You can see the underlying metrics behind the Red Hat Quay dashboard by accessing Monitoring Metrics in the UI. In the Expression field, enter the text quay_ to see the list of metrics available: Select a sample metric, for example, quay_org_rows : This metric shows the number of organizations in the registry. It is also directly surfaced in the dashboard. 2.3. Alerting An alert is raised if the Quay pods restart too often. The alert can be configured by accessing the Alerting rules tab from Monitoring Alerting in the console UI and searching for the Quay-specific alert: Select the QuayPodFrequentlyRestarting rule detail to configure the alert: | null | https://docs.redhat.com/en/documentation/red_hat_quay/3.12/html/red_hat_quay_operator_features/operator-console-monitoring-alerting |
Appendix B. Contact information | Appendix B. Contact information Red Hat Process Automation Manager documentation team: [email protected] | null | https://docs.redhat.com/en/documentation/red_hat_process_automation_manager/7.13/html/integrating_red_hat_process_automation_manager_with_other_products_and_components/author-group |
Chapter 49. Mask Fields Action | Chapter 49. Mask Fields Action Mask fields with a constant value in the message in transit 49.1. Configuration Options The following table summarizes the configuration options available for the mask-field-action Kamelet: Property Name Description Type Default Example fields * Fields Comma separated list of fields to mask string replacement * Replacement Replacement for the fields to be masked string Note Fields marked with an asterisk (*) are mandatory. 49.2. Dependencies At runtime, the mask-field-action Kamelet relies upon the presence of the following dependencies: github:openshift-integration.kamelet-catalog:camel-kamelets-utils:kamelet-catalog-1.6-SNAPSHOT camel:jackson camel:kamelet camel:core 49.3. Usage This section describes how you can use the mask-field-action . 49.3.1. Knative Action You can use the mask-field-action Kamelet as an intermediate step in a Knative binding. mask-field-action-binding.yaml apiVersion: camel.apache.org/v1alpha1 kind: KameletBinding metadata: name: mask-field-action-binding spec: source: ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: timer-source properties: message: "Hello" steps: - ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: mask-field-action properties: fields: "The Fields" replacement: "The Replacement" sink: ref: kind: Channel apiVersion: messaging.knative.dev/v1 name: mychannel 49.3.1.1. Prerequisite Make sure you have "Red Hat Integration - Camel K" installed into the OpenShift cluster you're connected to. 49.3.1.2. Procedure for using the cluster CLI Save the mask-field-action-binding.yaml file to your local drive, and then edit it as needed for your configuration. Run the action by using the following command: oc apply -f mask-field-action-binding.yaml 49.3.1.3. Procedure for using the Kamel CLI Configure and run the action by using the following command: kamel bind timer-source?message=Hello --step mask-field-action -p "step-0.fields=The Fields" -p "step-0.replacement=The Replacement" channel:mychannel This command creates the KameletBinding in the current namespace on the cluster. 49.3.2. Kafka Action You can use the mask-field-action Kamelet as an intermediate step in a Kafka binding. mask-field-action-binding.yaml apiVersion: camel.apache.org/v1alpha1 kind: KameletBinding metadata: name: mask-field-action-binding spec: source: ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: timer-source properties: message: "Hello" steps: - ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: mask-field-action properties: fields: "The Fields" replacement: "The Replacement" sink: ref: kind: KafkaTopic apiVersion: kafka.strimzi.io/v1beta1 name: my-topic 49.3.2.1. Prerequisites Ensure that you've installed the AMQ Streams operator in your OpenShift cluster and created a topic named my-topic in the current namespace. Make also sure you have "Red Hat Integration - Camel K" installed into the OpenShift cluster you're connected to. 49.3.2.2. Procedure for using the cluster CLI Save the mask-field-action-binding.yaml file to your local drive, and then edit it as needed for your configuration. Run the action by using the following command: oc apply -f mask-field-action-binding.yaml 49.3.2.3. 
Procedure for using the Kamel CLI Configure and run the action by using the following command: kamel bind timer-source?message=Hello --step mask-field-action -p "step-0.fields=The Fields" -p "step-0.replacement=The Replacement" kafka.strimzi.io/v1beta1:KafkaTopic:my-topic This command creates the KameletBinding in the current namespace on the cluster. 49.4. Kamelet source file https://github.com/openshift-integration/kamelet-catalog/mask-field-action.kamelet.yaml | [
"apiVersion: camel.apache.org/v1alpha1 kind: KameletBinding metadata: name: mask-field-action-binding spec: source: ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: timer-source properties: message: \"Hello\" steps: - ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: mask-field-action properties: fields: \"The Fields\" replacement: \"The Replacement\" sink: ref: kind: Channel apiVersion: messaging.knative.dev/v1 name: mychannel",
"apply -f mask-field-action-binding.yaml",
"kamel bind timer-source?message=Hello --step mask-field-action -p \"step-0.fields=The Fields\" -p \"step-0.replacement=The Replacement\" channel:mychannel",
"apiVersion: camel.apache.org/v1alpha1 kind: KameletBinding metadata: name: mask-field-action-binding spec: source: ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: timer-source properties: message: \"Hello\" steps: - ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: mask-field-action properties: fields: \"The Fields\" replacement: \"The Replacement\" sink: ref: kind: KafkaTopic apiVersion: kafka.strimzi.io/v1beta1 name: my-topic",
"apply -f mask-field-action-binding.yaml",
"kamel bind timer-source?message=Hello --step mask-field-action -p \"step-0.fields=The Fields\" -p \"step-0.replacement=The Replacement\" kafka.strimzi.io/v1beta1:KafkaTopic:my-topic"
]
| https://docs.redhat.com/en/documentation/red_hat_build_of_apache_camel_k/1.10.7/html/kamelets_reference/mask-field-action |
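For illustration only: with fields set to ssn and replacement set to xxx, the action would mask that field in a message in transit roughly as shown below. The payload, field name, and exact output shape are made-up assumptions; only the general mask-with-a-constant behaviour comes from the Kamelet description above.

Before: {"name": "Alice", "ssn": "123-45-6789"}
After: {"name": "Alice", "ssn": "xxx"}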
8.9 Release Notes | 8.9 Release Notes Red Hat Enterprise Linux 8.9 Release Notes for Red Hat Enterprise Linux 8.9 Red Hat Customer Content Services | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/8.9_release_notes/index |
8.123. NetworkManager | 8.123. NetworkManager 8.123.1. RHBA-2013:1670 - NetworkManager bug fix and enhancement update Updated NetworkManager packages that fix several bugs and add various enhancements are now available for Red Hat Enterprise Linux 6. NetworkManager is a system network service that manages network devices and connections, attempting to keep network connectivity active when available. It manages Ethernet, Wi-Fi, mobile broadband ( WWAN ), and PPPoE (Point-to-Point Protocol over Ethernet) devices, and provides integration with a variety of VPN services. Bug Fixes BZ# 922558 Previously, NetworkManager did not explicitly request static routes from DHCP (Dynamic Host Configuration Protocol) servers, and thus some servers would not deliver those routes. With this update, NetworkManager now requests static routes from DHCP servers when available. BZ# 701381 Previously, it was impossible for some users to check Enable Wireless box in NetworkManager as the field was unresponsive. Moreover, the Enable Wireless connection option was unavailable in NetworkManager after hardware was disabled and enabled again. With this update, users can turn on the wireless connection from the GUI after their hardware is reenabled. BZ# 1008884 When running the NetworkManager applet in some Virtual Machine (VM) configurations, left-clicking on the icon could cause the applet to terminate unexpectedly. This bug has been fixed and the applet no longer crashes in these configurations. BZ# 923648 Previously, bridge and bond connections created through the NetworkManager connection editor ( nm-connection-editor ) were not set to connect automatically, and thus had to be manually started. With this update, these connections automatically start when created by default. BZ# 896198 A GATEWAY setting in the /etc/sysconfig/network file caused NetworkManager to assign that GATEWAY to all interfaces with static IP addresses. This scenario took place even if no GATEWAY or a different one was specified for these addresses. To fix this bug, if GATEWAY is given in /etc/sysconfig/network , only configurations with a matching gateway address will be given the default route. Alternatively, the DEFROUTE=yes/no option may be used in individual configuration files to allow or deny the default route on a per-configuration basis. BZ# 836993 Previously, when using the vpnc program via NetworkManager with token out of synchronization, the server prompted for a token. However, NetworkManager misinterpreted this response and reported a failed connection. With this update, a new prompt for token code has been added to the NetworkManager-vpnc utility, thus fixing the bug. BZ# 991341 Prior to this update, on receipt of an IPv6 Router Advertisement, NetworkManager attempted to replace the IPv6 default route which the kernel had added. Consequently, the kernel returned the following failure message: To fix this bug, NetworkManager no longer replaces an IPv6 default route added by the kernel. BZ# 758076 Previously, it was not possible to choose Certificate Authority (CA) certificate via the "Choose certificate" dialog window in nm-connection-editor . This was confusing for the user. The dialog checkbox information has been replaced with a more informative text, thus fixing the bug. 
BZ# 919242 Previously, when NetworkManager was not allowed to manage bridge, bond, or VLAN interfaces due to the missing NM_BOND_BRIDGE_VLAN_ENABLED option in the /etc/sysconfig/network file, the NetworkManager connection editor ( nm-connection-editor ) still allowed the user to create these types of network connections. The editor now warns the user when unusable connections have been created, thus fixing the bug. BZ# 915480 Previously, the NetworkManager GUI applet (nm-applet) did not show bridge, bond, or VLAN interfaces in the menu. With this update, the nm-applet has been enhanced to show all available bond, bridge, and VLAN interfaces that are configured but not yet created. BZ# 905532 Due to some missing ignored options for bonding interfaces, the /sys/class/net/bond0/bonding/primary file was empty during installation. In addition, the network traffic went through eth0 during installation. This bug has been fixed and NetworkManager now supports a much larger set of bond interface options. BZ# 953076 Previously, in some cases, NetworkManager was unable to set the mode of a bond master interface. A patch has been provided to fix this bug and the mode setting now changes according to nm-editor alterations. BZ# 953123 Previously, the NetworkManager connection editor ( nm-connection-editor ) did not allow setting the cloned MAC address for VLAN interfaces. A patch has been provided to fix this bug and nm-connection-editor now works as expected. BZ# 969363 Prior to this update, the manual page of nm-online did not describe the correct usage of nm-online parameters, such as the -t option. The manual page has been updated to describe the usage of its parameters correctly. BZ# 973245 Previously, NetworkManager wrote and saved only connection types compatible with standard ifcfg network configuration files. This bug has been fixed and other connection types like Bluetooth, WWAN , can now be saved as keyfiles in the /etc/NetworkManager/system-connections/ directory. BZ# 902372 Previously, when taking control of an existing bridge, NetworkManager did not ensure a clean bridge state. With this update, NetworkManager resets bridge options and removes all bridge ports, which ensures clean bridge state on start-up with bridging support enabled. BZ# 867273 After configuring the IP-over-InfiniBand ( IPoIB ) profile on machine with an InfiniBand ( IB ) device, the profile was not connected. This bug has been fixed and IP-over-Infiniband (IPoIB) network configurations are now listed in the network applet menu. BZ# 713975 After changing the authentication or inner authentication drop-down menus in the configuration for a new wireless network connection, the "Ask for this password every time" checkbox kept resetting. To fix this bug, the updated NetworkManager GUI applet saves the value of the checkbox when connecting to WPA Enterprise networks. BZ# 906133 Prior to this update, an Ad-Hoc WiFi network failed to start when its BSSID (Basic Service Set Identifier) was specified, due to kernel restrictions. To fix this bug, the NetworkManager connection editor ( nm-connection-editor ) disallows setting the BSSID for ad-Hoc WiFi connections, since this value is automatically chosen by the kernel. Enhancements BZ# 602265 With this update, NetworkManager has been enhanced to support the creation and management of Point-to-point Protocol over Ethernet ( PPPoE ) based connections. NetworkManager now waits a short period of time before reconnecting a PPPoE connection to ensure the peer is ready. 
BZ# 694789 A new GATEWAY_PING_TIMEOUT configuration option has been added. This new option ensures that NetworkManager waits for a successful ping of the gateway before indicating network connectivity. BZ# 990310 NetworkManager now reads ifcfg alias files and assigns the addresses in them to their master interface, using the alias name as the address label. BZ# 564467 , BZ# 564465 Manual pages for nm-connection-editor and nm-applet utilities have been created. Users of NetworkManager are advised to upgrade to these updated packages, which fix these bugs and add these enhancements. | [
"'ICMPv6 RA: ndisc_router_discovery() failed to add default route.'"
]
| https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.5_technical_notes/networkmanager |
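The routing options described in BZ# 896198 and BZ# 694789 above are plain key=value settings in the interface configuration files. The sketch below is only an illustration: the addresses are placeholders, and the exact file in which each option is expected should be verified against the Red Hat Enterprise Linux 6 networking documentation.

# /etc/sysconfig/network-scripts/ifcfg-eth0 (illustrative values)
DEVICE=eth0
BOOTPROTO=none
ONBOOT=yes
IPADDR=192.0.2.10
PREFIX=24
GATEWAY=192.0.2.1
# Allow or deny the default route for this configuration (see BZ# 896198)
DEFROUTE=yes
# Wait up to this many seconds for a successful ping of the gateway before indicating connectivity (see BZ# 694789)
GATEWAY_PING_TIMEOUT=5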
7.81. hsqldb | 7.81. hsqldb 7.81.1. RHBA-2013:0334 - hsqldb bug fix update Updated hsqldb packages that fix one bug are now available for Red Hat Enterprise Linux 6. The hsqldb packages provide a relational database management system written in Java. The Hyper Structured Query Language Database (HSQLDB) contains a JDBC driver to support a subset of ANSI-92 SQL. Bug Fix BZ# 827343 Prior to this update, the hsqldb database did not depend on java packages of version 1:1.6.0 or later. As a consequence, the build-classpath command failed on systems without the java-1.6.0-openjdk package installed and the hsqldb packages could be installed incorrectly. This update adds a requirement for java-1.6.0-openjdk. Now, the installation of hsqldb proceeds correctly as expected. All users of hsqldb are advised to upgrade to these updated packages, which fix this bug. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.4_technical_notes/hsqldb |
Chapter 23. Subscription management The following chapter contains the most notable changes to subscription management between RHEL 8 and RHEL 9. 23.1. Notable changes to subscription management Merged system purpose commands under the subscription-manager syspurpose command Previously, there were two different commands for setting system purpose attributes: syspurpose and subscription-manager. To unify all the system purpose attributes under one module, the addons, role, service-level, and usage commands from subscription-manager have been moved to the new submodule, subscription-manager syspurpose. Existing subscription-manager commands outside the new submodule are deprecated. The separate package (python3-syspurpose) that provides the syspurpose command-line tool has been removed in RHEL 9. This update provides a consistent way to view, set, and update all system purpose attributes using a single subscription-manager command. The new submodule replaces all the existing system purpose commands with equivalent subcommands. For example, subscription-manager role --set SystemRole becomes subscription-manager syspurpose role --set SystemRole, and so on; a brief illustrative session is shown after this section. For complete information about the new commands, options, and other attributes, see the SYSPURPOSE OPTIONS section in the subscription-manager man page or Configuring system purpose using the subscription manager command line tool. virt-who now uses /etc/virt-who.conf for global options instead of /etc/sysconfig/virt-who In RHEL 9, the global options for the virt-who utility on your system are stored in the /etc/virt-who.conf file. Therefore, the /etc/sysconfig/virt-who file is no longer used and has been removed. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/considerations_in_adopting_rhel_9/assembly_subscription-management_considerations-in-adopting-rhel-9
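A minimal, illustrative session for the migration described above might look as follows. The attribute values are placeholders, and the exact options accepted by the syspurpose submodule should be checked in the SYSPURPOSE OPTIONS section of the subscription-manager man page:

# Deprecated form (outside the new submodule)
subscription-manager role --set "Red Hat Enterprise Linux Server"

# Equivalent forms using the syspurpose submodule
subscription-manager syspurpose role --set "Red Hat Enterprise Linux Server"
subscription-manager syspurpose usage --set "Production"
subscription-manager syspurpose service-level --set "Premium"

# Without arguments, the submodule prints the currently configured attributes
subscription-manager syspurpose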
Chapter 374. XML Security Component | Chapter 374. XML Security Component Available as of Camel version 2.12 With this Apache Camel component, you can generate and validate XML signatures as described in the W3C standard XML Signature Syntax and Processing or as described in the successor version 1.1 . For XML Encryption support, please refer to the XML Security Data Format . You can find an introduction to XML signature here . The implementation of the component is based on JSR 105 , the Java API corresponding to the W3C standard and supports the Apache Santuario and the JDK provider for JSR 105. The implementation will first try to use the Apache Santuario provider; if it does not find the Santuario provider, it will use the JDK provider. Further, the implementation is DOM based. Since Camel 2.15.0 we also provide support for XAdES-BES/EPES for the signer endpoint; see subsection "XAdES-BES/EPES for the Signer Endpoint". Maven users will need to add the following dependency to their pom.xml for this component: <dependency> <groupId>org.apache.camel</groupId> <artifactId>camel-xmlsecurity</artifactId> <version>x.x.x</version> <!-- use the same version as your Camel core version --> </dependency> 374.1. XML Signature Wrapping Modes XML Signature differs between enveloped, enveloping, and detached XML signature. In the enveloped XML signature case, the XML Signature is wrapped by the signed XML Document; which means that the XML signature element is a child element of a parent element, which belongs to the signed XML Document. In the enveloping XML signature case, the XML Signature contains the signed content. All other cases are called detached XML signatures. A certain form of detached XML signature is supported since 2.14.0 . In the enveloped XML signature case, the supported generated XML signature has the following structure (Variables are surrounded by [] ). <[parent element]> ... <!-- Signature element is added as last child of the parent element--> <Signature Id="generated_unique_signature_id"> <SignedInfo> <Reference URI=""> <Transform Algorithm="http://www.w3.org/2000/09/xmldsig#enveloped-signature"/> (<Transform>)* <!-- By default "http://www.w3.org/2006/12/xml-c14n11" is added to the transforms --> <DigestMethod> <DigestValue> </Reference> (<Reference URI="#[keyinfo_Id]"> <Transform Algorithm="http://www.w3.org/TR/2001/REC-xml-c14n-20010315"/> <DigestMethod> <DigestValue> </Reference>)? <!-- further references possible, see option 'properties' below --> </SignedInfo> <SignatureValue> (<KeyInfo Id="[keyinfo_id]">)? <!-- Object elements possible, see option 'properties' below --> </Signature> </[parent element]> In the enveloping XML signature case, the supported generated XML signature has the structure: <Signature Id="generated_unique_signature_id"> <SignedInfo> <Reference URI="#generated_unique_object_id" type="[optional_type_value]"> (<Transform>)* <!-- By default "http://www.w3.org/2006/12/xml-c14n11" is added to the transforms --> <DigestMethod> <DigestValue> </Reference> (<Reference URI="#[keyinfo_id]"> <Transform Algorithm="http://www.w3.org/TR/2001/REC-xml-c14n-20010315"/> <DigestMethod> <DigestValue> </Reference>)? <!-- further references possible, see option 'properties' below --> </SignedInfo> <SignatureValue> (<KeyInfo Id="[keyinfo_id]">)? 
<Object Id="generated_unique_object_id"/> <!-- The Object element contains the in-message body; the object ID can either be generated or set by the option parameter "contentObjectId" --> <!-- Further Object elements possible, see option 'properties' below --> </Signature> As of 2.14.0 detached XML signatures with the following structure are supported (see also sub-chapter XML Signatures as Siblings of Signed Elements): (<[signed element] Id="[id_value]"> <!-- signed element must have an attribute of type ID --> ... </[signed element]> <other sibling/>* <!-- between the signed element and the corresponding signature element, there can be other siblings. Signature element is added as last sibling. --> <Signature Id="generated_unique_ID"> <SignedInfo> <CanonicalizationMethod> <SignatureMethod> <Reference URI="#[id_value]" type="[optional_type_value]"> <!-- reference URI contains the ID attribute value of the signed element --> (<Transform>)* <!-- By default "http://www.w3.org/2006/12/xml-c14n11" is added to the transforms --> <DigestMethod> <DigestValue> </Reference> (<Reference URI="#[generated_keyinfo_Id]"> <Transform Algorithm="http://www.w3.org/TR/2001/REC-xml-c14n-20010315"/> <DigestMethod> <DigestValue> </Reference>)? </SignedInfo> <SignatureValue> (<KeyInfo Id="[generated_keyinfo_id]">)? </Signature>)+ 374.2. URI Format The camel component consists of two endpoints which have the following URI format: With the signer endpoint, you can generate a XML signature for the body of the in-message which can be either a XML document or a plain text. The enveloped, enveloping, or detached (as of 12.14) XML signature(s) will be set to the body of the out-message. With the verifier endpoint, you can validate an enveloped or enveloping XML signature or even several detached (as of 2.14.0) XML signatures contained in the body of the in-message; if the validation is successful, then the original content is extracted from the XML signature and set to the body of the out-message. The name part in the URI can be chosen by the user to distinguish between different signer/verifier endpoints within the camel context. 374.3. Basic Example The following example shows the basic usage of the component. from("direct:enveloping").to("xmlsecurity:sign://enveloping?keyAccessor=#accessor", "xmlsecurity:verify://enveloping?keySelector=#selector", "mock:result") In Spring XML: <from uri="direct:enveloping" /> <to uri="xmlsecurity:sign://enveloping?keyAccessor=#accessor" /> <to uri="xmlsecurity:verify://enveloping?keySelector=#selector" /> <to uri="mock:result" /> For the signing process, a private key is necessary. You specify a key accessor bean which provides this private key. For the validation, the corresponding public key is necessary; you specify a key selector bean which provides this public key. The key accessor bean must implement the KeyAccessor interface. The package org.apache.camel.component.xmlsecurity.api contains the default implementation class DefaultKeyAccessor which reads the private key from a Java keystore. The key selector bean must implement the javax.xml.crypto.KeySelector interface. The package org.apache.camel.component.xmlsecurity.api contains the default implementation class DefaultKeySelector which reads the public key from a keystore. In the example, the default signature algorithm http://www.w3.org/2000/09/xmldsig#rsa-sha1 is used. You can set the signature algorithm of your choice by the option signatureAlgorithm (see below). The signer endpoint creates an enveloping XML signature. 
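The following sketch shows one way to create and register the #accessor and #selector beans used in the basic example from a single Java keystore. It is only an illustration: the keystore file name, passwords, and alias are placeholders, the registry wiring is shown for a plain Java (non-Spring) setup, and the DefaultKeySelector setters are assumed to mirror those of DefaultKeyAccessor, so verify them against the API of your Camel version.

import java.io.FileInputStream;
import java.io.InputStream;
import java.security.KeyStore;

import org.apache.camel.CamelContext;
import org.apache.camel.builder.RouteBuilder;
import org.apache.camel.component.xmlsecurity.api.DefaultKeyAccessor;
import org.apache.camel.component.xmlsecurity.api.DefaultKeySelector;
import org.apache.camel.impl.DefaultCamelContext;
import org.apache.camel.impl.SimpleRegistry;

public class XmlSignatureSetupSketch {

    public static void main(String[] args) throws Exception {
        // Load the keystore holding the signer's private key and certificate
        // (file name and passwords are placeholders).
        KeyStore keyStore = KeyStore.getInstance("JKS");
        try (InputStream in = new FileInputStream("keystore.jks")) {
            keyStore.load(in, "storePassword".toCharArray());
        }

        // Key accessor for the signer endpoint: provides the private key.
        DefaultKeyAccessor accessor = new DefaultKeyAccessor();
        accessor.setKeyStore(keyStore);
        accessor.setPassword("keyPassword"); // password of the private key entry
        accessor.setAlias("signer");

        // Key selector for the verifier endpoint: provides the public key.
        // Assumption: DefaultKeySelector exposes setKeyStore/setAlias like DefaultKeyAccessor.
        DefaultKeySelector selector = new DefaultKeySelector();
        selector.setKeyStore(keyStore);
        selector.setAlias("signer");

        // Bind the beans under the names referenced as #accessor and #selector in the URIs.
        SimpleRegistry registry = new SimpleRegistry();
        registry.put("accessor", accessor);
        registry.put("selector", selector);

        CamelContext context = new DefaultCamelContext(registry);
        context.addRoutes(new RouteBuilder() {
            @Override
            public void configure() {
                from("direct:enveloping")
                    .to("xmlsecurity:sign://enveloping?keyAccessor=#accessor",
                        "xmlsecurity:verify://enveloping?keySelector=#selector",
                        "mock:result");
            }
        });
        context.start();
    }
}

In a Spring setup, the same two beans would simply be declared with bean elements and referenced by id, as in the Spring XML variant of the basic example above.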
If you want to create an enveloped XML signature then you must specify the parent element of the Signature element; see option parentLocalName for more details. For creating detached XML signatures, see sub-chapter "Detached XML Signatures as Siblings of the Signed Elements". 374.4. Component Options The XML Security component supports 3 options, which are listed below. Name Description Default Type signerConfiguration (advanced) To use a shared XmlSignerConfiguration configuration to use as base for configuring endpoints. XmlSignerConfiguration verifierConfiguration (advanced) To use a shared XmlVerifierConfiguration configuration to use as base for configuring endpoints. XmlVerifier Configuration resolveProperty Placeholders (advanced) Whether the component should resolve property placeholders on itself when starting. Only properties which are of String type can use property placeholders. true boolean 374.5. Endpoint Options The XML Security endpoint is configured using URI syntax: with the following path and query parameters: 374.5.1. Path Parameters (2 parameters): Name Description Default Type command Required Whether to sign or verify. XmlCommand name Required The name part in the URI can be chosen by the user to distinguish between different signer/verifier endpoints within the camel context. String 374.5.2. Query Parameters (35 parameters): Name Description Default Type baseUri (common) You can set a base URI which is used in the URI dereferencing. Relative URIs are then concatenated with the base URI. String clearHeaders (common) Determines if the XML signature specific headers be cleared after signing and verification. Defaults to true. true Boolean cryptoContextProperties (common) Sets the crypto context properties. See link XMLCryptoContext#setProperty(String, Object). Possible properties are defined in XMLSignContext an XMLValidateContext (see Supported Properties). The following properties are set by default to the value Boolean#TRUE for the XML validation. If you want to switch these features off you must set the property value to Boolean#FALSE. org.jcp.xml.dsig.validateManifests javax.xml.crypto.dsig.cacheReference Map disallowDoctypeDecl (common) Disallows that the incoming XML document contains DTD DOCTYPE declaration. The default value is Boolean#TRUE. true Boolean omitXmlDeclaration (common) Indicator whether the XML declaration in the outgoing message body should be omitted. Default value is false. Can be overwritten by the header XmlSignatureConstants#HEADER_OMIT_XML_DECLARATION. false Boolean outputXmlEncoding (common) The character encoding of the resulting signed XML document. If null then the encoding of the original XML document is used. String schemaResourceUri (common) Classpath to the XML Schema. Must be specified in the detached XML Signature case for determining the ID attributes, might be set in the enveloped and enveloping case. If set, then the XML document is validated with the specified XML schema. The schema resource URI can be overwritten by the header XmlSignatureConstants#HEADER_SCHEMA_RESOURCE_URI. String synchronous (advanced) Sets whether synchronous processing should be strictly used, or Camel is allowed to use asynchronous processing (if supported). false boolean uriDereferencer (advanced) If you want to restrict the remote access via reference URIs, you can set an own dereferencer. Optional parameter. If not set the provider default dereferencer is used which can resolve URI fragments, HTTP, file and XPpointer URIs. 
Attention: The implementation is provider dependent! URIDereferencer addKeyInfoReference (sign) In order to protect the KeyInfo element from tampering you can add a reference to the signed info element so that it is protected via the signature value. The default value is true. Only relevant when a KeyInfo is returned by KeyAccessor. and KeyInfo#getId() is not null. true Boolean canonicalizationMethod (sign) Canonicalization method used to canonicalize the SignedInfo element before the digest is calculated. You can use the helper methods XmlSignatureHelper.getCanonicalizationMethod(String algorithm) or getCanonicalizationMethod(String algorithm, List inclusiveNamespacePrefixes) to create a canonicalization method. http://www.w3.org/TR/2001/REC-xml-c14n-20010315 AlgorithmMethod contentObjectId (sign) Sets the content object Id attribute value. By default a UUID is generated. If you set the null value, then a new UUID will be generated. Only used in the enveloping case. String contentReferenceType (sign) Type of the content reference. The default value is null. This value can be overwritten by the header XmlSignatureConstants#HEADER_CONTENT_REFERENCE_TYPE. String contentReferenceUri (sign) Reference URI for the content to be signed. Only used in the enveloped case. If the reference URI contains an ID attribute value, then the resource schema URI ( setSchemaResourceUri(String)) must also be set because the schema validator will then find out which attributes are ID attributes. Will be ignored in the enveloping or detached case. String digestAlgorithm (sign) Digest algorithm URI. Optional parameter. This digest algorithm is used for calculating the digest of the input message. If this digest algorithm is not specified then the digest algorithm is calculated from the signature algorithm. Example: http://www.w3.org/2001/04/xmlenc#sha256 String keyAccessor (sign) For the signing process, a private key is necessary. You specify a key accessor bean which provides this private key. The key accessor bean must implement the KeyAccessor interface. The package org.apache.camel.component.xmlsecurity.api contains the default implementation class DefaultKeyAccessor which reads the private key from a Java keystore. KeyAccessor parentLocalName (sign) Local name of the parent element to which the XML signature element will be added. Only relevant for enveloped XML signature. Alternatively you can also use setParentXpath(XPathFilterParameterSpec). Default value is null. The value must be null for enveloping and detached XML signature. This parameter or the parameter setParentXpath(XPathFilterParameterSpec) for enveloped signature and the parameter setXpathsToIdAttributes(List) for detached signature must not be set in the same configuration. If the parameters parentXpath and parentLocalName are specified in the same configuration then an exception is thrown. String parentNamespace (sign) Namespace of the parent element to which the XML signature element will be added. String parentXpath (sign) Sets the XPath to find the parent node in the enveloped case. Either you specify the parent node via this method or the local name and namespace of the parent with the methods setParentLocalName(String) and setParentNamespace(String). Default value is null. The value must be null for enveloping and detached XML signature. If the parameters parentXpath and parentLocalName are specified in the same configuration then an exception is thrown. 
XPathFilterParameter Spec plainText (sign) Indicator whether the message body contains plain text. The default value is false, indicating that the message body contains XML. The value can be overwritten by the header XmlSignatureConstants#HEADER_MESSAGE_IS_PLAIN_TEXT. false Boolean plainTextEncoding (sign) Encoding of the plain text. Only relevant if the message body is plain text (see parameter plainText. Default value is UTF-8. UTF-8 String prefixForXmlSignature Namespace (sign) Namespace prefix for the XML signature namespace http://www.w3.org/2000/09/xmldsig# . Default value is ds. If null or an empty value is set then no prefix is used for the XML signature namespace. See best practice http://www.w3.org/TR/xmldsig-bestpractices/#signing-xml- without-namespaces ds String properties (sign) For adding additional References and Objects to the XML signature which contain additional properties, you can provide a bean which implements the XmlSignatureProperties interface. XmlSignatureProperties signatureAlgorithm (sign) Signature algorithm. Default value is http://www.w3.org/2000/09/xmldsig#rsa-sha1 . http://www.w3.org/2000/09/xmldsig#rsa-sha1 String signatureId (sign) Sets the signature Id. If this parameter is not set (null value) then a unique ID is generated for the signature ID (default). If this parameter is set to (empty string) then no Id attribute is created in the signature element. String transformMethods (sign) Transforms which are executed on the message body before the digest is calculated. By default, C14n is added and in the case of enveloped signature (see option parentLocalName) also http://www.w3.org/2000/09/xmldsig#enveloped-signature is added at position 0 of the list. Use methods in XmlSignatureHelper to create the transform methods. List xpathsToIdAttributes (sign) Define the elements which are signed in the detached case via XPATH expressions to ID attributes (attributes of type ID). For each element found via the XPATH expression a detached signature is created whose reference URI contains the corresponding attribute value (preceded by '#'). The signature becomes the last sibling of the signed element. Elements with deeper hierarchy level are signed first. You can also set the XPATH list dynamically via the header XmlSignatureConstants#HEADER_XPATHS_TO_ID_ATTRIBUTES. The parameter setParentLocalName(String) or setParentXpath(XPathFilterParameterSpec) for enveloped signature and this parameter for detached signature must not be set in the same configuration. List keySelector (verify) Provides the key for validating the XML signature. KeySelector outputNodeSearch (verify) Sets the output node search value for determining the node from the XML signature document which shall be set to the output message body. The class of the value depends on the type of the output node search. The output node search is forwarded to XmlSignature2Message. String outputNodeSearchType (verify) Determines the search type for determining the output node which is serialized into the output message bodyF. See setOutputNodeSearch(Object). The supported default search types you can find in DefaultXmlSignature2Message. Default String removeSignatureElements (verify) Indicator whether the XML signature elements (elements with local name Signature and namesapce http://www.w3.org/2000/09/xmldsig# ) shall be removed from the document set to the output message. Normally, this is only necessary, if the XML signature is enveloped. The default value is Boolean#FALSE. 
This parameter is forwarded to XmlSignature2Message. This indicator has no effect if the output node search is of type DefaultXmlSignature2Message#OUTPUT_NODE_SEARCH_TYPE_DEFAULT.F false Boolean secureValidation (verify) Enables secure validation. If true then secure validation is enabled. true Boolean validationFailedHandler (verify) Handles the different validation failed situations. The default implementation throws specific exceptions for the different situations (All exceptions have the package name org.apache.camel.component.xmlsecurity.api and are a sub-class of XmlSignatureInvalidException. If the signature value validation fails, a XmlSignatureInvalidValueException is thrown. If a reference validation fails, a XmlSignatureInvalidContentHashException is thrown. For more detailed information, see the JavaDoc. ValidationFailedHandler xmlSignature2Message (verify) Bean which maps the XML signature to the output-message after the validation. How this mapping should be done can be configured by the options outputNodeSearchType, outputNodeSearch, and removeSignatureElements. The default implementation offers three possibilities which are related to the three output node search types Default, ElementName, and XPath. The default implementation determines a node which is then serialized and set to the body of the output message If the search type is ElementName then the output node (which must be in this case an element) is determined by the local name and namespace defined in the search value (see option outputNodeSearch). If the search type is XPath then the output node is determined by the XPath specified in the search value (in this case the output node can be of type Element, TextNode or Document). If the output node search type is Default then the following rules apply: In the enveloped XML signature case (there is a reference with URI= and transform http://www.w3.org/2000/09/xmldsig#enveloped-signature ), the incoming XML document without the Signature element is set to the output message body. In the non-enveloped XML signature case, the message body is determined from a referenced Object; this is explained in more detail in chapter Output Node Determination in Enveloping XML Signature Case. XmlSignature2Message xmlSignatureChecker (verify) This interface allows the application to check the XML signature before the validation is executed. This step is recommended in http://www.w3.org/TR/xmldsig-bestpractices/#check-what-is-signed XmlSignatureChecker 374.6. Spring Boot Auto-Configuration The component supports 63 options, which are listed below. Name Description Default Type camel.component.xmlsecurity.enabled Enable xmlsecurity component true Boolean camel.component.xmlsecurity.resolve-property-placeholders Whether the component should resolve property placeholders on itself when starting. Only properties which are of String type can use property placeholders. true Boolean camel.component.xmlsecurity.signer-configuration.add-key-info-reference In order to protect the KeyInfo element from tampering you can add a reference to the signed info element so that it is protected via the signature value. The default value is true. Only relevant when a KeyInfo is returned by KeyAccessor. and KeyInfo#getId() is not null. true Boolean camel.component.xmlsecurity.signer-configuration.base-uri You can set a base URI which is used in the URI dereferencing. Relative URIs are then concatenated with the base URI. 
String camel.component.xmlsecurity.signer-configuration.canonicalization-method Canonicalization method used to canonicalize the SignedInfo element before the digest is calculated. You can use the helper methods XmlSignatureHelper.getCanonicalizationMethod(String algorithm) or getCanonicalizationMethod(String algorithm, List inclusiveNamespacePrefixes) to create a canonicalization method. AlgorithmMethod camel.component.xmlsecurity.signer-configuration.canonicalization-method-name String camel.component.xmlsecurity.signer-configuration.clear-headers Determines if the XML signature specific headers be cleared after signing and verification. Defaults to true. true Boolean camel.component.xmlsecurity.signer-configuration.content-object-id Sets the content object Id attribute value. By default a UUID is generated. If you set the null value, then a new UUID will be generated. Only used in the enveloping case. String camel.component.xmlsecurity.signer-configuration.content-reference-type Type of the content reference. The default value is null. This value can be overwritten by the header XmlSignatureConstants#HEADER_CONTENT_REFERENCE_TYPE. String camel.component.xmlsecurity.signer-configuration.content-reference-uri Reference URI for the content to be signed. Only used in the enveloped case. If the reference URI contains an ID attribute value, then the resource schema URI ( setSchemaResourceUri(String)) must also be set because the schema validator will then find out which attributes are ID attributes. Will be ignored in the enveloping or detached case. String camel.component.xmlsecurity.signer-configuration.crypto-context-properties Sets the crypto context properties. See link XMLCryptoContext#setProperty(String, Object). Possible properties are defined in XMLSignContext an XMLValidateContext (see Supported Properties). The following properties are set by default to the value Boolean#TRUE for the XML validation. If you want to switch these features off you must set the property value to Boolean#FALSE. org.jcp.xml.dsig.validateManifests javax.xml.crypto.dsig.cacheReference Map camel.component.xmlsecurity.signer-configuration.digest-algorithm Digest algorithm URI. Optional parameter. This digest algorithm is used for calculating the digest of the input message. If this digest algorithm is not specified then the digest algorithm is calculated from the signature algorithm. Example: http://www.w3.org/2001/04/xmlenc#sha256 String camel.component.xmlsecurity.signer-configuration.disallow-doctype-decl Disallows that the incoming XML document contains DTD DOCTYPE declaration. The default value is Boolean#TRUE. true Boolean camel.component.xmlsecurity.signer-configuration.key-accessor For the signing process, a private key is necessary. You specify a key accessor bean which provides this private key. The key accessor bean must implement the KeyAccessor interface. The package org.apache.camel.component.xmlsecurity.api contains the default implementation class DefaultKeyAccessor which reads the private key from a Java keystore. KeyAccessor camel.component.xmlsecurity.signer-configuration.key-accessor-name String camel.component.xmlsecurity.signer-configuration.omit-xml-declaration Indicator whether the XML declaration in the outgoing message body should be omitted. Default value is false. Can be overwritten by the header XmlSignatureConstants#HEADER_OMIT_XML_DECLARATION. 
false Boolean camel.component.xmlsecurity.signer-configuration.output-xml-encoding The character encoding of the resulting signed XML document. If null then the encoding of the original XML document is used. String camel.component.xmlsecurity.signer-configuration.parent-local-name Local name of the parent element to which the XML signature element will be added. Only relevant for enveloped XML signature. Alternatively you can also use setParentXpath(XPathFilterParameterSpec). Default value is null. The value must be null for enveloping and detached XML signature. This parameter or the parameter setParentXpath(XPathFilterParameterSpec) for enveloped signature and the parameter setXpathsToIdAttributes(List) for detached signature must not be set in the same configuration. If the parameters parentXpath and parentLocalName are specified in the same configuration then an exception is thrown. String camel.component.xmlsecurity.signer-configuration.parent-namespace Namespace of the parent element to which the XML signature element will be added. String camel.component.xmlsecurity.signer-configuration.parent-xpath Sets the XPath to find the parent node in the enveloped case. Either you specify the parent node via this method or the local name and namespace of the parent with the methods setParentLocalName(String) and setParentNamespace(String). Default value is null. The value must be null for enveloping and detached XML signature. If the parameters parentXpath and parentLocalName are specified in the same configuration then an exception is thrown. XPathFilterParameter Spec camel.component.xmlsecurity.signer-configuration.plain-text Indicator whether the message body contains plain text. The default value is false, indicating that the message body contains XML. The value can be overwritten by the header XmlSignatureConstants#HEADER_MESSAGE_IS_PLAIN_TEXT. false Boolean camel.component.xmlsecurity.signer-configuration.plain-text-encoding Encoding of the plain text. Only relevant if the message body is plain text (see parameter plainText. Default value is UTF-8. UTF-8 String camel.component.xmlsecurity.signer-configuration.prefix-for-xml-signature-namespace Namespace prefix for the XML signature namespace http://www.w3.org/2000/09/xmldsig# . Default value is ds. If null or an empty value is set then no prefix is used for the XML signature namespace. See best practice http://www.w3.org/TR/xmldsig-bestpractices/#signing-xml- without-namespaces ds String camel.component.xmlsecurity.signer-configuration.properties For adding additional References and Objects to the XML signature which contain additional properties, you can provide a bean which implements the XmlSignatureProperties interface. XmlSignatureProperties camel.component.xmlsecurity.signer-configuration.properties-name String camel.component.xmlsecurity.signer-configuration.schema-resource-uri Classpath to the XML Schema. Must be specified in the detached XML Signature case for determining the ID attributes, might be set in the enveloped and enveloping case. If set, then the XML document is validated with the specified XML schema. The schema resource URI can be overwritten by the header XmlSignatureConstants#HEADER_SCHEMA_RESOURCE_URI. String camel.component.xmlsecurity.signer-configuration.signature-algorithm Signature algorithm. Default value is http://www.w3.org/2000/09/xmldsig#rsa-sha1 . http://www.w3.org/2000/09/xmldsig#rsa-sha1 String camel.component.xmlsecurity.signer-configuration.signature-id Sets the signature Id. 
If this parameter is not set (null value) then a unique ID is generated for the signature ID (default). If this parameter is set to (empty string) then no Id attribute is created in the signature element. String camel.component.xmlsecurity.signer-configuration.transform-methods Transforms which are executed on the message body before the digest is calculated. By default, C14n is added and in the case of enveloped signature (see option parentLocalName) also http://www.w3.org/2000/09/xmldsig#enveloped-signature is added at position 0 of the list. Use methods in XmlSignatureHelper to create the transform methods. List camel.component.xmlsecurity.signer-configuration.transform-methods-name String camel.component.xmlsecurity.signer-configuration.uri-dereferencer If you want to restrict the remote access via reference URIs, you can set an own dereferencer. Optional parameter. If not set the provider default dereferencer is used which can resolve URI fragments, HTTP, file and XPpointer URIs. Attention: The implementation is provider dependent! URIDereferencer camel.component.xmlsecurity.signer-configuration.xpaths-to-id-attributes Define the elements which are signed in the detached case via XPATH expressions to ID attributes (attributes of type ID). For each element found via the XPATH expression a detached signature is created whose reference URI contains the corresponding attribute value (preceded by '#'). The signature becomes the last sibling of the signed element. Elements with deeper hierarchy level are signed first. You can also set the XPATH list dynamically via the header XmlSignatureConstants#HEADER_XPATHS_TO_ID_ATTRIBUTES. The parameter setParentLocalName(String) or setParentXpath(XPathFilterParameterSpec) for enveloped signature and this parameter for detached signature must not be set in the same configuration. List camel.component.xmlsecurity.verifier-configuration.base-uri You can set a base URI which is used in the URI dereferencing. Relative URIs are then concatenated with the base URI. String camel.component.xmlsecurity.verifier-configuration.clear-headers Determines if the XML signature specific headers be cleared after signing and verification. Defaults to true. true Boolean camel.component.xmlsecurity.verifier-configuration.crypto-context-properties Sets the crypto context properties. See link XMLCryptoContext#setProperty(String, Object). Possible properties are defined in XMLSignContext an XMLValidateContext (see Supported Properties). The following properties are set by default to the value Boolean#TRUE for the XML validation. If you want to switch these features off you must set the property value to Boolean#FALSE. org.jcp.xml.dsig.validateManifests javax.xml.crypto.dsig.cacheReference Map camel.component.xmlsecurity.verifier-configuration.disallow-doctype-decl Disallows that the incoming XML document contains DTD DOCTYPE declaration. The default value is Boolean#TRUE. true Boolean camel.component.xmlsecurity.verifier-configuration.key-selector Provides the key for validating the XML signature. KeySelector camel.component.xmlsecurity.verifier-configuration.omit-xml-declaration Indicator whether the XML declaration in the outgoing message body should be omitted. Default value is false. Can be overwritten by the header XmlSignatureConstants#HEADER_OMIT_XML_DECLARATION. 
false Boolean camel.component.xmlsecurity.verifier-configuration.output-node-search Sets the output node search value for determining the node from the XML signature document which shall be set to the output message body. The class of the value depends on the type of the output node search. The output node search is forwarded to XmlSignature2Message. Object camel.component.xmlsecurity.verifier-configuration.output-node-search-type Determines the search type for determining the output node which is serialized into the output message bodyF. See setOutputNodeSearch(Object). The supported default search types you can find in DefaultXmlSignature2Message. Default String camel.component.xmlsecurity.verifier-configuration.output-xml-encoding The character encoding of the resulting signed XML document. If null then the encoding of the original XML document is used. String camel.component.xmlsecurity.verifier-configuration.remove-signature-elements Indicator whether the XML signature elements (elements with local name Signature and namesapce http://www.w3.org/2000/09/xmldsig# ) shall be removed from the document set to the output message. Normally, this is only necessary, if the XML signature is enveloped. The default value is Boolean#FALSE. This parameter is forwarded to XmlSignature2Message. This indicator has no effect if the output node search is of type DefaultXmlSignature2Message#OUTPUT_NODE_SEARCH_TYPE_DEFAULT.F false Boolean camel.component.xmlsecurity.verifier-configuration.schema-resource-uri Classpath to the XML Schema. Must be specified in the detached XML Signature case for determining the ID attributes, might be set in the enveloped and enveloping case. If set, then the XML document is validated with the specified XML schema. The schema resource URI can be overwritten by the header XmlSignatureConstants#HEADER_SCHEMA_RESOURCE_URI. String camel.component.xmlsecurity.verifier-configuration.secure-validation Enables secure validation. If true then secure validation is enabled. true Boolean camel.component.xmlsecurity.verifier-configuration.uri-dereferencer If you want to restrict the remote access via reference URIs, you can set an own dereferencer. Optional parameter. If not set the provider default dereferencer is used which can resolve URI fragments, HTTP, file and XPpointer URIs. Attention: The implementation is provider dependent! URIDereferencer camel.component.xmlsecurity.verifier-configuration.validation-failed-handler Handles the different validation failed situations. The default implementation throws specific exceptions for the different situations (All exceptions have the package name org.apache.camel.component.xmlsecurity.api and are a sub-class of XmlSignatureInvalidException. If the signature value validation fails, a XmlSignatureInvalidValueException is thrown. If a reference validation fails, a XmlSignatureInvalidContentHashException is thrown. For more detailed information, see the JavaDoc. ValidationFailedHandler camel.component.xmlsecurity.verifier-configuration.validation-failed-handler-name Name of handler to @param validationFailedHandlerName String camel.component.xmlsecurity.verifier-configuration.xml-signature-checker This interface allows the application to check the XML signature before the validation is executed. 
This step is recommended in http://www.w3.org/TR/xmldsig-bestpractices/#check-what-is-signed XmlSignatureChecker camel.component.xmlsecurity.verifier-configuration.xml-signature2-message Bean which maps the XML signature to the output-message after the validation. How this mapping should be done can be configured by the options outputNodeSearchType, outputNodeSearch, and removeSignatureElements. The default implementation offers three possibilities which are related to the three output node search types Default, ElementName, and XPath. The default implementation determines a node which is then serialized and set to the body of the output message If the search type is ElementName then the output node (which must be in this case an element) is determined by the local name and namespace defined in the search value (see option outputNodeSearch). If the search type is XPath then the output node is determined by the XPath specified in the search value (in this case the output node can be of type Element, TextNode or Document). If the output node search type is Default then the following rules apply: In the enveloped XML signature case (there is a reference with URI= and transform http://www.w3.org/2000/09/xmldsig#enveloped-signature ), the incoming XML document without the Signature element is set to the output message body. In the non-enveloped XML signature case, the message body is determined from a referenced Object; this is explained in more detail in chapter Output Node Determination in Enveloping XML Signature Case. XmlSignature2Message camel.dataformat.securexml.add-key-value-for-encrypted-key Whether to add the public key used to encrypt the session key as a KeyValue in the EncryptedKey structure or not. true Boolean camel.dataformat.securexml.content-type-header Whether the data format should set the Content-Type header with the type from the data format if the data format is capable of doing so. For example application/xml for data formats marshalling to XML, or application/json for data formats marshalling to JSon etc. false Boolean camel.dataformat.securexml.digest-algorithm The digest algorithm to use with the RSA OAEP algorithm. The available choices are: XMLCipher.SHA1 XMLCipher.SHA256 XMLCipher.SHA512 The default value is XMLCipher.SHA1 SHA1 String camel.dataformat.securexml.enabled Enable securexml dataformat true Boolean camel.dataformat.securexml.key-cipher-algorithm The cipher algorithm to be used for encryption/decryption of the asymmetric key. The available choices are: XMLCipher.RSA_v1dot5 XMLCipher.RSA_OAEP XMLCipher.RSA_OAEP_11 The default value is XMLCipher.RSA_OAEP RSA_OAEP String camel.dataformat.securexml.key-or-trust-store-parameters-id Refers to a KeyStore instance to lookup in the registry, which is used for configuration options for creating and loading a KeyStore instance that represents the sender's trustStore or recipient's keyStore. String camel.dataformat.securexml.key-password The password to be used for retrieving the private key from the KeyStore. This key is used for asymmetric decryption. String camel.dataformat.securexml.mgf-algorithm The MGF Algorithm to use with the RSA OAEP algorithm. The available choices are: EncryptionConstants.MGF1_SHA1 EncryptionConstants.MGF1_SHA256 EncryptionConstants.MGF1_SHA512 The default value is EncryptionConstants.MGF1_SHA1 MGF1_SHA1 String camel.dataformat.securexml.pass-phrase A String used as passPhrase to encrypt/decrypt content. The passPhrase has to be provided. 
If no passPhrase is specified, a default passPhrase is used. The passPhrase needs to be put together in conjunction with the appropriate encryption algorithm. For example using TRIPLEDES the passPhase can be a Only another 24 Byte key String camel.dataformat.securexml.pass-phrase-byte A byte used as passPhrase to encrypt/decrypt content. The passPhrase has to be provided. If no passPhrase is specified, a default passPhrase is used. The passPhrase needs to be put together in conjunction with the appropriate encryption algorithm. For example using TRIPLEDES the passPhase can be a Only another 24 Byte key Byte[] camel.dataformat.securexml.recipient-key-alias The key alias to be used when retrieving the recipient's public or private key from a KeyStore when performing asymmetric key encryption or decryption. String camel.dataformat.securexml.secure-tag The XPath reference to the XML Element selected for encryption/decryption. If no tag is specified, the entire payload is encrypted/decrypted. String camel.dataformat.securexml.secure-tag-contents A boolean value to specify whether the XML Element is to be encrypted or the contents of the XML Element false = Element Level true = Element Content Level false Boolean camel.dataformat.securexml.xml-cipher-algorithm The cipher algorithm to be used for encryption/decryption of the XML message content. The available choices are: XMLCipher.TRIPLEDES XMLCipher.AES_128 XMLCipher.AES_128_GCM XMLCipher.AES_192 XMLCipher.AES_192_GCM XMLCipher.AES_256 XMLCipher.AES_256_GCM XMLCipher.SEED_128 XMLCipher.CAMELLIA_128 XMLCipher.CAMELLIA_192 XMLCipher.CAMELLIA_256 The default value is MLCipher.TRIPLEDES TRIPLEDES String 374.6.1. Output Node Determination in Enveloping XML Signature Case After the validation the node is extracted from the XML signature document which is finally returned to the output-message body. In the enveloping XML signature case, the default implementation DefaultXmlSignature2Message of XmlSignature2Message does this for the node search type Default in the following way (see option xmlSignature2Message ): First an object reference is determined: Only same document references are taken into account (URI must start with # ) Also indirect same document references to an object via manifest are taken into account. The resulting number of object references must be 1. Then, the object is dereferenced and the object must only contain one XML element. This element is returned as output node. This does mean that the enveloping XML signature must have either the structure: <Signature> <SignedInfo> <Reference URI="#object"/> <!-- further references possible but they must not point to an Object or Manifest containing an object reference --> ... </SignedInfo> <Object Id="object"> <!-- contains one XML element which is extracted to the message body --> <Object> <!-- further object elements possible which are not referenced--> ... (<KeyInfo>)? </Signature> or the structure: <Signature> <SignedInfo> <Reference URI="#manifest"/> <!-- further references are possible but they must not point to an Object or other manifest containing an object reference --> ... </SignedInfo> <Object > <Manifest Id="manifest"> <Reference URI=#object/> </Manifest> </Objet> <Object Id="object"> <!-- contains the DOM node which is extracted to the message body --> </Object> <!-- further object elements possible which are not referenced --> ... (<KeyInfo>)? </Signature> 374.7. 
Detached XML Signatures as Siblings of the Signed Elements Since 2.14.0 You can create detached signatures where the signature is a sibling of the signed element. The following example contains two detached signatures. The first signature is for the element C and the second signature is for element A . The signatures are nested ; the second signature is for the element A which also contains the first signature. Example Detached XML Signatures <?xml version="1.0" encoding="UTF-8" ?> <root> <A ID="IDforA"> <B> <C ID="IDforC"> <D>dvalue</D> </C> <ds:Signature xmlns:ds="http://www.w3.org/2000/09/xmldsig#" Id="_6bf13099-0568-4d76-8649-faf5dcb313c0"> <ds:SignedInfo> <ds:CanonicalizationMethod Algorithm="http://www.w3.org/TR/2001/REC-xml-c14n-20010315" /> <ds:SignatureMethod Algorithm="http://www.w3.org/2000/09/xmldsig#rsa-sha1" /> <ds:Reference URI="#IDforC"> ... </ds:Reference> </ds:SignedInfo> <ds:SignatureValue>aUDFmiG71</ds:SignatureValue> </ds:Signature> </B> </A> <ds:Signature xmlns:ds="http://www.w3.org/2000/09/xmldsig#"Id="_6b02fb8a-30df-42c6-ba25-76eba02c8214"> <ds:SignedInfo> <ds:CanonicalizationMethod Algorithm="http://www.w3.org/TR/2001/REC-xml-c14n-20010315" /> <ds:SignatureMethod Algorithm="http://www.w3.org/2000/09/xmldsig#rsa-sha1" /> <ds:Reference URI="#IDforA"> ... </ds:Reference> </ds:SignedInfo> <ds:SignatureValue>q3tvRoGgc8cMUqUSzP6C21zb7tt04riPnDuk=</ds:SignatureValue> </ds:Signature> <root> The example shows that you can sign several elements and that for each element a signature is created as sibling. The elements to be signed must have an attribute of type ID. The ID type of the attribute must be defined in the XML schema (see option schemaResourceUri ). You specify a list of XPATH expressions pointing to attributes of type ID (see option xpathsToIdAttributes ). These attributes determine the elements to be signed. The elements are signed by the same key given by the keyAccessor bean. Elements with higher (i.e. deeper) hierarchy level are signed first. In the example, the element C is signed before the element A . Java DSL Example from("direct:detached") .to("xmlsecurity:sign://detached?keyAccessor=#keyAccessorBeant&xpathsToIdAttributes=#xpathsToIdAttributesBean&schemaResourceUri=Test.xsd") .to("xmlsecurity:verify://detached?keySelector=#keySelectorBean&schemaResourceUri=org/apache/camel/component/xmlsecurity/Test.xsd") .to("mock:result"); Spring Example <bean id="xpathsToIdAttributesBean" class="java.util.ArrayList"> <constructor-arg type="java.util.Collection"> <list> <bean class="org.apache.camel.component.xmlsecurity.api.XmlSignatureHelper" factory-method="getXpathFilter"> <constructor-arg type="java.lang.String" value="/ns:root/a/@ID" /> <constructor-arg> <map key-type="java.lang.String" value-type="java.lang.String"> <entry key="ns" value="http://test" /> </map> </constructor-arg> </bean> </list> </constructor-arg> </bean> ... <from uri="direct:detached" /> <to uri="xmlsecurity:sign://detached?keyAccessor=#keyAccessorBean&xpathsToIdAttributes=#xpathsToIdAttributesBean&schemaResourceUri=Test.xsd" /> <to uri="xmlsecurity:verify://detached?keySelector=#keySelectorBean&schemaResourceUri=Test.xsd" /> <to uri="mock:result" /> 374.8. XAdES-BES/EPES for the Signer Endpoint Available as of Camel 2.15.0 XML Advanced Electronic Signatures (XAdES) defines extensions to XML Signature. 
This standard was defined by the European Telecommunication Standards Institute and allows you to create signatures which are compliant to the European Union Directive (1999/93/EC) on a Community framework for electronic signatures . XAdES defines different sets of signature properties which are called signature forms. We support the signature forms Basic Electronic Signature (XAdES-BES) and Explicit Policy Based Electronic Signature (XAdES-EPES) for the Signer Endpoint. The forms Electronic Signature with Validation Data XAdES-T and XAdES-C are not supported. We support the following properties of the XAdES-EPES form ("?" denotes zero or one occurrence): Supported XAdES-EPES Properties <QualifyingProperties Target> <SignedProperties> <SignedSignatureProperties> (SigningTime)? (SigningCertificate)? (SignaturePolicyIdentifier) (SignatureProductionPlace)? (SignerRole)? </SignedSignatureProperties> <SignedDataObjectProperties> (DataObjectFormat)? (CommitmentTypeIndication)? </SignedDataObjectProperties> </SignedProperties> </QualifyingProperties> The properties of the XAdES-BES form are the same except that the SignaturePolicyIdentifier property is not part of XAdES-BES. You can configure the XAdES-BES/EPES properties via the bean org.apache.camel.component.xmlsecurity.api.XAdESSignatureProperties or org.apache.camel.component.xmlsecurity.api.DefaultXAdESSignatureProperties. XAdESSignatureProperties does support all properties mentioned above except the SigningCertificate property. To get the SigningCertificate property, you must overwrite either the method XAdESSignatureProperties.getSigningCertificate() or XAdESSignatureProperties.getSigningCertificateChain() . The class DefaultXAdESSignatureProperties overwrites the method getSigningCertificate() and allows you to specify the signing certificate via a keystore and alias. The following example shows all parameters you can specify. If you do not need certain parameters you can just omit them. XAdES-BES/EPES Example in Java DSL Keystore keystore = ... 
// load a keystore DefaultKeyAccessor accessor = new DefaultKeyAccessor(); accessor.setKeyStore(keystore); accessor.setPassword("password"); accessor.setAlias("cert_alias"); // signer key alias DefaultXAdESSignatureProperties props = new DefaultXAdESSignatureProperties(); props.setNamespace("http://uri.etsi.org/01903/v1.3.2#"); // sets the namespace for the XAdES elements; the namspace is related to the XAdES version, default value is "http://uri.etsi.org/01903/v1.3.2#", other possible values are "http://uri.etsi.org/01903/v1.1.1#" and "http://uri.etsi.org/01903/v1.2.2#" props.setPrefix("etsi"); // sets the prefix for the XAdES elements, default value is "etsi" // signing certificate props.setKeystore(keystore)); props.setAlias("cert_alias"); // specify the alias of the signing certificate in the keystore = signer key alias props.setDigestAlgorithmForSigningCertificate(DigestMethod.SHA256); // possible values for the algorithm are "http://www.w3.org/2000/09/xmldsig#sha1", "http://www.w3.org/2001/04/xmlenc#sha256", "http://www.w3.org/2001/04/xmldsig-more#sha384", "http://www.w3.org/2001/04/xmlenc#sha512", default value is "http://www.w3.org/2001/04/xmlenc#sha256" props.setSigningCertificateURIs(Collections.singletonList("http://certuri")); // signing time props.setAddSigningTime(true); // policy props.setSignaturePolicy(XAdESSignatureProperties.SIG_POLICY_EXPLICIT_ID); // also the values XAdESSignatureProperties.SIG_POLICY_NONE ("None"), and XAdESSignatureProperties.SIG_POLICY_IMPLIED ("Implied")are possible, default value is XAdESSignatureProperties.SIG_POLICY_EXPLICIT_ID ("ExplicitId") // For "None" and "Implied" you must not specify any further policy parameters props.setSigPolicyId("urn:oid:1.2.840.113549.1.9.16.6.1"); props.setSigPolicyIdQualifier("OIDAsURN"); //allowed values are empty string, "OIDAsURI", "OIDAsURN"; default value is empty string props.setSigPolicyIdDescription("invoice version 3.1"); props.setSignaturePolicyDigestAlgorithm(DigestMethod.SHA256);// possible values for the algorithm are "http://www.w3.org/2000/09/xmldsig#sha1", http://www.w3.org/2001/04/xmlenc#sha256", "http://www.w3.org/2001/04/xmldsig-more#sha384", "http://www.w3.org/2001/04/xmlenc#sha512", default value is http://www.w3.org/2001/04/xmlenc#sha256" props.setSignaturePolicyDigestValue("Ohixl6upD6av8N7pEvDABhEL6hM="); // you can add qualifiers for the signature policy either by specifying text or an XML fragment with the root element "SigPolicyQualifier" props.setSigPolicyQualifiers(Arrays .asList(new String[] { "<SigPolicyQualifier xmlns=\"http://uri.etsi.org/01903/v1.3.2#\"><SPURI>http://test.com/sig.policy.pdf</SPURI><SPUserNotice><ExplicitText>display text</ExplicitText>" + "</SPUserNotice></SigPolicyQualifier>", "category B" })); props.setSigPolicyIdDocumentationReferences(Arrays.asList(new String[] {"http://test.com/policy.doc.ref1.txt", "http://test.com/policy.doc.ref2.txt" })); // production place props.setSignatureProductionPlaceCity("Munich"); props.setSignatureProductionPlaceCountryName("Germany"); props.setSignatureProductionPlacePostalCode("80331"); props.setSignatureProductionPlaceStateOrProvince("Bavaria"); //role // you can add claimed roles either by specifying text or an XML fragment with the root element "ClaimedRole" props.setSignerClaimedRoles(Arrays.asList(new String[] {"test", "<a:ClaimedRole xmlns:a=\"http://uri.etsi.org/01903/v1.3.2#\"><TestRole>TestRole</TestRole></a:ClaimedRole>" })); props.setSignerCertifiedRoles(Collections.singletonList(new 
XAdESEncapsulatedPKIData("Ahixl6upD6av8N7pEvDABhEL6hM=", "http://uri.etsi.org/01903/v1.2.2#DER", "IdCertifiedRole"))); // data object format props.setDataObjectFormatDescription("invoice"); props.setDataObjectFormatMimeType("text/xml"); props.setDataObjectFormatIdentifier("urn:oid:1.2.840.113549.1.9.16.6.2"); props.setDataObjectFormatIdentifierQualifier("OIDAsURN"); //allowed values are empty string, "OIDAsURI", "OIDAsURN"; default value is empty string props.setDataObjectFormatIdentifierDescription("identifier desc"); props.setDataObjectFormatIdentifierDocumentationReferences(Arrays.asList(new String[] { "http://test.com/dataobject.format.doc.ref1.txt", "http://test.com/dataobject.format.doc.ref2.txt" })); //commitment props.setCommitmentTypeId("urn:oid:1.2.840.113549.1.9.16.6.4"); props.setCommitmentTypeIdQualifier("OIDAsURN"); //allowed values are empty string, "OIDAsURI", "OIDAsURN"; default value is empty string props.setCommitmentTypeIdDescription("description for commitment type ID"); props.setCommitmentTypeIdDocumentationReferences(Arrays.asList(new String[] {"http://test.com/commitment.ref1.txt", "http://test.com/commitment.ref2.txt" })); // you can specify a commitment type qualifier either by simple text or an XML fragment with root element "CommitmentTypeQualifier" props.setCommitmentTypeQualifiers(Arrays.asList(new String[] {"commitment qualifier", "<c:CommitmentTypeQualifier xmlns:c=\"http://uri.etsi.org/01903/v1.3.2#\"><C>c</C></c:CommitmentTypeQualifier>" })); beanRegistry.bind("xmlSignatureProperties",props); beanRegistry.bind("keyAccessorDefault",keyAccessor); // you must reference the properties bean in the "xmlsecurity" URI from("direct:xades").to("xmlsecurity:sign://xades?keyAccessor=#keyAccessorDefault&properties=#xmlSignatureProperties") .to("mock:result"); XAdES-BES/EPES Example in Spring XML ... <from uri="direct:xades" /> <to uri="xmlsecurity:sign://xades?keyAccessor=#accessorRsa&properties=#xadesProperties" /> <to uri="mock:result" /> ... <bean id="xadesProperties" class="org.apache.camel.component.xmlsecurity.api.XAdESSignatureProperties"> <!-- For more properties see the Java DSL example. If you want to have a signing certificate then use the bean class DefaultXAdESSignatureProperties (see the Java DSL example). --> <property name="signaturePolicy" value="ExplicitId" /> <property name="sigPolicyId" value="http://www.test.com/policy.pdf" /> <property name="sigPolicyIdDescription" value="factura" /> <property name="signaturePolicyDigestAlgorithm" value="http://www.w3.org/2000/09/xmldsig#sha1" /> <property name="signaturePolicyDigestValue" value="Ohixl6upD6av8N7pEvDABhEL1hM=" /> <property name="signerClaimedRoles" ref="signerClaimedRoles_XMLSigner" /> <property name="dataObjectFormatDescription" value="Factura electronica" /> <property name="dataObjectFormatMimeType" value="text/xml" /> </bean> <bean class="java.util.ArrayList" id="signerClaimedRoles_XMLSigner"> <constructor-arg> <list> <value>Emisor</value> <value><ClaimedRole xmlns="http://uri.etsi.org/01903/v1.3.2#"><test xmlns="http://test.com/">test</test></ClaimedRole></value> </list> </constructor-arg> </bean> 374.8.1. 
Headers Header Type Description CamelXmlSignatureXAdESQualifyingPropertiesId String for the 'Id' attribute value of the QualifyingProperties element CamelXmlSignatureXAdESSignedDataObjectPropertiesId String for the 'Id' attribute value of the SignedDataObjectProperties element CamelXmlSignatureXAdESSignedSignaturePropertiesId String for the 'Id' attribute value of the SignedSignatureProperties element CamelXmlSignatureXAdESDataObjectFormatEncoding String for the value of the Encoding element of the DataObjectFormat element CamelXmlSignatureXAdESNamespace String overwrites the XAdES namespace parameter value CamelXmlSignatureXAdESPrefix String overwrites the XAdES prefix parameter value 374.8.2. Limitations with regard to XAdES version 1.4.2 No support for the signature forms XAdES-T and XAdES-C. Only the signer part is implemented; the verifier part is currently not available. No support for the QualifyingPropertiesReference element (see section 6.3.2 of the specification). No support for the Transforms element contained in the SignaturePolicyId element contained in the SignaturePolicyIdentifier element. No support for the CounterSignature element. No support for the UnsignedProperties element. At most one DataObjectFormat element. More than one DataObjectFormat element makes no sense because there is only one data object which is signed (the incoming message body to the XML signer endpoint). At most one CommitmentTypeIndication element. More than one CommitmentTypeIndication element makes no sense because there is only one data object which is signed (the incoming message body to the XML signer endpoint). A CommitmentTypeIndication element always contains the AllSignedDataObjects element. The ObjectReference element within the CommitmentTypeIndication element is not supported. The AllDataObjectsTimeStamp element is not supported. The IndividualDataObjectsTimeStamp element is not supported. 374.9. See Also Best Practices | [
"<dependency> <groupId>org.apache.camel</groupId> <artifactId>camel-xmlsecurity</artifactId> <version>x.x.x</version> <!-- use the same version as your Camel core version --> </dependency>",
"<[parent element]> ... <!-- Signature element is added as last child of the parent element--> <Signature Id=\"generated_unique_signature_id\"> <SignedInfo> <Reference URI=\"\"> <Transform Algorithm=\"http://www.w3.org/2000/09/xmldsig#enveloped-signature\"/> (<Transform>)* <!-- By default \"http://www.w3.org/2006/12/xml-c14n11\" is added to the transforms --> <DigestMethod> <DigestValue> </Reference> (<Reference URI=\"#[keyinfo_Id]\"> <Transform Algorithm=\"http://www.w3.org/TR/2001/REC-xml-c14n-20010315\"/> <DigestMethod> <DigestValue> </Reference>)? <!-- further references possible, see option 'properties' below --> </SignedInfo> <SignatureValue> (<KeyInfo Id=\"[keyinfo_id]\">)? <!-- Object elements possible, see option 'properties' below --> </Signature> </[parent element]>",
"<Signature Id=\"generated_unique_signature_id\"> <SignedInfo> <Reference URI=\"#generated_unique_object_id\" type=\"[optional_type_value]\"> (<Transform>)* <!-- By default \"http://www.w3.org/2006/12/xml-c14n11\" is added to the transforms --> <DigestMethod> <DigestValue> </Reference> (<Reference URI=\"#[keyinfo_id]\"> <Transform Algorithm=\"http://www.w3.org/TR/2001/REC-xml-c14n-20010315\"/> <DigestMethod> <DigestValue> </Reference>)? <!-- further references possible, see option 'properties' below --> </SignedInfo> <SignatureValue> (<KeyInfo Id=\"[keyinfo_id]\">)? <Object Id=\"generated_unique_object_id\"/> <!-- The Object element contains the in-message body; the object ID can either be generated or set by the option parameter \"contentObjectId\" --> <!-- Further Object elements possible, see option 'properties' below --> </Signature>",
"(<[signed element] Id=\"[id_value]\"> <!-- signed element must have an attribute of type ID --> </[signed element]> <other sibling/>* <!-- between the signed element and the corresponding signature element, there can be other siblings. Signature element is added as last sibling. --> <Signature Id=\"generated_unique_ID\"> <SignedInfo> <CanonicalizationMethod> <SignatureMethod> <Reference URI=\"#[id_value]\" type=\"[optional_type_value]\"> <!-- reference URI contains the ID attribute value of the signed element --> (<Transform>)* <!-- By default \"http://www.w3.org/2006/12/xml-c14n11\" is added to the transforms --> <DigestMethod> <DigestValue> </Reference> (<Reference URI=\"#[generated_keyinfo_Id]\"> <Transform Algorithm=\"http://www.w3.org/TR/2001/REC-xml-c14n-20010315\"/> <DigestMethod> <DigestValue> </Reference>)? </SignedInfo> <SignatureValue> (<KeyInfo Id=\"[generated_keyinfo_id]\">)? </Signature>)+",
"xmlsecurity:sign:name[?options] xmlsecurity:verify:name[?options]",
"from(\"direct:enveloping\").to(\"xmlsecurity:sign://enveloping?keyAccessor=#accessor\", \"xmlsecurity:verify://enveloping?keySelector=#selector\", \"mock:result\")",
"<from uri=\"direct:enveloping\" /> <to uri=\"xmlsecurity:sign://enveloping?keyAccessor=#accessor\" /> <to uri=\"xmlsecurity:verify://enveloping?keySelector=#selector\" /> <to uri=\"mock:result\" />",
"xmlsecurity:command:name",
"<Signature> <SignedInfo> <Reference URI=\"#object\"/> <!-- further references possible but they must not point to an Object or Manifest containing an object reference --> </SignedInfo> <Object Id=\"object\"> <!-- contains one XML element which is extracted to the message body --> <Object> <!-- further object elements possible which are not referenced--> (<KeyInfo>)? </Signature>",
"<Signature> <SignedInfo> <Reference URI=\"#manifest\"/> <!-- further references are possible but they must not point to an Object or other manifest containing an object reference --> </SignedInfo> <Object > <Manifest Id=\"manifest\"> <Reference URI=#object/> </Manifest> </Objet> <Object Id=\"object\"> <!-- contains the DOM node which is extracted to the message body --> </Object> <!-- further object elements possible which are not referenced --> (<KeyInfo>)? </Signature>",
"<?xml version=\"1.0\" encoding=\"UTF-8\" ?> <root> <A ID=\"IDforA\"> <B> <C ID=\"IDforC\"> <D>dvalue</D> </C> <ds:Signature xmlns:ds=\"http://www.w3.org/2000/09/xmldsig#\" Id=\"_6bf13099-0568-4d76-8649-faf5dcb313c0\"> <ds:SignedInfo> <ds:CanonicalizationMethod Algorithm=\"http://www.w3.org/TR/2001/REC-xml-c14n-20010315\" /> <ds:SignatureMethod Algorithm=\"http://www.w3.org/2000/09/xmldsig#rsa-sha1\" /> <ds:Reference URI=\"#IDforC\"> </ds:Reference> </ds:SignedInfo> <ds:SignatureValue>aUDFmiG71</ds:SignatureValue> </ds:Signature> </B> </A> <ds:Signature xmlns:ds=\"http://www.w3.org/2000/09/xmldsig#\"Id=\"_6b02fb8a-30df-42c6-ba25-76eba02c8214\"> <ds:SignedInfo> <ds:CanonicalizationMethod Algorithm=\"http://www.w3.org/TR/2001/REC-xml-c14n-20010315\" /> <ds:SignatureMethod Algorithm=\"http://www.w3.org/2000/09/xmldsig#rsa-sha1\" /> <ds:Reference URI=\"#IDforA\"> </ds:Reference> </ds:SignedInfo> <ds:SignatureValue>q3tvRoGgc8cMUqUSzP6C21zb7tt04riPnDuk=</ds:SignatureValue> </ds:Signature> <root>",
"from(\"direct:detached\") .to(\"xmlsecurity:sign://detached?keyAccessor=#keyAccessorBeant&xpathsToIdAttributes=#xpathsToIdAttributesBean&schemaResourceUri=Test.xsd\") .to(\"xmlsecurity:verify://detached?keySelector=#keySelectorBean&schemaResourceUri=org/apache/camel/component/xmlsecurity/Test.xsd\") .to(\"mock:result\");",
"<bean id=\"xpathsToIdAttributesBean\" class=\"java.util.ArrayList\"> <constructor-arg type=\"java.util.Collection\"> <list> <bean class=\"org.apache.camel.component.xmlsecurity.api.XmlSignatureHelper\" factory-method=\"getXpathFilter\"> <constructor-arg type=\"java.lang.String\" value=\"/ns:root/a/@ID\" /> <constructor-arg> <map key-type=\"java.lang.String\" value-type=\"java.lang.String\"> <entry key=\"ns\" value=\"http://test\" /> </map> </constructor-arg> </bean> </list> </constructor-arg> </bean> <from uri=\"direct:detached\" /> <to uri=\"xmlsecurity:sign://detached?keyAccessor=#keyAccessorBean&xpathsToIdAttributes=#xpathsToIdAttributesBean&schemaResourceUri=Test.xsd\" /> <to uri=\"xmlsecurity:verify://detached?keySelector=#keySelectorBean&schemaResourceUri=Test.xsd\" /> <to uri=\"mock:result\" />",
"<QualifyingProperties Target> <SignedProperties> <SignedSignatureProperties> (SigningTime)? (SigningCertificate)? (SignaturePolicyIdentifier) (SignatureProductionPlace)? (SignerRole)? </SignedSignatureProperties> <SignedDataObjectProperties> (DataObjectFormat)? (CommitmentTypeIndication)? </SignedDataObjectProperties> </SignedProperties> </QualifyingProperties>",
"Keystore keystore = ... // load a keystore DefaultKeyAccessor accessor = new DefaultKeyAccessor(); accessor.setKeyStore(keystore); accessor.setPassword(\"password\"); accessor.setAlias(\"cert_alias\"); // signer key alias DefaultXAdESSignatureProperties props = new DefaultXAdESSignatureProperties(); props.setNamespace(\"http://uri.etsi.org/01903/v1.3.2#\"); // sets the namespace for the XAdES elements; the namspace is related to the XAdES version, default value is \"http://uri.etsi.org/01903/v1.3.2#\", other possible values are \"http://uri.etsi.org/01903/v1.1.1#\" and \"http://uri.etsi.org/01903/v1.2.2#\" props.setPrefix(\"etsi\"); // sets the prefix for the XAdES elements, default value is \"etsi\" // signing certificate props.setKeystore(keystore)); props.setAlias(\"cert_alias\"); // specify the alias of the signing certificate in the keystore = signer key alias props.setDigestAlgorithmForSigningCertificate(DigestMethod.SHA256); // possible values for the algorithm are \"http://www.w3.org/2000/09/xmldsig#sha1\", \"http://www.w3.org/2001/04/xmlenc#sha256\", \"http://www.w3.org/2001/04/xmldsig-more#sha384\", \"http://www.w3.org/2001/04/xmlenc#sha512\", default value is \"http://www.w3.org/2001/04/xmlenc#sha256\" props.setSigningCertificateURIs(Collections.singletonList(\"http://certuri\")); // signing time props.setAddSigningTime(true); // policy props.setSignaturePolicy(XAdESSignatureProperties.SIG_POLICY_EXPLICIT_ID); // also the values XAdESSignatureProperties.SIG_POLICY_NONE (\"None\"), and XAdESSignatureProperties.SIG_POLICY_IMPLIED (\"Implied\")are possible, default value is XAdESSignatureProperties.SIG_POLICY_EXPLICIT_ID (\"ExplicitId\") // For \"None\" and \"Implied\" you must not specify any further policy parameters props.setSigPolicyId(\"urn:oid:1.2.840.113549.1.9.16.6.1\"); props.setSigPolicyIdQualifier(\"OIDAsURN\"); //allowed values are empty string, \"OIDAsURI\", \"OIDAsURN\"; default value is empty string props.setSigPolicyIdDescription(\"invoice version 3.1\"); props.setSignaturePolicyDigestAlgorithm(DigestMethod.SHA256);// possible values for the algorithm are \"http://www.w3.org/2000/09/xmldsig#sha1\", http://www.w3.org/2001/04/xmlenc#sha256\", \"http://www.w3.org/2001/04/xmldsig-more#sha384\", \"http://www.w3.org/2001/04/xmlenc#sha512\", default value is http://www.w3.org/2001/04/xmlenc#sha256\" props.setSignaturePolicyDigestValue(\"Ohixl6upD6av8N7pEvDABhEL6hM=\"); // you can add qualifiers for the signature policy either by specifying text or an XML fragment with the root element \"SigPolicyQualifier\" props.setSigPolicyQualifiers(Arrays .asList(new String[] { \"<SigPolicyQualifier xmlns=\\\"http://uri.etsi.org/01903/v1.3.2#\\\"><SPURI>http://test.com/sig.policy.pdf</SPURI><SPUserNotice><ExplicitText>display text</ExplicitText>\" + \"</SPUserNotice></SigPolicyQualifier>\", \"category B\" })); props.setSigPolicyIdDocumentationReferences(Arrays.asList(new String[] {\"http://test.com/policy.doc.ref1.txt\", \"http://test.com/policy.doc.ref2.txt\" })); // production place props.setSignatureProductionPlaceCity(\"Munich\"); props.setSignatureProductionPlaceCountryName(\"Germany\"); props.setSignatureProductionPlacePostalCode(\"80331\"); props.setSignatureProductionPlaceStateOrProvince(\"Bavaria\"); //role // you can add claimed roles either by specifying text or an XML fragment with the root element \"ClaimedRole\" props.setSignerClaimedRoles(Arrays.asList(new String[] {\"test\", \"<a:ClaimedRole 
xmlns:a=\\\"http://uri.etsi.org/01903/v1.3.2#\\\"><TestRole>TestRole</TestRole></a:ClaimedRole>\" })); props.setSignerCertifiedRoles(Collections.singletonList(new XAdESEncapsulatedPKIData(\"Ahixl6upD6av8N7pEvDABhEL6hM=\", \"http://uri.etsi.org/01903/v1.2.2#DER\", \"IdCertifiedRole\"))); // data object format props.setDataObjectFormatDescription(\"invoice\"); props.setDataObjectFormatMimeType(\"text/xml\"); props.setDataObjectFormatIdentifier(\"urn:oid:1.2.840.113549.1.9.16.6.2\"); props.setDataObjectFormatIdentifierQualifier(\"OIDAsURN\"); //allowed values are empty string, \"OIDAsURI\", \"OIDAsURN\"; default value is empty string props.setDataObjectFormatIdentifierDescription(\"identifier desc\"); props.setDataObjectFormatIdentifierDocumentationReferences(Arrays.asList(new String[] { \"http://test.com/dataobject.format.doc.ref1.txt\", \"http://test.com/dataobject.format.doc.ref2.txt\" })); //commitment props.setCommitmentTypeId(\"urn:oid:1.2.840.113549.1.9.16.6.4\"); props.setCommitmentTypeIdQualifier(\"OIDAsURN\"); //allowed values are empty string, \"OIDAsURI\", \"OIDAsURN\"; default value is empty string props.setCommitmentTypeIdDescription(\"description for commitment type ID\"); props.setCommitmentTypeIdDocumentationReferences(Arrays.asList(new String[] {\"http://test.com/commitment.ref1.txt\", \"http://test.com/commitment.ref2.txt\" })); // you can specify a commitment type qualifier either by simple text or an XML fragment with root element \"CommitmentTypeQualifier\" props.setCommitmentTypeQualifiers(Arrays.asList(new String[] {\"commitment qualifier\", \"<c:CommitmentTypeQualifier xmlns:c=\\\"http://uri.etsi.org/01903/v1.3.2#\\\"><C>c</C></c:CommitmentTypeQualifier>\" })); beanRegistry.bind(\"xmlSignatureProperties\",props); beanRegistry.bind(\"keyAccessorDefault\",keyAccessor); // you must reference the properties bean in the \"xmlsecurity\" URI from(\"direct:xades\").to(\"xmlsecurity:sign://xades?keyAccessor=#keyAccessorDefault&properties=#xmlSignatureProperties\") .to(\"mock:result\");",
"<from uri=\"direct:xades\" /> <to uri=\"xmlsecurity:sign://xades?keyAccessor=#accessorRsa&properties=#xadesProperties\" /> <to uri=\"mock:result\" /> <bean id=\"xadesProperties\" class=\"org.apache.camel.component.xmlsecurity.api.XAdESSignatureProperties\"> <!-- For more properties see the previous Java DSL example. If you want to have a signing certificate then use the bean class DefaultXAdESSignatureProperties (see the previous Java DSL example). --> <property name=\"signaturePolicy\" value=\"ExplicitId\" /> <property name=\"sigPolicyId\" value=\"http://www.test.com/policy.pdf\" /> <property name=\"sigPolicyIdDescription\" value=\"factura\" /> <property name=\"signaturePolicyDigestAlgorithm\" value=\"http://www.w3.org/2000/09/xmldsig#sha1\" /> <property name=\"signaturePolicyDigestValue\" value=\"Ohixl6upD6av8N7pEvDABhEL1hM=\" /> <property name=\"signerClaimedRoles\" ref=\"signerClaimedRoles_XMLSigner\" /> <property name=\"dataObjectFormatDescription\" value=\"Factura electronica\" /> <property name=\"dataObjectFormatMimeType\" value=\"text/xml\" /> </bean> <bean class=\"java.util.ArrayList\" id=\"signerClaimedRoles_XMLSigner\"> <constructor-arg> <list> <value>Emisor</value> <value><ClaimedRole xmlns="http://uri.etsi.org/01903/v1.3.2#"><test xmlns="http://test.com/">test</test></ClaimedRole></value> </list> </constructor-arg> </bean>"
]
| https://docs.redhat.com/en/documentation/red_hat_fuse/7.13/html/apache_camel_component_reference/xmlsecurity-component |
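The headers listed in the table of the XAdES component reference above let you overwrite individual XAdES values for a single exchange instead of changing the shared properties bean. The following route is a minimal sketch of that approach, assuming the #keyAccessorDefault and #xmlSignatureProperties beans from the Java DSL example above are registered; the route name and the header values are illustrative only.

import org.apache.camel.builder.RouteBuilder;

public class XAdESHeaderRouteBuilder extends RouteBuilder {
    @Override
    public void configure() {
        from("direct:xadesWithHeaders")
            // overwrite the generated 'Id' attribute of the QualifyingProperties element for this exchange only
            .setHeader("CamelXmlSignatureXAdESQualifyingPropertiesId", constant("qualifying-props-1"))
            // overwrite the XAdES prefix parameter value for this exchange only
            .setHeader("CamelXmlSignatureXAdESPrefix", constant("xades"))
            .to("xmlsecurity:sign://xades?keyAccessor=#keyAccessorDefault&properties=#xmlSignatureProperties")
            .to("mock:result");
    }
}

Headers set this way apply only to the exchange that carries them; all other exchanges keep the values configured in the properties bean.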
Chapter 15. Config map reference for the Cluster Monitoring Operator | Chapter 15. Config map reference for the Cluster Monitoring Operator 15.1. Cluster Monitoring Operator configuration reference Parts of OpenShift Container Platform cluster monitoring are configurable. The API is accessible by setting parameters defined in various config maps. To configure monitoring components, edit the ConfigMap object named cluster-monitoring-config in the openshift-monitoring namespace. These configurations are defined by ClusterMonitoringConfiguration . To configure monitoring components that monitor user-defined projects, edit the ConfigMap object named user-workload-monitoring-config in the openshift-user-workload-monitoring namespace. These configurations are defined by UserWorkloadConfiguration . The configuration file is always defined under the config.yaml key in the config map data. Important Not all configuration parameters for the monitoring stack are exposed. Only the parameters and fields listed in this reference are supported for configuration. For more information about supported configurations, see Maintenance and support for monitoring . Configuring cluster monitoring is optional. If a configuration does not exist or is empty, default values are used. If the configuration is invalid YAML data, the Cluster Monitoring Operator stops reconciling the resources and reports Degraded=True in the status conditions of the Operator. 15.2. AdditionalAlertmanagerConfig 15.2.1. Description The AdditionalAlertmanagerConfig resource defines settings for how a component communicates with additional Alertmanager instances. 15.2.2. Required apiVersion Appears in: PrometheusK8sConfig , PrometheusRestrictedConfig , ThanosRulerConfig Property Type Description apiVersion string Defines the API version of Alertmanager. Possible values are v1 or v2 . The default is v2 . bearerToken *v1.SecretKeySelector Defines the secret key reference containing the bearer token to use when authenticating to Alertmanager. pathPrefix string Defines the path prefix to add in front of the push endpoint path. scheme string Defines the URL scheme to use when communicating with Alertmanager instances. Possible values are http or https . The default value is http . staticConfigs []string A list of statically configured Alertmanager endpoints in the form of <hosts>:<port> . timeout *string Defines the timeout value used when sending alerts. tlsConfig TLSConfig Defines the TLS settings to use for Alertmanager connections. 15.3. AlertmanagerMainConfig 15.3.1. Description The AlertmanagerMainConfig resource defines settings for the Alertmanager component in the openshift-monitoring namespace. Appears in: ClusterMonitoringConfiguration Property Type Description enabled *bool A Boolean flag that enables or disables the main Alertmanager instance in the openshift-monitoring namespace. The default value is true . enableUserAlertmanagerConfig bool A Boolean flag that enables or disables user-defined namespaces to be selected for AlertmanagerConfig lookups. This setting only applies if the user workload monitoring instance of Alertmanager is not enabled. The default value is false . logLevel string Defines the log level setting for Alertmanager. The possible values are: error , warn , info , debug . The default value is info . nodeSelector map[string]string Defines the nodes on which the Pods are scheduled. resources *v1.ResourceRequirements Defines resource requests and limits for the Alertmanager container. 
tolerations []v1.Toleration Defines tolerations for the pods. topologySpreadConstraints []v1.TopologySpreadConstraint Defines a pod's topology spread constraints. volumeClaimTemplate *monv1.EmbeddedPersistentVolumeClaim Defines persistent storage for Alertmanager. Use this setting to configure the persistent volume claim, including storage class, volume size, and name. 15.4. AlertmanagerUserWorkloadConfig 15.4.1. Description The AlertmanagerUserWorkloadConfig resource defines the settings for the Alertmanager instance used for user-defined projects. Appears in: UserWorkloadConfiguration Property Type Description enabled bool A Boolean flag that enables or disables a dedicated instance of Alertmanager for user-defined alerts in the openshift-user-workload-monitoring namespace. The default value is false . enableAlertmanagerConfig bool A Boolean flag to enable or disable user-defined namespaces to be selected for AlertmanagerConfig lookup. The default value is false . logLevel string Defines the log level setting for Alertmanager for user workload monitoring. The possible values are error , warn , info , and debug . The default value is info . resources *v1.ResourceRequirements Defines resource requests and limits for the Alertmanager container. nodeSelector map[string]string Defines the nodes on which the pods are scheduled. tolerations []v1.Toleration Defines tolerations for the pods. volumeClaimTemplate *monv1.EmbeddedPersistentVolumeClaim Defines persistent storage for Alertmanager. Use this setting to configure the persistent volume claim, including storage class, volume size and name. 15.5. ClusterMonitoringConfiguration 15.5.1. Description The ClusterMonitoringConfiguration resource defines settings that customize the default platform monitoring stack through the cluster-monitoring-config config map in the openshift-monitoring namespace. Property Type Description alertmanagerMain * AlertmanagerMainConfig AlertmanagerMainConfig defines settings for the Alertmanager component in the openshift-monitoring namespace. enableUserWorkload *bool UserWorkloadEnabled is a Boolean flag that enables monitoring for user-defined projects. k8sPrometheusAdapter * K8sPrometheusAdapter K8sPrometheusAdapter defines settings for the Prometheus Adapter component. kubeStateMetrics * KubeStateMetricsConfig KubeStateMetricsConfig defines settings for the kube-state-metrics agent. prometheusK8s * PrometheusK8sConfig PrometheusK8sConfig defines settings for the Prometheus component. prometheusOperator * PrometheusOperatorConfig PrometheusOperatorConfig defines settings for the Prometheus Operator component. openshiftStateMetrics * OpenShiftStateMetricsConfig OpenShiftMetricsConfig defines settings for the openshift-state-metrics agent. telemeterClient * TelemeterClientConfig TelemeterClientConfig defines settings for the Telemeter Client component. thanosQuerier * ThanosQuerierConfig ThanosQuerierConfig defines settings for the Thanos Querier component. 15.6. DedicatedServiceMonitors 15.6.1. Description You can use the DedicatedServiceMonitors resource to configure dedicated Service Monitors for the Prometheus Adapter Appears in: K8sPrometheusAdapter Property Type Description enabled bool When enabled is set to true , the Cluster Monitoring Operator (CMO) deploys a dedicated Service Monitor that exposes the kubelet /metrics/resource endpoint. This Service Monitor sets honorTimestamps: true and only keeps metrics that are relevant for the pod resource queries of Prometheus Adapter. 
Additionally, Prometheus Adapter is configured to use these dedicated metrics. Overall, this feature improves the consistency of Prometheus Adapter-based CPU usage measurements used by, for example, the oc adm top pod command or the Horizontal Pod Autoscaler. 15.7. K8sPrometheusAdapter 15.7.1. Description The K8sPrometheusAdapter resource defines settings for the Prometheus Adapter component. Appears in: ClusterMonitoringConfiguration Property Type Description audit *Audit Defines the audit configuration used by the Prometheus Adapter instance. Possible profile values are: Metadata , Request , RequestResponse , and None . The default value is Metadata . nodeSelector map[string]string Defines the nodes on which the pods are scheduled. tolerations []v1.Toleration Defines tolerations for the pods. dedicatedServiceMonitors * DedicatedServiceMonitors Defines dedicated service monitors. 15.8. KubeStateMetricsConfig 15.8.1. Description The KubeStateMetricsConfig resource defines settings for the kube-state-metrics agent. Appears in: ClusterMonitoringConfiguration Property Type Description nodeSelector map[string]string Defines the nodes on which the pods are scheduled. tolerations []v1.Toleration Defines tolerations for the pods. 15.9. OpenShiftStateMetricsConfig 15.9.1. Description The OpenShiftStateMetricsConfig resource defines settings for the openshift-state-metrics agent. Appears in: ClusterMonitoringConfiguration Property Type Description nodeSelector map[string]string Defines the nodes on which the pods are scheduled. tolerations []v1.Toleration Defines tolerations for the pods. 15.10. PrometheusK8sConfig 15.10.1. Description The PrometheusK8sConfig resource defines settings for the Prometheus component. Appears in: ClusterMonitoringConfiguration Property Type Description additionalAlertmanagerConfigs [] AdditionalAlertmanagerConfig Configures additional Alertmanager instances that receive alerts from the Prometheus component. By default, no additional Alertmanager instances are configured. enforcedBodySizeLimit string Enforces a body size limit for Prometheus scraped metrics. If a scraped target's body response is larger than the limit, the scrape will fail. The following values are valid: an empty value to specify no limit, a numeric value in Prometheus size format (such as 64MB ), or the string automatic , which indicates that the limit will be automatically calculated based on cluster capacity. The default value is empty, which indicates no limit. externalLabels map[string]string Defines labels to be added to any time series or alerts when communicating with external systems such as federation, remote storage, and Alertmanager. By default, no labels are added. logLevel string Defines the log level setting for Prometheus. The possible values are: error , warn , info , and debug . The default value is info . nodeSelector map[string]string Defines the nodes on which the pods are scheduled. queryLogFile string Specifies the file to which PromQL queries are logged. This setting can be either a filename, in which case the queries are saved to an emptyDir volume at /var/log/prometheus , or a full path to a location where an emptyDir volume will be mounted and the queries saved. Writing to /dev/stderr , /dev/stdout or /dev/null is supported, but writing to any other /dev/ path is not supported. Relative paths are also not supported. By default, PromQL queries are not logged. 
remoteWrite [] RemoteWriteSpec Defines the remote write configuration, including URL, authentication, and relabeling settings. resources *v1.ResourceRequirements Defines resource requests and limits for the Prometheus container. retention string Defines the duration for which Prometheus retains data. This definition must be specified using the following regular expression pattern: [0-9]+(ms|s|m|h|d|w|y) (ms = milliseconds, s= seconds,m = minutes, h = hours, d = days, w = weeks, y = years). The default value is 15d . retentionSize string Defines the maximum amount of disk space used by data blocks plus the write-ahead log (WAL). Supported values are B , KB , KiB , MB , MiB , GB , GiB , TB , TiB , PB , PiB , EB , and EiB . By default, no limit is defined. tolerations []v1.Toleration Defines tolerations for the pods. topologySpreadConstraints []v1.TopologySpreadConstraint Defines the pod's topology spread constraints. volumeClaimTemplate *monv1.EmbeddedPersistentVolumeClaim Defines persistent storage for Prometheus. Use this setting to configure the persistent volume claim, including storage class, volume size and name. 15.11. PrometheusOperatorConfig 15.11.1. Description The PrometheusOperatorConfig resource defines settings for the Prometheus Operator component. Appears in: ClusterMonitoringConfiguration , UserWorkloadConfiguration Property Type Description logLevel string Defines the log level settings for Prometheus Operator. The possible values are error , warn , info , and debug . The default value is info . nodeSelector map[string]string Defines the nodes on which the pods are scheduled. tolerations []v1.Toleration Defines tolerations for the pods. 15.12. PrometheusRestrictedConfig 15.12.1. Description The PrometheusRestrictedConfig resource defines the settings for the Prometheus component that monitors user-defined projects. Appears in: UserWorkloadConfiguration Property Type Description additionalAlertmanagerConfigs [] AdditionalAlertmanagerConfig Configures additional Alertmanager instances that receive alerts from the Prometheus component. By default, no additional Alertmanager instances are configured. enforcedLabelLimit *uint64 Specifies a per-scrape limit on the number of labels accepted for a sample. If the number of labels exceeds this limit after metric relabeling, the entire scrape is treated as failed. The default value is 0 , which means that no limit is set. enforcedLabelNameLengthLimit *uint64 Specifies a per-scrape limit on the length of a label name for a sample. If the length of a label name exceeds this limit after metric relabeling, the entire scrape is treated as failed. The default value is 0 , which means that no limit is set. enforcedLabelValueLengthLimit *uint64 Specifies a per-scrape limit on the length of a label value for a sample. If the length of a label value exceeds this limit after metric relabeling, the entire scrape is treated as failed. The default value is 0 , which means that no limit is set. enforcedSampleLimit *uint64 Specifies a global limit on the number of scraped samples that will be accepted. This setting overrides the SampleLimit value set in any user-defined ServiceMonitor or PodMonitor object if the value is greater than enforcedTargetLimit . Administrators can use this setting to keep the overall number of samples under control. The default value is 0 , which means that no limit is set. enforcedTargetLimit *uint64 Specifies a global limit on the number of scraped targets. 
This setting overrides the TargetLimit value set in any user-defined ServiceMonitor or PodMonitor object if the value is greater than enforcedSampleLimit . Administrators can use this setting to keep the overall number of targets under control. The default value is 0 . externalLabels map[string]string Defines labels to be added to any time series or alerts when communicating with external systems such as federation, remote storage, and Alertmanager. By default, no labels are added. logLevel string Defines the log level setting for Prometheus. The possible values are error , warn , info , and debug . The default setting is info . nodeSelector map[string]string Defines the nodes on which the pods are scheduled. queryLogFile string Specifies the file to which PromQL queries are logged. This setting can be either a filename, in which case the queries are saved to an emptyDir volume at /var/log/prometheus , or a full path to a location where an emptyDir volume will be mounted and the queries saved. Writing to /dev/stderr , /dev/stdout or /dev/null is supported, but writing to any other /dev/ path is not supported. Relative paths are also not supported. By default, PromQL queries are not logged. remoteWrite [] RemoteWriteSpec Defines the remote write configuration, including URL, authentication, and relabeling settings. resources *v1.ResourceRequirements Defines resource requests and limits for the Prometheus container. retention string Defines the duration for which Prometheus retains data. This definition must be specified using the following regular expression pattern: [0-9]+(ms|s|m|h|d|w|y) (ms = milliseconds, s= seconds,m = minutes, h = hours, d = days, w = weeks, y = years). The default value is 15d . retentionSize string Defines the maximum amount of disk space used by data blocks plus the write-ahead log (WAL). Supported values are B , KB , KiB , MB , MiB , GB , GiB , TB , TiB , PB , PiB , EB , and EiB . The default value is nil . tolerations []v1.Toleration Defines tolerations for the pods. volumeClaimTemplate *monv1.EmbeddedPersistentVolumeClaim Defines persistent storage for Prometheus. Use this setting to configure the storage class and size of a volume. 15.13. RemoteWriteSpec 15.13.1. Description The RemoteWriteSpec resource defines the settings for remote write storage. 15.13.2. Required url Appears in: PrometheusK8sConfig , PrometheusRestrictedConfig Property Type Description authorization *monv1.SafeAuthorization Defines the authorization settings for remote write storage. basicAuth *monv1.BasicAuth Defines basic authentication settings for the remote write endpoint URL. bearerTokenFile string Defines the file that contains the bearer token for the remote write endpoint. However, because you cannot mount secrets in a pod, in practice you can only reference the token of the service account. headers map[string]string Specifies the custom HTTP headers to be sent along with each remote write request. Headers set by Prometheus cannot be overwritten. metadataConfig *monv1.MetadataConfig Defines settings for sending series metadata to remote write storage. name string Defines the name of the remote write queue. This name is used in metrics and logging to differentiate queues. If specified, this name must be unique. oauth2 *monv1.OAuth2 Defines OAuth2 authentication settings for the remote write endpoint. proxyUrl string Defines an optional proxy URL. queueConfig *monv1.QueueConfig Allows tuning configuration for remote write queue parameters. 
remoteTimeout string Defines the timeout value for requests to the remote write endpoint. sigv4 *monv1.Sigv4 Defines AWS Signature Version 4 authentication settings. tlsConfig *monv1.SafeTLSConfig Defines TLS authentication settings for the remote write endpoint. url string Defines the URL of the remote write endpoint to which samples will be sent. writeRelabelConfigs []monv1.RelabelConfig Defines the list of remote write relabel configurations. 15.14. TelemeterClientConfig 15.14.1. Description The TelemeterClientConfig resource defines settings for the telemeter-client component. 15.14.2. Required nodeSelector tolerations Appears in: ClusterMonitoringConfiguration Property Type Description nodeSelector map[string]string Defines the nodes on which the pods are scheduled. tolerations []v1.Toleration Defines tolerations for the pods. 15.15. ThanosQuerierConfig 15.15.1. Description The ThanosQuerierConfig resource defines settings for the Thanos Querier component. Appears in: ClusterMonitoringConfiguration Property Type Description enableRequestLogging bool A Boolean flag that enables or disables request logging. The default value is false . logLevel string Defines the log level setting for Thanos Querier. The possible values are error , warn , info , and debug . The default value is info . nodeSelector map[string]string Defines the nodes on which the pods are scheduled. resources *v1.ResourceRequirements Defines resource requests and limits for the Thanos Querier container. tolerations []v1.Toleration Defines tolerations for the pods. 15.16. ThanosRulerConfig 15.16.1. Description The ThanosRulerConfig resource defines configuration for the Thanos Ruler instance for user-defined projects. Appears in: UserWorkloadConfiguration Property Type Description additionalAlertmanagerConfigs [] AdditionalAlertmanagerConfig Configures how the Thanos Ruler component communicates with additional Alertmanager instances. The default value is nil . logLevel string Defines the log level setting for Thanos Ruler. The possible values are error , warn , info , and debug . The default value is info . nodeSelector map[string]string Defines the nodes on which the Pods are scheduled. resources *v1.ResourceRequirements Defines resource requests and limits for the Thanos Ruler container. retention string Defines the duration for which Prometheus retains data. This definition must be specified using the following regular expression pattern: [0-9]+(ms|s|m|h|d|w|y) (ms = milliseconds, s= seconds,m = minutes, h = hours, d = days, w = weeks, y = years). The default value is 15d . tolerations []v1.Toleration Defines tolerations for the pods. topologySpreadConstraints []v1.TopologySpreadConstraint Defines topology spread constraints for the pods. volumeClaimTemplate *monv1.EmbeddedPersistentVolumeClaim Defines persistent storage for Thanos Ruler. Use this setting to configure the storage class and size of a volume. 15.17. TLSConfig 15.17.1. Description The TLSConfig resource configures the settings for TLS connections. 15.17.2. Required insecureSkipVerify Appears in: AdditionalAlertmanagerConfig Property Type Description ca *v1.SecretKeySelector Defines the secret key reference containing the Certificate Authority (CA) to use for the remote host. cert *v1.SecretKeySelector Defines the secret key reference containing the public certificate to use for the remote host. key *v1.SecretKeySelector Defines the secret key reference containing the private key to use for the remote host. 
serverName string Used to verify the hostname on the returned certificate. insecureSkipVerify bool When set to true , disables the verification of the remote host's certificate and name. 15.18. UserWorkloadConfiguration 15.18.1. Description The UserWorkloadConfiguration resource defines the settings responsible for user-defined projects in the user-workload-monitoring-config config map in the openshift-user-workload-monitoring namespace. You can only enable UserWorkloadConfiguration after you have set enableUserWorkload to true in the cluster-monitoring-config config map under the openshift-monitoring namespace. Property Type Description alertmanager * AlertmanagerUserWorkloadConfig Defines the settings for the Alertmanager component in user workload monitoring. prometheus * PrometheusRestrictedConfig Defines the settings for the Prometheus component in user workload monitoring. prometheusOperator * PrometheusOperatorConfig Defines the settings for the Prometheus Operator component in user workload monitoring. thanosRuler * ThanosRulerConfig Defines the settings for the Thanos Ruler component in user workload monitoring. | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.12/html/monitoring/config-map-reference-for-the-cluster-monitoring-operator |
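The settings documented in this reference are supplied as YAML under the config.yaml key of the cluster-monitoring-config config map in the openshift-monitoring namespace. The following is a minimal sketch that combines a few of the fields described above; the chosen values (debug log level, 24h retention) are illustrative only.

apiVersion: v1
kind: ConfigMap
metadata:
  name: cluster-monitoring-config
  namespace: openshift-monitoring
data:
  config.yaml: |
    enableUserWorkload: true
    alertmanagerMain:
      logLevel: debug
    prometheusK8s:
      retention: 24h

Monitoring for user-defined projects follows the same pattern with the fields of UserWorkloadConfiguration in the user-workload-monitoring-config config map in the openshift-user-workload-monitoring namespace.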
Chapter 26. offset | Chapter 26. offset The offset value. It can represent either a byte offset to the start of the log line in the file (zero- or one-based) or a log line number (zero- or one-based), as long as the values are strictly monotonically increasing in the context of a single log file. The values are allowed to wrap, representing a new version of the log file (rotation). Data type long | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.9/html/logging/offset |
B.94. systemtap | B.94. systemtap B.94.1. RHSA-2010:0894 - Important: systemtap security update Updated systemtap packages that fix two security issues are now available for Red Hat Enterprise Linux 5 and 6. The Red Hat Security Response Team has rated this update as having important security impact. Common Vulnerability Scoring System (CVSS) base score, which gives a detailed severity rating, is available for each vulnerability from the CVE link(s) associated with each description below. SystemTap is an instrumentation system for systems running the Linux kernel, version 2.6. Developers can write scripts to collect data on the operation of the system. staprun, the SystemTap runtime tool, is used for managing SystemTap kernel modules (for example, loading them). CVE-2010-4170 It was discovered that staprun did not properly sanitize the environment before executing the modprobe command to load an additional kernel module. A local, unprivileged user could use this flaw to escalate their privileges. CVE-2010-4171 It was discovered that staprun did not check if the module to be unloaded was previously loaded by SystemTap. A local, unprivileged user could use this flaw to unload an arbitrary kernel module that was not in use. Note Note: After installing this update, users already in the stapdev group must be added to the stapusr group in order to be able to run the staprun tool. Red Hat would like to thank Tavis Ormandy for reporting these issues. SystemTap users should upgrade to these updated packages, which contain backported patches to correct these issues. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.0_technical_notes/systemtap |
5.123. java-1.6.0-sun | 5.123. java-1.6.0-sun 5.123.1. RHSA-2013:0236 - Critical: java-1.6.0-sun security update Updated java-1.6.0-sun packages that fix several security issues are now available for Red Hat Enterprise Linux 5 and 6 Supplementary. The Red Hat Security Response Team has rated this update as having critical security impact. Common Vulnerability Scoring System (CVSS) base score, which gives a detailed severity rating, is available for each vulnerability from the CVE link(s) associated with each description below. Oracle Java SE version 6 includes the Oracle Java Runtime Environment and the Oracle Java Software Development Kit. Security Fix CVE-2012-1541 , CVE-2012-3213 , CVE-2012-3342 , CVE-2013-0351 , CVE-2013-0409 , CVE-2013-0419 , CVE-2013-0423 , CVE-2013-0424 , CVE-2013-0425 , CVE-2013-0426 , CVE-2013-0427 , CVE-2013-0428 , CVE-2013-0429 , CVE-2013-0430 , CVE-2013-0432 , CVE-2013-0433 , CVE-2013-0434 , CVE-2013-0435 , CVE-2013-0438 , CVE-2013-0440 , CVE-2013-0441 , CVE-2013-0442 , CVE-2013-0443 , CVE-2013-0445 , CVE-2013-0446 , CVE-2013-0450 , CVE-2013-1473 , CVE-2013-1475 , CVE-2013-1476 , CVE-2013-1478 , CVE-2013-1480 , CVE-2013-1481 This update fixes several vulnerabilities in the Oracle Java Runtime Environment and the Oracle Java Software Development Kit. Further information about these flaws can be found on the Oracle Java SE Critical Patch Update Advisory page . All users of java-1.6.0-sun are advised to upgrade to these updated packages, which provide Oracle Java 6 Update 39. All running instances of Oracle Java must be restarted for the update to take effect. 5.123.2. RHSA-2012:1392 - Critical: java-1.6.0-sun security update Updated java-1.6.0-sun packages that fix several security issues are now available for Red Hat Enterprise Linux 5 and 6 Supplementary. The Red Hat Security Response Team has rated this update as having critical security impact. Common Vulnerability Scoring System (CVSS) base score, which gives a detailed severity rating, is available for each vulnerability from the CVE link(s) associated with each description below. Oracle Java SE version 6 includes the Oracle Java Runtime Environment and the Oracle Java Software Development Kit. Security Fix CVE-2012-0547 , CVE-2012-1531 , CVE-2012-1532 , CVE-2012-1533 , CVE-2012-3143 , CVE-2012-3159 , CVE-2012-3216 , CVE-2012-4416 , CVE-2012-5068 , CVE-2012-5069 , CVE-2012-5071 , CVE-2012-5072 , CVE-2012-5073 , CVE-2012-5075 , CVE-2012-5077 , CVE-2012-5079 , CVE-2012-5081 , CVE-2012-5083 , CVE-2012-5084 , CVE-2012-5085 , CVE-2012-5086 , CVE-2012-5089 This update fixes several vulnerabilities in the Oracle Java Runtime Environment and the Oracle Java Software Development Kit. Further information about these flaws can be found on the Oracle Java SE Critical Patch Update Advisory and Oracle Security Alert pages . All users of java-1.6.0-sun are advised to upgrade to these updated packages, which provide Oracle Java 6 Update 37. All running instances of Oracle Java must be restarted for the update to take effect. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.3_technical_notes/java-1.6.0-sun |
Chapter 17. Troubleshooting Data Grid Server deployments | Chapter 17. Troubleshooting Data Grid Server deployments Gather diagnostic information about Data Grid Server deployments and perform troubleshooting steps to resolve issues. 17.1. Getting diagnostic reports from Data Grid Server Data Grid Server provides aggregated reports in tar.gz archives that contain diagnostic information about server instances and host systems. The report provides details about CPU, memory, open files, network sockets and routing, threads, in addition to configuration and log files. Procedure Create a CLI connection to Data Grid Server. Use the server report command to download a tar.gz archive: The command responds with the name of the report, as in the following example: Move the tar.gz file to a suitable location on your filesystem. Extract the tar.gz file with any archiving tool. 17.2. Changing Data Grid Server logging configuration at runtime Modify the logging configuration for Data Grid Server at runtime to temporarily adjust logging to troubleshoot issues and perform root cause analysis. Modifying the logging configuration through the CLI is a runtime-only operation, which means that changes: Are not saved to the log4j2.xml file. Restarting server nodes or the entire cluster resets the logging configuration to the default properties in the log4j2.xml file. Apply only to the nodes in the cluster when you invoke the CLI. Nodes that join the cluster after you change the logging configuration use the default properties. Procedure Create a CLI connection to Data Grid Server. Use the logging command to make the required adjustments. List all appenders defined on the server: The command provides a JSON response such as the following: { "STDOUT" : { "name" : "STDOUT" }, "JSON-FILE" : { "name" : "JSON-FILE" }, "HR-ACCESS-FILE" : { "name" : "HR-ACCESS-FILE" }, "FILE" : { "name" : "FILE" }, "REST-ACCESS-FILE" : { "name" : "REST-ACCESS-FILE" } } List all logger configurations defined on the server: The command provides a JSON response such as the following: [ { "name" : "", "level" : "INFO", "appenders" : [ "STDOUT", "FILE" ] }, { "name" : "org.infinispan.HOTROD_ACCESS_LOG", "level" : "INFO", "appenders" : [ "HR-ACCESS-FILE" ] }, { "name" : "com.arjuna", "level" : "WARN", "appenders" : [ ] }, { "name" : "org.infinispan.REST_ACCESS_LOG", "level" : "INFO", "appenders" : [ "REST-ACCESS-FILE" ] } ] Add and modify logger configurations with the set subcommand. For example, the following command sets the logging level for the org.infinispan package to DEBUG : Remove existing logger configurations with the remove subcommand. For example, the following command removes the org.infinispan logger configuration, which means the root configuration is used instead: 17.3. Gathering resource statistics from the CLI You can inspect server-collected statistics for some Data Grid Server resources with the stats command.
Use the stats command either from the context of a resource that provides statistics (containers, caches) or with a path to such a resource: { "statistics_enabled" : true, "number_of_entries" : 0, "hit_ratio" : 0.0, "read_write_ratio" : 0.0, "time_since_start" : 0, "time_since_reset" : 49, "current_number_of_entries" : 0, "current_number_of_entries_in_memory" : 0, "total_number_of_entries" : 0, "off_heap_memory_used" : 0, "data_memory_used" : 0, "stores" : 0, "retrievals" : 0, "hits" : 0, "misses" : 0, "remove_hits" : 0, "remove_misses" : 0, "evictions" : 0, "average_read_time" : 0, "average_read_time_nanos" : 0, "average_write_time" : 0, "average_write_time_nanos" : 0, "average_remove_time" : 0, "average_remove_time_nanos" : 0, "required_minimum_number_of_nodes" : -1 } { "time_since_start" : -1, "time_since_reset" : -1, "current_number_of_entries" : -1, "current_number_of_entries_in_memory" : -1, "total_number_of_entries" : -1, "off_heap_memory_used" : -1, "data_memory_used" : -1, "stores" : -1, "retrievals" : -1, "hits" : -1, "misses" : -1, "remove_hits" : -1, "remove_misses" : -1, "evictions" : -1, "average_read_time" : -1, "average_read_time_nanos" : -1, "average_write_time" : -1, "average_write_time_nanos" : -1, "average_remove_time" : -1, "average_remove_time_nanos" : -1, "required_minimum_number_of_nodes" : -1 } 17.4. Accessing cluster health via REST Get Data Grid cluster health via the REST API. Procedure Invoke a GET request to retrieve cluster health. Data Grid responds with a JSON document such as the following: { "cluster_health":{ "cluster_name":"ISPN", "health_status":"HEALTHY", "number_of_nodes":2, "node_names":[ "NodeA-36229", "NodeB-28703" ] }, "cache_health":[ { "status":"HEALTHY", "cache_name":"___protobuf_metadata" }, { "status":"HEALTHY", "cache_name":"cache2" }, { "status":"HEALTHY", "cache_name":"mycache" }, { "status":"HEALTHY", "cache_name":"cache1" } ] } Tip Get Cache Manager status as follows: Reference See the REST v2 (version 2) API documentation for more information. 17.5. Accessing cluster health via JMX Retrieve Data Grid cluster health statistics via JMX. Procedure Connect to Data Grid server using any JMX capable tool such as JConsole and navigate to the following object: Select available MBeans to retrieve cluster health statistics. | [
"server report Downloaded report 'infinispan-<hostname>-<timestamp>-report.tar.gz'",
"Downloaded report 'infinispan-<hostname>-<timestamp>-report.tar.gz'",
"logging list-appenders",
"{ \"STDOUT\" : { \"name\" : \"STDOUT\" }, \"JSON-FILE\" : { \"name\" : \"JSON-FILE\" }, \"HR-ACCESS-FILE\" : { \"name\" : \"HR-ACCESS-FILE\" }, \"FILE\" : { \"name\" : \"FILE\" }, \"REST-ACCESS-FILE\" : { \"name\" : \"REST-ACCESS-FILE\" } }",
"logging list-loggers",
"[ { \"name\" : \"\", \"level\" : \"INFO\", \"appenders\" : [ \"STDOUT\", \"FILE\" ] }, { \"name\" : \"org.infinispan.HOTROD_ACCESS_LOG\", \"level\" : \"INFO\", \"appenders\" : [ \"HR-ACCESS-FILE\" ] }, { \"name\" : \"com.arjuna\", \"level\" : \"WARN\", \"appenders\" : [ ] }, { \"name\" : \"org.infinispan.REST_ACCESS_LOG\", \"level\" : \"INFO\", \"appenders\" : [ \"REST-ACCESS-FILE\" ] } ]",
"logging set --level=DEBUG org.infinispan",
"logging remove org.infinispan",
"stats",
"{ \"statistics_enabled\" : true, \"number_of_entries\" : 0, \"hit_ratio\" : 0.0, \"read_write_ratio\" : 0.0, \"time_since_start\" : 0, \"time_since_reset\" : 49, \"current_number_of_entries\" : 0, \"current_number_of_entries_in_memory\" : 0, \"total_number_of_entries\" : 0, \"off_heap_memory_used\" : 0, \"data_memory_used\" : 0, \"stores\" : 0, \"retrievals\" : 0, \"hits\" : 0, \"misses\" : 0, \"remove_hits\" : 0, \"remove_misses\" : 0, \"evictions\" : 0, \"average_read_time\" : 0, \"average_read_time_nanos\" : 0, \"average_write_time\" : 0, \"average_write_time_nanos\" : 0, \"average_remove_time\" : 0, \"average_remove_time_nanos\" : 0, \"required_minimum_number_of_nodes\" : -1 }",
"stats /containers/default/caches/mycache",
"{ \"time_since_start\" : -1, \"time_since_reset\" : -1, \"current_number_of_entries\" : -1, \"current_number_of_entries_in_memory\" : -1, \"total_number_of_entries\" : -1, \"off_heap_memory_used\" : -1, \"data_memory_used\" : -1, \"stores\" : -1, \"retrievals\" : -1, \"hits\" : -1, \"misses\" : -1, \"remove_hits\" : -1, \"remove_misses\" : -1, \"evictions\" : -1, \"average_read_time\" : -1, \"average_read_time_nanos\" : -1, \"average_write_time\" : -1, \"average_write_time_nanos\" : -1, \"average_remove_time\" : -1, \"average_remove_time_nanos\" : -1, \"required_minimum_number_of_nodes\" : -1 }",
"GET /rest/v2/cache-managers/{cacheManagerName}/health",
"{ \"cluster_health\":{ \"cluster_name\":\"ISPN\", \"health_status\":\"HEALTHY\", \"number_of_nodes\":2, \"node_names\":[ \"NodeA-36229\", \"NodeB-28703\" ] }, \"cache_health\":[ { \"status\":\"HEALTHY\", \"cache_name\":\"___protobuf_metadata\" }, { \"status\":\"HEALTHY\", \"cache_name\":\"cache2\" }, { \"status\":\"HEALTHY\", \"cache_name\":\"mycache\" }, { \"status\":\"HEALTHY\", \"cache_name\":\"cache1\" } ] }",
"GET /rest/v2/cache-managers/{cacheManagerName}/health/status",
"org.infinispan:type=CacheManager,name=\"default\",component=CacheContainerHealth"
]
| https://docs.redhat.com/en/documentation/red_hat_data_grid/8.4/html/data_grid_server_guide/tshoot_server |
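The REST health check described above can also be exercised with any HTTP client. The following is a minimal sketch using curl; it assumes the default cache manager name default, the default single-port endpoint on localhost:11222, and placeholder credentials.

curl -u admin:changeme http://localhost:11222/rest/v2/cache-managers/default/health

The response is the same JSON health document shown in the REST section above.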
Chapter 49. JQ | Chapter 49. JQ Since Camel 3.18 Camel supports JQ to allow using an Expression or Predicate on JSON messages. 49.1. Dependencies When using jq with Red Hat build of Camel Spring Boot, use the following Maven dependency to have support for auto configuration: <dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-jq-starter</artifactId> </dependency> 49.2. JQ Options The JQ language supports 4 options, which are listed below. Name Default Java Type Description headerName String Name of header to use as input, instead of the message body. It has a higher precedence than the propertyName if both are set. propertyName String Name of property to use as input, instead of the message body. It has a lower precedence than the headerName if both are set. resultType String Sets the class of the result type (type from output). trim true Boolean Whether to trim the value to remove leading and trailing whitespaces and line breaks. 49.3. Examples For example, you can use JQ in a Predicate with the Content Based Router EIP. from("queue:books.new") .choice() .when().jq(".store.book.price < 10") .to("jms:queue:book.cheap") .when().jq(".store.book.price < 30") .to("jms:queue:book.average") .otherwise() .to("jms:queue:book.expensive"); 49.4. Message body types Camel JQ leverages camel-jackson for type conversion. To enable camel-jackson POJO type conversion, refer to the Camel Jackson documentation. 49.5. Using header as input By default, JQ uses the message body as the input source. However, you can also use a header as input by specifying the headerName option. For example, to count the number of books from a JSON document that was stored in a header named books , you can do: from("direct:start") .setHeader("numberOfBooks") .jq(".store.books | length", int.class, "books") .to("mock:result"); 49.6. Camel supplied JQ Functions The camel-jq component adds the following functions: header - Allows access to the Message header in a JQ expression. For example, to set the property foo with the value from the Message header MyHeader : from("direct:start") .transform() .jq(".foo = header(\"MyHeader\")") .to("mock:result"); 49.7. Spring Boot Auto-Configuration The component supports 4 options, which are listed below. Name Description Default Type camel.language.jq.enabled Whether to enable auto configuration of the jq language. This is enabled by default. Boolean camel.language.jq.header-name Name of header to use as input, instead of the message body. It has a higher precedence than the propertyName if both are set. String camel.language.jq.property-name Name of property to use as input, instead of the message body. It has a lower precedence than the headerName if both are set. String camel.language.jq.trim Whether to trim the value to remove leading and trailing whitespaces and line breaks. true Boolean | [
"<dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-jq-starter</artifactId> </dependency>",
"from(\"queue:books.new\") .choice() .when().jq(\".store.book.price < 10)\") .to(\"jms:queue:book.cheap\") .when().jq(\".store.book.price < 30)\") .to(\"jms:queue:book.average\") .otherwise() .to(\"jms:queue:book.expensive\");",
"from(\"direct:start\") .setHeader(\"numberOfBooks\") .jq(\".store.books | length\", int.class, \"books\") .to(\"mock:result\");",
"from(\"direct:start\") .transform() .jq(\".foo = header(\\\"MyHeader\\\")\") .to(\"mock:result\");"
]
| https://docs.redhat.com/en/documentation/red_hat_build_of_apache_camel/4.0/html/red_hat_build_of_apache_camel_for_spring_boot_reference/csb-camel-jq-language-component-starter |
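Because camel-jq is backed by a Java implementation of JQ, simple path and comparison expressions like the ones above behave the same way in the standalone jq command-line tool, which makes it easy to try them before wiring them into a route. This is a sketch only: the sample JSON documents are invented to match the paths used in the examples, and jq must be installed on the machine.

# The Content Based Router predicate: prints "true" because the price is below 10
echo '{"store":{"book":{"title":"Camel in Action","price":7}}}' | jq '.store.book.price < 10'

# The header-input example: counts the entries under .store.books, prints "2"
echo '{"store":{"books":[{"title":"A"},{"title":"B"}]}}' | jq '.store.books | length'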
Viewing reports about your Ansible automation environment | Viewing reports about your Ansible automation environment Red Hat Ansible Automation Platform 2.3 Use the reports feature within Automation Analytics to generate an overview report to monitor your automation environment. Red Hat Customer Content Services | null | https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.3/html/viewing_reports_about_your_ansible_automation_environment/index |
Chapter 15. Jakarta Authorization | Chapter 15. Jakarta Authorization 15.1. About Jakarta Authorization Jakarta Authorization is a standard which defines a contract between containers and authorization service providers, which results in the implementation of providers for use by containers. For details about the specifications, see Jakarta Authorization specification . JBoss EAP implements support for Jakarta Authorization within the security functionality of the security subsystem. 15.2. Configure Jakarta Authorization Security You can configure Jakarta Authorization by configuring your security domain with the correct module, and then modifying your jboss-web.xml to include the required parameters. Add Jakarta Authentication to the Security Domain To add Jakarta Authorization support to the security domain, add the Jakarta Authorization authorization policy to the authorization stack of the security domain, with the required flag set. The following is an example of a security domain with Jakarta Authorization support. However, it is recommended to configure the security domain from the management console or the management CLI, rather than directly modifying the XML. Example: Security Domain with Jakarta Authentication <security-domain name="jacc" cache-type="default"> <authentication> <login-module code="UsersRoles" flag="required"> </login-module> </authentication> <authorization> <policy-module code="JACC" flag="required"/> </authorization> </security-domain> Configure a Web Application to Use Jakarta Authentication The jboss-web.xml file is located in the WEB-INF/ directory of your deployment, and contains overrides and additional JBoss-specific configuration for the web container. To use your Jakarta Authorization-enabled security domain, you need to include the <security-domain> element, and also set the <use-jboss-authorization> element to true . The following XML is configured to use the Jakarta Authorization security domain above. Example: Use the Jakarta Authentication Security Domain <jboss-web> <security-domain>jacc</security-domain> <use-jboss-authorization>true</use-jboss-authorization> </jboss-web> Configure an Jakarta Enterprise Beans Application to Use Jakarta Authentication Configuring Jakarta Enterprise Beans to use a security domain and to use Jakarta Authorization differs from web applications. For an Jakarta Enterprise Beans, you can declare method permissions on a method or group of methods, in the ejb-jar.xml descriptor. Within the <ejb-jar> element, any child <method-permission> elements contain information about Jakarta Authorization roles. See the example configuration below for details. The EJBMethodPermission class is part of the Jakarta EE API, and is documented at Class EJBMethodPermission . Example: Jakarta Authentication Method Permissions in an Jakarta Enterprise Beans <ejb-jar> <assembly-descriptor> <method-permission> <description>The employee and temp-employee roles can access any method of the EmployeeService bean </description> <role-name>employee</role-name> <role-name>temp-employee</role-name> <method> <ejb-name>EmployeeService</ejb-name> <method-name>*</method-name> </method> </method-permission> </assembly-descriptor> </ejb-jar> You can also constrain the authentication and authorization mechanisms for an Jakarta Enterprise Beans by using a security domain, just as you can do for a web application. Security domains are declared in the jboss-ejb3.xml descriptor, in the <security> child element. 
In addition to the security domain, you can also specify the <run-as-principal> , which changes the principal that the Jakarta Enterprise Beans runs as. Example: Security Domain Declaration in an Jakarta Enterprise Beans <ejb-jar> <assembly-descriptor> <security> <ejb-name>*</ejb-name> <security-domain>myDomain</security-domain> <run-as-principal>myPrincipal</run-as-principal> </security> </assembly-descriptor> </ejb-jar> Enabling Jakarta Authorization Using the elytron Subsystem Disable Jakarta Authentication in the Legacy Security Subsystem By default, the application server uses the legacy security subsystem to configure the Jakarta Authorization policy provider and factory. The default configuration maps to implementations from PicketBox. In order to use Elytron to manage Jakarta Authorization configuration, or any other policy you want to install to the application server, you must first disable Jakarta Authorization in the legacy security subsystem. For that, you can use the following management CLI command: Failure to do so can result in the following error in the server log: MSC000004: Failure during stop of service org.wildfly.security.policy: java.lang.StackOverflowError . Define a Jakarta Authentication Policy Provider The elytron subsystem provides a built-in policy provider based on Jakarta Authorization specification. To create the policy provider you can execute the following management CLI command: Enable Jakarta Authentication to a Web Deployment Once a Jakarta Authorization policy provider is defined, you can enable Jakarta Authorization for web deployments by executing the following command: The command above defines a default security domain for applications, if none is provided in the jboss-web.xml file. In case you already have a application-security-domain defined and just want to enable Jakarta Authorization you can execute the following command: Enable Jakarta Authentication to an Jakarta Enterprise Beans Deployment Once a Jakarta Authorization policy provider is defined, you can enable Jakarta Authorization for Jakarta Enterprise Beans deployments by executing the following command: The command above defines a default security domain for Jakarta Enterprise Beans. In case you already have a application-security-domain defined and just want to enable Jakarta Authorization you can execute a command as follows: Creating a Custom Elytron Policy Provider A custom policy provider is used when you need a custom java.security.Policy , like when you want to integrate with some external authorization service in order to check permissions. To create a custom policy provider, you will need to implement the java.security.Policy , create and plug in a custom module with the implementation and use the implementation from the module in the elytron subsystem. For more information, see the Policy Provider Properties . Note In most cases, you can use the Jakarta Authorization policy provider as it is expected to be part of any Jakarta EE compliant application server. | [
"<security-domain name=\"jacc\" cache-type=\"default\"> <authentication> <login-module code=\"UsersRoles\" flag=\"required\"> </login-module> </authentication> <authorization> <policy-module code=\"JACC\" flag=\"required\"/> </authorization> </security-domain>",
"<jboss-web> <security-domain>jacc</security-domain> <use-jboss-authorization>true</use-jboss-authorization> </jboss-web>",
"<ejb-jar> <assembly-descriptor> <method-permission> <description>The employee and temp-employee roles can access any method of the EmployeeService bean </description> <role-name>employee</role-name> <role-name>temp-employee</role-name> <method> <ejb-name>EmployeeService</ejb-name> <method-name>*</method-name> </method> </method-permission> </assembly-descriptor> </ejb-jar>",
"<ejb-jar> <assembly-descriptor> <security> <ejb-name>*</ejb-name> <security-domain>myDomain</security-domain> <run-as-principal>myPrincipal</run-as-principal> </security> </assembly-descriptor> </ejb-jar>",
"/subsystem=security:write-attribute(name=initialize-jacc, value=false)",
"/subsystem=elytron/policy=jacc:add(jacc-policy={}) reload",
"/subsystem=undertow/application-security-domain=other:add(security-domain=ApplicationDomain,enable-jacc=true)",
"/subsystem=undertow/application-security-domain=my-security-domain:write-attribute(name=enable-jacc,value=true)",
"/subsystem=ejb3/application-security-domain=other:add(security-domain=ApplicationDomain,enable-jacc=true)",
"/subsystem=ejb3/application-security-domain=my-security-domain:write-attribute(name=enable-jacc,value=true)",
"/subsystem=elytron/policy=policy-provider-a:add(custom-policy={class-name=MyPolicyProviderA, module=x.y.z})"
]
| https://docs.redhat.com/en/documentation/red_hat_jboss_enterprise_application_platform/7.4/html/development_guide/jakarta_authorization |
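The last management CLI command above registers a custom policy class from a module named x.y.z, but that module has to exist on the server first. The following is a sketch of one way to install it with the management CLI's module command, run on the host where the server is installed; the JAR path and the dependency list are assumptions and depend on what your java.security.Policy implementation actually uses.

# Run from EAP_HOME/bin/jboss-cli.sh on the server host; paths and dependencies are illustrative
module add --name=x.y.z --resources=/path/to/my-policy-provider.jar --dependencies=javax.api,org.wildfly.security.elytron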
Chapter 10. KVM Paravirtualized (virtio) Drivers | Chapter 10. KVM Paravirtualized (virtio) Drivers Paravirtualized drivers enhance the performance of guests, decreasing guest I/O latency and increasing throughput to near bare-metal levels. It is recommended to use the paravirtualized drivers for fully virtualized guests running I/O heavy tasks and applications. Virtio drivers are KVM's paravirtualized device drivers, available for Windows guest virtual machines running on KVM hosts. These drivers are included in the virtio package. The virtio package supports block (storage) devices and network interface controllers. The KVM virtio drivers are automatically loaded and installed on the following: Red Hat Enterprise Linux 4.8 and newer Red Hat Enterprise Linux 5.3 and newer Red Hat Enterprise Linux 6 and newer Red Hat Enterprise Linux 7 and newer Some versions of Linux based on the 2.6.27 kernel or newer kernel versions. Versions of Red Hat Enterprise Linux in the list above detect and install the drivers, additional installation steps are not required. In Red Hat Enterprise Linux 3 (3.9 and above), manual installation is required. Note PCI devices are limited by the virtualized system architecture. Refer to Section 4.1, "KVM Restrictions" for additional limitations when using assigned devices. Using KVM virtio drivers, the following Microsoft Windows versions are expected to run similarly to bare-metal-based systems. Windows Server 2003 (32-bit and 64-bit versions) Windows Server 2008 (32-bit and 64-bit versions) Windows Server 2008 R2 (64-bit only) Windows 7 (32-bit and 64-bit versions) Windows Server 2012 (64-bit only) Windows Server 2012 R2 (64-bit only) Windows 8 (32-bit and 64-bit versions) Windows 8.1 (32-bit and 64-bit versions) 10.1. Installing the KVM Windows virtio Drivers This section covers the installation process for the KVM Windows virtio drivers. The KVM virtio drivers can be loaded during the Windows installation or installed after the guest is installed. You can install the virtio drivers on a guest virtual machine using one of the following methods: hosting the installation files on a network accessible to the virtual machine using a virtualized CD-ROM device of the driver installation disk .iso file using a USB drive, by mounting the same (provided) .ISO file that you would use for the CD-ROM using a virtualized floppy device to install the drivers during boot time (required and recommended only for XP/2003) This guide describes installation from the paravirtualized installer disk as a virtualized CD-ROM device. Download the drivers The virtio-win package contains the virtio block and network drivers for all supported Windows guest virtual machines. Download and install the virtio-win package on the host with the yum command. The list of virtio-win packages that are supported on Windows operating systems, and the current certified package version, can be found at the following URL: windowsservercatalog.com . Note that the Red Hat Virtualization Hypervisor and Red Hat Enterprise Linux are created on the same code base so the drivers for the same version (for example, Red Hat Virtualization Hypervisor 3.3 and Red Hat Enterprise Linux 6.5) are supported for both environments. The virtio-win package installs a CD-ROM image, virtio-win.iso , in the /usr/share/virtio-win/ directory. Install the virtio drivers When booting a Windows guest that uses virtio-win devices, the relevant virtio-win device drivers must already be installed on this guest. 
The virtio-win drivers are not provided as inbox drivers in Microsoft's Windows installation kit, so installation of a Windows guest on a virtio-win storage device (viostor/virtio-scsi) requires that you provide the appropriate driver during the installation, either directly from the virtio-win.iso or from the supplied Virtual Floppy image virtio-win <version> .vfd . | [
"yum install virtio-win"
]
| https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/virtualization_host_configuration_and_guest_installation_guide/chap-virtualization_host_configuration_and_guest_installation_guide-para_virtualized_drivers |
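As an illustration of the virtualized CD-ROM approach described above, the virtio-win.iso image installed by the package can be attached to an existing Windows guest with virsh. The guest name windowsguest and the target device hdc are examples only; use the values that match your environment.

# Attach the driver disk to the guest as a read-only virtual CD-ROM
virsh attach-disk windowsguest /usr/share/virtio-win/virtio-win.iso hdc --type cdrom --mode readonly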
5.6.3.3. Data Migration | 5.6.3.3. Data Migration Most seasoned system administrators would be impressed by LVM capabilities so far, but they would also be asking themselves this question: What happens if one of the drives making up a logical volume starts to fail? The good news is that most LVM implementations include the ability to migrate data off a particular physical drive. For this to work, there must be sufficient reserve capacity left to absorb the loss of the failing drive. Once the migration is complete, the failing drive can be replaced and its replacement added back into the available storage pool. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/introduction_to_system_administration/s3-storage-adv-lvm-migration |
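On systems that use LVM2, the migration described above maps to a short command sequence. This is a sketch only: /dev/sdb1 stands in for the failing physical volume and myvg for the volume group that contains it.

pvmove /dev/sdb1          # move all allocated extents off the failing drive onto free space elsewhere in the volume group
vgreduce myvg /dev/sdb1   # drop the now-empty physical volume from the volume group
pvremove /dev/sdb1        # clear the LVM label so the drive can be pulled and replaced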
4.324. thunderbird | 4.324. thunderbird 4.324.1. RHSA-2012:0080 - Critical: thunderbird security update An updated thunderbird package that fixes multiple security issues is now available for Red Hat Enterprise Linux 6. The Red Hat Security Response Team has rated this update as having critical security impact. Common Vulnerability Scoring System (CVSS) base score, which gives a detailed severity rating, is available for each vulnerability from the CVE link(s) associated with each description below. Mozilla Thunderbird is a standalone mail and newsgroup client. Security Fixes CVE-2011-3659 A use-after-free flaw was found in the way Thunderbird removed nsDOMAttribute child nodes. In certain circumstances, due to the premature notification of AttributeChildRemoved, a malicious script could possibly use this flaw to cause Thunderbird to crash or, potentially, execute arbitrary code with the privileges of the user running Thunderbird. CVE-2012-0442 Several flaws were found in the processing of malformed content. An HTML mail message containing malicious content could cause Thunderbird to crash or, potentially, execute arbitrary code with the privileges of the user running Thunderbird. CVE-2012-0449 A flaw was found in the way Thunderbird parsed certain Scalable Vector Graphics (SVG) image files that contained eXtensible Style Sheet Language Transformations (XSLT). An HTML mail message containing a malicious SVG image file could cause Thunderbird to crash or, potentially, execute arbitrary code with the privileges of the user running Thunderbird. CVE-2011-3670 The same-origin policy in Thunderbird treated http://example.com and http://[example.com] as interchangeable. A malicious script could possibly use this flaw to gain access to sensitive information (such as a client's IP and user e-mail address, or httpOnly cookies) that may be included in HTTP proxy error replies, generated in response to invalid URLs using square brackets. Note: The CVE-2011-3659 and CVE-2011-3670 issues cannot be exploited by a specially-crafted HTML mail message as JavaScript is disabled by default for mail messages. It could be exploited another way in Thunderbird, for example, when viewing the full remote content of an RSS feed. For technical details regarding these flaws, refer to the Mozilla security advisories for Thunderbird 3.1.18.: http://www.mozilla.org/security/known-vulnerabilities/thunderbird31.html#thunderbird3.1.18 All Thunderbird users should upgrade to these updated packages, which contain Thunderbird version 3.1.18, which corrects these issues. After installing the update, Thunderbird must be restarted for the changes to take effect. 4.324.2. RHSA-2012:0140 - Critical: thunderbird security update An updated thunderbird package that fixes one security issue is now available for Red Hat Enterprise Linux 6. The Red Hat Security Response Team has rated this update as having critical security impact. A Common Vulnerability Scoring System (CVSS) base score, which gives a detailed severity rating, is available for each vulnerability from the CVE link(s) associated with each description below. Mozilla Thunderbird is a standalone mail and newsgroup client. Security Fix CVE-2011-3026 A heap-based buffer overflow flaw was found in the way Thunderbird handled PNG (Portable Network Graphics) images. An HTML mail message or remote content containing a specially-crafted PNG image could cause Thunderbird to crash or, possibly, execute arbitrary code with the privileges of the user running Thunderbird. 
All Thunderbird users should upgrade to this updated package, which corrects this issue. After installing the update, Thunderbird must be restarted for the changes to take effect. 4.324.3. RHSA-2012:0388 - Critical: thunderbird security update An updated thunderbird package that fixes multiple security issues is now available for Red Hat Enterprise Linux 5 and 6. The Red Hat Security Response Team has rated this update as having critical security impact. Common Vulnerability Scoring System (CVSS) base score, which gives a detailed severity rating, is available for each vulnerability from the CVE link(s) associated with each description below. Mozilla Thunderbird is a standalone mail and newsgroup client. Security Fixes CVE-2012-0461 , CVE-2012-0462 , CVE-2012-0464 Several flaws were found in the processing of malformed content. Malicious content could cause Thunderbird to crash or, potentially, execute arbitrary code with the privileges of the user running Thunderbird. CVE-2012-0456 , CVE-2012-0457 Two flaws were found in the way Thunderbird parsed certain Scalable Vector Graphics (SVG) image files. An HTML mail message containing a malicious SVG image file could cause an information leak, or cause Thunderbird to crash or, potentially, execute arbitrary code with the privileges of the user running Thunderbird. CVE-2012-0455 A flaw could allow malicious content to bypass intended restrictions, possibly leading to a cross-site scripting (XSS) attack if a user were tricked into dropping a "javascript:" link onto a frame. CVE-2012-0458 It was found that the home page could be set to a "javascript:" link. If a user were tricked into setting such a home page by dragging a link to the home button, it could cause Firefox to repeatedly crash, eventually leading to arbitrary code execution with the privileges of the user running Firefox. A similar flaw was found and fixed in Thunderbird. CVE-2012-0459 A flaw was found in the way Thunderbird parsed certain, remote content containing "cssText". Malicious, remote content could cause Thunderbird to crash or, potentially, execute arbitrary code with the privileges of the user running Thunderbird. CVE-2012-0460 It was found that by using the DOM fullscreen API, untrusted content could bypass the mozRequestFullscreen security protections. Malicious content could exploit this API flaw to cause user interface spoofing. CVE-2012-0451 A flaw was found in the way Thunderbird handled content with multiple Content Security Policy (CSP) headers. This could lead to a cross-site scripting attack if used in conjunction with a website that has a header injection flaw. Note All issues except CVE-2012-0456 and CVE-2012-0457 cannot be exploited by a specially-crafted HTML mail message as JavaScript is disabled by default for mail messages. It could be exploited another way in Thunderbird, for example, when viewing the full remote content of an RSS feed. All Thunderbird users should upgrade to this updated package, which contains Thunderbird version 10.0.3 ESR, which corrects these issues. After installing the update, Thunderbird must be restarted for the changes to take effect. 4.324.4. RHSA-2012:0516 - Critical: thunderbird security update An updated thunderbird package that fixes multiple security issues is now available for Red Hat Enterprise Linux 5 and 6. The Red Hat Security Response Team has rated this update as having critical security impact. 
Common Vulnerability Scoring System (CVSS) base score, which gives a detailed severity rating, is available for each vulnerability from the CVE link(s) associated with each description below. Mozilla Thunderbird is a standalone mail and newsgroup client. Security Fixes CVE-2011-3062 A flaw was found in Sanitiser for OpenType (OTS), used by Thunderbird to help prevent potential exploits in malformed OpenType fonts. Malicious content could cause Thunderbird to crash or, under certain conditions, possibly execute arbitrary code with the privileges of the user running Thunderbird. CVE-2012-0467 , CVE-2012-0468 , CVE-2012-0469 Malicious content could cause Thunderbird to crash or, potentially, execute arbitrary code with the privileges of the user running Thunderbird. CVE-2012-0470 Content containing a malicious Scalable Vector Graphics (SVG) image file could cause Thunderbird to crash or, potentially, execute arbitrary code with the privileges of the user running Thunderbird. CVE-2012-0472 A flaw was found in the way Thunderbird used its embedded Cairo library to render certain fonts. Malicious content could cause Thunderbird to crash or, under certain conditions, possibly execute arbitrary code with the privileges of the user running Thunderbird. CVE-2012-0478 A flaw was found in the way Thunderbird rendered certain images using WebGL. Malicious content could cause Thunderbird to crash or, under certain conditions, possibly execute arbitrary code with the privileges of the user running Thunderbird. CVE-2012-0471 A cross-site scripting (XSS) flaw was found in the way Thunderbird handled certain multibyte character sets. Malicious content could cause Thunderbird to run JavaScript code with the permissions of different content. CVE-2012-0473 A flaw was found in the way Thunderbird rendered certain graphics using WebGL. Malicious content could cause Thunderbird to crash. CVE-2012-0474 A flaw in the built-in feed reader in Thunderbird allowed the Website field to display the address of different content than the content the user was visiting. An attacker could use this flaw to conceal a malicious URL, possibly tricking a user into believing they are viewing a trusted site, or allowing scripts to be loaded from the attacker's site, possibly leading to cross-site scripting (XSS) attacks. CVE-2012-0477 A flaw was found in the way Thunderbird decoded the ISO-2022-KR and ISO-2022-CN character sets. Malicious content could cause Thunderbird to run JavaScript code with the permissions of different content. CVE-2012-0479 A flaw was found in the way the built-in feed reader in Thunderbird handled RSS and Atom feeds. Invalid RSS or Atom content loaded over HTTPS caused Thunderbird to display the address of said content, but not the content. The content continued to be displayed. An attacker could use this flaw to perform phishing attacks, or trick users into thinking they are visiting the site reported by the Website field, when the page is actually content controlled by an attacker. Red Hat would like to thank the Mozilla project for reporting these issues. 
Upstream acknowledges Mateusz Jurczyk of the Google Security Team as the original reporter of CVE-2011-3062 ; Aki Helin from OUSPG as the original reporter of CVE-2012-0469 ; Atte Kettunen from OUSPG as the original reporter of CVE-2012-0470 ; wushi of team509 via iDefense as the original reporter of CVE-2012-0472 ; Ms2ger as the original reporter of CVE-2012-0478 ; Anne van Kesteren of Opera Software as the original reporter of CVE-2012-0471 ; Matias Juntunen as the original reporter of CVE-2012-0473 ; Jordi Chancel and Eddy Bordi, and Chris McGowen as the original reporters of CVE-2012-0474 ; Masato Kinugawa as the original reporter of CVE-2012-0477 ; and Jeroen van der Gun as the original reporter of CVE-2012-0479 . Note All issues except CVE-2012-0470 , CVE-2012-0472 , and CVE-2011-3062 cannot be exploited by a specially-crafted HTML mail message as JavaScript is disabled by default for mail messages. It could be exploited another way in Thunderbird, for example, when viewing the full remote content of an RSS feed. All Thunderbird users should upgrade to this updated package, which corrects these issues. After installing the update, Thunderbird must be restarted for the changes to take effect. 4.324.5. RHSA-2012:0715 - Critical: thunderbird security update An updated thunderbird package that fixes multiple security issues is now available for Red Hat Enterprise Linux 5 and 6. The Red Hat Security Response Team has rated this update as having critical security impact. Common Vulnerability Scoring System (CVSS) base score, which gives a detailed severity rating, is available for each vulnerability from the CVE link(s) associated with each description below. Mozilla Thunderbird is a standalone mail and newsgroup client. Security Fixes CVE-2011-3101 , CVE-2012-1937 , CVE-2012-1938 , CVE-2012-1939 , CVE-2012-1940 , CVE-2012-1941 , CVE-2012-1946 , CVE-2012-1947 Several flaws were found in the processing of malformed content. Malicious content could cause Thunderbird to crash or, potentially, execute arbitrary code with the privileges of the user running Thunderbird. Note Note: CVE-2011-3101 only affected users of certain NVIDIA display drivers with graphics cards that have hardware acceleration enabled. CVE-2012-1944 It was found that the Content Security Policy (CSP) implementation in Thunderbird no longer blocked Thunderbird inline event handlers. Malicious content could possibly bypass intended restrictions if that content relied on CSP to protect against flaws such as cross-site scripting (XSS). CVE-2012-1945 If a web server hosted content that is stored on a Microsoft Windows share, or a Samba share, loading such content with Thunderbird could result in Windows shortcut files (.lnk) in the same share also being loaded. An attacker could use this flaw to view the contents of local files and directories on the victim's system. This issue also affected users opening content from Microsoft Windows shares, or Samba shares, that are mounted on their systems. Red Hat would like to thank the Mozilla project for reporting these issues. 
Upstream acknowledges Ken Russell of Google as the original reporter of CVE-2011-3101 ; Igor Bukanov, Olli Pettay, Boris Zbarsky, and Jesse Ruderman as the original reporters of CVE-2012-1937 ; Jesse Ruderman, Igor Bukanov, Bill McCloskey, Christian Holler, Andrew McCreight, and Brian Bondy as the original reporters of CVE-2012-1938 ; Christian Holler as the original reporter of CVE-2012-1939 ; security researcher Abhishek Arya of Google as the original reporter of CVE-2012-1940 , CVE-2012-1941 , and CVE-2012-1947 ; security researcher Arthur Gerkis as the original reporter of CVE-2012-1946 ; security researcher Adam Barth as the original reporter of CVE-2012-1944 ; and security researcher Paul Stone as the original reporter of CVE-2012-1945 . Note None of the issues in this advisory can be exploited by a specially-crafted HTML mail message as JavaScript is disabled by default for mail messages. They could be exploited another way in Thunderbird, for example, when viewing the full remote content of an RSS feed. All Thunderbird users should upgrade to this updated package, which contains Thunderbird version 10.0.5 ESR, which corrects these issues. After installing the update, Thunderbird must be restarted for the changes to take effect. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.2_technical_notes/thunderbird |
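On a registered Red Hat Enterprise Linux host, the upgrade called for in each of these advisories is normally applied with yum; restart Thunderbird afterwards so the updated package takes effect.

# Pull in the fixed thunderbird package from the entitled repositories
yum update thunderbird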
Chapter 6. Performing operations with the Shared File Systems service (manila) | Chapter 6. Performing operations with the Shared File Systems service (manila) You can create and manage shares from the available share types in the Shared File Systems service (manila). Note To execute openstack client commands on the cloud, you must specify the name of the cloud detailed in your clouds.yaml file. You can specify the name of the cloud by using one of the following methods: Use the --os-cloud option with each command: Use this option if you access more than one cloud. Create an environment variable for the cloud name in your bashrc file: Prerequisites The administrator has created a project for you, and they have provided you with a clouds.yaml file for you to access the cloud. You have installed the python-openstackclient package. 6.1. Listing share types You must specify a share type when you create a share, and you can only create shares that match the available share types. The configured share types define the type of service that the Shared File Systems service scheduler uses to make scheduling decisions and that drivers use to control share creation. Procedure List the available share types: The command output lists the name and ID of the available share types. 6.2. Creating NFS, CephFS, or CIFS shares You can create CephFS-NFS, native CephFS, or CIFS shares to read and write data. When you create a share, you must specify the share protocol and the size of the share in gigabytes. You can also include the share-type , share-network and name command options: In the command example, replace the following values: Value Description Required or optional <share_type> Applies settings associated with the specified share type Optional. If you do not specify a share type, the default share type is used. <share_network> The name of the share network Required if the share type has driver_handles_share_servers set to true . Unsupported if the share type has driver_handles_share_servers set to false . Unsupported for CephFS-NFS and native CephFS. These protocols do not support share types that have driver_handles_share_servers set to true . <share_name> The name of the share Optional. Shares are not required to have a name, and the name does not need to be unique. <share_protocol> The share protocol you want to use For CephFS-NFS, replace <share_protocol> with nfs . For native CephFS, replace <share_protocol> with cephfs . For other storage back ends that support NFS or CIFS protocols, for example, NetApp or Dell EMC storage back ends, replace <share_protocol> with nfs or cifs . <GB> The size of the share in gigabytes Required. 6.2.1. Creating NFS or CIFS shares with DHSS=true When the share type extra specification, driver_handles_share_servers is set to true , you can add your own security services to a share network to create and export NFS or CIFS shares. The native CephFS protocol does not support share networks. To add a security service, you must create a share network first. If you are creating CIFS shares, you must also create a security service resource to represent your Active Directory server. You then associate the security service to the share network. If you are creating NFS shares, you do not require a security service unless you want to use Kerberos or LDAP authorization on your shares. Procedure Create a share network: Replace <network_name> with the share network name that you want to use for your NFS or CIFS shares. 
Replace the neutron-net-id and neutron-subnet-id with the correct values for your share network. Create a security service resource to represent your Active Directory server: Replace the values in angle brackets <> with the correct details for your security service resource. Associate the security service resource to the share network: Create an NFS or CIFS share: 10 GB NFS example: 20 GB CIFS example: Replace the values in angle brackets <> with the correct details for your NFS or CIFS share. 6.2.2. Creating NFS, CephFS, or CIFS shares with DHSS=false When the share type extra specification, driver_handles_share_servers , is set to`false`, you cannot use custom security services because security services have been configured directly on the storage system. Because CIFS shares require an Active Directory server along with the storage system to manage access control, your administrator must pre-create an Active Directory server and associate it with the storage system to use CIFS shares. When DHSS=false, you can create shares without using the share-network command option because the shared storage network is pre-configured. Procedure Create an NFS, native CephFS, or CIFS share when DHSS=false. These examples specify a name , but they do not specify the share-type or share-network . They use the default share type and the configured shared storage network: Create a 10 GB NFS share named share-01 . Create a 15 GB native CephFS share named share-02 : Create a 20 GB CIFS share named share-03 : 6.3. Listing shares and exporting information To verify that you have successfully created NFS, CephFS, or CIFS shares in the Shared File Systems service (manila), you can list the shares and view their export locations and parameters. Procedure List the shares: View the export locations of the share: Replace <share> with either the share name or the share ID. View the parameters for the share: Replace <share_id> with the share ID. Note You use the export location to mount the share, as described in Section 6.8.2, "Mounting NFS, native CephFS, or CIFS shares" . 6.4. Creating a snapshot of data on a shared file system A snapshot is a read-only, point-in-time copy of data on a share. You can use a snapshot to recover data lost through accidental data deletion or file system corruption. Snapshots are more space efficient than backups, and they do not impact the performance of the Shared File Systems service (manila). Prerequisites The snapshot_support parameter must equal true on the parent share. You can run the following command to verify: Procedure Create a snapshot of a share: Replace <share> with the name or ID of the share for which you want to create a snapshot. Optional: Replace <snapshot_name> with the name of the snapshot. Confirm that you created the snapshot: Replace <share> with the ID of the share from which you created the snapshot. 6.4.1. Creating a share from a snapshot You can create a share from a snapshot. If the parent share that the snapshot was created from has a share type of driver_handles_share_servers set to true , the new share is created on the same share network as the parent, and you cannot change this share network for the new share. Prerequisites The create_share_from_snapshot_support share attribute is set to true . The status attribute of the snapshot is set to available . Procedure Retrieve the ID of the share snapshot that contains the data that you require for your new share: A share created from a snapshot can be larger, but not smaller, than the snapshot. 
Retrieve the size of the snapshot: Replace <snapshot_id> with the ID of the snapshot you want to use to create a share. Create a share from a snapshot: Replace <share_protocol> with the protocol, such as NFS. Replace <size> with the size of the share to be created, in GiB. Replace <name> with the name of the new share. List the shares to confirm that the share was created successfully: View the properties of the new share: Verification After you create a snapshot, confirm that the snapshot is available. List the snapshots to confirm that they are available: 6.4.2. Deleting a snapshot When you create snapshots of a share, you cannot delete the share until you delete all of the snapshots created from that share. Procedure Identify the snapshot you want to delete and retrieve its ID: Delete the snapshot: Replace <snapshot> with the name or ID of the snapshot you want to delete. Note Repeat this step for each snapshot you want to delete. After you delete the snapshot, run the following command to confirm that you deleted the snapshot: 6.5. Connecting to a shared network to access shares When the driver_handles_share_servers parameter (DHSS) equals false , shares are exported to the shared provider network that your administrator has made available. You must connect your client, such as a Compute instance, to the shared provider network to access your shares. In the following example procedure, the shared provider network is called StorageNFS. StorageNFS is configured when the Shared File Systems service (manila) is deployed with a CephFS-NFS back end. Follow similar steps to connect to the available network in your Red Hat OpenStack Services on OpenShift (RHOSO) deployment. Note The steps in the following example procedure use IPv4 addressing, but the steps are identical for IPv6. Procedure Create a security group for the StorageNFS port that allows packets to egress the port but does not allow ingress packets from unestablished connections: Create a port on the StorageNFS network with security enforced by the no-ingress security group. Note In the following example, the StorageNFS subnet on the StorageNFS network assigns IP address 198.51.100.160 to nfs-port0 . Add nfs-port0 to a Compute instance. In addition to its private and floating addresses, the Compute instance is assigned a port with the IP address 198.51.100.160 on the StorageNFS network. You can use this IP address to mount NFS shares when access is granted to that address for the shares. Note You might need to adjust the networking configuration on the Compute instance, and then restart the services for the Compute instance to activate an interface with this address. 6.6. Configuring an IPv6 interface between the network and an instance When the shared network to which shares are exported uses IPv6 addressing, you might experience an issue with DHCPv6 on the secondary interface. If this issue occurs, configure an IPv6 interface manually on the instance. Prerequisites Connection to a shared network to access shares Procedure Log in to the instance. Configure the IPv6 interface address: Activate the interface: Ping the IPv6 address in the export location of the share to test interface connectivity: Alternatively, verify that you can reach the NFS server through Telnet: 6.7. Granting share access for end-user clients Before you mount a share on a client, such as a Compute instance, you grant end-user clients access to the share so that users can read data from and write data to the share. 
The type of access depends on the protocol of the share: For CIFS shares, use the CIFS user or group name. For NFS shares, use the IP address of the Compute instance where you plan to mount the share. For native CephFS shares, use Ceph client usernames for cephx authentication. You can grant access to the share by using a command similar to the following command: Replace <share> with the share name or ID of the share you created. Replace <access_type> with the type of access you want to grant to the share, for example, user for CIFS, ip for NFS, or cephfx for native CephFS. Optional: Replace <access_level> with ro for read-only access. The default value is rw for read-write access. Replace client_identifier with the IP address of the instance for NFS, user or group name for CIFS, or Ceph client username for native CephFS. For CIFS and native CephFS, you can use the same client_identifier across multiple clients. 6.7.1. Granting access to an NFS share You can provide access to NFS shares by using the IP address of the client Compute instance where you plan to mount the share. Note You can use the following procedure with IPv4 or IPv6 addresses. Procedure Retrieve the IP address of the client Compute instance where you plan to mount the share. Make sure that you select the IP address that corresponds to the network that can reach the shares. In this example, it is the IP address of the StorageNFS network: Replace <share> with the name or ID of the share you are granting access to. Note Access to the share has its own ID, id . Verification Verify that the access configuration was successful: 6.7.2. Granting access to a native CephFS share You can provide access to native CephFS shares by using Ceph client usernames for cephx authentication. The Shared File Systems service (manila) prevents the use of pre-existing Ceph users so you must create unique Ceph client usernames. To mount a share, you need a Ceph client username and an access key. You can retrieve access keys by using the Shared File Systems service API. By default, access keys are visible to all users in a project namespace. You can provide the same user with access to different shares in the project namespace. Users can then access the shares by using the CephFS kernel client on the client machine. Important Use the native CephFS driver with trusted clients only. Procedure Grant users access to a native CephFS share: Replace <share> with either the share name or share ID. Replace <user> with the Ceph client username. Collect the access key for the user: 6.7.3. Granting access to a CIFS share You can grant access to CIFS shares by using the usernames in the Active Directory service. The Shared File Systems service (manila) does not create new users on the Active Directory server. It only validates usernames through the security service, and access rules with invalid usernames result in an error status. If the value of the driver_handles_share_servers (DHSS) parameter is set to true , then you can configure the Active Directory service by adding a security service. If the DHSS parameter is set to false , then your administrator has already configured the Active Directory service and associated it with the storage network. To mount a share, you must specify the user's Active Directory username and password. You cannot obtain this password through the Shared File Systems service. Procedure Grant users access to a CIFS share: Replace <share> with either the share name or the share ID. 
Replace <user> with the username of the Active Directory user. 6.7.4. Revoking access to a share The owner of a share can revoke access to the share. Complete the following steps to revoke access that was previously granted to a share. Procedure View the access list for the share to retrieve the access ID: Replace <share_01> with either the share name or share ID. Revoke access to the share: Replace <875c6251-c17e-4c45-8516-fe0928004fff> with the access ID of the share. View the access list for the share again to verify the share has been deleted: Note If you have a client with read-write access to the share, you must revoke their access to the share, and then add a read-only rule if you want the client to have read-only access. 6.8. Mounting shares on Compute instances When you grant share access to clients, then the clients can mount and use the shares. Any type of client can access shares as long as there is network connectivity to the client. The steps used to mount an NFS share on a virtual Compute instance are similar to the steps to mount an NFS share on a bare-metal Compute instance. For more information about how to mount shares on OpenShift containers, see Product Documentation for Red Hat OpenShift Container Platform . Note Client packages for the different protocols must be installed on the Compute instance that mounts the shares. For example, for the Shared File Systems service with CephFS through NFS, the NFS client packages must support NFS 4.1. 6.8.1. Listing share export locations Retrieve the export locations of shares so that you can mount a share. Procedure Retrieve the export locations of a share: Replace <share_01> with either the share name or share ID. When multiple export locations exist, choose one for which the value of the preferred metadata field equals True . If no preferred locations exist, you can use any export location. 6.8.2. Mounting NFS, native CephFS, or CIFS shares When you create NFS, native CephFS, or CIFS shares and grant share access to end-user clients, you can then mount the shares on the client to enable access to data, as long as there is network connectivity. Prerequisites To mount NFS shares, the nfs-utils package must be installed on the client machine. To mount native CephFS shares, the ceph-common package must be installed on the client machine. Users access native CephFS shares by using the CephFS kernel client on the client machine. To mount CIFS shares, the cifs-utils package must be installed on the client machine. Procedure Log in to the instance: Mount an NFS share. Refer to the following example for sample syntax: Replace <198.51.100.13:/volumes/_nogroup/e840b4ae-6a04-49ee-9d6e-67d4999fbc01> with the export location of the share. Retrieve the export location as described in Section 6.8.1, "Listing share export locations" . Mount a native CephFS share. Refer to the following example for sample syntax: Replace <192.0.2.125:6789,192.0.2.126:6789,192.0.2.127:6789:/volumes/_nogroup/4c55ad20-9c55-4a5e-9233-8ac64566b98c> with the export location of the share. Retrieve the export location as described in Section 6.8.1, "Listing share export locations" . Replace <user> with the cephx user who has access to the share. Replace the secret value with the access key that you collected in Section 6.7.2, "Granting access to a native CephFS share" . Mount a CIFS share. Refer to the following example for sample syntax: Replace <user> with the Active Directory user who has access to the share. 
Replace <password> with the user's Active Directory password. Replace <\\192.0.2.128/share_11265e8a_200c_4e0a_a40f_b7a1117001ed> with the export location of the share. Retrieve the export location as described in Section 6.8.1, "Listing share export locations" . Verification Verify that the mount command succeeded: 6.9. Deleting shares The Shared File Systems service (manila) provides no protection to prevent you from deleting your data. The service does not check whether clients are connected or workloads are running. When you delete a share, you cannot retrieve it. Warning Back up your data before you delete a share. Prerequisites If you created snapshots from a share, you must delete all of the snapshots and replicas before you can delete the share. For more information, see Deleting a snapshot . Procedure Delete a share: Replace <share> with either the share name or the share ID. 6.10. Listing resource limits of the Shared File Systems service You can list the current resource limits for the Shared File Systems service (manila) in a project to plan workloads and prepare for any operations based on resource consumption. Procedure List the resource limits and current resource consumption for the project: 6.11. Troubleshooting operation failures In the event of an error when you create or mount shares, you can run queries from the command line for more information about the error. 6.11.1. Viewing error messages for shares You can use the command line to retrieve user support messages if a share shows an error status. Procedure When you create a share, run the following command to view the status of the share: If the status of your share shows an error, run the share message list command. You can use the --resource-id option to filter to the specific share you want to find out about: Check the User Message column in the share message list command output for a summary of the error. To view more details about the error, run the message show command, followed by the message ID from the message list command output: Replace <id> with the message ID from the message list command output. 6.11.2. Debugging share mounting failures You can use these verification steps to identify the root cause of an error when you mount shares. Procedure Verify the access control list of the share to ensure that the rule that corresponds to your client is correct and has been successfully applied: Replace <share_01> with either the share name or share ID. In a successful rule, the state attribute equals active . If the share type parameter is configured to driver_handles_share_servers=false , copy the hostname or IP address from the export location and ping it to confirm connectivity to the NAS server: Example: If you are using the NFS protocol, you can verify that the NFS server is ready to respond to NFS RPC calls on the correct port: Note The IP address is written in universal address format (uaddr), which adds two extra octets (8.1) to represent the NFS service port, 2049. If these verification steps fail, there might be a network connectivity issue or an issue with the back-end storage for the Shared File Systems service (manila). Collect the log files and contact Red Hat Support. | [
"openstack flavor list --os-cloud <cloud_name>",
"`export OS_CLOUD=<cloud_name>`",
"openstack share type list",
"openstack share create [--share-type <share_type>] [--share-network <share_network>] [--name <share_name>] <share_protocol> <GB>",
"openstack share network create --name <network_name> --neutron-net-id <25d1e65c-d961-4f22-9476-1190f55f118f> --neutron-subnet-id <8ba20dce-0ca5-4efd-bf1c-608d6bceffe1>",
"openstack share security service create <active_directory> --dns-ip <192.02.12.10> --domain <domain_name.com> --user <administrator> --password <password> --name <AD_service>",
"openstack share network set --new-security-service <AD_service> <network_name>",
"openstack share create --name <nfs_share> --share-type <netapp> --share-network <nfs_network> nfs 10",
"openstack share create --name <cifs_share> --share-type dhss_true --share-network <cifs_network> cifs 20",
"openstack share create --name share-01 nfs 10",
"openstack share create --name share-02 cephfs 15",
"openstack share create --name share-03 cifs 20",
"openstack share list",
"openstack share export location list <share>",
"openstack share export location show <share_id>",
"openstack share show | grep snapshot_support",
"openstack share snapshot create [--name <snapshot_name>] <share>",
"openstack share snapshot list --share <share>",
"openstack share snapshot list",
"openstack share snapshot show <snapshot_id>",
"openstack share create <share_protocol> <size> --snapshot-id <snapshot_id> --name <name>",
"openstack share list",
"openstack share show <name>",
"openstack share snapshot list",
"openstack share snapshot list",
"share snapshot delete <snapshot>",
"share snapshot list",
"openstack security group create no-ingress -f yaml created_at: '2018-09-19T08:19:58Z' description: no-ingress id: 66f67c24-cd8b-45e2-b60f-9eaedc79e3c5 name: no-ingress project_id: 1e021e8b322a40968484e1af538b8b63 revision_number: 2 rules: 'created_at=''2018-09-19T08:19:58Z'', direction=''egress'', ethertype=''IPv4'', id=''6c7f643f-3715-4df5-9fef-0850fb6eaaf2'', updated_at=''2018-09-19T08:19:58Z'' created_at=''2018-09-19T08:19:58Z'', direction=''egress'', ethertype=''IPv6'', id=''a8ca1ac2-fbe5-40e9-ab67-3e55b7a8632a'', updated_at=''2018-09-19T08:19:58Z''",
"openstack port create nfs-port0 --network StorageNFS --security-group no-ingress -f yaml admin_state_up: UP allowed_address_pairs: '' binding_host_id: null binding_profile: null binding_vif_details: null binding_vif_type: null binding_vnic_type: normal created_at: '2018-09-19T08:03:02Z' data_plane_status: null description: '' device_id: '' device_owner: '' dns_assignment: null dns_name: null extra_dhcp_opts: '' fixed_ips: ip_address='198.51.100.160', subnet_id='7bc188ae-aab3-425b-a894-863e4b664192' id: 7a91cbbc-8821-4d20-a24c-99c07178e5f7 ip_address: null mac_address: fa:16:3e:be:41:6f name: nfs-port0 network_id: cb2cbc5f-ea92-4c2d-beb8-d9b10e10efae option_name: null option_value: null port_security_enabled: true project_id: 1e021e8b322a40968484e1af538b8b63 qos_policy_id: null revision_number: 6 security_group_ids: 66f67c24-cd8b-45e2-b60f-9eaedc79e3c5 status: DOWN subnet_id: null tags: '' trunk_details: null updated_at: '2018-09-19T08:03:03Z'",
"openstack server add port instance0 nfs-port0 openstack server list -f yaml - Flavor: m1.micro ID: 0b878c11-e791-434b-ab63-274ecfc957e8 Image: manila-test Name: demo-instance0 Networks: demo-network=198.51.100.4, 10.0.0.53; StorageNFS=198.51.100.160 Status: ACTIVE",
"sudo ip address add fd00:fd00:fd00:7000::c/64 dev eth1",
"sudo ip link set dev eth1 up",
"ping -6 fd00:fd00:fd00:7000::21",
"sudo dnf install -y telnet telnet fd00:fd00:fd00:7000::21 2049",
"openstack share access create <share> <access_type> --access-level <access_level> <client_identifier>",
"openstack server list -f yaml - Flavor: m1.micro ID: 0b878c11-e791-434b-ab63-274ecfc957e8 Image: manila-test Name: demo-instance0 Networks: demo-network=198.51.100.4, 10.0.0.53; StorageNFS=198.51.100.160 Status: ACTIVE openstack share access create <share> ip 198.51.100.160",
"+-----------------+---------------------------------------+ | Property | Value | +-----------------+---------------------------------------+ | access_key | None | share_id | db3bedd8-bc82-4100-a65d-53ec51b5cba3 | created_at | 2018-09-17T21:57:42.000000 | updated_at | None | access_type | ip | access_to | 198.51.100.160 | access_level | rw | state | queued_to_apply | id | 875c6251-c17e-4c45-8516-fe0928004fff +-----------------+---------------------------------------+",
"openstack share access list <share> +--------------+-------------+--------------+--------------+--------+ | id | access_type | access_to | access_level | state | +--------------+-------------+--------------+--------------+--------+ | 875c6251-... | ip | 198.51.100.160 | rw | active | +--------------+------------+--------------+--------------+---------+",
"openstack share access create <share> cephx <user>",
"openstack share access list <share>",
"openstack share access create <share> user <user>",
"openstack share access list <share_01>",
"openstack share access delete <share_01> <875c6251-c17e-4c45-8516-fe0928004fff>",
"openstack share access list <share_01>",
"openstack share export location list <share_01>",
"openstack server ssh demo-instance0 --login user",
"mount -t nfs -v <198.51.100.13:/volumes/_nogroup/e840b4ae-6a04-49ee-9d6e-67d4999fbc01> /mnt",
"mount -t ceph <192.0.2.125:6789,192.0.2.126:6789,192.0.2.127:6789:/volumes/_nogroup/4c55ad20-9c55-4a5e-9233-8ac64566b98c> -o name=<user>,secret='<AQA8+ANW/<4ZWNRAAOtWJMFPEihBA1unFImJczA==>'",
"mount -t cifs -o user=<user>,pass=<password> <\\\\192.0.2.128/share_11265e8a_200c_4e0a_a40f_b7a1117001ed>",
"df -k",
"openstack share delete <share>",
"openstack share limits show --absolute",
"openstack share list",
"openstack share message list [--resource-id]",
"openstack share message show <id>",
"openstack share access list <share_01>",
"ping -c 1 198.51.100.13 PING 198.51.100.13 (198.51.100.13) 56(84) bytes of data. 64 bytes from 198.51.100.13: icmp_seq=1 ttl=64 time=0.048 ms--- 198.51.100.13 ping statistics --- 1 packets transmitted, 1 received, 0% packet loss, time 0ms rtt min/avg/max/mdev = 7.851/7.851/7.851/0.000 ms",
"rpcinfo -T tcp -a 198.51.100.13.8.1 100003 4 program 100003 version 4 ready and waiting"
]
| https://docs.redhat.com/en/documentation/red_hat_openstack_services_on_openshift/18.0/html/performing_storage_operations/assembly_manila-performing-operations-with-the-shared-file-systems-service_glance-creating-os-images |
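The mount commands in this chapter do not survive a reboot of the client instance. As a sketch that is not part of the documented procedure, the NFS export location from the earlier example can be recorded in /etc/fstab on the instance; the mount point /mnt and the mount options shown are examples only.

# Persist the NFS mount across reboots of the client instance
echo '198.51.100.13:/volumes/_nogroup/e840b4ae-6a04-49ee-9d6e-67d4999fbc01  /mnt  nfs  defaults,_netdev  0 0' | sudo tee -a /etc/fstab
sudo mount -a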
Chapter 149. HL7 DataFormat | Chapter 149. HL7 DataFormat Available as of Camel version 2.0 The HL7 component is used for working with the HL7 MLLP protocol and HL7 v2 messages using the HAPI library . This component supports the following: HL7 MLLP codec for Mina HL7 MLLP codec for Netty4 from Camel 2.15 onwards Type Converter from/to HAPI and String HL7 DataFormat using the HAPI library Even more ease-of-use as it's integrated well with the camel-mina2 component. Maven users will need to add the following dependency to their pom.xml for this component: <dependency> <groupId>org.apache.camel</groupId> <artifactId>camel-hl7</artifactId> <version>x.x.x</version> <!-- use the same version as your Camel core version --> </dependency> 149.1. HL7 MLLP protocol HL7 is often used with the HL7 MLLP protocol, which is a text based TCP socket based protocol. This component ships with a Mina and Netty4 Codec that conforms to the MLLP protocol so you can easily expose an HL7 listener accepting HL7 requests over the TCP transport layer. To expose a HL7 listener service, the camel-mina2 or camel-netty4 component is used with the HL7MLLPCodec (mina2) or HL7MLLPNettyDecoder/HL7MLLPNettyEncoder (Netty4). HL7 MLLP codec can be configured as follows: Name Default Value Description startByte 0x0b The start byte spanning the HL7 payload. endByte1 0x1c The first end byte spanning the HL7 payload. endByte2 0x0d The 2nd end byte spanning the HL7 payload. charset JVM Default The encoding (a charset name ) to use for the codec. If not provided, Camel will use the JVM default Charset . produceString true (as of Camel 2.14.1) If true, the codec creates a string using the defined charset. If false, the codec sends a plain byte array into the route, so that the HL7 Data Format can determine the actual charset from the HL7 message content. convertLFtoCR false Will convert \n to \r ( 0x0d , 13 decimal) as HL7 stipulates \r as segment terminators. The HAPI library requires the use of \r . 149.1.1. Exposing an HL7 listener using Mina In the Spring XML file, we configure a mina2 endpoint to listen for HL7 requests using TCP on port 8888 : <endpoint id="hl7MinaListener" uri="mina2:tcp://localhost:8888?sync=true&codec=#hl7codec"/> sync=true indicates that this listener is synchronous and therefore will return a HL7 response to the caller. The HL7 codec is setup with codec=#hl7codec . Note that hl7codec is just a Spring bean ID, so it could be named mygreatcodecforhl7 or whatever. The codec is also set up in the Spring XML file: <bean id="hl7codec" class="org.apache.camel.component.hl7.HL7MLLPCodec"> <property name="charset" value="iso-8859-1"/> </bean> The endpoint hl7MinaLlistener can then be used in a route as a consumer, as this Java DSL example illustrates: from("hl7MinaListener") .bean("patientLookupService"); This is a very simple route that will listen for HL7 and route it to a service named patientLookupService . 
This is also Spring bean ID, configured in the Spring XML as: <bean id="patientLookupService" class="com.mycompany.healthcare.service.PatientLookupService"/> The business logic can be implemented in POJO classes that do not depend on Camel, as shown here: import ca.uhn.hl7v2.HL7Exception; import ca.uhn.hl7v2.model.Message; import ca.uhn.hl7v2.model.v24.segment.QRD; public class PatientLookupService { public Message lookupPatient(Message input) throws HL7Exception { QRD qrd = (QRD)input.get("QRD"); String patientId = qrd.getWhoSubjectFilter(0).getIDNumber().getValue(); // find patient data based on the patient id and create a HL7 model object with the response Message response = ... create and set response data return response } 149.1.2. Exposing an HL7 listener using Netty (available from Camel 2.15 onwards) In the Spring XML file, we configure a netty4 endpoint to listen for HL7 requests using TCP on port 8888 : <endpoint id="hl7NettyListener" uri="netty4:tcp://localhost:8888?sync=true&encoder=#hl7encoder&decoder=#hl7decoder"/> sync=true indicates that this listener is synchronous and therefore will return a HL7 response to the caller. The HL7 codec is setup with encoder=#hl7encoder*and*decoder=#hl7decoder . Note that hl7encoder and hl7decoder are just bean IDs, so they could be named differently. The beans can be set in the Spring XML file: <bean id="hl7decoder" class="org.apache.camel.component.hl7.HL7MLLPNettyDecoderFactory"/> <bean id="hl7encoder" class="org.apache.camel.component.hl7.HL7MLLPNettyEncoderFactory"/> The endpoint hl7NettyListener can then be used in a route as a consumer, as this Java DSL example illustrates: from("hl7NettyListener") .bean("patientLookupService"); 149.2. HL7 Model using java.lang.String or byte[] The HL7 MLLP codec uses plain String as its data format. Camel uses its Type Converter to convert to/from strings to the HAPI HL7 model objects, but you can use the plain String objects if you prefer, for instance if you wish to parse the data yourself. As of Camel 2.14.1 you can also let both the Mina and Netty codecs use a plain byte[] as its data format by setting the produceString property to false. The Type Converter is also capable of converting the byte[] to/from HAPI HL7 model objects. 149.3. HL7v2 Model using HAPI The HL7v2 model uses Java objects from the HAPI library. Using this library, you can encode and decode from the EDI format (ER7) that is mostly used with HL7v2. The sample below is a request to lookup a patient with the patient ID 0101701234 . MSH|^~\\&|MYSENDER|MYRECEIVER|MYAPPLICATION||200612211200||QRY^A19|1234|P|2.4 QRD|200612211200|R|I|GetPatient|||1^RD|0101701234|DEM|| Using the HL7 model you can work with a ca.uhn.hl7v2.model.Message object, e.g. to retrieve a patient ID: Message msg = exchange.getIn().getBody(Message.class); QRD qrd = (QRD)msg.get("QRD"); String patientId = qrd.getWhoSubjectFilter(0).getIDNumber().getValue(); // 0101701234 This is powerful when combined with the HL7 listener, because you don't have to work with byte[] , String or any other simple object formats. You can just use the HAPI HL7v2 model objects. If you know the message type in advance, you can be more type-safe: QRY_A19 msg = exchange.getIn().getBody(QRY_A19.class); String patientId = msg.getQRD().getWhoSubjectFilter(0).getIDNumber().getValue(); 149.4. HL7 DataFormat The HL7 component ships with a HL7 data format that can be used to marshal or unmarshal HL7 model objects. The HL7 dataformat supports 2 options, which are listed below. 
Name Default Java Type Description validate true Boolean Whether to validate the HL7 message. Is by default true. contentTypeHeader false Boolean Whether the data format should set the Content-Type header with the type from the data format if the data format is capable of doing so. For example application/xml for data formats marshalling to XML, or application/json for data formats marshalling to JSon etc. 149.5. Spring Boot Auto-Configuration The component supports 5 options, which are listed below. Name Description Default Type camel.dataformat.hl7.content-type-header Whether the data format should set the Content-Type header with the type from the data format if the data format is capable of doing so. For example application/xml for data formats marshalling to XML, or application/json for data formats marshalling to JSon etc. false Boolean camel.dataformat.hl7.enabled Enable hl7 dataformat true Boolean camel.dataformat.hl7.validate Whether to validate the HL7 message. Is by default true. true Boolean camel.language.terser.enabled Enable terser language true Boolean camel.language.terser.trim Whether to trim the value to remove leading and trailing whitespaces and line breaks true Boolean The data format performs two operations: marshal = from Message to byte stream (can be used when responding using the HL7 MLLP codec) unmarshal = from byte stream to Message (can be used when receiving streamed data from the HL7 MLLP codec). To use the data format, simply instantiate an instance and invoke the marshal or unmarshal operation in the route builder: DataFormat hl7 = new HL7DataFormat(); from("direct:hl7in") .marshal(hl7) .to("jms:queue:hl7out"); In the sample above, the HL7 is marshalled from a HAPI Message object to a byte stream and put on a JMS queue. The next example is the opposite: DataFormat hl7 = new HL7DataFormat(); from("jms:queue:hl7out") .unmarshal(hl7) .to("patientLookupService"); Here we unmarshal the byte stream into a HAPI Message object that is passed to our patient lookup service. 149.5.1. Serializable messages As of HAPI 2.0 (used by Camel 2.11 ), the HL7v2 model classes are fully serializable. So you can put HL7v2 messages directly into a JMS queue (i.e. without calling marshal()) and read them again directly from the queue (i.e. without calling unmarshal()). 149.5.2. Segment separators As of Camel 2.11 , unmarshal does not automatically fix segment separators anymore by converting \n to \r . If you need this conversion, org.apache.camel.component.hl7.HL7#convertLFToCR provides a handy Expression for this purpose. 149.5.3. Charset As of Camel 2.14.1 , both marshal and unmarshal evaluate the charset provided in the field MSH-18 . If this field is empty, by default the charset contained in the corresponding Camel charset property/header is assumed. You can even change this default behavior by overriding the guessCharsetName method when inheriting from the HL7DataFormat class. There is a shorthand syntax in Camel for well-known data formats that are commonly used. Then you don't need to create an instance of the HL7DataFormat object: from("direct:hl7in") .marshal().hl7() .to("jms:queue:hl7out"); from("jms:queue:hl7out") .unmarshal().hl7() .to("patientLookupService"); 149.6.
Message Headers The unmarshal operation adds these fields from the MSH segment as headers on the Camel message: Key MSH field Example CamelHL7SendingApplication MSH-3 MYSERVER CamelHL7SendingFacility MSH-4 MYSERVERAPP CamelHL7ReceivingApplication MSH-5 MYCLIENT CamelHL7ReceivingFacility MSH-6 MYCLIENTAPP CamelHL7Timestamp MSH-7 20071231235900 CamelHL7Security MSH-8 null CamelHL7MessageType MSH-9-1 ADT CamelHL7TriggerEvent MSH-9-2 A01 CamelHL7MessageControl MSH-10 1234 CamelHL7ProcessingId MSH-11 P CamelHL7VersionId MSH-12 2.4 `CamelHL7Context `` ` (Camel 2.14) contains the HapiContext that was used to parse the message CamelHL7Charset MSH-18 (Camel 2.14.1) UNICODE UTF-8 All headers except CamelHL7Context `are `String types. If a header value is missing, its value is null . 149.7. Options The HL7 Data Format supports the following options: Option Default Description validate true Whether the HAPI Parser should validate the message using the default validation rules. It is recommended to use the parser or hapiContext option and initialize it with the desired HAPI ValidationContext parser ca.uhn.hl7v2.parser.GenericParser Custom parser to be used. Must be of type ca.uhn.hl7v2.parser.Parser . Note that GenericParser also allows to parse XML-encoded HL7v2 messages hapiContext ca.uhn.hl7v2.DefaultHapiContext Camel 2.14: Custom HAPI context that can define a custom parser, custom ValidationContext etc. This gives you full control over the HL7 parsing and rendering process. 149.8. Dependencies To use HL7 in your Camel routes you'll need to add a dependency on camel-hl7 listed above, which implements this data format. The HAPI library is split into a base library and several structure libraries, one for each HL7v2 message version: v2.1 structures library v2.2 structures library v2.3 structures library v2.3.1 structures library v2.4 structures library v2.5 structures library v2.5.1 structures library v2.6 structures library By default camel-hl7 only references the HAPI base library . Applications are responsible for including structure libraries themselves. For example, if an application works with HL7v2 message versions 2.4 and 2.5 then the following dependencies must be added: <dependency> <groupId>ca.uhn.hapi</groupId> <artifactId>hapi-structures-v24</artifactId> <version>2.2</version> <!-- use the same version as your hapi-base version --> </dependency> <dependency> <groupId>ca.uhn.hapi</groupId> <artifactId>hapi-structures-v25</artifactId> <version>2.2</version> <!-- use the same version as your hapi-base version --> </dependency> Alternatively, an OSGi bundle containing the base library, all structures libraries and required dependencies (on the bundle classpath) can be downloaded from the central Maven repository . <dependency> <groupId>ca.uhn.hapi</groupId> <artifactId>hapi-osgi-base</artifactId> <version>2.2</version> </dependency> 149.9. Terser language HAPI provides a Terser class that provides access to fields using a commonly used terse location specification syntax. The Terser language allows to use this syntax to extract values from messages and to use them as expressions and predicates for filtering, content-based routing etc. 
Sample: import static org.apache.camel.component.hl7.HL7.terser; // extract patient ID from field QRD-8 in the QRY_A19 message above and put into message header from("direct:test1") .setHeader("PATIENT_ID",terser("QRD-8(0)-1")) .to("mock:test1"); // continue processing if extracted field equals a message header from("direct:test2") .filter(terser("QRD-8(0)-1").isEqualTo(header("PATIENT_ID")) .to("mock:test2"); 149.10. HL7 Validation predicate Often it is preferable to first parse a HL7v2 message and in a separate step validate it against a HAPI ValidationContext . Sample: import static org.apache.camel.component.hl7.HL7.messageConformsTo; import ca.uhn.hl7v2.validation.impl.DefaultValidation; // Use standard or define your own validation rules ValidationContext defaultContext = new DefaultValidation(); // Throws PredicateValidationException if message does not validate from("direct:test1") .validate(messageConformsTo(defaultContext)) .to("mock:test1"); 149.11. HL7 Validation predicate using the HapiContext (Camel 2.14) The HAPI Context is always configured with a ValidationContext (or a ValidationRuleBuilder ), so you can access the validation rules indirectly. Furthermore, when unmarshalling the HL7DataFormat forwards the configured HAPI context in the CamelHL7Context header, and the validation rules of this context can be easily reused: import static org.apache.camel.component.hl7.HL7.messageConformsTo; import static org.apache.camel.component.hl7.HL7.messageConforms HapiContext hapiContext = new DefaultHapiContext(); hapiContext.getParserConfiguration().setValidating(false); // don't validate during parsing // customize HapiContext some more ... e.g. enforce that PID-8 in ADT_A01 messages of version 2.4 is not empty ValidationRuleBuilder builder = new ValidationRuleBuilder() { @Override protected void configure() { forVersion(Version.V24) .message("ADT", "A01") .terser("PID-8", not(empty())); } }; hapiContext.setValidationRuleBuilder(builder); HL7DataFormat hl7 = new HL7DataFormat(); hl7.setHapiContext(hapiContext); from("direct:test1") .unmarshal(hl7) // uses the GenericParser returned from the HapiContext .validate(messageConforms()) // uses the validation rules returned from the HapiContext // equivalent with .validate(messageConformsTo(hapiContext)) // route continues from here 149.12. HL7 Acknowledgement expression A common task in HL7v2 processing is to generate an acknowledgement message as response to an incoming HL7v2 message, e.g. based on a validation result. The ack expression lets us accomplish this very elegantly: import static org.apache.camel.component.hl7.HL7.messageConformsTo; import static org.apache.camel.component.hl7.HL7.ack; import ca.uhn.hl7v2.validation.impl.DefaultValidation; // Use standard or define your own validation rules ValidationContext defaultContext = new DefaultValidation(); from("direct:test1") .onException(Exception.class) .handled(true) .transform(ack()) // auto-generates negative ack because of exception in Exchange .end() .validate(messageConformsTo(defaultContext)) // do something meaningful here // acknowledgement .transform(ack()) | [
"<dependency> <groupId>org.apache.camel</groupId> <artifactId>camel-hl7</artifactId> <version>x.x.x</version> <!-- use the same version as your Camel core version --> </dependency>",
"<endpoint id=\"hl7MinaListener\" uri=\"mina2:tcp://localhost:8888?sync=true&codec=#hl7codec\"/>",
"<bean id=\"hl7codec\" class=\"org.apache.camel.component.hl7.HL7MLLPCodec\"> <property name=\"charset\" value=\"iso-8859-1\"/> </bean>",
"from(\"hl7MinaListener\") .bean(\"patientLookupService\");",
"<bean id=\"patientLookupService\" class=\"com.mycompany.healthcare.service.PatientLookupService\"/>",
"import ca.uhn.hl7v2.HL7Exception; import ca.uhn.hl7v2.model.Message; import ca.uhn.hl7v2.model.v24.segment.QRD; public class PatientLookupService { public Message lookupPatient(Message input) throws HL7Exception { QRD qrd = (QRD)input.get(\"QRD\"); String patientId = qrd.getWhoSubjectFilter(0).getIDNumber().getValue(); // find patient data based on the patient id and create a HL7 model object with the response Message response = ... create and set response data return response }",
"<endpoint id=\"hl7NettyListener\" uri=\"netty4:tcp://localhost:8888?sync=true&encoder=#hl7encoder&decoder=#hl7decoder\"/>",
"<bean id=\"hl7decoder\" class=\"org.apache.camel.component.hl7.HL7MLLPNettyDecoderFactory\"/> <bean id=\"hl7encoder\" class=\"org.apache.camel.component.hl7.HL7MLLPNettyEncoderFactory\"/>",
"from(\"hl7NettyListener\") .bean(\"patientLookupService\");",
"MSH|^~\\\\&|MYSENDER|MYRECEIVER|MYAPPLICATION||200612211200||QRY^A19|1234|P|2.4 QRD|200612211200|R|I|GetPatient|||1^RD|0101701234|DEM||",
"Message msg = exchange.getIn().getBody(Message.class); QRD qrd = (QRD)msg.get(\"QRD\"); String patientId = qrd.getWhoSubjectFilter(0).getIDNumber().getValue(); // 0101701234",
"QRY_A19 msg = exchange.getIn().getBody(QRY_A19.class); String patientId = msg.getQRD().getWhoSubjectFilter(0).getIDNumber().getValue();",
"DataFormat hl7 = new HL7DataFormat(); from(\"direct:hl7in\") .marshal(hl7) .to(\"jms:queue:hl7out\");",
"DataFormat hl7 = new HL7DataFormat(); from(\"jms:queue:hl7out\") .unmarshal(hl7) .to(\"patientLookupService\");",
"from(\"direct:hl7in\") .marshal().hl7() .to(\"jms:queue:hl7out\"); from(\"jms:queue:hl7out\") .unmarshal().hl7() .to(\"patientLookupService\");",
"<dependency> <groupId>ca.uhn.hapi</groupId> <artifactId>hapi-structures-v24</artifactId> <version>2.2</version> <!-- use the same version as your hapi-base version --> </dependency> <dependency> <groupId>ca.uhn.hapi</groupId> <artifactId>hapi-structures-v25</artifactId> <version>2.2</version> <!-- use the same version as your hapi-base version --> </dependency>",
"<dependency> <groupId>ca.uhn.hapi</groupId> <artifactId>hapi-osgi-base</artifactId> <version>2.2</version> </dependency>",
"import static org.apache.camel.component.hl7.HL7.terser; // extract patient ID from field QRD-8 in the QRY_A19 message above and put into message header from(\"direct:test1\") .setHeader(\"PATIENT_ID\",terser(\"QRD-8(0)-1\")) .to(\"mock:test1\"); // continue processing if extracted field equals a message header from(\"direct:test2\") .filter(terser(\"QRD-8(0)-1\").isEqualTo(header(\"PATIENT_ID\")) .to(\"mock:test2\");",
"import static org.apache.camel.component.hl7.HL7.messageConformsTo; import ca.uhn.hl7v2.validation.impl.DefaultValidation; // Use standard or define your own validation rules ValidationContext defaultContext = new DefaultValidation(); // Throws PredicateValidationException if message does not validate from(\"direct:test1\") .validate(messageConformsTo(defaultContext)) .to(\"mock:test1\");",
"import static org.apache.camel.component.hl7.HL7.messageConformsTo; import static org.apache.camel.component.hl7.HL7.messageConforms HapiContext hapiContext = new DefaultHapiContext(); hapiContext.getParserConfiguration().setValidating(false); // don't validate during parsing // customize HapiContext some more ... e.g. enforce that PID-8 in ADT_A01 messages of version 2.4 is not empty ValidationRuleBuilder builder = new ValidationRuleBuilder() { @Override protected void configure() { forVersion(Version.V24) .message(\"ADT\", \"A01\") .terser(\"PID-8\", not(empty())); } }; hapiContext.setValidationRuleBuilder(builder); HL7DataFormat hl7 = new HL7DataFormat(); hl7.setHapiContext(hapiContext); from(\"direct:test1\") .unmarshal(hl7) // uses the GenericParser returned from the HapiContext .validate(messageConforms()) // uses the validation rules returned from the HapiContext // equivalent with .validate(messageConformsTo(hapiContext)) // route continues from here",
"import static org.apache.camel.component.hl7.HL7.messageConformsTo; import static org.apache.camel.component.hl7.HL7.ack; import ca.uhn.hl7v2.validation.impl.DefaultValidation; // Use standard or define your own validation rules ValidationContext defaultContext = new DefaultValidation(); from(\"direct:test1\") .onException(Exception.class) .handled(true) .transform(ack()) // auto-generates negative ack because of exception in Exchange .end() .validate(messageConformsTo(defaultContext)) // do something meaningful here // acknowledgement .transform(ack())"
]
| https://docs.redhat.com/en/documentation/red_hat_fuse/7.13/html/apache_camel_component_reference/hl7-dataformat |
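Tying the pieces of the HL7 chapter together, the following is a hedged end-to-end sketch (not part of the original material): an MLLP server route that parses, validates and acknowledges incoming HL7v2 messages. It assumes the hl7encoder/hl7decoder beans from the Netty listener section and a patientLookupService bean are registered, and it reuses the standard DefaultValidation rules shown above.

import org.apache.camel.builder.RouteBuilder;
import ca.uhn.hl7v2.validation.ValidationContext;
import ca.uhn.hl7v2.validation.impl.DefaultValidation;
import static org.apache.camel.component.hl7.HL7.ack;
import static org.apache.camel.component.hl7.HL7.messageConformsTo;

public class Hl7MllpServerRoute extends RouteBuilder {
    @Override
    public void configure() throws Exception {
        ValidationContext defaultContext = new DefaultValidation();
        from("netty4:tcp://0.0.0.0:8888?sync=true&encoder=#hl7encoder&decoder=#hl7decoder")
            .onException(Exception.class)
                .handled(true)
                .transform(ack())            // negative ACK if parsing, validation or processing fails
            .end()
            .unmarshal().hl7(false)          // parse without validating; validation is done explicitly below
            .validate(messageConformsTo(defaultContext))
            .bean("patientLookupService")
            .transform(ack());               // positive ACK for the accepted message
    }
}

Because sync=true, the final message body, here the ACK generated by the ack() expression, is returned to the MLLP client.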
Chapter 3. FIPS automation in Red Hat build of OpenJDK 17 | Chapter 3. FIPS automation in Red Hat build of OpenJDK 17 This chapter describes how the FIPS automation is implemented in Red Hat build of OpenJDK 17 and how FIPS automation might impact your applications. 3.1. Security providers When FIPS mode is enabled, Red Hat build of OpenJDK 17 replaces the installed security providers with a constrained list. Some security services and algorithms might be dropped, so that only a FIPS-certified module performs cryptographic operations. The following list describes installed security providers, services, algorithms and enabled configurations: SunPKCS11-NSS-FIPS Initialized with the NSS software token, which is the service provider's PKCS#11 back end, in accordance with the configuration found at USDJRE_HOME/conf/security/nss.fips.cfg : name = NSS-FIPS nssLibraryDirectory = /usr/lib64 nssSecmodDirectory = USD{fips.nssdb.path} nssDbMode = readWrite nssModule = fips attributes(*,CKO_SECRET_KEY,CKK_GENERIC_SECRET)={ CKA_SIGN=true } Note Changes to this configuration are discouraged. All cryptographic services are enabled. These include AlgorithmParameters , Cipher , KeyAgreement , KeyFactory , KeyGenerator , KeyPairGenerator , KeyStore , Mac , MessageDigest , SecretKeyFactory , SecureRandom , and Signature . SUN Only X.509 certificate-related ( CertificateFactory , CertPathBuilder , CertPathValidator , CertStore ), AlgorithmParameterGenerator , AlgorithmParameters , and KeyStore ( JKS , PKCS12 ) services are enabled. SunEC Only AlgorithmParameters and KeyFactory services are enabled. SunJSSE Only TLS-related services ( KeyManagerFactory , SSLContext , TrustManagerFactory ) and KeyStore ( PKCS12 ) are enabled. SunJCE Only AlgorithmParameterGenerator , AlgorithmParameters , KeyFactory , and SecretKeyFactory (except BKDF2 algorithms) services are enabled. SunRsaSign Only AlgorithmParameters and KeyFactory services are enabled. XMLDSig All services are enabled. These include TransformService , KeyInfoFactory , and XMLSignatureFactory . 3.2. Crypto-policies In FIPS mode, Red Hat build of OpenJDK 17 takes the list of disabled cryptographic algorithms and other configurations from the global FIPS crypto-policy in RHEL. You can find these values at /etc/crypto-policies/back-ends/java.config . You can use the update-crypto-policies tool from RHEL to consistently manage crypto-policies. Note A crypto-policies approved algorithm might not be usable when Red Hat build of OpenJDK is configured in FIPS mode. This occurs when a FIPS-certified implementation is not available in the NSS software token or when it is not supported in the SunPKCS11 security provider. 3.3. Trust Anchor certificates In FIPS mode, Red Hat build of OpenJDK 17 uses the global Trust Anchor certificates repository by default. This behavior is equivalent to non-FIPS mode. This repository is located at /etc/pki/java/cacerts . Use the update-ca-trust tool from RHEL to consistently manage certificates. Optionally, you can store Trust Anchor certificates in your own PKCS12 and PKCS11 keystores, and use them for TLS communication. For more information, see the TrustManagerFactory::init documentation. When the javax.net.ssl.trustStoreType system property is not set and FIPS mode is enabled, Red Hat build of OpenJDK 17 automatically sets this system property to the value of the keystore.type security property. This behavior is equivalent to non-FIPS mode. 3.4. 
Keystores In FIPS mode, Red Hat build of OpenJDK 17 enables the use of the PKCS12 and PKCS11 keystore types. PKCS12 is used by default. You can change the default keystore type by using the fips.keystore.type security property. An application can also select which keystore type to use when invoking KeyStore.getInstance(<type>) . When opening a PKCS11 keystore, Red Hat build of OpenJDK 17 uses the SQLite NSS DB located at /etc/pki/nssdb . This NSS DB might be unsuitable to store keys. You can specify a different database by setting a value for the fips.nssdb.path property. For more information and security considerations, see FIPS settings in Red Hat build of OpenJDK 17 . When you set the fips.keystore.type security property to PKCS11 and FIPS mode is enabled, Red Hat build of OpenJDK 17 automatically assigns the javax.net.ssl.keyStore system property to a value of NONE . This behavior facilitates the use of PKCS#11 keystores by saving a manual configuration step. For more information, see JDK-8238264 . Revised on 2024-11-25 10:51:45 UTC | null | https://docs.redhat.com/en/documentation/red_hat_build_of_openjdk/17/html/configuring_red_hat_build_of_openjdk_17_on_rhel_with_fips/openjdk-default-fips-configuration |
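To complement the FIPS description above with something runnable, the small diagnostic program below lists the installed security providers and opens a keystore of the configured default type. This is an illustrative sketch, not Red Hat guidance; the null password passed to load() is an assumption that only holds for the default PKCS12 type or for an NSS database that does not require a PIN.

import java.security.KeyStore;
import java.security.Provider;
import java.security.Security;

public class FipsDiagnostics {
    public static void main(String[] args) throws Exception {
        // In FIPS mode the constrained provider list (SunPKCS11-NSS-FIPS, SUN, SunEC, SunJSSE, ...) is expected here
        for (Provider p : Security.getProviders()) {
            System.out.println(p.getName());
        }
        // keystore.type (and therefore getDefaultType()) is PKCS12 unless fips.keystore.type was changed to PKCS11
        KeyStore ks = KeyStore.getInstance(KeyStore.getDefaultType());
        ks.load(null, null);
        System.out.println("Default keystore type: " + ks.getType());
    }
}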
probe::signal.check_ignored | probe::signal.check_ignored Name probe::signal.check_ignored - Checking to see if the signal is ignored Synopsis Values sig_name A string representation of the signal sig The number of the signal pid_name Name of the process receiving the signal sig_pid The PID of the process receiving the signal | [
"signal.check_ignored"
]
| https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/systemtap_tapset_reference/api-signal-check-ignored |
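A short SystemTap script shows how this probe point can be used in practice; the sketch below only prints the context variables documented above and is not part of the reference entry. Run it as root with stap, which requires the matching kernel debuginfo to be installed.

probe signal.check_ignored {
  # sig_name, sig, pid_name and sig_pid are the values listed in the reference above
  printf("checking whether %s (%d) sent to %s (pid %d) is ignored\n",
         sig_name, sig, pid_name, sig_pid)
}

Saved as check_ignored.stp, the script can be started with stap check_ignored.stp and prints one line per ignore check until it is interrupted.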
Chapter 18. Using the Stream Control Transmission Protocol (SCTP) | Chapter 18. Using the Stream Control Transmission Protocol (SCTP) As a cluster administrator, you can use the Stream Control Transmission Protocol (SCTP) on a bare-metal cluster. 18.1. Support for SCTP on OpenShift Container Platform As a cluster administrator, you can enable SCTP on the hosts in the cluster. On Red Hat Enterprise Linux CoreOS (RHCOS), the SCTP module is disabled by default. SCTP is a reliable message based protocol that runs on top of an IP network. When enabled, you can use SCTP as a protocol with pods, services, and network policy. A Service object must be defined with the type parameter set to either the ClusterIP or NodePort value. 18.1.1. Example configurations using SCTP protocol You can configure a pod or service to use SCTP by setting the protocol parameter to the SCTP value in the pod or service object. In the following example, a pod is configured to use SCTP: apiVersion: v1 kind: Pod metadata: namespace: project1 name: example-pod spec: containers: - name: example-pod ... ports: - containerPort: 30100 name: sctpserver protocol: SCTP In the following example, a service is configured to use SCTP: apiVersion: v1 kind: Service metadata: namespace: project1 name: sctpserver spec: ... ports: - name: sctpserver protocol: SCTP port: 30100 targetPort: 30100 type: ClusterIP In the following example, a NetworkPolicy object is configured to apply to SCTP network traffic on port 80 from any pods with a specific label: kind: NetworkPolicy apiVersion: networking.k8s.io/v1 metadata: name: allow-sctp-on-http spec: podSelector: matchLabels: role: web ingress: - ports: - protocol: SCTP port: 80 18.2. Enabling Stream Control Transmission Protocol (SCTP) As a cluster administrator, you can load and enable the blacklisted SCTP kernel module on worker nodes in your cluster. Prerequisites Install the OpenShift CLI ( oc ). Access to the cluster as a user with the cluster-admin role. Procedure Create a file named load-sctp-module.yaml that contains the following YAML definition: apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: name: load-sctp-module labels: machineconfiguration.openshift.io/role: worker spec: config: ignition: version: 3.2.0 storage: files: - path: /etc/modprobe.d/sctp-blacklist.conf mode: 0644 overwrite: true contents: source: data:, - path: /etc/modules-load.d/sctp-load.conf mode: 0644 overwrite: true contents: source: data:,sctp To create the MachineConfig object, enter the following command: USD oc create -f load-sctp-module.yaml Optional: To watch the status of the nodes while the MachineConfig Operator applies the configuration change, enter the following command. When the status of a node transitions to Ready , the configuration update is applied. USD oc get nodes 18.3. Verifying Stream Control Transmission Protocol (SCTP) is enabled You can verify that SCTP is working on a cluster by creating a pod with an application that listens for SCTP traffic, associating it with a service, and then connecting to the exposed service. Prerequisites Access to the internet from the cluster to install the nc package. Install the OpenShift CLI ( oc ). Access to the cluster as a user with the cluster-admin role. 
Procedure Create a pod starts an SCTP listener: Create a file named sctp-server.yaml that defines a pod with the following YAML: apiVersion: v1 kind: Pod metadata: name: sctpserver labels: app: sctpserver spec: containers: - name: sctpserver image: registry.access.redhat.com/ubi9/ubi command: ["/bin/sh", "-c"] args: ["dnf install -y nc && sleep inf"] ports: - containerPort: 30102 name: sctpserver protocol: SCTP Create the pod by entering the following command: USD oc create -f sctp-server.yaml Create a service for the SCTP listener pod. Create a file named sctp-service.yaml that defines a service with the following YAML: apiVersion: v1 kind: Service metadata: name: sctpservice labels: app: sctpserver spec: type: NodePort selector: app: sctpserver ports: - name: sctpserver protocol: SCTP port: 30102 targetPort: 30102 To create the service, enter the following command: USD oc create -f sctp-service.yaml Create a pod for the SCTP client. Create a file named sctp-client.yaml with the following YAML: apiVersion: v1 kind: Pod metadata: name: sctpclient labels: app: sctpclient spec: containers: - name: sctpclient image: registry.access.redhat.com/ubi9/ubi command: ["/bin/sh", "-c"] args: ["dnf install -y nc && sleep inf"] To create the Pod object, enter the following command: USD oc apply -f sctp-client.yaml Run an SCTP listener on the server. To connect to the server pod, enter the following command: USD oc rsh sctpserver To start the SCTP listener, enter the following command: USD nc -l 30102 --sctp Connect to the SCTP listener on the server. Open a new terminal window or tab in your terminal program. Obtain the IP address of the sctpservice service. Enter the following command: USD oc get services sctpservice -o go-template='{{.spec.clusterIP}}{{"\n"}}' To connect to the client pod, enter the following command: USD oc rsh sctpclient To start the SCTP client, enter the following command. Replace <cluster_IP> with the cluster IP address of the sctpservice service. # nc <cluster_IP> 30102 --sctp | [
"apiVersion: v1 kind: Pod metadata: namespace: project1 name: example-pod spec: containers: - name: example-pod ports: - containerPort: 30100 name: sctpserver protocol: SCTP",
"apiVersion: v1 kind: Service metadata: namespace: project1 name: sctpserver spec: ports: - name: sctpserver protocol: SCTP port: 30100 targetPort: 30100 type: ClusterIP",
"kind: NetworkPolicy apiVersion: networking.k8s.io/v1 metadata: name: allow-sctp-on-http spec: podSelector: matchLabels: role: web ingress: - ports: - protocol: SCTP port: 80",
"apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: name: load-sctp-module labels: machineconfiguration.openshift.io/role: worker spec: config: ignition: version: 3.2.0 storage: files: - path: /etc/modprobe.d/sctp-blacklist.conf mode: 0644 overwrite: true contents: source: data:, - path: /etc/modules-load.d/sctp-load.conf mode: 0644 overwrite: true contents: source: data:,sctp",
"oc create -f load-sctp-module.yaml",
"oc get nodes",
"apiVersion: v1 kind: Pod metadata: name: sctpserver labels: app: sctpserver spec: containers: - name: sctpserver image: registry.access.redhat.com/ubi9/ubi command: [\"/bin/sh\", \"-c\"] args: [\"dnf install -y nc && sleep inf\"] ports: - containerPort: 30102 name: sctpserver protocol: SCTP",
"oc create -f sctp-server.yaml",
"apiVersion: v1 kind: Service metadata: name: sctpservice labels: app: sctpserver spec: type: NodePort selector: app: sctpserver ports: - name: sctpserver protocol: SCTP port: 30102 targetPort: 30102",
"oc create -f sctp-service.yaml",
"apiVersion: v1 kind: Pod metadata: name: sctpclient labels: app: sctpclient spec: containers: - name: sctpclient image: registry.access.redhat.com/ubi9/ubi command: [\"/bin/sh\", \"-c\"] args: [\"dnf install -y nc && sleep inf\"]",
"oc apply -f sctp-client.yaml",
"oc rsh sctpserver",
"nc -l 30102 --sctp",
"oc get services sctpservice -o go-template='{{.spec.clusterIP}}{{\"\\n\"}}'",
"oc rsh sctpclient",
"nc <cluster_IP> 30102 --sctp"
]
| https://docs.redhat.com/en/documentation/openshift_container_platform/4.15/html/networking/using-sctp |
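In addition to the verification procedure above, it can help to confirm that the sctp module is actually loaded on a worker node before creating the test pods. A hedged one-liner using the oc debug pattern (the node name is a placeholder) might look like this:

oc debug node/<node_name> -- chroot /host bash -c 'lsmod | grep sctp'

If the MachineConfig has been applied, the output lists the sctp kernel module; an empty result suggests the node has not yet rebooted into the updated configuration.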
4.2. Setting up Google Compute Engine | 4.2. Setting up Google Compute Engine To set up Google Compute engine, perform the following steps: 4.2.1. SSH Keys SSH keys must be generated and registered with the Google Compute Engine project to connect via standard SSH. You can SSH directly to the instance public IP addresses after it is generated. Generate an SSH key pair for use with Google Compute Engine using the following command: In the Google Developers Console, click Computer > Compute Engine > Metadata > SSH Keys > Edit . Enter the output generated from ~/.ssh/google_compute_engine.pub file, and click Save . To enable SSH agent to use this identity file for each new local console session, run the following command on the console: Adding the below line to your ~/.ssh/config file helps you automate this command. You can now connect via standard SSH to the new VM instances created in your Google Compute Engine project. The gcloud compute config-ssh command from the Google Cloud SDK populates your ~/.ssh/config file with aliases that allows simple SSH connections by instance name. 4.2.2. Setting up Quota The minimum persistent disk quotas listed below are required for this deployment. It may be necessary to request a quota increase from Google. Local region (see US-CENTRAL1 illustration in Section 4.1.3, "Primary Storage Pool Configuration" ) Total persistent disk reserved (GB) >= 206,000 CPUs >= 100 Remote region (see EUROPE-WEST1 illustration in Section 4.1.4, "Secondary Storage Pool Configuration" ) Total persistent disk reserved (GB) >= 103,000 CPUs >=40 | [
"ssh-keygen -t rsa -f ~/.ssh/google_compute_engine",
"ssh-add ~/.ssh/google_compute_engine",
"IdentityFile ~/.ssh/google_compute_engine",
"ssh -i ~/.ssh/google_compute_engine <username>@<instance_external_ip>"
]
| https://docs.redhat.com/en/documentation/red_hat_gluster_storage/3.5/html/deployment_guide_for_public_cloud/sect-documentation-deployment_guide_for_public_cloud-google_cloud_platform-setting_up_google_compute_engine |
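As a concrete illustration of the ~/.ssh/config hint in the SSH Keys section above, an entry such as the following avoids passing -i on every connection. The host alias, external IP and user name are placeholders, not values from this guide:

Host my-gce-instance
    HostName <instance_external_ip>
    User <username>
    IdentityFile ~/.ssh/google_compute_engine

With this entry in place, ssh my-gce-instance picks up the key automatically; gcloud compute config-ssh generates similar per-instance entries for you.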
Chapter 90. Netty | Chapter 90. Netty Both producer and consumer are supported The Netty component in Camel is a socket communication component, based on the Netty project version 4. Netty is a NIO client server framework which enables quick and easy development of network applications such as protocol servers and clients. Netty greatly simplifies and streamlines network programming such as TCP and UDP socket servers. This Camel component supports both producer and consumer endpoints. The Netty component has several options and allows fine-grained control of a number of TCP/UDP communication parameters (buffer sizes, keepAlives, tcpNoDelay, etc) and facilitates both In-Only and In-Out communication on a Camel route. 90.1. Dependencies When using netty with Red Hat build of Camel Spring Boot make sure to use the following Maven dependency to have support for auto configuration: <dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-netty-starter</artifactId> </dependency> 90.2. URI format The URI scheme for a netty component is as follows: netty:tcp://host:port[?options] netty:udp://host:port[?options] This component supports producer and consumer endpoints for both TCP and UDP. 90.3. Configuring Options Camel components are configured on two levels: Component level Endpoint level 90.3.1. Component Level Options The component level is the highest level. The configurations you define at this level are inherited by all the endpoints. For example, a component can have security settings, credentials for authentication, urls for network connection, and so on. Since components typically have pre-configured defaults for the most common cases, you may need to only configure a few component options, or maybe none at all. You can configure components with Component DSL in a configuration file (application.properties|yaml), or directly with Java code. 90.3.2. Endpoint Level Options At the Endpoint level you have many options, which you can use to configure what you want the endpoint to do. The options are categorized according to whether the endpoint is used as a consumer (from) or as a producer (to) or used for both. You can configure endpoints directly in the endpoint URI as path and query parameters. You can also use Endpoint DSL and DataFormat DSL as type safe ways of configuring endpoints and data formats in Java. When configuring options, use Property Placeholders for urls, port numbers, sensitive information, and other settings. Placeholders allow you to externalize the configuration from your code, giving you more flexible and reusable code. 90.4. Component Options The Netty component supports 73 options, which are listed below. Name Description Default Type configuration (common) To use the NettyConfiguration as configuration when creating endpoints. NettyConfiguration disconnect (common) Whether or not to disconnect(close) from Netty Channel right after use. Can be used for both consumer and producer. false boolean keepAlive (common) Setting to ensure socket is not closed due to inactivity. true boolean reuseAddress (common) Setting to facilitate socket multiplexing. true boolean reuseChannel (common) This option allows producers and consumers (in client mode) to reuse the same Netty Channel for the lifecycle of processing the Exchange. This is useful if you need to call a server multiple times in a Camel route and want to use the same network connection. When using this, the channel is not returned to the connection pool until the Exchange is done; or disconnected if the disconnect option is set to true.
The reused Channel is stored on the Exchange as an exchange property with the key NettyConstants#NETTY_CHANNEL which allows you to obtain the channel during routing and use it as well. false boolean sync (common) Setting to set endpoint as one-way or request-response. true boolean tcpNoDelay (common) Setting to improve TCP protocol performance. true boolean bridgeErrorHandler (consumer) Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false boolean broadcast (consumer) Setting to choose Multicast over UDP. false boolean clientMode (consumer) If the clientMode is true, netty consumer will connect the address as a TCP client. false boolean reconnect (consumer) Used only in clientMode in consumer, the consumer will attempt to reconnect on disconnection if this is enabled. true boolean reconnectInterval (consumer) Used if reconnect and clientMode is enabled. The interval in milli seconds to attempt reconnection. 10000 int backlog (consumer (advanced)) Allows to configure a backlog for netty consumer (server). Note the backlog is just a best effort depending on the OS. Setting this option to a value such as 200, 500 or 1000, tells the TCP stack how long the accept queue can be If this option is not configured, then the backlog depends on OS setting. int bossCount (consumer (advanced)) When netty works on nio mode, it uses default bossCount parameter from Netty, which is 1. User can use this option to override the default bossCount from Netty. 1 int bossGroup (consumer (advanced)) Set the BossGroup which could be used for handling the new connection of the server side across the NettyEndpoint. EventLoopGroup disconnectOnNoReply (consumer (advanced)) If sync is enabled then this option dictates NettyConsumer if it should disconnect where there is no reply to send back. true boolean executorService (consumer (advanced)) To use the given EventExecutorGroup. EventExecutorGroup maximumPoolSize (consumer (advanced)) Sets a maximum thread pool size for the netty consumer ordered thread pool. The default size is 2 x cpu_core plus 1. Setting this value to eg 10 will then use 10 threads unless 2 x cpu_core plus 1 is a higher value, which then will override and be used. For example if there are 8 cores, then the consumer thread pool will be 17. This thread pool is used to route messages received from Netty by Camel. We use a separate thread pool to ensure ordering of messages and also in case some messages will block, then nettys worker threads (event loop) wont be affected. int nettyServerBootstrapFactory (consumer (advanced)) To use a custom NettyServerBootstrapFactory. NettyServerBootstrapFactory networkInterface (consumer (advanced)) When using UDP then this option can be used to specify a network interface by its name, such as eth0 to join a multicast group. String noReplyLogLevel (consumer (advanced)) If sync is enabled this option dictates NettyConsumer which logging level to use when logging a there is no reply to send back. 
Enum values: TRACE DEBUG INFO WARN ERROR OFF WARN LoggingLevel serverClosedChannelExceptionCaughtLogLevel (consumer (advanced)) If the server (NettyConsumer) catches an java.nio.channels.ClosedChannelException then its logged using this logging level. This is used to avoid logging the closed channel exceptions, as clients can disconnect abruptly and then cause a flood of closed exceptions in the Netty server. Enum values: TRACE DEBUG INFO WARN ERROR OFF DEBUG LoggingLevel serverExceptionCaughtLogLevel (consumer (advanced)) If the server (NettyConsumer) catches an exception then its logged using this logging level. Enum values: TRACE DEBUG INFO WARN ERROR OFF WARN LoggingLevel serverInitializerFactory (consumer (advanced)) To use a custom ServerInitializerFactory. ServerInitializerFactory usingExecutorService (consumer (advanced)) Whether to use ordered thread pool, to ensure events are processed orderly on the same channel. true boolean connectTimeout (producer) Time to wait for a socket connection to be available. Value is in milliseconds. 10000 int lazyStartProducer (producer) Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false boolean requestTimeout (producer) Allows to use a timeout for the Netty producer when calling a remote server. By default no timeout is in use. The value is in milli seconds, so eg 30000 is 30 seconds. The requestTimeout is using Netty's ReadTimeoutHandler to trigger the timeout. long clientInitializerFactory (producer (advanced)) To use a custom ClientInitializerFactory. ClientInitializerFactory correlationManager (producer (advanced)) To use a custom correlation manager to manage how request and reply messages are mapped when using request/reply with the netty producer. This should only be used if you have a way to map requests together with replies such as if there is correlation ids in both the request and reply messages. This can be used if you want to multiplex concurrent messages on the same channel (aka connection) in netty. When doing this you must have a way to correlate the request and reply messages so you can store the right reply on the inflight Camel Exchange before its continued routed. We recommend extending the TimeoutCorrelationManagerSupport when you build custom correlation managers. This provides support for timeout and other complexities you otherwise would need to implement as well. See also the producerPoolEnabled option for more details. NettyCamelStateCorrelationManager lazyChannelCreation (producer (advanced)) Channels can be lazily created to avoid exceptions, if the remote server is not up and running when the Camel producer is started. true boolean producerPoolEnabled (producer (advanced)) Whether producer pool is enabled or not. Important: If you turn this off then a single shared connection is used for the producer, also if you are doing request/reply. That means there is a potential issue with interleaved responses if replies comes back out-of-order. 
Therefore you need to have a correlation id in both the request and reply messages so you can properly correlate the replies to the Camel callback that is responsible for continue processing the message in Camel. To do this you need to implement NettyCamelStateCorrelationManager as correlation manager and configure it via the correlationManager option. See also the correlationManager option for more details. true boolean producerPoolMaxIdle (producer (advanced)) Sets the cap on the number of idle instances in the pool. 100 int producerPoolMaxTotal (producer (advanced)) Sets the cap on the number of objects that can be allocated by the pool (checked out to clients, or idle awaiting checkout) at a given time. Use a negative value for no limit. -1 int producerPoolMinEvictableIdle (producer (advanced)) Sets the minimum amount of time (value in millis) an object may sit idle in the pool before it is eligible for eviction by the idle object evictor. 300000 long producerPoolMinIdle (producer (advanced)) Sets the minimum number of instances allowed in the producer pool before the evictor thread (if active) spawns new objects. int udpConnectionlessSending (producer (advanced)) This option supports connection less udp sending which is a real fire and forget. A connected udp send receive the PortUnreachableException if no one is listen on the receiving port. false boolean useByteBuf (producer (advanced)) If the useByteBuf is true, netty producer will turn the message body into ByteBuf before sending it out. false boolean hostnameVerification ( security) To enable/disable hostname verification on SSLEngine. false boolean allowSerializedHeaders (advanced) Only used for TCP when transferExchange is true. When set to true, serializable objects in headers and properties will be added to the exchange. Otherwise Camel will exclude any non-serializable objects and log it at WARN level. false boolean autowiredEnabled (advanced) Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true boolean channelGroup (advanced) To use a explicit ChannelGroup. ChannelGroup nativeTransport (advanced) Whether to use native transport instead of NIO. Native transport takes advantage of the host operating system and is only supported on some platforms. You need to add the netty JAR for the host operating system you are using. See more details at: . false boolean options (advanced) Allows to configure additional netty options using option. as prefix. For example option.child.keepAlive=false to set the netty option child.keepAlive=false. See the Netty documentation for possible options that can be used. Map receiveBufferSize (advanced) The TCP/UDP buffer sizes to be used during inbound communication. Size is bytes. 65536 int receiveBufferSizePredictor (advanced) Configures the buffer size predictor. See details at Jetty documentation and this mail thread. int sendBufferSize (advanced) The TCP/UDP buffer sizes to be used during outbound communication. Size is bytes. 65536 int transferExchange (advanced) Only used for TCP. You can transfer the exchange over the wire instead of just the body. 
The following fields are transferred: In body, Out body, fault body, In headers, Out headers, fault headers, exchange properties, exchange exception. This requires that the objects are serializable. Camel will exclude any non-serializable objects and log it at WARN level. false boolean udpByteArrayCodec (advanced) For UDP only. If enabled the using byte array codec instead of Java serialization protocol. false boolean workerCount (advanced) When netty works on nio mode, it uses default workerCount parameter from Netty (which is cpu_core_threads x 2). User can use this option to override the default workerCount from Netty. int workerGroup (advanced) To use a explicit EventLoopGroup as the boss thread pool. For example to share a thread pool with multiple consumers or producers. By default each consumer or producer has their own worker pool with 2 x cpu count core threads. EventLoopGroup allowDefaultCodec (codec) The netty component installs a default codec if both, encoder/decoder is null and textline is false. Setting allowDefaultCodec to false prevents the netty component from installing a default codec as the first element in the filter chain. true boolean autoAppendDelimiter (codec) Whether or not to auto append missing end delimiter when sending using the textline codec. true boolean decoderMaxLineLength (codec) The max line length to use for the textline codec. 1024 int decoders (codec) A list of decoders to be used. You can use a String which have values separated by comma, and have the values be looked up in the Registry. Just remember to prefix the value with # so Camel knows it should lookup. List delimiter (codec) The delimiter to use for the textline codec. Possible values are LINE and NULL. Enum values: LINE NULL LINE TextLineDelimiter encoders (codec) A list of encoders to be used. You can use a String which have values separated by comma, and have the values be looked up in the Registry. Just remember to prefix the value with # so Camel knows it should lookup. List encoding (codec) The encoding (a charset name) to use for the textline codec. If not provided, Camel will use the JVM default Charset. String textline (codec) Only used for TCP. If no codec is specified, you can use this flag to indicate a text line based codec; if not specified or the value is false, then Object Serialization is assumed over TCP - however only Strings are allowed to be serialized by default. false boolean enabledProtocols (security) Which protocols to enable when using SSL. TLSv1,TLSv1.1,TLSv1.2 String keyStoreFile (security) Client side certificate keystore to be used for encryption. File keyStoreFormat (security) Keystore format to be used for payload encryption. Defaults to JKS if not set. String keyStoreResource (security) Client side certificate keystore to be used for encryption. Is loaded by default from classpath, but you can prefix with classpath:, file:, or http: to load the resource from different systems. String needClientAuth (security) Configures whether the server needs client authentication when using SSL. false boolean passphrase (security) Password setting to use in order to encrypt/decrypt payloads sent using SSH. String securityProvider (security) Security provider to be used for payload encryption. Defaults to SunX509 if not set. String ssl (security) Setting to specify whether SSL encryption is applied to this endpoint. 
false boolean sslClientCertHeaders (security) When enabled and in SSL mode, then the Netty consumer will enrich the Camel Message with headers having information about the client certificate such as subject name, issuer name, serial number, and the valid date range. false boolean sslContextParameters (security) To configure security using SSLContextParameters. SSLContextParameters sslHandler (security) Reference to a class that could be used to return an SSL Handler. SslHandler trustStoreFile (security) Server side certificate keystore to be used for encryption. File trustStoreResource (security) Server side certificate keystore to be used for encryption. Is loaded by default from classpath, but you can prefix with classpath:, file:, or http: to load the resource from different systems. String useGlobalSslContextParameters (security) Enable usage of global SSL context parameters. false boolean 90.5. Endpoint Options The Netty endpoint is configured using URI syntax: with the following path and query parameters: 90.5.1. Path Parameters (3 parameters) Name Description Default Type protocol (common) Required The protocol to use which can be tcp or udp. Enum values: tcp udp String host (common) Required The hostname. For the consumer the hostname is localhost or 0.0.0.0. For the producer the hostname is the remote host to connect to. String port (common) Required The host port number. int 90.5.2. Query Parameters (71 parameters) Name Description Default Type disconnect (common) Whether or not to disconnect(close) from Netty Channel right after use. Can be used for both consumer and producer. false boolean keepAlive (common) Setting to ensure socket is not closed due to inactivity. true boolean reuseAddress (common) Setting to facilitate socket multiplexing. true boolean reuseChannel (common) This option allows producers and consumers (in client mode) to reuse the same Netty Channel for the lifecycle of processing the Exchange. This is useful if you need to call a server multiple times in a Camel route and want to use the same network connection. When using this, the channel is not returned to the connection pool until the Exchange is done; or disconnected if the disconnect option is set to true. The reused Channel is stored on the Exchange as an exchange property with the key NettyConstants#NETTY_CHANNEL which allows you to obtain the channel during routing and use it as well. false boolean sync (common) Setting to set endpoint as one-way or request-response. true boolean tcpNoDelay (common) Setting to improve TCP protocol performance. true boolean bridgeErrorHandler (consumer) Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false boolean broadcast (consumer) Setting to choose Multicast over UDP. false boolean clientMode (consumer) If the clientMode is true, netty consumer will connect the address as a TCP client. false boolean reconnect (consumer) Used only in clientMode in consumer, the consumer will attempt to reconnect on disconnection if this is enabled. true boolean reconnectInterval (consumer) Used if reconnect and clientMode is enabled. The interval in milli seconds to attempt reconnection. 
10000 int backlog (consumer (advanced)) Allows to configure a backlog for netty consumer (server). Note the backlog is just a best effort depending on the OS. Setting this option to a value such as 200, 500 or 1000, tells the TCP stack how long the accept queue can be If this option is not configured, then the backlog depends on OS setting. int bossCount (consumer (advanced)) When netty works on nio mode, it uses default bossCount parameter from Netty, which is 1. User can use this option to override the default bossCount from Netty. 1 int bossGroup (consumer (advanced)) Set the BossGroup which could be used for handling the new connection of the server side across the NettyEndpoint. EventLoopGroup disconnectOnNoReply (consumer (advanced)) If sync is enabled then this option dictates NettyConsumer if it should disconnect where there is no reply to send back. true boolean exceptionHandler (consumer (advanced)) To let the consumer use a custom ExceptionHandler. Notice if the option bridgeErrorHandler is enabled then this option is not in use. By default the consumer will deal with exceptions, that will be logged at WARN or ERROR level and ignored. ExceptionHandler exchangePattern (consumer (advanced)) Sets the exchange pattern when the consumer creates an exchange. Enum values: InOnly InOut InOptionalOut ExchangePattern nettyServerBootstrapFactory (consumer (advanced)) To use a custom NettyServerBootstrapFactory. NettyServerBootstrapFactory networkInterface (consumer (advanced)) When using UDP then this option can be used to specify a network interface by its name, such as eth0 to join a multicast group. String noReplyLogLevel (consumer (advanced)) If sync is enabled this option dictates NettyConsumer which logging level to use when logging a there is no reply to send back. Enum values: TRACE DEBUG INFO WARN ERROR OFF WARN LoggingLevel serverClosedChannelExceptionCaughtLogLevel (consumer (advanced)) If the server (NettyConsumer) catches an java.nio.channels.ClosedChannelException then its logged using this logging level. This is used to avoid logging the closed channel exceptions, as clients can disconnect abruptly and then cause a flood of closed exceptions in the Netty server. Enum values: TRACE DEBUG INFO WARN ERROR OFF DEBUG LoggingLevel serverExceptionCaughtLogLevel (consumer (advanced)) If the server (NettyConsumer) catches an exception then its logged using this logging level. Enum values: TRACE DEBUG INFO WARN ERROR OFF WARN LoggingLevel serverInitializerFactory (consumer (advanced)) To use a custom ServerInitializerFactory. ServerInitializerFactory usingExecutorService (consumer (advanced)) Whether to use ordered thread pool, to ensure events are processed orderly on the same channel. true boolean connectTimeout (producer) Time to wait for a socket connection to be available. Value is in milliseconds. 10000 int lazyStartProducer (producer) Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. 
false boolean requestTimeout (producer) Allows to use a timeout for the Netty producer when calling a remote server. By default no timeout is in use. The value is in milli seconds, so eg 30000 is 30 seconds. The requestTimeout is using Netty's ReadTimeoutHandler to trigger the timeout. long clientInitializerFactory (producer (advanced)) To use a custom ClientInitializerFactory. ClientInitializerFactory correlationManager (producer (advanced)) To use a custom correlation manager to manage how request and reply messages are mapped when using request/reply with the netty producer. This should only be used if you have a way to map requests together with replies such as if there is correlation ids in both the request and reply messages. This can be used if you want to multiplex concurrent messages on the same channel (aka connection) in netty. When doing this you must have a way to correlate the request and reply messages so you can store the right reply on the inflight Camel Exchange before its continued routed. We recommend extending the TimeoutCorrelationManagerSupport when you build custom correlation managers. This provides support for timeout and other complexities you otherwise would need to implement as well. See also the producerPoolEnabled option for more details. NettyCamelStateCorrelationManager lazyChannelCreation (producer (advanced)) Channels can be lazily created to avoid exceptions, if the remote server is not up and running when the Camel producer is started. true boolean producerPoolEnabled (producer (advanced)) Whether producer pool is enabled or not. Important: If you turn this off then a single shared connection is used for the producer, also if you are doing request/reply. That means there is a potential issue with interleaved responses if replies comes back out-of-order. Therefore you need to have a correlation id in both the request and reply messages so you can properly correlate the replies to the Camel callback that is responsible for continue processing the message in Camel. To do this you need to implement NettyCamelStateCorrelationManager as correlation manager and configure it via the correlationManager option. See also the correlationManager option for more details. true boolean producerPoolMaxIdle (producer (advanced)) Sets the cap on the number of idle instances in the pool. 100 int producerPoolMaxTotal (producer (advanced)) Sets the cap on the number of objects that can be allocated by the pool (checked out to clients, or idle awaiting checkout) at a given time. Use a negative value for no limit. -1 int producerPoolMinEvictableIdle (producer (advanced)) Sets the minimum amount of time (value in millis) an object may sit idle in the pool before it is eligible for eviction by the idle object evictor. 300000 long producerPoolMinIdle (producer (advanced)) Sets the minimum number of instances allowed in the producer pool before the evictor thread (if active) spawns new objects. int udpConnectionlessSending (producer (advanced)) This option supports connection less udp sending which is a real fire and forget. A connected udp send receive the PortUnreachableException if no one is listen on the receiving port. false boolean useByteBuf (producer (advanced)) If the useByteBuf is true, netty producer will turn the message body into ByteBuf before sending it out. false boolean hostnameVerification ( security) To enable/disable hostname verification on SSLEngine. false boolean allowSerializedHeaders (advanced) Only used for TCP when transferExchange is true. 
When set to true, serializable objects in headers and properties will be added to the exchange. Otherwise Camel will exclude any non-serializable objects and log it at WARN level. false boolean channelGroup (advanced) To use a explicit ChannelGroup. ChannelGroup nativeTransport (advanced) Whether to use native transport instead of NIO. Native transport takes advantage of the host operating system and is only supported on some platforms. You need to add the netty JAR for the host operating system you are using. See more details at: . false boolean options (advanced) Allows to configure additional netty options using option. as prefix. For example option.child.keepAlive=false to set the netty option child.keepAlive=false. See the Netty documentation for possible options that can be used. Map receiveBufferSize (advanced) The TCP/UDP buffer sizes to be used during inbound communication. Size is bytes. 65536 int receiveBufferSizePredictor (advanced) Configures the buffer size predictor. See details at Jetty documentation and this mail thread. int sendBufferSize (advanced) The TCP/UDP buffer sizes to be used during outbound communication. Size is bytes. 65536 int synchronous (advanced) Sets whether synchronous processing should be strictly used. false boolean transferExchange (advanced) Only used for TCP. You can transfer the exchange over the wire instead of just the body. The following fields are transferred: In body, Out body, fault body, In headers, Out headers, fault headers, exchange properties, exchange exception. This requires that the objects are serializable. Camel will exclude any non-serializable objects and log it at WARN level. false boolean udpByteArrayCodec (advanced) For UDP only. If enabled the using byte array codec instead of Java serialization protocol. false boolean workerCount (advanced) When netty works on nio mode, it uses default workerCount parameter from Netty (which is cpu_core_threads x 2). User can use this option to override the default workerCount from Netty. int workerGroup (advanced) To use a explicit EventLoopGroup as the boss thread pool. For example to share a thread pool with multiple consumers or producers. By default each consumer or producer has their own worker pool with 2 x cpu count core threads. EventLoopGroup allowDefaultCodec (codec) The netty component installs a default codec if both, encoder/decoder is null and textline is false. Setting allowDefaultCodec to false prevents the netty component from installing a default codec as the first element in the filter chain. true boolean autoAppendDelimiter (codec) Whether or not to auto append missing end delimiter when sending using the textline codec. true boolean decoderMaxLineLength (codec) The max line length to use for the textline codec. 1024 int decoders (codec) A list of decoders to be used. You can use a String which have values separated by comma, and have the values be looked up in the Registry. Just remember to prefix the value with # so Camel knows it should lookup. List delimiter (codec) The delimiter to use for the textline codec. Possible values are LINE and NULL. Enum values: LINE NULL LINE TextLineDelimiter encoders (codec) A list of encoders to be used. You can use a String which have values separated by comma, and have the values be looked up in the Registry. Just remember to prefix the value with # so Camel knows it should lookup. List encoding (codec) The encoding (a charset name) to use for the textline codec. If not provided, Camel will use the JVM default Charset. 
String textline (codec) Only used for TCP. If no codec is specified, you can use this flag to indicate a text line based codec; if not specified or the value is false, then Object Serialization is assumed over TCP - however only Strings are allowed to be serialized by default. false boolean enabledProtocols (security) Which protocols to enable when using SSL. TLSv1,TLSv1.1,TLSv1.2 String keyStoreFile (security) Client side certificate keystore to be used for encryption. File keyStoreFormat (security) Keystore format to be used for payload encryption. Defaults to JKS if not set. String keyStoreResource (security) Client side certificate keystore to be used for encryption. Is loaded by default from classpath, but you can prefix with classpath:, file:, or http: to load the resource from different systems. String needClientAuth (security) Configures whether the server needs client authentication when using SSL. false boolean passphrase (security) Password setting to use in order to encrypt/decrypt payloads sent using SSH. String securityProvider (security) Security provider to be used for payload encryption. Defaults to SunX509 if not set. String ssl (security) Setting to specify whether SSL encryption is applied to this endpoint. false boolean sslClientCertHeaders (security) When enabled and in SSL mode, then the Netty consumer will enrich the Camel Message with headers having information about the client certificate such as subject name, issuer name, serial number, and the valid date range. false boolean sslContextParameters (security) To configure security using SSLContextParameters. SSLContextParameters sslHandler (security) Reference to a class that could be used to return an SSL Handler. SslHandler trustStoreFile (security) Server side certificate keystore to be used for encryption. File trustStoreResource (security) Server side certificate keystore to be used for encryption. Is loaded by default from classpath, but you can prefix with classpath:, file:, or http: to load the resource from different systems. String 90.6. Registry based Options Codec Handlers and SSL Keystores can be enlisted in the Registry, such as in the Spring XML file. The values that could be passed in, are the following: Name Description passphrase password setting to use in order to encrypt/decrypt payloads sent using SSH keyStoreFormat keystore format to be used for payload encryption. Defaults to "JKS" if not set securityProvider Security provider to be used for payload encryption. Defaults to "SunX509" if not set. keyStoreFile deprecated: Client side certificate keystore to be used for encryption trustStoreFile deprecated: Server side certificate keystore to be used for encryption keyStoreResource Client side certificate keystore to be used for encryption. Is loaded by default from classpath, but you can prefix with "classpath:" , "file:" , or "http:" to load the resource from different systems. trustStoreResource Server side certificate keystore to be used for encryption. Is loaded by default from classpath, but you can prefix with "classpath:" , "file:" , or "http:" to load the resource from different systems. sslHandler Reference to a class that could be used to return an SSL Handler encoder A custom ChannelHandler class that can be used to perform special marshalling of outbound payloads. Must override io.netty.channel.ChannelInboundHandlerAdapter. encoders A list of encoders to be used. You can use a String which have values separated by comma, and have the values be looked up in the Registry. 
Just remember to prefix the value with # so Camel knows it should lookup. decoder A custom ChannelHandler class that can be used to perform special marshalling of inbound payloads. Must override io.netty.channel.ChannelInboundHandlerAdapter. decoders A list of decoders to be used. You can use a String which have values separated by comma, and have the values be looked up in the Registry. Just remember to prefix the value with # so Camel knows it should lookup. Note Read below about using non shareable encoders/decoders. 90.6.1. Using non shareable encoders or decoders If your encoders or decoders are not shareable (e.g. they don't have the @Sharable class annotation), then your encoder/decoder must implement the org.apache.camel.component.netty.ChannelHandlerFactory interface, and return a new instance in the newChannelHandler method. This is to ensure the encoder/decoder can safely be used. If this is not the case, then the Netty component will log a WARN when an endpoint is created. The Netty component offers an org.apache.camel.component.netty.ChannelHandlerFactories factory class that has a number of commonly used methods. 90.7. Sending Messages to/from a Netty endpoint 90.7.1. Netty Producer In Producer mode, the component provides the ability to send payloads to a socket endpoint using either TCP or UDP protocols (with optional SSL support). The producer mode supports both one-way and request-response based operations. 90.7.2. Netty Consumer In Consumer mode, the component provides the ability to: listen on a specified socket using either TCP or UDP protocols (with optional SSL support), receive requests on the socket using text/xml, binary and serialized object based payloads, and send them along on a route as message exchanges. The consumer mode supports both one-way and request-response based operations. 90.8. Examples 90.8.1. A UDP Netty endpoint using Request-Reply and serialized object payload Note that Object serialization is not allowed by default, and so a decoder must be configured. @BindToRegistry("decoder") public ChannelHandler getDecoder() throws Exception { return new DefaultChannelHandlerFactory() { @Override public ChannelHandler newChannelHandler() { return new DatagramPacketObjectDecoder(ClassResolvers.weakCachingResolver(null)); } }; } RouteBuilder builder = new RouteBuilder() { public void configure() { from("netty:udp://0.0.0.0:5155?sync=true&decoders=#decoder") .process(new Processor() { public void process(Exchange exchange) throws Exception { Poetry poetry = (Poetry) exchange.getIn().getBody(); // Process poetry in some way exchange.getOut().setBody("Message received"); } }); } }; 90.8.2. A TCP based Netty consumer endpoint using One-way communication RouteBuilder builder = new RouteBuilder() { public void configure() { from("netty:tcp://0.0.0.0:5150") .to("mock:result"); } }; 90.8.3. An SSL/TCP based Netty consumer endpoint using Request-Reply communication Using the JSSE Configuration Utility The Netty component supports SSL/TLS configuration through the Camel JSSE Configuration Utility. This utility greatly decreases the amount of component specific code you need to write and is configurable at the endpoint and component levels. The following examples demonstrate how to use the utility with the Netty component.
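As a quick sketch before the fuller examples (the class name, port, and log endpoint here are illustrative, not part of the original documentation), a Java DSL route can reference an SSLContextParameters bean that is already bound in the registry directly from the endpoint URI:

import org.apache.camel.builder.RouteBuilder;

public class SecureNettyRoute extends RouteBuilder {
    @Override
    public void configure() {
        // assumes an SSLContextParameters bean named "sslContextParameters" has been bound in the registry
        from("netty:tcp://0.0.0.0:5150?sync=true&ssl=true&sslContextParameters=#sslContextParameters")
            .to("log:secured");
    }
}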
Programmatic configuration of the component KeyStoreParameters ksp = new KeyStoreParameters(); ksp.setResource("/users/home/server/keystore.jks"); ksp.setPassword("keystorePassword"); KeyManagersParameters kmp = new KeyManagersParameters(); kmp.setKeyStore(ksp); kmp.setKeyPassword("keyPassword"); SSLContextParameters scp = new SSLContextParameters(); scp.setKeyManagers(kmp); NettyComponent nettyComponent = getContext().getComponent("netty", NettyComponent.class); nettyComponent.setSslContextParameters(scp); Spring DSL based configuration of endpoint ... <camel:sslContextParameters id="sslContextParameters"> <camel:keyManagers keyPassword="keyPassword"> <camel:keyStore resource="/users/home/server/keystore.jks" password="keystorePassword"/> </camel:keyManagers> </camel:sslContextParameters>... ... <to uri="netty:tcp://0.0.0.0:5150?sync=true&ssl=true&sslContextParameters=#sslContextParameters"/> ... Using Basic SSL/TLS configuration on the Netty component Registry registry = context.getRegistry(); registry.bind("password", "changeit"); registry.bind("ksf", new File("src/test/resources/keystore.jks")); registry.bind("tsf", new File("src/test/resources/keystore.jks")); context.addRoutes(new RouteBuilder() { public void configure() { String netty_ssl_endpoint = "netty:tcp://0.0.0.0:5150?sync=true&ssl=true&passphrase=#password" + "&keyStoreFile=#ksf&trustStoreFile=#tsf"; String return_string = "When You Go Home, Tell Them Of Us And Say," + "For Your Tomorrow, We Gave Our Today."; from(netty_ssl_endpoint) .process(new Processor() { public void process(Exchange exchange) throws Exception { exchange.getOut().setBody(return_string); } }); } }); Getting access to SSLSession and the client certificate You can get access to the javax.net.ssl.SSLSession if you, for example, need to get details about the client certificate. When ssl=true then the Netty component will store the SSLSession as a header on the Camel Message as shown below: SSLSession session = exchange.getIn().getHeader(NettyConstants.NETTY_SSL_SESSION, SSLSession.class); // get the first certificate which is client certificate javax.security.cert.X509Certificate cert = session.getPeerCertificateChain()[0]; Principal principal = cert.getSubjectDN(); Remember to set needClientAuth=true to authenticate the client, otherwise SSLSession cannot access information about the client certificate, and you may get an exception javax.net.ssl.SSLPeerUnverifiedException: peer not authenticated. You may also get this exception if the client certificate is expired or not valid etc. Note The option sslClientCertHeaders can be set to true which then enriches the Camel Message with headers having details about the client certificate. For example the subject name is readily available in the header CamelNettySSLClientCertSubjectName. 90.8.4. Using Multiple Codecs In certain cases it may be necessary to add chains of encoders and decoders to the Netty pipeline. To add multiple codecs to a Camel Netty endpoint, the 'encoders' and 'decoders' URI parameters should be used. Like the 'encoder' and 'decoder' parameters they are used to supply references (lists of ChannelUpstreamHandlers and ChannelDownstreamHandlers) that should be added to the pipeline. Note that if encoders is specified then the encoder param will be ignored, similarly for decoders and the decoder param. Note Read further above about using non shareable encoders/decoders. The lists of codecs need to be added to Camel's registry so they can be resolved when the endpoint is created.
ChannelHandlerFactory lengthDecoder = ChannelHandlerFactories.newLengthFieldBasedFrameDecoder(1048576, 0, 4, 0, 4); StringDecoder stringDecoder = new StringDecoder(); registry.bind("length-decoder", lengthDecoder); registry.bind("string-decoder", stringDecoder); LengthFieldPrepender lengthEncoder = new LengthFieldPrepender(4); StringEncoder stringEncoder = new StringEncoder(); registry.bind("length-encoder", lengthEncoder); registry.bind("string-encoder", stringEncoder); List<ChannelHandler> decoders = new ArrayList<ChannelHandler>(); decoders.add(lengthDecoder); decoders.add(stringDecoder); List<ChannelHandler> encoders = new ArrayList<ChannelHandler>(); encoders.add(lengthEncoder); encoders.add(stringEncoder); registry.bind("encoders", encoders); registry.bind("decoders", decoders); Spring's native collections support can be used to specify the codec lists in an application context <util:list id="decoders" list-class="java.util.LinkedList"> <bean class="org.apache.camel.component.netty.ChannelHandlerFactories" factory-method="newLengthFieldBasedFrameDecoder"> <constructor-arg value="1048576"/> <constructor-arg value="0"/> <constructor-arg value="4"/> <constructor-arg value="0"/> <constructor-arg value="4"/> </bean> <bean class="io.netty.handler.codec.string.StringDecoder"/> </util:list> <util:list id="encoders" list-class="java.util.LinkedList"> <bean class="io.netty.handler.codec.LengthFieldPrepender"> <constructor-arg value="4"/> </bean> <bean class="io.netty.handler.codec.string.StringEncoder"/> </util:list> <bean id="length-encoder" class="io.netty.handler.codec.LengthFieldPrepender"> <constructor-arg value="4"/> </bean> <bean id="string-encoder" class="io.netty.handler.codec.string.StringEncoder"/> <bean id="length-decoder" class="org.apache.camel.component.netty.ChannelHandlerFactories" factory-method="newLengthFieldBasedFrameDecoder"> <constructor-arg value="1048576"/> <constructor-arg value="0"/> <constructor-arg value="4"/> <constructor-arg value="0"/> <constructor-arg value="4"/> </bean> <bean id="string-decoder" class="io.netty.handler.codec.string.StringDecoder"/> The bean names can then be used in netty endpoint definitions either as a comma separated list or contained in a List e.g. from("direct:multiple-codec").to("netty:tcp://0.0.0.0:{{port}}?encoders=#encoders&sync=false"); from("netty:tcp://0.0.0.0:{{port}}?decoders=#length-decoder,#string-decoder&sync=false").to("mock:multiple-codec"); or via XML. <camelContext id="multiple-netty-codecs-context" xmlns="http://camel.apache.org/schema/spring"> <route> <from uri="direct:multiple-codec"/> <to uri="netty:tcp://0.0.0.0:5150?encoders=#encoders&sync=false"/> </route> <route> <from uri="netty:tcp://0.0.0.0:5150?decoders=#length-decoder,#string-decoder&sync=false"/> <to uri="mock:multiple-codec"/> </route> </camelContext> 90.9. Closing Channel When Complete When acting as a server you sometimes want to close the channel when, for example, a client conversion is finished. You can do this by simply setting the endpoint option disconnect=true . However you can also instruct Camel on a per message basis as follows. To instruct Camel to close the channel, you should add a header with the key CamelNettyCloseChannelWhenComplete set to a boolean true value. 
For instance, the example below will close the channel after it has written the bye message back to the client: from("netty:tcp://0.0.0.0:8080").process(new Processor() { public void process(Exchange exchange) throws Exception { String body = exchange.getIn().getBody(String.class); exchange.getOut().setBody("Bye " + body); // some condition which determines if we should close if (close) { exchange.getOut().setHeader(NettyConstants.NETTY_CLOSE_CHANNEL_WHEN_COMPLETE, true); } } }); You can also add custom channel pipeline factories to gain complete control over a created pipeline. 90.10. Custom pipeline Custom channel pipelines provide complete control to the user over the handler/interceptor chain by inserting custom handler(s), encoder(s) and decoder(s) without having to specify them in the Netty endpoint URL in a very simple way. In order to add a custom pipeline, a custom channel pipeline factory must be created and registered with the context via the context registry (Registry, or the camel-spring ApplicationContextRegistry, etc.). A custom pipeline factory must be constructed as follows: A Producer linked channel pipeline factory must extend the abstract class ClientPipelineFactory. A Consumer linked channel pipeline factory must extend the abstract class ServerInitializerFactory. The classes should override the initChannel() method in order to insert custom handler(s), encoder(s) and decoder(s). Not overriding the initChannel() method creates a pipeline with no handlers, encoders or decoders wired to the pipeline. The example below shows how a ServerInitializerFactory may be created. 90.10.1. Using custom pipeline factory public class SampleServerInitializerFactory extends ServerInitializerFactory { private int maxLineSize = 1024; protected void initChannel(Channel ch) throws Exception { ChannelPipeline channelPipeline = ch.pipeline(); channelPipeline.addLast("encoder-SD", new StringEncoder(CharsetUtil.UTF_8)); channelPipeline.addLast("decoder-DELIM", new DelimiterBasedFrameDecoder(maxLineSize, true, Delimiters.lineDelimiter())); channelPipeline.addLast("decoder-SD", new StringDecoder(CharsetUtil.UTF_8)); // here we add the default Camel ServerChannelHandler for the consumer, to allow Camel to route the message etc. channelPipeline.addLast("handler", new ServerChannelHandler(consumer)); } } The custom channel pipeline factory can then be added to the registry and instantiated/utilized on a Camel route in the following way: Registry registry = camelContext.getRegistry(); ServerInitializerFactory factory = new SampleServerInitializerFactory(); registry.bind("spf", factory); context.addRoutes(new RouteBuilder() { public void configure() { String netty_ssl_endpoint = "netty:tcp://0.0.0.0:5150?serverInitializerFactory=#spf"; String return_string = "When You Go Home, Tell Them Of Us And Say," + "For Your Tomorrow, We Gave Our Today."; from(netty_ssl_endpoint) .process(new Processor() { public void process(Exchange exchange) throws Exception { exchange.getOut().setBody(return_string); } }); } }); 90.11. Reusing Netty boss and worker thread pools Netty has two kinds of thread pools: boss and worker. By default each Netty consumer and producer has its own private thread pools. If you want to reuse these thread pools among multiple consumers or producers then the thread pools must be created and enlisted in the Registry.
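Before the Spring XML variant shown next, the same shared worker pool can be built and bound programmatically. This is a minimal sketch under stated assumptions: the helper class name, bean name sharedPool, and worker count are illustrative, and it assumes NettyWorkerPoolBuilder.build() returns the pool object to register.

import io.netty.channel.EventLoopGroup;
import org.apache.camel.CamelContext;
import org.apache.camel.component.netty.NettyWorkerPoolBuilder;

public final class SharedWorkerPoolSetup {

    // builds a shared worker pool with 2 threads and binds it so endpoints can refer to it as #sharedPool
    public static EventLoopGroup bindSharedWorkerPool(CamelContext camelContext) {
        NettyWorkerPoolBuilder poolBuilder = new NettyWorkerPoolBuilder();
        poolBuilder.setWorkerCount(2);
        EventLoopGroup sharedPool = poolBuilder.build();
        camelContext.getRegistry().bind("sharedPool", sharedPool);
        // remember to shut the pool down (for example sharedPool.shutdownGracefully()) when the application stops
        return sharedPool;
    }
}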
For example using Spring XML we can create a shared worker thread pool using the NettyWorkerPoolBuilder with 2 worker threads as shown below: <!-- use the worker pool builder to help create the shared thread pool --> <bean id="poolBuilder" class="org.apache.camel.component.netty.NettyWorkerPoolBuilder"> <property name="workerCount" value="2"/> </bean> <!-- the shared worker thread pool --> <bean id="sharedPool" class="org.jboss.netty.channel.socket.nio.WorkerPool" factory-bean="poolBuilder" factory-method="build" destroy-method="shutdown"> </bean> Note For boss thread pool there is a org.apache.camel.component.netty.NettyServerBossPoolBuilder builder for Netty consumers, and a org.apache.camel.component.netty.NettyClientBossPoolBuilder for the Netty producers. Then in the Camel routes we can refer to this worker pools by configuring the workerPool option in the URI as shown below: <route> <from uri="netty:tcp://0.0.0.0:5021?textline=true&sync=true&workerPool=#sharedPool&usingExecutorService=false"/> <to uri="log:result"/> ... </route> And if we have another route we can refer to the shared worker pool: <route> <from uri="netty:tcp://0.0.0.0:5022?textline=true&sync=true&workerPool=#sharedPool&usingExecutorService=false"/> <to uri="log:result"/> ... </route> and so forth. 90.12. Multiplexing concurrent messages over a single connection with request/reply When using Netty for request/reply messaging via the netty producer then by default each message is sent via a non-shared connection (pooled). This ensures that replies are automatic being able to map to the correct request thread for further routing in Camel. In other words correlation between request/reply messages happens out-of-the-box because the replies comes back on the same connection that was used for sending the request; and this connection is not shared with others. When the response comes back, the connection is returned back to the connection pool, where it can be reused by others. However if you want to multiplex concurrent request/responses on a single shared connection, then you need to turn off the connection pooling by setting producerPoolEnabled=false . Now this means there is a potential issue with interleaved responses if replies comes back out-of-order. Therefore you need to have a correlation id in both the request and reply messages so you can properly correlate the replies to the Camel callback that is responsible for continue processing the message in Camel. To do this you need to implement NettyCamelStateCorrelationManager as correlation manager and configure it via the correlationManager=#myManager option. Note We recommend extending the TimeoutCorrelationManagerSupport when you build custom correlation managers. This provides support for timeout and other complexities you otherwise would need to implement as well. You can find an example with the Apache Camel source code in the examples directory under the camel-example-netty-custom-correlation directory. 90.13. Spring Boot Auto-Configuration The component supports 74 options, which are listed below. Name Description Default Type camel.component.netty.allow-default-codec The netty component installs a default codec if both, encoder/decoder is null and textline is false. Setting allowDefaultCodec to false prevents the netty component from installing a default codec as the first element in the filter chain. true Boolean camel.component.netty.allow-serialized-headers Only used for TCP when transferExchange is true. 
When set to true, serializable objects in headers and properties will be added to the exchange. Otherwise Camel will exclude any non-serializable objects and log it at WARN level. false Boolean camel.component.netty.auto-append-delimiter Whether or not to auto append missing end delimiter when sending using the textline codec. true Boolean camel.component.netty.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.netty.backlog Allows to configure a backlog for netty consumer (server). Note the backlog is just a best effort depending on the OS. Setting this option to a value such as 200, 500 or 1000, tells the TCP stack how long the accept queue can be If this option is not configured, then the backlog depends on OS setting. Integer camel.component.netty.boss-count When netty works on nio mode, it uses default bossCount parameter from Netty, which is 1. User can use this option to override the default bossCount from Netty. 1 Integer camel.component.netty.boss-group Set the BossGroup which could be used for handling the new connection of the server side across the NettyEndpoint. The option is a io.netty.channel.EventLoopGroup type. EventLoopGroup camel.component.netty.bridge-error-handler Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false Boolean camel.component.netty.broadcast Setting to choose Multicast over UDP. false Boolean camel.component.netty.channel-group To use a explicit ChannelGroup. The option is a io.netty.channel.group.ChannelGroup type. ChannelGroup camel.component.netty.client-initializer-factory To use a custom ClientInitializerFactory. The option is a org.apache.camel.component.netty.ClientInitializerFactory type. ClientInitializerFactory camel.component.netty.client-mode If the clientMode is true, netty consumer will connect the address as a TCP client. false Boolean camel.component.netty.configuration To use the NettyConfiguration as configuration when creating endpoints. The option is a org.apache.camel.component.netty.NettyConfiguration type. NettyConfiguration camel.component.netty.connect-timeout Time to wait for a socket connection to be available. Value is in milliseconds. 10000 Integer camel.component.netty.correlation-manager To use a custom correlation manager to manage how request and reply messages are mapped when using request/reply with the netty producer. This should only be used if you have a way to map requests together with replies such as if there is correlation ids in both the request and reply messages. This can be used if you want to multiplex concurrent messages on the same channel (aka connection) in netty. When doing this you must have a way to correlate the request and reply messages so you can store the right reply on the inflight Camel Exchange before its continued routed. 
We recommend extending the TimeoutCorrelationManagerSupport when you build custom correlation managers. This provides support for timeout and other complexities you otherwise would need to implement as well. See also the producerPoolEnabled option for more details. The option is a org.apache.camel.component.netty.NettyCamelStateCorrelationManager type. NettyCamelStateCorrelationManager camel.component.netty.decoder-max-line-length The max line length to use for the textline codec. 1024 Integer camel.component.netty.decoders A list of decoders to be used. You can use a String which have values separated by comma, and have the values be looked up in the Registry. Just remember to prefix the value with # so Camel knows it should lookup. String camel.component.netty.delimiter The delimiter to use for the textline codec. Possible values are LINE and NULL. TextLineDelimiter camel.component.netty.disconnect Whether or not to disconnect(close) from Netty Channel right after use. Can be used for both consumer and producer. false Boolean camel.component.netty.disconnect-on-no-reply If sync is enabled then this option dictates NettyConsumer if it should disconnect where there is no reply to send back. true Boolean camel.component.netty.enabled Whether to enable auto configuration of the netty component. This is enabled by default. Boolean camel.component.netty.enabled-protocols Which protocols to enable when using SSL. TLSv1,TLSv1.1,TLSv1.2 String camel.component.netty.encoders A list of encoders to be used. You can use a String which have values separated by comma, and have the values be looked up in the Registry. Just remember to prefix the value with # so Camel knows it should lookup. String camel.component.netty.encoding The encoding (a charset name) to use for the textline codec. If not provided, Camel will use the JVM default Charset. String camel.component.netty.executor-service To use the given EventExecutorGroup. The option is a io.netty.util.concurrent.EventExecutorGroup type. EventExecutorGroup camel.component.netty.hostname-verification To enable/disable hostname verification on SSLEngine. false Boolean camel.component.netty.keep-alive Setting to ensure socket is not closed due to inactivity. true Boolean camel.component.netty.key-store-file Client side certificate keystore to be used for encryption. File camel.component.netty.key-store-format Keystore format to be used for payload encryption. Defaults to JKS if not set. String camel.component.netty.key-store-resource Client side certificate keystore to be used for encryption. Is loaded by default from classpath, but you can prefix with classpath:, file:, or http: to load the resource from different systems. String camel.component.netty.lazy-channel-creation Channels can be lazily created to avoid exceptions, if the remote server is not up and running when the Camel producer is started. true Boolean camel.component.netty.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. 
false Boolean camel.component.netty.maximum-pool-size Sets a maximum thread pool size for the netty consumer ordered thread pool. The default size is 2 x cpu_core plus 1. Setting this value to eg 10 will then use 10 threads unless 2 x cpu_core plus 1 is a higher value, which then will override and be used. For example if there are 8 cores, then the consumer thread pool will be 17. This thread pool is used to route messages received from Netty by Camel. We use a separate thread pool to ensure ordering of messages and also in case some messages will block, then nettys worker threads (event loop) wont be affected. Integer camel.component.netty.native-transport Whether to use native transport instead of NIO. Native transport takes advantage of the host operating system and is only supported on some platforms. You need to add the netty JAR for the host operating system you are using. See more details at: . false Boolean camel.component.netty.need-client-auth Configures whether the server needs client authentication when using SSL. false Boolean camel.component.netty.netty-server-bootstrap-factory To use a custom NettyServerBootstrapFactory. The option is a org.apache.camel.component.netty.NettyServerBootstrapFactory type. NettyServerBootstrapFactory camel.component.netty.network-interface When using UDP then this option can be used to specify a network interface by its name, such as eth0 to join a multicast group. String camel.component.netty.no-reply-log-level If sync is enabled this option dictates NettyConsumer which logging level to use when logging a there is no reply to send back. LoggingLevel camel.component.netty.options Allows to configure additional netty options using option. as prefix. For example option.child.keepAlive=false to set the netty option child.keepAlive=false. See the Netty documentation for possible options that can be used. Map camel.component.netty.passphrase Password setting to use in order to encrypt/decrypt payloads sent using SSH. String camel.component.netty.producer-pool-enabled Whether producer pool is enabled or not. Important: If you turn this off then a single shared connection is used for the producer, also if you are doing request/reply. That means there is a potential issue with interleaved responses if replies comes back out-of-order. Therefore you need to have a correlation id in both the request and reply messages so you can properly correlate the replies to the Camel callback that is responsible for continue processing the message in Camel. To do this you need to implement NettyCamelStateCorrelationManager as correlation manager and configure it via the correlationManager option. See also the correlationManager option for more details. true Boolean camel.component.netty.producer-pool-max-idle Sets the cap on the number of idle instances in the pool. 100 Integer camel.component.netty.producer-pool-max-total Sets the cap on the number of objects that can be allocated by the pool (checked out to clients, or idle awaiting checkout) at a given time. Use a negative value for no limit. -1 Integer camel.component.netty.producer-pool-min-evictable-idle Sets the minimum amount of time (value in millis) an object may sit idle in the pool before it is eligible for eviction by the idle object evictor. 300000 Long camel.component.netty.producer-pool-min-idle Sets the minimum number of instances allowed in the producer pool before the evictor thread (if active) spawns new objects. 
Integer camel.component.netty.receive-buffer-size The TCP/UDP buffer sizes to be used during inbound communication. Size is bytes. 65536 Integer camel.component.netty.receive-buffer-size-predictor Configures the buffer size predictor. See details at Jetty documentation and this mail thread. Integer camel.component.netty.reconnect Used only in clientMode in consumer, the consumer will attempt to reconnect on disconnection if this is enabled. true Boolean camel.component.netty.reconnect-interval Used if reconnect and clientMode is enabled. The interval in milli seconds to attempt reconnection. 10000 Integer camel.component.netty.request-timeout Allows to use a timeout for the Netty producer when calling a remote server. By default no timeout is in use. The value is in milli seconds, so eg 30000 is 30 seconds. The requestTimeout is using Netty's ReadTimeoutHandler to trigger the timeout. Long camel.component.netty.reuse-address Setting to facilitate socket multiplexing. true Boolean camel.component.netty.reuse-channel This option allows producers and consumers (in client mode) to reuse the same Netty Channel for the lifecycle of processing the Exchange. This is useful if you need to call a server multiple times in a Camel route and want to use the same network connection. When using this, the channel is not returned to the connection pool until the Exchange is done; or disconnected if the disconnect option is set to true. The reused Channel is stored on the Exchange as an exchange property with the key NettyConstants#NETTY_CHANNEL which allows you to obtain the channel during routing and use it as well. false Boolean camel.component.netty.security-provider Security provider to be used for payload encryption. Defaults to SunX509 if not set. String camel.component.netty.send-buffer-size The TCP/UDP buffer sizes to be used during outbound communication. Size is bytes. 65536 Integer camel.component.netty.server-closed-channel-exception-caught-log-level If the server (NettyConsumer) catches an java.nio.channels.ClosedChannelException then its logged using this logging level. This is used to avoid logging the closed channel exceptions, as clients can disconnect abruptly and then cause a flood of closed exceptions in the Netty server. LoggingLevel camel.component.netty.server-exception-caught-log-level If the server (NettyConsumer) catches an exception then its logged using this logging level. LoggingLevel camel.component.netty.server-initializer-factory To use a custom ServerInitializerFactory. The option is a org.apache.camel.component.netty.ServerInitializerFactory type. ServerInitializerFactory camel.component.netty.ssl Setting to specify whether SSL encryption is applied to this endpoint. false Boolean camel.component.netty.ssl-client-cert-headers When enabled and in SSL mode, then the Netty consumer will enrich the Camel Message with headers having information about the client certificate such as subject name, issuer name, serial number, and the valid date range. false Boolean camel.component.netty.ssl-context-parameters To configure security using SSLContextParameters. The option is a org.apache.camel.support.jsse.SSLContextParameters type. SSLContextParameters camel.component.netty.ssl-handler Reference to a class that could be used to return an SSL Handler. The option is a io.netty.handler.ssl.SslHandler type. SslHandler camel.component.netty.sync Setting to set endpoint as one-way or request-response. 
true Boolean camel.component.netty.tcp-no-delay Setting to improve TCP protocol performance. true Boolean camel.component.netty.textline Only used for TCP. If no codec is specified, you can use this flag to indicate a text line based codec; if not specified or the value is false, then Object Serialization is assumed over TCP - however only Strings are allowed to be serialized by default. false Boolean camel.component.netty.transfer-exchange Only used for TCP. You can transfer the exchange over the wire instead of just the body. The following fields are transferred: In body, Out body, fault body, In headers, Out headers, fault headers, exchange properties, exchange exception. This requires that the objects are serializable. Camel will exclude any non-serializable objects and log it at WARN level. false Boolean camel.component.netty.trust-store-file Server side certificate keystore to be used for encryption. File camel.component.netty.trust-store-resource Server side certificate keystore to be used for encryption. Is loaded by default from classpath, but you can prefix with classpath:, file:, or http: to load the resource from different systems. String camel.component.netty.udp-byte-array-codec For UDP only. If enabled the using byte array codec instead of Java serialization protocol. false Boolean camel.component.netty.udp-connectionless-sending This option supports connection less udp sending which is a real fire and forget. A connected udp send receive the PortUnreachableException if no one is listen on the receiving port. false Boolean camel.component.netty.use-byte-buf If the useByteBuf is true, netty producer will turn the message body into ByteBuf before sending it out. false Boolean camel.component.netty.use-global-ssl-context-parameters Enable usage of global SSL context parameters. false Boolean camel.component.netty.using-executor-service Whether to use ordered thread pool, to ensure events are processed orderly on the same channel. true Boolean camel.component.netty.worker-count When netty works on nio mode, it uses default workerCount parameter from Netty (which is cpu_core_threads x 2). User can use this option to override the default workerCount from Netty. Integer camel.component.netty.worker-group To use a explicit EventLoopGroup as the boss thread pool. For example to share a thread pool with multiple consumers or producers. By default each consumer or producer has their own worker pool with 2 x cpu count core threads. The option is a io.netty.channel.EventLoopGroup type. EventLoopGroup | [
"<dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-netty-starter</artifactId> </dependency>",
"netty:tcp://0.0.0.0:99999[?options] netty:udp://remotehost:99999/[?options]",
"netty:protocol://host:port",
"@BindToRegistry(\"decoder\") public ChannelHandler getDecoder() throws Exception { return new DefaultChannelHandlerFactory() { @Override public ChannelHandler newChannelHandler() { return new DatagramPacketObjectDecoder(ClassResolvers.weakCachingResolver(null)); } }; } RouteBuilder builder = new RouteBuilder() { public void configure() { from(\"netty:udp://0.0.0.0:5155?sync=true&decoders=#decoder\") .process(new Processor() { public void process(Exchange exchange) throws Exception { Poetry poetry = (Poetry) exchange.getIn().getBody(); // Process poetry in some way exchange.getOut().setBody(\"Message received); } } } };",
"RouteBuilder builder = new RouteBuilder() { public void configure() { from(\"netty:tcp://0.0.0.0:5150\") .to(\"mock:result\"); } };",
"KeyStoreParameters ksp = new KeyStoreParameters(); ksp.setResource(\"/users/home/server/keystore.jks\"); ksp.setPassword(\"keystorePassword\"); KeyManagersParameters kmp = new KeyManagersParameters(); kmp.setKeyStore(ksp); kmp.setKeyPassword(\"keyPassword\"); SSLContextParameters scp = new SSLContextParameters(); scp.setKeyManagers(kmp); NettyComponent nettyComponent = getContext().getComponent(\"netty\", NettyComponent.class); nettyComponent.setSslContextParameters(scp);",
"<camel:sslContextParameters id=\"sslContextParameters\"> <camel:keyManagers keyPassword=\"keyPassword\"> <camel:keyStore resource=\"/users/home/server/keystore.jks\" password=\"keystorePassword\"/> </camel:keyManagers> </camel:sslContextParameters> <to uri=\"netty:tcp://0.0.0.0:5150?sync=true&ssl=true&sslContextParameters=#sslContextParameters\"/>",
"Registry registry = context.getRegistry(); registry.bind(\"password\", \"changeit\"); registry.bind(\"ksf\", new File(\"src/test/resources/keystore.jks\")); registry.bind(\"tsf\", new File(\"src/test/resources/keystore.jks\")); context.addRoutes(new RouteBuilder() { public void configure() { String netty_ssl_endpoint = \"netty:tcp://0.0.0.0:5150?sync=true&ssl=true&passphrase=#password\" + \"&keyStoreFile=#ksf&trustStoreFile=#tsf\"; String return_string = \"When You Go Home, Tell Them Of Us And Say,\" + \"For Your Tomorrow, We Gave Our Today.\"; from(netty_ssl_endpoint) .process(new Processor() { public void process(Exchange exchange) throws Exception { exchange.getOut().setBody(return_string); } } } });",
"SSLSession session = exchange.getIn().getHeader(NettyConstants.NETTY_SSL_SESSION, SSLSession.class); // get the first certificate which is client certificate javax.security.cert.X509Certificate cert = session.getPeerCertificateChain()[0]; Principal principal = cert.getSubjectDN();",
"ChannelHandlerFactory lengthDecoder = ChannelHandlerFactories.newLengthFieldBasedFrameDecoder(1048576, 0, 4, 0, 4); StringDecoder stringDecoder = new StringDecoder(); registry.bind(\"length-decoder\", lengthDecoder); registry.bind(\"string-decoder\", stringDecoder); LengthFieldPrepender lengthEncoder = new LengthFieldPrepender(4); StringEncoder stringEncoder = new StringEncoder(); registry.bind(\"length-encoder\", lengthEncoder); registry.bind(\"string-encoder\", stringEncoder); List<ChannelHandler> decoders = new ArrayList<ChannelHandler>(); decoders.add(lengthDecoder); decoders.add(stringDecoder); List<ChannelHandler> encoders = new ArrayList<ChannelHandler>(); encoders.add(lengthEncoder); encoders.add(stringEncoder); registry.bind(\"encoders\", encoders); registry.bind(\"decoders\", decoders);",
"<util:list id=\"decoders\" list-class=\"java.util.LinkedList\"> <bean class=\"org.apache.camel.component.netty.ChannelHandlerFactories\" factory-method=\"newLengthFieldBasedFrameDecoder\"> <constructor-arg value=\"1048576\"/> <constructor-arg value=\"0\"/> <constructor-arg value=\"4\"/> <constructor-arg value=\"0\"/> <constructor-arg value=\"4\"/> </bean> <bean class=\"io.netty.handler.codec.string.StringDecoder\"/> </util:list> <util:list id=\"encoders\" list-class=\"java.util.LinkedList\"> <bean class=\"io.netty.handler.codec.LengthFieldPrepender\"> <constructor-arg value=\"4\"/> </bean> <bean class=\"io.netty.handler.codec.string.StringEncoder\"/> </util:list> <bean id=\"length-encoder\" class=\"io.netty.handler.codec.LengthFieldPrepender\"> <constructor-arg value=\"4\"/> </bean> <bean id=\"string-encoder\" class=\"io.netty.handler.codec.string.StringEncoder\"/> <bean id=\"length-decoder\" class=\"org.apache.camel.component.netty.ChannelHandlerFactories\" factory-method=\"newLengthFieldBasedFrameDecoder\"> <constructor-arg value=\"1048576\"/> <constructor-arg value=\"0\"/> <constructor-arg value=\"4\"/> <constructor-arg value=\"0\"/> <constructor-arg value=\"4\"/> </bean> <bean id=\"string-decoder\" class=\"io.netty.handler.codec.string.StringDecoder\"/>",
"from(\"direct:multiple-codec\").to(\"netty:tcp://0.0.0.0:{{port}}?encoders=#encoders&sync=false\"); from(\"netty:tcp://0.0.0.0:{{port}}?decoders=#length-decoder,#string-decoder&sync=false\").to(\"mock:multiple-codec\");",
"<camelContext id=\"multiple-netty-codecs-context\" xmlns=\"http://camel.apache.org/schema/spring\"> <route> <from uri=\"direct:multiple-codec\"/> <to uri=\"netty:tcp://0.0.0.0:5150?encoders=#encoders&sync=false\"/> </route> <route> <from uri=\"netty:tcp://0.0.0.0:5150?decoders=#length-decoder,#string-decoder&sync=false\"/> <to uri=\"mock:multiple-codec\"/> </route> </camelContext>",
"from(\"netty:tcp://0.0.0.0:8080\").process(new Processor() { public void process(Exchange exchange) throws Exception { String body = exchange.getIn().getBody(String.class); exchange.getOut().setBody(\"Bye \" + body); // some condition which determines if we should close if (close) { exchange.getOut().setHeader(NettyConstants.NETTY_CLOSE_CHANNEL_WHEN_COMPLETE, true); } } });",
"public class SampleServerInitializerFactory extends ServerInitializerFactory { private int maxLineSize = 1024; protected void initChannel(Channel ch) throws Exception { ChannelPipeline channelPipeline = ch.pipeline(); channelPipeline.addLast(\"encoder-SD\", new StringEncoder(CharsetUtil.UTF_8)); channelPipeline.addLast(\"decoder-DELIM\", new DelimiterBasedFrameDecoder(maxLineSize, true, Delimiters.lineDelimiter())); channelPipeline.addLast(\"decoder-SD\", new StringDecoder(CharsetUtil.UTF_8)); // here we add the default Camel ServerChannelHandler for the consumer, to allow Camel to route the message etc. channelPipeline.addLast(\"handler\", new ServerChannelHandler(consumer)); } }",
"Registry registry = camelContext.getRegistry(); ServerInitializerFactory factory = new TestServerInitializerFactory(); registry.bind(\"spf\", factory); context.addRoutes(new RouteBuilder() { public void configure() { String netty_ssl_endpoint = \"netty:tcp://0.0.0.0:5150?serverInitializerFactory=#spf\" String return_string = \"When You Go Home, Tell Them Of Us And Say,\" + \"For Your Tomorrow, We Gave Our Today.\"; from(netty_ssl_endpoint) .process(new Processor() { public void process(Exchange exchange) throws Exception { exchange.getOut().setBody(return_string); } } } });",
"<!-- use the worker pool builder to help create the shared thread pool --> <bean id=\"poolBuilder\" class=\"org.apache.camel.component.netty.NettyWorkerPoolBuilder\"> <property name=\"workerCount\" value=\"2\"/> </bean> <!-- the shared worker thread pool --> <bean id=\"sharedPool\" class=\"org.jboss.netty.channel.socket.nio.WorkerPool\" factory-bean=\"poolBuilder\" factory-method=\"build\" destroy-method=\"shutdown\"> </bean>",
"<route> <from uri=\"netty:tcp://0.0.0.0:5021?textline=true&sync=true&workerPool=#sharedPool&usingExecutorService=false\"/> <to uri=\"log:result\"/> </route>",
"<route> <from uri=\"netty:tcp://0.0.0.0:5022?textline=true&sync=true&workerPool=#sharedPool&usingExecutorService=false\"/> <to uri=\"log:result\"/> </route>"
]
| https://docs.redhat.com/en/documentation/red_hat_build_of_apache_camel/4.0/html/red_hat_build_of_apache_camel_for_spring_boot_reference/csb-camel-netty-component-starter |
Chapter 2. Alertmanager [monitoring.coreos.com/v1] | Chapter 2. Alertmanager [monitoring.coreos.com/v1] Description Alertmanager describes an Alertmanager cluster. Type object Required spec 2.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec object Specification of the desired behavior of the Alertmanager cluster. More info: https://github.com/kubernetes/community/blob/master/contributors/devel/sig-architecture/api-conventions.md#spec-and-status status object Most recent observed status of the Alertmanager cluster. Read-only. More info: https://github.com/kubernetes/community/blob/master/contributors/devel/sig-architecture/api-conventions.md#spec-and-status 2.1.1. .spec Description Specification of the desired behavior of the Alertmanager cluster. More info: https://github.com/kubernetes/community/blob/master/contributors/devel/sig-architecture/api-conventions.md#spec-and-status Type object Property Type Description additionalPeers array (string) AdditionalPeers allows injecting a set of additional Alertmanagers to peer with to form a highly available cluster. affinity object If specified, the pod's scheduling constraints. alertmanagerConfigMatcherStrategy object The AlertmanagerConfigMatcherStrategy defines how AlertmanagerConfig objects match the alerts. In the future more options may be added. alertmanagerConfigNamespaceSelector object Namespaces to be selected for AlertmanagerConfig discovery. If nil, only check own namespace. alertmanagerConfigSelector object AlertmanagerConfigs to be selected for to merge and configure Alertmanager with. alertmanagerConfiguration object alertmanagerConfiguration specifies the configuration of Alertmanager. If defined, it takes precedence over the configSecret field. This is an experimental feature , it may change in any upcoming release in a breaking way. automountServiceAccountToken boolean AutomountServiceAccountToken indicates whether a service account token should be automatically mounted in the pod. If the service account has automountServiceAccountToken: true , set the field to false to opt out of automounting API credentials. baseImage string Base image that is used to deploy pods, without tag. Deprecated: use 'image' instead. clusterAdvertiseAddress string ClusterAdvertiseAddress is the explicit address to advertise in cluster. Needs to be provided for non RFC1918 [1] (public) addresses. [1] RFC1918: https://tools.ietf.org/html/rfc1918 clusterGossipInterval string Interval between gossip attempts. clusterLabel string Defines the identifier that uniquely identifies the Alertmanager cluster. You should only set it when the Alertmanager cluster includes Alertmanager instances which are external to this Alertmanager resource. 
In practice, the addresses of the external instances are provided via the .spec.additionalPeers field. clusterPeerTimeout string Timeout for cluster peering. clusterPushpullInterval string Interval between pushpull attempts. configMaps array (string) ConfigMaps is a list of ConfigMaps in the same namespace as the Alertmanager object, which shall be mounted into the Alertmanager Pods. Each ConfigMap is added to the StatefulSet definition as a volume named configmap-<configmap-name> . The ConfigMaps are mounted into /etc/alertmanager/configmaps/<configmap-name> in the 'alertmanager' container. configSecret string ConfigSecret is the name of a Kubernetes Secret in the same namespace as the Alertmanager object, which contains the configuration for this Alertmanager instance. If empty, it defaults to alertmanager-<alertmanager-name> . The Alertmanager configuration should be available under the alertmanager.yaml key. Additional keys from the original secret are copied to the generated secret and mounted into the /etc/alertmanager/config directory in the alertmanager container. If either the secret or the alertmanager.yaml key is missing, the operator provisions a minimal Alertmanager configuration with one empty receiver (effectively dropping alert notifications). containers array Containers allows injecting additional containers. This is meant to allow adding an authentication proxy to an Alertmanager pod. Containers described here modify an operator generated container if they share the same name and modifications are done via a strategic merge patch. The current container names are: alertmanager and config-reloader . Overriding containers is entirely outside the scope of what the maintainers will support and by doing so, you accept that this behaviour may break at any time without notice. containers[] object A single application container that you want to run within a pod. enableFeatures array (string) Enable access to Alertmanager feature flags. By default, no features are enabled. Enabling features which are disabled by default is entirely outside the scope of what the maintainers will support and by doing so, you accept that this behaviour may break at any time without notice. It requires Alertmanager >= 0.27.0. externalUrl string The external URL the Alertmanager instances will be available under. This is necessary to generate correct URLs. This is necessary if Alertmanager is not served from root of a DNS name. forceEnableClusterMode boolean ForceEnableClusterMode ensures Alertmanager does not deactivate the cluster mode when running with a single replica. Use case is e.g. spanning an Alertmanager cluster across Kubernetes clusters with a single replica in each. hostAliases array Pods' hostAliases configuration hostAliases[] object HostAlias holds the mapping between IP and hostnames that will be injected as an entry in the pod's hosts file. image string Image if specified has precedence over baseImage, tag and sha combinations. Specifying the version is still necessary to ensure the Prometheus Operator knows what version of Alertmanager is being configured. imagePullPolicy string Image pull policy for the 'alertmanager', 'init-config-reloader' and 'config-reloader' containers. See https://kubernetes.io/docs/concepts/containers/images/#image-pull-policy for more details. 
imagePullSecrets array An optional list of references to secrets in the same namespace to use for pulling prometheus and alertmanager images from registries, see http://kubernetes.io/docs/user-guide/images#specifying-imagepullsecrets-on-a-pod imagePullSecrets[] object LocalObjectReference contains enough information to let you locate the referenced object inside the same namespace. initContainers array InitContainers allows adding initContainers to the pod definition. Those can be used to e.g. fetch secrets for injection into the Alertmanager configuration from external sources. Any errors during the execution of an initContainer will lead to a restart of the Pod. More info: https://kubernetes.io/docs/concepts/workloads/pods/init-containers/ InitContainers described here modify operator generated init containers if they share the same name and modifications are done via a strategic merge patch. The current init container name is: init-config-reloader. Overriding init containers is entirely outside the scope of what the maintainers will support and by doing so, you accept that this behaviour may break at any time without notice. initContainers[] object A single application container that you want to run within a pod. listenLocal boolean ListenLocal makes the Alertmanager server listen on loopback, so that it does not bind against the Pod IP. Note this is only for the Alertmanager UI, not the gossip communication. logFormat string Log format for Alertmanager to be configured with. logLevel string Log level for Alertmanager to be configured with. minReadySeconds integer Minimum number of seconds for which a newly created pod should be ready without any of its container crashing for it to be considered available. Defaults to 0 (pod will be considered available as soon as it is ready). This is an alpha field from kubernetes 1.22 until 1.24 which requires enabling the StatefulSetMinReadySeconds feature gate. nodeSelector object (string) Define which Nodes the Pods are scheduled on. paused boolean If set to true, all actions on the underlying managed objects are not going to be performed, except for delete actions. podMetadata object PodMetadata configures labels and annotations which are propagated to the Alertmanager pods. The following items are reserved and cannot be overridden: * "alertmanager" label, set to the name of the Alertmanager instance. * "app.kubernetes.io/instance" label, set to the name of the Alertmanager instance. * "app.kubernetes.io/managed-by" label, set to "prometheus-operator". * "app.kubernetes.io/name" label, set to "alertmanager". * "app.kubernetes.io/version" label, set to the Alertmanager version. * "kubectl.kubernetes.io/default-container" annotation, set to "alertmanager". portName string Port name used for the pods and governing service. Defaults to web. priorityClassName string Priority class assigned to the Pods. replicas integer Size is the expected size of the alertmanager cluster. The controller will eventually make the size of the running cluster equal to the expected size. resources object Define resources requests and limits for single Pods. retention string Time duration Alertmanager shall retain data for. Default is '120h', and must match the regular expression [0-9]+(ms|s|m|h) (milliseconds, seconds, minutes, hours). routePrefix string The route prefix Alertmanager registers HTTP handlers for.
This is useful, if using ExternalURL and a proxy is rewriting HTTP routes of a request, and the actual ExternalURL is still true, but the server serves requests under a different route prefix. For example for use with kubectl proxy . secrets array (string) Secrets is a list of Secrets in the same namespace as the Alertmanager object, which shall be mounted into the Alertmanager Pods. Each Secret is added to the StatefulSet definition as a volume named secret-<secret-name> . The Secrets are mounted into /etc/alertmanager/secrets/<secret-name> in the 'alertmanager' container. securityContext object SecurityContext holds pod-level security attributes and common container settings. This defaults to the default PodSecurityContext. serviceAccountName string ServiceAccountName is the name of the ServiceAccount to use to run the Prometheus Pods. sha string SHA of Alertmanager container image to be deployed. Defaults to the value of version . Similar to a tag, but the SHA explicitly deploys an immutable container image. Version and Tag are ignored if SHA is set. Deprecated: use 'image' instead. The image digest can be specified as part of the image URL. storage object Storage is the definition of how storage will be used by the Alertmanager instances. tag string Tag of Alertmanager container image to be deployed. Defaults to the value of version . Version is ignored if Tag is set. Deprecated: use 'image' instead. The image tag can be specified as part of the image URL. tolerations array If specified, the pod's tolerations. tolerations[] object The pod this Toleration is attached to tolerates any taint that matches the triple <key,value,effect> using the matching operator <operator>. topologySpreadConstraints array If specified, the pod's topology spread constraints. topologySpreadConstraints[] object TopologySpreadConstraint specifies how to spread matching pods among the given topology. version string Version the cluster should be on. volumeMounts array VolumeMounts allows configuration of additional VolumeMounts on the output StatefulSet definition. VolumeMounts specified will be appended to other VolumeMounts in the alertmanager container, that are generated as a result of StorageSpec objects. volumeMounts[] object VolumeMount describes a mounting of a Volume within a container. volumes array Volumes allows configuration of additional volumes on the output StatefulSet definition. Volumes specified will be appended to other volumes that are generated as a result of StorageSpec objects. volumes[] object Volume represents a named volume in a pod that may be accessed by any container in the pod. web object Defines the web command line flags when starting Alertmanager. 2.1.2. .spec.affinity Description If specified, the pod's scheduling constraints. Type object Property Type Description nodeAffinity object Describes node affinity scheduling rules for the pod. podAffinity object Describes pod affinity scheduling rules (e.g. co-locate this pod in the same node, zone, etc. as some other pod(s)). podAntiAffinity object Describes pod anti-affinity scheduling rules (e.g. avoid putting this pod in the same node, zone, etc. as some other pod(s)). 2.1.3. .spec.affinity.nodeAffinity Description Describes node affinity scheduling rules for the pod. 
Type object Property Type Description preferredDuringSchedulingIgnoredDuringExecution array The scheduler will prefer to schedule pods to nodes that satisfy the affinity expressions specified by this field, but it may choose a node that violates one or more of the expressions. The node that is most preferred is the one with the greatest sum of weights, i.e. for each node that meets all of the scheduling requirements (resource request, requiredDuringScheduling affinity expressions, etc.), compute a sum by iterating through the elements of this field and adding "weight" to the sum if the node matches the corresponding matchExpressions; the node(s) with the highest sum are the most preferred. preferredDuringSchedulingIgnoredDuringExecution[] object An empty preferred scheduling term matches all objects with implicit weight 0 (i.e. it's a no-op). A null preferred scheduling term matches no objects (i.e. is also a no-op). requiredDuringSchedulingIgnoredDuringExecution object If the affinity requirements specified by this field are not met at scheduling time, the pod will not be scheduled onto the node. If the affinity requirements specified by this field cease to be met at some point during pod execution (e.g. due to an update), the system may or may not try to eventually evict the pod from its node. 2.1.4. .spec.affinity.nodeAffinity.preferredDuringSchedulingIgnoredDuringExecution Description The scheduler will prefer to schedule pods to nodes that satisfy the affinity expressions specified by this field, but it may choose a node that violates one or more of the expressions. The node that is most preferred is the one with the greatest sum of weights, i.e. for each node that meets all of the scheduling requirements (resource request, requiredDuringScheduling affinity expressions, etc.), compute a sum by iterating through the elements of this field and adding "weight" to the sum if the node matches the corresponding matchExpressions; the node(s) with the highest sum are the most preferred. Type array 2.1.5. .spec.affinity.nodeAffinity.preferredDuringSchedulingIgnoredDuringExecution[] Description An empty preferred scheduling term matches all objects with implicit weight 0 (i.e. it's a no-op). A null preferred scheduling term matches no objects (i.e. is also a no-op). Type object Required preference weight Property Type Description preference object A node selector term, associated with the corresponding weight. weight integer Weight associated with matching the corresponding nodeSelectorTerm, in the range 1-100. 2.1.6. .spec.affinity.nodeAffinity.preferredDuringSchedulingIgnoredDuringExecution[].preference Description A node selector term, associated with the corresponding weight. Type object Property Type Description matchExpressions array A list of node selector requirements by node's labels. matchExpressions[] object A node selector requirement is a selector that contains values, a key, and an operator that relates the key and values. matchFields array A list of node selector requirements by node's fields. matchFields[] object A node selector requirement is a selector that contains values, a key, and an operator that relates the key and values. 2.1.7. .spec.affinity.nodeAffinity.preferredDuringSchedulingIgnoredDuringExecution[].preference.matchExpressions Description A list of node selector requirements by node's labels. Type array 2.1.8. 
.spec.affinity.nodeAffinity.preferredDuringSchedulingIgnoredDuringExecution[].preference.matchExpressions[] Description A node selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Type object Required key operator Property Type Description key string The label key that the selector applies to. operator string Represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists, DoesNotExist, Gt, and Lt. values array (string) An array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. If the operator is Gt or Lt, the values array must have a single element, which will be interpreted as an integer. This array is replaced during a strategic merge patch. 2.1.9. .spec.affinity.nodeAffinity.preferredDuringSchedulingIgnoredDuringExecution[].preference.matchFields Description A list of node selector requirements by node's fields. Type array 2.1.10. .spec.affinity.nodeAffinity.preferredDuringSchedulingIgnoredDuringExecution[].preference.matchFields[] Description A node selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Type object Required key operator Property Type Description key string The label key that the selector applies to. operator string Represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists, DoesNotExist, Gt, and Lt. values array (string) An array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. If the operator is Gt or Lt, the values array must have a single element, which will be interpreted as an integer. This array is replaced during a strategic merge patch. 2.1.11. .spec.affinity.nodeAffinity.requiredDuringSchedulingIgnoredDuringExecution Description If the affinity requirements specified by this field are not met at scheduling time, the pod will not be scheduled onto the node. If the affinity requirements specified by this field cease to be met at some point during pod execution (e.g. due to an update), the system may or may not try to eventually evict the pod from its node. Type object Required nodeSelectorTerms Property Type Description nodeSelectorTerms array Required. A list of node selector terms. The terms are ORed. nodeSelectorTerms[] object A null or empty node selector term matches no objects. The requirements of them are ANDed. The TopologySelectorTerm type implements a subset of the NodeSelectorTerm. 2.1.12. .spec.affinity.nodeAffinity.requiredDuringSchedulingIgnoredDuringExecution.nodeSelectorTerms Description Required. A list of node selector terms. The terms are ORed. Type array 2.1.13. .spec.affinity.nodeAffinity.requiredDuringSchedulingIgnoredDuringExecution.nodeSelectorTerms[] Description A null or empty node selector term matches no objects. The requirements of them are ANDed. The TopologySelectorTerm type implements a subset of the NodeSelectorTerm. Type object Property Type Description matchExpressions array A list of node selector requirements by node's labels. matchExpressions[] object A node selector requirement is a selector that contains values, a key, and an operator that relates the key and values. matchFields array A list of node selector requirements by node's fields.
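To illustrate the node affinity structure described in the preceding sections, the following sketch pins Alertmanager pods to amd64 nodes and, within the same term, to nodes carrying an additional label. The label keys and values are illustrative, not requirements of the operator:

spec:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:                        # terms are ORed
        - matchExpressions:                       # requirements within a term are ANDed
          - key: kubernetes.io/arch
            operator: In
            values:
            - amd64
          - key: node-role.kubernetes.io/infra    # illustrative label key
            operator: Exists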
matchFields[] object A node selector requirement is a selector that contains values, a key, and an operator that relates the key and values. 2.1.14. .spec.affinity.nodeAffinity.requiredDuringSchedulingIgnoredDuringExecution.nodeSelectorTerms[].matchExpressions Description A list of node selector requirements by node's labels. Type array 2.1.15. .spec.affinity.nodeAffinity.requiredDuringSchedulingIgnoredDuringExecution.nodeSelectorTerms[].matchExpressions[] Description A node selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Type object Required key operator Property Type Description key string The label key that the selector applies to. operator string Represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists, DoesNotExist, Gt, and Lt. values array (string) An array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. If the operator is Gt or Lt, the values array must have a single element, which will be interpreted as an integer. This array is replaced during a strategic merge patch. 2.1.16. .spec.affinity.nodeAffinity.requiredDuringSchedulingIgnoredDuringExecution.nodeSelectorTerms[].matchFields Description A list of node selector requirements by node's fields. Type array 2.1.17. .spec.affinity.nodeAffinity.requiredDuringSchedulingIgnoredDuringExecution.nodeSelectorTerms[].matchFields[] Description A node selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Type object Required key operator Property Type Description key string The label key that the selector applies to. operator string Represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists, DoesNotExist, Gt, and Lt. values array (string) An array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. If the operator is Gt or Lt, the values array must have a single element, which will be interpreted as an integer. This array is replaced during a strategic merge patch. 2.1.18. .spec.affinity.podAffinity Description Describes pod affinity scheduling rules (e.g. co-locate this pod in the same node, zone, etc. as some other pod(s)). Type object Property Type Description preferredDuringSchedulingIgnoredDuringExecution array The scheduler will prefer to schedule pods to nodes that satisfy the affinity expressions specified by this field, but it may choose a node that violates one or more of the expressions. The node that is most preferred is the one with the greatest sum of weights, i.e. for each node that meets all of the scheduling requirements (resource request, requiredDuringScheduling affinity expressions, etc.), compute a sum by iterating through the elements of this field and adding "weight" to the sum if the node has pods which matches the corresponding podAffinityTerm; the node(s) with the highest sum are the most preferred. preferredDuringSchedulingIgnoredDuringExecution[] object The weights of all of the matched WeightedPodAffinityTerm fields are added per-node to find the most preferred node(s) requiredDuringSchedulingIgnoredDuringExecution array If the affinity requirements specified by this field are not met at scheduling time, the pod will not be scheduled onto the node.
If the affinity requirements specified by this field cease to be met at some point during pod execution (e.g. due to a pod label update), the system may or may not try to eventually evict the pod from its node. When there are multiple elements, the lists of nodes corresponding to each podAffinityTerm are intersected, i.e. all terms must be satisfied. requiredDuringSchedulingIgnoredDuringExecution[] object Defines a set of pods (namely those matching the labelSelector relative to the given namespace(s)) that this pod should be co-located (affinity) or not co-located (anti-affinity) with, where co-located is defined as running on a node whose value of the label with key <topologyKey> matches that of any node on which a pod of the set of pods is running 2.1.19. .spec.affinity.podAffinity.preferredDuringSchedulingIgnoredDuringExecution Description The scheduler will prefer to schedule pods to nodes that satisfy the affinity expressions specified by this field, but it may choose a node that violates one or more of the expressions. The node that is most preferred is the one with the greatest sum of weights, i.e. for each node that meets all of the scheduling requirements (resource request, requiredDuringScheduling affinity expressions, etc.), compute a sum by iterating through the elements of this field and adding "weight" to the sum if the node has pods which matches the corresponding podAffinityTerm; the node(s) with the highest sum are the most preferred. Type array 2.1.20. .spec.affinity.podAffinity.preferredDuringSchedulingIgnoredDuringExecution[] Description The weights of all of the matched WeightedPodAffinityTerm fields are added per-node to find the most preferred node(s) Type object Required podAffinityTerm weight Property Type Description podAffinityTerm object Required. A pod affinity term, associated with the corresponding weight. weight integer weight associated with matching the corresponding podAffinityTerm, in the range 1-100. 2.1.21. .spec.affinity.podAffinity.preferredDuringSchedulingIgnoredDuringExecution[].podAffinityTerm Description Required. A pod affinity term, associated with the corresponding weight. Type object Required topologyKey Property Type Description labelSelector object A label query over a set of resources, in this case pods. If it's null, this PodAffinityTerm matches with no Pods. matchLabelKeys array (string) MatchLabelKeys is a set of pod label keys to select which pods will be taken into consideration. The keys are used to lookup values from the incoming pod labels, those key-value labels are merged with LabelSelector as key in (value) to select the group of existing pods which pods will be taken into consideration for the incoming pod's pod (anti) affinity. Keys that don't exist in the incoming pod labels will be ignored. The default value is empty. The same key is forbidden to exist in both MatchLabelKeys and LabelSelector. Also, MatchLabelKeys cannot be set when LabelSelector isn't set. This is an alpha field and requires enabling MatchLabelKeysInPodAffinity feature gate. mismatchLabelKeys array (string) MismatchLabelKeys is a set of pod label keys to select which pods will be taken into consideration. The keys are used to lookup values from the incoming pod labels, those key-value labels are merged with LabelSelector as key notin (value) to select the group of existing pods which pods will be taken into consideration for the incoming pod's pod (anti) affinity. Keys that don't exist in the incoming pod labels will be ignored. The default value is empty. 
The same key is forbidden to exist in both MismatchLabelKeys and LabelSelector. Also, MismatchLabelKeys cannot be set when LabelSelector isn't set. This is an alpha field and requires enabling MatchLabelKeysInPodAffinity feature gate. namespaceSelector object A label query over the set of namespaces that the term applies to. The term is applied to the union of the namespaces selected by this field and the ones listed in the namespaces field. null selector and null or empty namespaces list means "this pod's namespace". An empty selector ({}) matches all namespaces. namespaces array (string) namespaces specifies a static list of namespace names that the term applies to. The term is applied to the union of the namespaces listed in this field and the ones selected by namespaceSelector. null or empty namespaces list and null namespaceSelector means "this pod's namespace". topologyKey string This pod should be co-located (affinity) or not co-located (anti-affinity) with the pods matching the labelSelector in the specified namespaces, where co-located is defined as running on a node whose value of the label with key topologyKey matches that of any node on which any of the selected pods is running. Empty topologyKey is not allowed. 2.1.22. .spec.affinity.podAffinity.preferredDuringSchedulingIgnoredDuringExecution[].podAffinityTerm.labelSelector Description A label query over a set of resources, in this case pods. If it's null, this PodAffinityTerm matches with no Pods. Type object Property Type Description matchExpressions array matchExpressions is a list of label selector requirements. The requirements are ANDed. matchExpressions[] object A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. matchLabels object (string) matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed. 2.1.23. .spec.affinity.podAffinity.preferredDuringSchedulingIgnoredDuringExecution[].podAffinityTerm.labelSelector.matchExpressions Description matchExpressions is a list of label selector requirements. The requirements are ANDed. Type array 2.1.24. .spec.affinity.podAffinity.preferredDuringSchedulingIgnoredDuringExecution[].podAffinityTerm.labelSelector.matchExpressions[] Description A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Type object Required key operator Property Type Description key string key is the label key that the selector applies to. operator string operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. values array (string) values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch. 2.1.25. .spec.affinity.podAffinity.preferredDuringSchedulingIgnoredDuringExecution[].podAffinityTerm.namespaceSelector Description A label query over the set of namespaces that the term applies to. The term is applied to the union of the namespaces selected by this field and the ones listed in the namespaces field. null selector and null or empty namespaces list means "this pod's namespace". An empty selector ({}) matches all namespaces. 
Type object Property Type Description matchExpressions array matchExpressions is a list of label selector requirements. The requirements are ANDed. matchExpressions[] object A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. matchLabels object (string) matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed. 2.1.26. .spec.affinity.podAffinity.preferredDuringSchedulingIgnoredDuringExecution[].podAffinityTerm.namespaceSelector.matchExpressions Description matchExpressions is a list of label selector requirements. The requirements are ANDed. Type array 2.1.27. .spec.affinity.podAffinity.preferredDuringSchedulingIgnoredDuringExecution[].podAffinityTerm.namespaceSelector.matchExpressions[] Description A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Type object Required key operator Property Type Description key string key is the label key that the selector applies to. operator string operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. values array (string) values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch. 2.1.28. .spec.affinity.podAffinity.requiredDuringSchedulingIgnoredDuringExecution Description If the affinity requirements specified by this field are not met at scheduling time, the pod will not be scheduled onto the node. If the affinity requirements specified by this field cease to be met at some point during pod execution (e.g. due to a pod label update), the system may or may not try to eventually evict the pod from its node. When there are multiple elements, the lists of nodes corresponding to each podAffinityTerm are intersected, i.e. all terms must be satisfied. Type array 2.1.29. .spec.affinity.podAffinity.requiredDuringSchedulingIgnoredDuringExecution[] Description Defines a set of pods (namely those matching the labelSelector relative to the given namespace(s)) that this pod should be co-located (affinity) or not co-located (anti-affinity) with, where co-located is defined as running on a node whose value of the label with key <topologyKey> matches that of any node on which a pod of the set of pods is running Type object Required topologyKey Property Type Description labelSelector object A label query over a set of resources, in this case pods. If it's null, this PodAffinityTerm matches with no Pods. matchLabelKeys array (string) MatchLabelKeys is a set of pod label keys to select which pods will be taken into consideration. The keys are used to lookup values from the incoming pod labels, those key-value labels are merged with LabelSelector as key in (value) to select the group of existing pods which pods will be taken into consideration for the incoming pod's pod (anti) affinity. Keys that don't exist in the incoming pod labels will be ignored. The default value is empty. The same key is forbidden to exist in both MatchLabelKeys and LabelSelector. Also, MatchLabelKeys cannot be set when LabelSelector isn't set. This is an alpha field and requires enabling MatchLabelKeysInPodAffinity feature gate. 
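As an illustration of the preferred pod affinity terms described in the preceding sections, the following sketch asks the scheduler to favor nodes in the same zone as pods carrying an example label; the label key and value are placeholders:

spec:
  affinity:
    podAffinity:
      preferredDuringSchedulingIgnoredDuringExecution:
      - weight: 100                                   # range 1-100
        podAffinityTerm:
          labelSelector:
            matchLabels:
              app.kubernetes.io/name: example-app     # placeholder label
          topologyKey: topology.kubernetes.io/zone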
mismatchLabelKeys array (string) MismatchLabelKeys is a set of pod label keys to select which pods will be taken into consideration. The keys are used to lookup values from the incoming pod labels, those key-value labels are merged with LabelSelector as key notin (value) to select the group of existing pods which pods will be taken into consideration for the incoming pod's pod (anti) affinity. Keys that don't exist in the incoming pod labels will be ignored. The default value is empty. The same key is forbidden to exist in both MismatchLabelKeys and LabelSelector. Also, MismatchLabelKeys cannot be set when LabelSelector isn't set. This is an alpha field and requires enabling MatchLabelKeysInPodAffinity feature gate. namespaceSelector object A label query over the set of namespaces that the term applies to. The term is applied to the union of the namespaces selected by this field and the ones listed in the namespaces field. null selector and null or empty namespaces list means "this pod's namespace". An empty selector ({}) matches all namespaces. namespaces array (string) namespaces specifies a static list of namespace names that the term applies to. The term is applied to the union of the namespaces listed in this field and the ones selected by namespaceSelector. null or empty namespaces list and null namespaceSelector means "this pod's namespace". topologyKey string This pod should be co-located (affinity) or not co-located (anti-affinity) with the pods matching the labelSelector in the specified namespaces, where co-located is defined as running on a node whose value of the label with key topologyKey matches that of any node on which any of the selected pods is running. Empty topologyKey is not allowed. 2.1.30. .spec.affinity.podAffinity.requiredDuringSchedulingIgnoredDuringExecution[].labelSelector Description A label query over a set of resources, in this case pods. If it's null, this PodAffinityTerm matches with no Pods. Type object Property Type Description matchExpressions array matchExpressions is a list of label selector requirements. The requirements are ANDed. matchExpressions[] object A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. matchLabels object (string) matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed. 2.1.31. .spec.affinity.podAffinity.requiredDuringSchedulingIgnoredDuringExecution[].labelSelector.matchExpressions Description matchExpressions is a list of label selector requirements. The requirements are ANDed. Type array 2.1.32. .spec.affinity.podAffinity.requiredDuringSchedulingIgnoredDuringExecution[].labelSelector.matchExpressions[] Description A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Type object Required key operator Property Type Description key string key is the label key that the selector applies to. operator string operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. values array (string) values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch. 2.1.33. 
.spec.affinity.podAffinity.requiredDuringSchedulingIgnoredDuringExecution[].namespaceSelector Description A label query over the set of namespaces that the term applies to. The term is applied to the union of the namespaces selected by this field and the ones listed in the namespaces field. null selector and null or empty namespaces list means "this pod's namespace". An empty selector ({}) matches all namespaces. Type object Property Type Description matchExpressions array matchExpressions is a list of label selector requirements. The requirements are ANDed. matchExpressions[] object A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. matchLabels object (string) matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed. 2.1.34. .spec.affinity.podAffinity.requiredDuringSchedulingIgnoredDuringExecution[].namespaceSelector.matchExpressions Description matchExpressions is a list of label selector requirements. The requirements are ANDed. Type array 2.1.35. .spec.affinity.podAffinity.requiredDuringSchedulingIgnoredDuringExecution[].namespaceSelector.matchExpressions[] Description A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Type object Required key operator Property Type Description key string key is the label key that the selector applies to. operator string operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. values array (string) values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch. 2.1.36. .spec.affinity.podAntiAffinity Description Describes pod anti-affinity scheduling rules (e.g. avoid putting this pod in the same node, zone, etc. as some other pod(s)). Type object Property Type Description preferredDuringSchedulingIgnoredDuringExecution array The scheduler will prefer to schedule pods to nodes that satisfy the anti-affinity expressions specified by this field, but it may choose a node that violates one or more of the expressions. The node that is most preferred is the one with the greatest sum of weights, i.e. for each node that meets all of the scheduling requirements (resource request, requiredDuringScheduling anti-affinity expressions, etc.), compute a sum by iterating through the elements of this field and adding "weight" to the sum if the node has pods which matches the corresponding podAffinityTerm; the node(s) with the highest sum are the most preferred. preferredDuringSchedulingIgnoredDuringExecution[] object The weights of all of the matched WeightedPodAffinityTerm fields are added per-node to find the most preferred node(s) requiredDuringSchedulingIgnoredDuringExecution array If the anti-affinity requirements specified by this field are not met at scheduling time, the pod will not be scheduled onto the node. If the anti-affinity requirements specified by this field cease to be met at some point during pod execution (e.g. due to a pod label update), the system may or may not try to eventually evict the pod from its node. 
When there are multiple elements, the lists of nodes corresponding to each podAffinityTerm are intersected, i.e. all terms must be satisfied. requiredDuringSchedulingIgnoredDuringExecution[] object Defines a set of pods (namely those matching the labelSelector relative to the given namespace(s)) that this pod should be co-located (affinity) or not co-located (anti-affinity) with, where co-located is defined as running on a node whose value of the label with key <topologyKey> matches that of any node on which a pod of the set of pods is running 2.1.37. .spec.affinity.podAntiAffinity.preferredDuringSchedulingIgnoredDuringExecution Description The scheduler will prefer to schedule pods to nodes that satisfy the anti-affinity expressions specified by this field, but it may choose a node that violates one or more of the expressions. The node that is most preferred is the one with the greatest sum of weights, i.e. for each node that meets all of the scheduling requirements (resource request, requiredDuringScheduling anti-affinity expressions, etc.), compute a sum by iterating through the elements of this field and adding "weight" to the sum if the node has pods which matches the corresponding podAffinityTerm; the node(s) with the highest sum are the most preferred. Type array 2.1.38. .spec.affinity.podAntiAffinity.preferredDuringSchedulingIgnoredDuringExecution[] Description The weights of all of the matched WeightedPodAffinityTerm fields are added per-node to find the most preferred node(s) Type object Required podAffinityTerm weight Property Type Description podAffinityTerm object Required. A pod affinity term, associated with the corresponding weight. weight integer weight associated with matching the corresponding podAffinityTerm, in the range 1-100. 2.1.39. .spec.affinity.podAntiAffinity.preferredDuringSchedulingIgnoredDuringExecution[].podAffinityTerm Description Required. A pod affinity term, associated with the corresponding weight. Type object Required topologyKey Property Type Description labelSelector object A label query over a set of resources, in this case pods. If it's null, this PodAffinityTerm matches with no Pods. matchLabelKeys array (string) MatchLabelKeys is a set of pod label keys to select which pods will be taken into consideration. The keys are used to lookup values from the incoming pod labels, those key-value labels are merged with LabelSelector as key in (value) to select the group of existing pods which pods will be taken into consideration for the incoming pod's pod (anti) affinity. Keys that don't exist in the incoming pod labels will be ignored. The default value is empty. The same key is forbidden to exist in both MatchLabelKeys and LabelSelector. Also, MatchLabelKeys cannot be set when LabelSelector isn't set. This is an alpha field and requires enabling MatchLabelKeysInPodAffinity feature gate. mismatchLabelKeys array (string) MismatchLabelKeys is a set of pod label keys to select which pods will be taken into consideration. The keys are used to lookup values from the incoming pod labels, those key-value labels are merged with LabelSelector as key notin (value) to select the group of existing pods which pods will be taken into consideration for the incoming pod's pod (anti) affinity. Keys that don't exist in the incoming pod labels will be ignored. The default value is empty. The same key is forbidden to exist in both MismatchLabelKeys and LabelSelector. Also, MismatchLabelKeys cannot be set when LabelSelector isn't set. 
This is an alpha field and requires enabling MatchLabelKeysInPodAffinity feature gate. namespaceSelector object A label query over the set of namespaces that the term applies to. The term is applied to the union of the namespaces selected by this field and the ones listed in the namespaces field. null selector and null or empty namespaces list means "this pod's namespace". An empty selector ({}) matches all namespaces. namespaces array (string) namespaces specifies a static list of namespace names that the term applies to. The term is applied to the union of the namespaces listed in this field and the ones selected by namespaceSelector. null or empty namespaces list and null namespaceSelector means "this pod's namespace". topologyKey string This pod should be co-located (affinity) or not co-located (anti-affinity) with the pods matching the labelSelector in the specified namespaces, where co-located is defined as running on a node whose value of the label with key topologyKey matches that of any node on which any of the selected pods is running. Empty topologyKey is not allowed. 2.1.40. .spec.affinity.podAntiAffinity.preferredDuringSchedulingIgnoredDuringExecution[].podAffinityTerm.labelSelector Description A label query over a set of resources, in this case pods. If it's null, this PodAffinityTerm matches with no Pods. Type object Property Type Description matchExpressions array matchExpressions is a list of label selector requirements. The requirements are ANDed. matchExpressions[] object A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. matchLabels object (string) matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed. 2.1.41. .spec.affinity.podAntiAffinity.preferredDuringSchedulingIgnoredDuringExecution[].podAffinityTerm.labelSelector.matchExpressions Description matchExpressions is a list of label selector requirements. The requirements are ANDed. Type array 2.1.42. .spec.affinity.podAntiAffinity.preferredDuringSchedulingIgnoredDuringExecution[].podAffinityTerm.labelSelector.matchExpressions[] Description A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Type object Required key operator Property Type Description key string key is the label key that the selector applies to. operator string operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. values array (string) values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch. 2.1.43. .spec.affinity.podAntiAffinity.preferredDuringSchedulingIgnoredDuringExecution[].podAffinityTerm.namespaceSelector Description A label query over the set of namespaces that the term applies to. The term is applied to the union of the namespaces selected by this field and the ones listed in the namespaces field. null selector and null or empty namespaces list means "this pod's namespace". An empty selector ({}) matches all namespaces. Type object Property Type Description matchExpressions array matchExpressions is a list of label selector requirements. The requirements are ANDed. 
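Because the operator sets the app.kubernetes.io/name: alertmanager label on the pods (see the podMetadata description above), a common use of preferred pod anti-affinity is to spread Alertmanager replicas across nodes. The following is an illustrative sketch, not a required configuration:

spec:
  affinity:
    podAntiAffinity:
      preferredDuringSchedulingIgnoredDuringExecution:
      - weight: 100
        podAffinityTerm:
          labelSelector:
            matchLabels:
              app.kubernetes.io/name: alertmanager    # label applied by the operator
          topologyKey: kubernetes.io/hostname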
matchExpressions[] object A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. matchLabels object (string) matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed. 2.1.44. .spec.affinity.podAntiAffinity.preferredDuringSchedulingIgnoredDuringExecution[].podAffinityTerm.namespaceSelector.matchExpressions Description matchExpressions is a list of label selector requirements. The requirements are ANDed. Type array 2.1.45. .spec.affinity.podAntiAffinity.preferredDuringSchedulingIgnoredDuringExecution[].podAffinityTerm.namespaceSelector.matchExpressions[] Description A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Type object Required key operator Property Type Description key string key is the label key that the selector applies to. operator string operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. values array (string) values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch. 2.1.46. .spec.affinity.podAntiAffinity.requiredDuringSchedulingIgnoredDuringExecution Description If the anti-affinity requirements specified by this field are not met at scheduling time, the pod will not be scheduled onto the node. If the anti-affinity requirements specified by this field cease to be met at some point during pod execution (e.g. due to a pod label update), the system may or may not try to eventually evict the pod from its node. When there are multiple elements, the lists of nodes corresponding to each podAffinityTerm are intersected, i.e. all terms must be satisfied. Type array 2.1.47. .spec.affinity.podAntiAffinity.requiredDuringSchedulingIgnoredDuringExecution[] Description Defines a set of pods (namely those matching the labelSelector relative to the given namespace(s)) that this pod should be co-located (affinity) or not co-located (anti-affinity) with, where co-located is defined as running on a node whose value of the label with key <topologyKey> matches that of any node on which a pod of the set of pods is running Type object Required topologyKey Property Type Description labelSelector object A label query over a set of resources, in this case pods. If it's null, this PodAffinityTerm matches with no Pods. matchLabelKeys array (string) MatchLabelKeys is a set of pod label keys to select which pods will be taken into consideration. The keys are used to lookup values from the incoming pod labels, those key-value labels are merged with LabelSelector as key in (value) to select the group of existing pods which pods will be taken into consideration for the incoming pod's pod (anti) affinity. Keys that don't exist in the incoming pod labels will be ignored. The default value is empty. The same key is forbidden to exist in both MatchLabelKeys and LabelSelector. Also, MatchLabelKeys cannot be set when LabelSelector isn't set. This is an alpha field and requires enabling MatchLabelKeysInPodAffinity feature gate. 
mismatchLabelKeys array (string) MismatchLabelKeys is a set of pod label keys to select which pods will be taken into consideration. The keys are used to lookup values from the incoming pod labels, those key-value labels are merged with LabelSelector as key notin (value) to select the group of existing pods which pods will be taken into consideration for the incoming pod's pod (anti) affinity. Keys that don't exist in the incoming pod labels will be ignored. The default value is empty. The same key is forbidden to exist in both MismatchLabelKeys and LabelSelector. Also, MismatchLabelKeys cannot be set when LabelSelector isn't set. This is an alpha field and requires enabling MatchLabelKeysInPodAffinity feature gate. namespaceSelector object A label query over the set of namespaces that the term applies to. The term is applied to the union of the namespaces selected by this field and the ones listed in the namespaces field. null selector and null or empty namespaces list means "this pod's namespace". An empty selector ({}) matches all namespaces. namespaces array (string) namespaces specifies a static list of namespace names that the term applies to. The term is applied to the union of the namespaces listed in this field and the ones selected by namespaceSelector. null or empty namespaces list and null namespaceSelector means "this pod's namespace". topologyKey string This pod should be co-located (affinity) or not co-located (anti-affinity) with the pods matching the labelSelector in the specified namespaces, where co-located is defined as running on a node whose value of the label with key topologyKey matches that of any node on which any of the selected pods is running. Empty topologyKey is not allowed. 2.1.48. .spec.affinity.podAntiAffinity.requiredDuringSchedulingIgnoredDuringExecution[].labelSelector Description A label query over a set of resources, in this case pods. If it's null, this PodAffinityTerm matches with no Pods. Type object Property Type Description matchExpressions array matchExpressions is a list of label selector requirements. The requirements are ANDed. matchExpressions[] object A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. matchLabels object (string) matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed. 2.1.49. .spec.affinity.podAntiAffinity.requiredDuringSchedulingIgnoredDuringExecution[].labelSelector.matchExpressions Description matchExpressions is a list of label selector requirements. The requirements are ANDed. Type array 2.1.50. .spec.affinity.podAntiAffinity.requiredDuringSchedulingIgnoredDuringExecution[].labelSelector.matchExpressions[] Description A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Type object Required key operator Property Type Description key string key is the label key that the selector applies to. operator string operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. values array (string) values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch. 2.1.51. 
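The required (hard) variant described above uses the pod affinity term directly, without a weight. The following sketch keeps at most one Alertmanager replica per zone; whether a hard rule is appropriate depends on the cluster topology:

spec:
  affinity:
    podAntiAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchExpressions:
          - key: app.kubernetes.io/name
            operator: In
            values:
            - alertmanager
        topologyKey: topology.kubernetes.io/zone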
.spec.affinity.podAntiAffinity.requiredDuringSchedulingIgnoredDuringExecution[].namespaceSelector Description A label query over the set of namespaces that the term applies to. The term is applied to the union of the namespaces selected by this field and the ones listed in the namespaces field. null selector and null or empty namespaces list means "this pod's namespace". An empty selector ({}) matches all namespaces. Type object Property Type Description matchExpressions array matchExpressions is a list of label selector requirements. The requirements are ANDed. matchExpressions[] object A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. matchLabels object (string) matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed. 2.1.52. .spec.affinity.podAntiAffinity.requiredDuringSchedulingIgnoredDuringExecution[].namespaceSelector.matchExpressions Description matchExpressions is a list of label selector requirements. The requirements are ANDed. Type array 2.1.53. .spec.affinity.podAntiAffinity.requiredDuringSchedulingIgnoredDuringExecution[].namespaceSelector.matchExpressions[] Description A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Type object Required key operator Property Type Description key string key is the label key that the selector applies to. operator string operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. values array (string) values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch. 2.1.54. .spec.alertmanagerConfigMatcherStrategy Description The AlertmanagerConfigMatcherStrategy defines how AlertmanagerConfig objects match the alerts. In the future more options may be added. Type object Property Type Description type string If set to OnNamespace , the operator injects a label matcher matching the namespace of the AlertmanagerConfig object for all its routes and inhibition rules. None will not add any additional matchers other than the ones specified in the AlertmanagerConfig. Default is OnNamespace . 2.1.55. .spec.alertmanagerConfigNamespaceSelector Description Namespaces to be selected for AlertmanagerConfig discovery. If nil, only check own namespace. Type object Property Type Description matchExpressions array matchExpressions is a list of label selector requirements. The requirements are ANDed. matchExpressions[] object A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. matchLabels object (string) matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed. 2.1.56. .spec.alertmanagerConfigNamespaceSelector.matchExpressions Description matchExpressions is a list of label selector requirements. The requirements are ANDed. Type array 2.1.57. 
.spec.alertmanagerConfigNamespaceSelector.matchExpressions[] Description A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Type object Required key operator Property Type Description key string key is the label key that the selector applies to. operator string operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. values array (string) values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch. 2.1.58. .spec.alertmanagerConfigSelector Description AlertmanagerConfigs to be selected for to merge and configure Alertmanager with. Type object Property Type Description matchExpressions array matchExpressions is a list of label selector requirements. The requirements are ANDed. matchExpressions[] object A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. matchLabels object (string) matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed. 2.1.59. .spec.alertmanagerConfigSelector.matchExpressions Description matchExpressions is a list of label selector requirements. The requirements are ANDed. Type array 2.1.60. .spec.alertmanagerConfigSelector.matchExpressions[] Description A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Type object Required key operator Property Type Description key string key is the label key that the selector applies to. operator string operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. values array (string) values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch. 2.1.61. .spec.alertmanagerConfiguration Description alertmanagerConfiguration specifies the configuration of Alertmanager. If defined, it takes precedence over the configSecret field. This is an experimental feature , it may change in any upcoming release in a breaking way. Type object Property Type Description global object Defines the global parameters of the Alertmanager configuration. name string The name of the AlertmanagerConfig resource which is used to generate the Alertmanager configuration. It must be defined in the same namespace as the Alertmanager object. The operator will not enforce a namespace label for routes and inhibition rules. templates array Custom notification templates. templates[] object SecretOrConfigMap allows to specify data as a Secret or ConfigMap. Fields are mutually exclusive. 2.1.62. .spec.alertmanagerConfiguration.global Description Defines the global parameters of the Alertmanager configuration. Type object Property Type Description httpConfig object HTTP client configuration. opsGenieApiKey object The default OpsGenie API Key. opsGenieApiUrl object The default OpsGenie API URL. pagerdutyUrl string The default Pagerduty URL. 
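For illustration, the following sketch combines the alertmanagerConfiguration and global HTTP client fields described above. The AlertmanagerConfig name and proxy URL are placeholders:

spec:
  alertmanagerConfiguration:
    name: example-config                        # AlertmanagerConfig in the same namespace (placeholder)
    global:
      httpConfig:
        followRedirects: true
        proxyURL: http://proxy.example.com:3128  # placeholder proxy URL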
resolveTimeout string ResolveTimeout is the default value used by alertmanager if the alert does not include EndsAt, after this time passes it can declare the alert as resolved if it has not been updated. This has no impact on alerts from Prometheus, as they always include EndsAt. slackApiUrl object The default Slack API URL. smtp object Configures global SMTP parameters. 2.1.63. .spec.alertmanagerConfiguration.global.httpConfig Description HTTP client configuration. Type object Property Type Description authorization object Authorization header configuration for the client. This is mutually exclusive with BasicAuth and is only available starting from Alertmanager v0.22+. basicAuth object BasicAuth for the client. This is mutually exclusive with Authorization. If both are defined, BasicAuth takes precedence. bearerTokenSecret object The secret's key that contains the bearer token to be used by the client for authentication. The secret needs to be in the same namespace as the Alertmanager object and accessible by the Prometheus Operator. followRedirects boolean FollowRedirects specifies whether the client should follow HTTP 3xx redirects. oauth2 object OAuth2 client credentials used to fetch a token for the targets. proxyURL string Optional proxy URL. tlsConfig object TLS configuration for the client. 2.1.64. .spec.alertmanagerConfiguration.global.httpConfig.authorization Description Authorization header configuration for the client. This is mutually exclusive with BasicAuth and is only available starting from Alertmanager v0.22+. Type object Property Type Description credentials object Selects a key of a Secret in the namespace that contains the credentials for authentication. type string Defines the authentication type. The value is case-insensitive. "Basic" is not a supported value. Default: "Bearer" 2.1.65. .spec.alertmanagerConfiguration.global.httpConfig.authorization.credentials Description Selects a key of a Secret in the namespace that contains the credentials for authentication. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the Secret or its key must be defined 2.1.66. .spec.alertmanagerConfiguration.global.httpConfig.basicAuth Description BasicAuth for the client. This is mutually exclusive with Authorization. If both are defined, BasicAuth takes precedence. Type object Property Type Description password object password specifies a key of a Secret containing the password for authentication. username object username specifies a key of a Secret containing the username for authentication. 2.1.67. .spec.alertmanagerConfiguration.global.httpConfig.basicAuth.password Description password specifies a key of a Secret containing the password for authentication. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the Secret or its key must be defined 2.1.68. .spec.alertmanagerConfiguration.global.httpConfig.basicAuth.username Description username specifies a key of a Secret containing the username for authentication. 
Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the Secret or its key must be defined 2.1.69. .spec.alertmanagerConfiguration.global.httpConfig.bearerTokenSecret Description The secret's key that contains the bearer token to be used by the client for authentication. The secret needs to be in the same namespace as the Alertmanager object and accessible by the Prometheus Operator. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the Secret or its key must be defined 2.1.70. .spec.alertmanagerConfiguration.global.httpConfig.oauth2 Description OAuth2 client credentials used to fetch a token for the targets. Type object Required clientId clientSecret tokenUrl Property Type Description clientId object clientId specifies a key of a Secret or ConfigMap containing the OAuth2 client's ID. clientSecret object clientSecret specifies a key of a Secret containing the OAuth2 client's secret. endpointParams object (string) endpointParams configures the HTTP parameters to append to the token URL. scopes array (string) scopes defines the OAuth2 scopes used for the token request. tokenUrl string tokenURL configures the URL to fetch the token from. 2.1.71. .spec.alertmanagerConfiguration.global.httpConfig.oauth2.clientId Description clientId specifies a key of a Secret or ConfigMap containing the OAuth2 client's ID. Type object Property Type Description configMap object ConfigMap containing data to use for the targets. secret object Secret containing data to use for the targets. 2.1.72. .spec.alertmanagerConfiguration.global.httpConfig.oauth2.clientId.configMap Description ConfigMap containing data to use for the targets. Type object Required key Property Type Description key string The key to select. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the ConfigMap or its key must be defined 2.1.73. .spec.alertmanagerConfiguration.global.httpConfig.oauth2.clientId.secret Description Secret containing data to use for the targets. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the Secret or its key must be defined 2.1.74. .spec.alertmanagerConfiguration.global.httpConfig.oauth2.clientSecret Description clientSecret specifies a key of a Secret containing the OAuth2 client's secret. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? 
optional boolean Specify whether the Secret or its key must be defined 2.1.75. .spec.alertmanagerConfiguration.global.httpConfig.tlsConfig Description TLS configuration for the client. Type object Property Type Description ca object Certificate authority used when verifying server certificates. cert object Client certificate to present when doing client-authentication. insecureSkipVerify boolean Disable target certificate validation. keySecret object Secret containing the client key file for the targets. serverName string Used to verify the hostname for the targets. 2.1.76. .spec.alertmanagerConfiguration.global.httpConfig.tlsConfig.ca Description Certificate authority used when verifying server certificates. Type object Property Type Description configMap object ConfigMap containing data to use for the targets. secret object Secret containing data to use for the targets. 2.1.77. .spec.alertmanagerConfiguration.global.httpConfig.tlsConfig.ca.configMap Description ConfigMap containing data to use for the targets. Type object Required key Property Type Description key string The key to select. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the ConfigMap or its key must be defined 2.1.78. .spec.alertmanagerConfiguration.global.httpConfig.tlsConfig.ca.secret Description Secret containing data to use for the targets. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the Secret or its key must be defined 2.1.79. .spec.alertmanagerConfiguration.global.httpConfig.tlsConfig.cert Description Client certificate to present when doing client-authentication. Type object Property Type Description configMap object ConfigMap containing data to use for the targets. secret object Secret containing data to use for the targets. 2.1.80. .spec.alertmanagerConfiguration.global.httpConfig.tlsConfig.cert.configMap Description ConfigMap containing data to use for the targets. Type object Required key Property Type Description key string The key to select. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the ConfigMap or its key must be defined 2.1.81. .spec.alertmanagerConfiguration.global.httpConfig.tlsConfig.cert.secret Description Secret containing data to use for the targets. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the Secret or its key must be defined 2.1.82. .spec.alertmanagerConfiguration.global.httpConfig.tlsConfig.keySecret Description Secret containing the client key file for the targets. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. 
More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the Secret or its key must be defined 2.1.83. .spec.alertmanagerConfiguration.global.opsGenieApiKey Description The default OpsGenie API Key. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the Secret or its key must be defined 2.1.84. .spec.alertmanagerConfiguration.global.opsGenieApiUrl Description The default OpsGenie API URL. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the Secret or its key must be defined 2.1.85. .spec.alertmanagerConfiguration.global.slackApiUrl Description The default Slack API URL. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the Secret or its key must be defined 2.1.86. .spec.alertmanagerConfiguration.global.smtp Description Configures global SMTP parameters. Type object Property Type Description authIdentity string SMTP Auth using PLAIN authPassword object SMTP Auth using LOGIN and PLAIN. authSecret object SMTP Auth using CRAM-MD5. authUsername string SMTP Auth using CRAM-MD5, LOGIN and PLAIN. If empty, Alertmanager doesn't authenticate to the SMTP server. from string The default SMTP From header field. hello string The default hostname to identify to the SMTP server. requireTLS boolean The default SMTP TLS requirement. Note that Go does not support unencrypted connections to remote SMTP endpoints. smartHost object The default SMTP smarthost used for sending emails. 2.1.87. .spec.alertmanagerConfiguration.global.smtp.authPassword Description SMTP Auth using LOGIN and PLAIN. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the Secret or its key must be defined 2.1.88. .spec.alertmanagerConfiguration.global.smtp.authSecret Description SMTP Auth using CRAM-MD5. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the Secret or its key must be defined
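For orientation, the following is a minimal sketch of how the global SMTP parameters described in sections 2.1.86 through 2.1.89 might be set on an Alertmanager resource. The namespace, the AlertmanagerConfig name, the Secret name, and the server values are hypothetical placeholders, not defaults defined by this API.

apiVersion: monitoring.coreos.com/v1
kind: Alertmanager
metadata:
  name: example
  namespace: example-monitoring   # hypothetical namespace
spec:
  alertmanagerConfiguration:
    name: global-config           # hypothetical AlertmanagerConfig in the same namespace
    global:
      resolveTimeout: 5m
      smtp:
        from: alertmanager@example.com      # hypothetical sender address
        smartHost:
          host: smtp.example.com            # hypothetical SMTP server
          port: "587"
        authUsername: alertmanager
        authPassword:
          name: smtp-auth                   # hypothetical Secret holding the SMTP password
          key: password
        requireTLS: true

Here authPassword follows the Secret key selector shape shown in section 2.1.87, and smartHost provides the required host and port strings described in section 2.1.89.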
2.1.89. .spec.alertmanagerConfiguration.global.smtp.smartHost Description The default SMTP smarthost used for sending emails. Type object Required host port Property Type Description host string Defines the host's address; it can be a DNS name or a literal IP address. port string Defines the host's port; it can be a literal port number or a port name. 2.1.90. .spec.alertmanagerConfiguration.templates Description Custom notification templates. Type array 2.1.91. .spec.alertmanagerConfiguration.templates[] Description SecretOrConfigMap allows specifying data as a Secret or ConfigMap. Fields are mutually exclusive. Type object Property Type Description configMap object ConfigMap containing data to use for the targets. secret object Secret containing data to use for the targets. 2.1.92. .spec.alertmanagerConfiguration.templates[].configMap Description ConfigMap containing data to use for the targets. Type object Required key Property Type Description key string The key to select. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the ConfigMap or its key must be defined 2.1.93. .spec.alertmanagerConfiguration.templates[].secret Description Secret containing data to use for the targets. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the Secret or its key must be defined 2.1.94. .spec.containers Description Containers allows injecting additional containers. This is meant to allow adding an authentication proxy to an Alertmanager pod. Containers described here modify an operator generated container if they share the same name and modifications are done via a strategic merge patch. The current container names are: alertmanager and config-reloader. Overriding containers is entirely outside the scope of what the maintainers will support and by doing so, you accept that this behaviour may break at any time without notice. Type array 2.1.95. .spec.containers[] Description A single application container that you want to run within a pod. Type object Required name Property Type Description args array (string) Arguments to the entrypoint. The container image's CMD is used if this is not provided. Variable references USD(VAR_NAME) are expanded using the container's environment. If a variable cannot be resolved, the reference in the input string will be unchanged. Double USDUSD are reduced to a single USD, which allows for escaping the USD(VAR_NAME) syntax: i.e. "USDUSD(VAR_NAME)" will produce the string literal "USD(VAR_NAME)". Escaped references will never be expanded, regardless of whether the variable exists or not. Cannot be updated. More info: https://kubernetes.io/docs/tasks/inject-data-application/define-command-argument-container/#running-a-command-in-a-shell command array (string) Entrypoint array. Not executed within a shell. The container image's ENTRYPOINT is used if this is not provided. Variable references USD(VAR_NAME) are expanded using the container's environment. If a variable cannot be resolved, the reference in the input string will be unchanged. Double USDUSD are reduced to a single USD, which allows for escaping the USD(VAR_NAME) syntax: i.e. "USDUSD(VAR_NAME)" will produce the string literal "USD(VAR_NAME)".
Escaped references will never be expanded, regardless of whether the variable exists or not. Cannot be updated. More info: https://kubernetes.io/docs/tasks/inject-data-application/define-command-argument-container/#running-a-command-in-a-shell env array List of environment variables to set in the container. Cannot be updated. env[] object EnvVar represents an environment variable present in a Container. envFrom array List of sources to populate environment variables in the container. The keys defined within a source must be a C_IDENTIFIER. All invalid keys will be reported as an event when the container is starting. When a key exists in multiple sources, the value associated with the last source will take precedence. Values defined by an Env with a duplicate key will take precedence. Cannot be updated. envFrom[] object EnvFromSource represents the source of a set of ConfigMaps image string Container image name. More info: https://kubernetes.io/docs/concepts/containers/images This field is optional to allow higher level config management to default or override container images in workload controllers like Deployments and StatefulSets. imagePullPolicy string Image pull policy. One of Always, Never, IfNotPresent. Defaults to Always if :latest tag is specified, or IfNotPresent otherwise. Cannot be updated. More info: https://kubernetes.io/docs/concepts/containers/images#updating-images lifecycle object Actions that the management system should take in response to container lifecycle events. Cannot be updated. livenessProbe object Periodic probe of container liveness. Container will be restarted if the probe fails. Cannot be updated. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes name string Name of the container specified as a DNS_LABEL. Each container in a pod must have a unique name (DNS_LABEL). Cannot be updated. ports array List of ports to expose from the container. Not specifying a port here DOES NOT prevent that port from being exposed. Any port which is listening on the default "0.0.0.0" address inside a container will be accessible from the network. Modifying this array with strategic merge patch may corrupt the data. For more information See https://github.com/kubernetes/kubernetes/issues/108255 . Cannot be updated. ports[] object ContainerPort represents a network port in a single container. readinessProbe object Periodic probe of container service readiness. Container will be removed from service endpoints if the probe fails. Cannot be updated. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes resizePolicy array Resources resize policy for the container. resizePolicy[] object ContainerResizePolicy represents resource resize policy for the container. resources object Compute Resources required by this container. Cannot be updated. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ restartPolicy string RestartPolicy defines the restart behavior of individual containers in a pod. This field may only be set for init containers, and the only allowed value is "Always". For non-init containers or when this field is not specified, the restart behavior is defined by the Pod's restart policy and the container type. Setting the RestartPolicy as "Always" for the init container will have the following effect: this init container will be continually restarted on exit until all regular containers have terminated. 
Once all regular containers have completed, all init containers with restartPolicy "Always" will be shut down. This lifecycle differs from normal init containers and is often referred to as a "sidecar" container. Although this init container still starts in the init container sequence, it does not wait for the container to complete before proceeding to the init container. Instead, the init container starts immediately after this init container is started, or after any startupProbe has successfully completed. securityContext object SecurityContext defines the security options the container should be run with. If set, the fields of SecurityContext override the equivalent fields of PodSecurityContext. More info: https://kubernetes.io/docs/tasks/configure-pod-container/security-context/ startupProbe object StartupProbe indicates that the Pod has successfully initialized. If specified, no other probes are executed until this completes successfully. If this probe fails, the Pod will be restarted, just as if the livenessProbe failed. This can be used to provide different probe parameters at the beginning of a Pod's lifecycle, when it might take a long time to load data or warm a cache, than during steady-state operation. This cannot be updated. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes stdin boolean Whether this container should allocate a buffer for stdin in the container runtime. If this is not set, reads from stdin in the container will always result in EOF. Default is false. stdinOnce boolean Whether the container runtime should close the stdin channel after it has been opened by a single attach. When stdin is true the stdin stream will remain open across multiple attach sessions. If stdinOnce is set to true, stdin is opened on container start, is empty until the first client attaches to stdin, and then remains open and accepts data until the client disconnects, at which time stdin is closed and remains closed until the container is restarted. If this flag is false, a container processes that reads from stdin will never receive an EOF. Default is false terminationMessagePath string Optional: Path at which the file to which the container's termination message will be written is mounted into the container's filesystem. Message written is intended to be brief final status, such as an assertion failure message. Will be truncated by the node if greater than 4096 bytes. The total message length across all containers will be limited to 12kb. Defaults to /dev/termination-log. Cannot be updated. terminationMessagePolicy string Indicate how the termination message should be populated. File will use the contents of terminationMessagePath to populate the container status message on both success and failure. FallbackToLogsOnError will use the last chunk of container log output if the termination message file is empty and the container exited with an error. The log output is limited to 2048 bytes or 80 lines, whichever is smaller. Defaults to File. Cannot be updated. tty boolean Whether this container should allocate a TTY for itself, also requires 'stdin' to be true. Default is false. volumeDevices array volumeDevices is the list of block devices to be used by the container. volumeDevices[] object volumeDevice describes a mapping of a raw block device within a container. volumeMounts array Pod volumes to mount into the container's filesystem. Cannot be updated. volumeMounts[] object VolumeMount describes a mounting of a Volume within a container. 
workingDir string Container's working directory. If not specified, the container runtime's default will be used, which might be configured in the container image. Cannot be updated. 2.1.96. .spec.containers[].env Description List of environment variables to set in the container. Cannot be updated. Type array 2.1.97. .spec.containers[].env[] Description EnvVar represents an environment variable present in a Container. Type object Required name Property Type Description name string Name of the environment variable. Must be a C_IDENTIFIER. value string Variable references USD(VAR_NAME) are expanded using the previously defined environment variables in the container and any service environment variables. If a variable cannot be resolved, the reference in the input string will be unchanged. Double USDUSD are reduced to a single USD, which allows for escaping the USD(VAR_NAME) syntax: i.e. "USDUSD(VAR_NAME)" will produce the string literal "USD(VAR_NAME)". Escaped references will never be expanded, regardless of whether the variable exists or not. Defaults to "". valueFrom object Source for the environment variable's value. Cannot be used if value is not empty. 2.1.98. .spec.containers[].env[].valueFrom Description Source for the environment variable's value. Cannot be used if value is not empty. Type object Property Type Description configMapKeyRef object Selects a key of a ConfigMap. fieldRef object Selects a field of the pod: supports metadata.name, metadata.namespace, metadata.labels['<KEY>'] , metadata.annotations['<KEY>'] , spec.nodeName, spec.serviceAccountName, status.hostIP, status.podIP, status.podIPs. resourceFieldRef object Selects a resource of the container: only resources limits and requests (limits.cpu, limits.memory, limits.ephemeral-storage, requests.cpu, requests.memory and requests.ephemeral-storage) are currently supported. secretKeyRef object Selects a key of a secret in the pod's namespace. 2.1.99. .spec.containers[].env[].valueFrom.configMapKeyRef Description Selects a key of a ConfigMap. Type object Required key Property Type Description key string The key to select. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the ConfigMap or its key must be defined 2.1.100. .spec.containers[].env[].valueFrom.fieldRef Description Selects a field of the pod: supports metadata.name, metadata.namespace, metadata.labels['<KEY>'] , metadata.annotations['<KEY>'] , spec.nodeName, spec.serviceAccountName, status.hostIP, status.podIP, status.podIPs. Type object Required fieldPath Property Type Description apiVersion string Version of the schema the FieldPath is written in terms of, defaults to "v1". fieldPath string Path of the field to select in the specified API version. 2.1.101. .spec.containers[].env[].valueFrom.resourceFieldRef Description Selects a resource of the container: only resources limits and requests (limits.cpu, limits.memory, limits.ephemeral-storage, requests.cpu, requests.memory and requests.ephemeral-storage) are currently supported. Type object Required resource Property Type Description containerName string Container name: required for volumes, optional for env vars divisor integer-or-string Specifies the output format of the exposed resources, defaults to "1" resource string Required: resource to select
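As an illustration of the env and valueFrom selectors covered in sections 2.1.96 through 2.1.102, here is a hedged sketch of an additional container that reads one variable from the pod's own metadata and another from a Secret. The container name, image, Secret name, and variable names are hypothetical.

apiVersion: monitoring.coreos.com/v1
kind: Alertmanager
metadata:
  name: example
spec:
  containers:
  - name: oauth-proxy                           # hypothetical injected sidecar
    image: quay.io/example/oauth-proxy:latest   # hypothetical image
    env:
    - name: POD_NAMESPACE
      valueFrom:
        fieldRef:
          fieldPath: metadata.namespace         # pod field selector, see section 2.1.100
    - name: PROXY_COOKIE_SECRET
      valueFrom:
        secretKeyRef:                           # Secret key selector, see section 2.1.102
          name: proxy-cookie                    # hypothetical Secret
          key: secret
          optional: false

Because containers entries are merged with the operator-generated pod through a strategic merge patch (section 2.1.94), a name that does not collide with alertmanager or config-reloader adds a new sidecar rather than overriding an existing container.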
2.1.102. .spec.containers[].env[].valueFrom.secretKeyRef Description Selects a key of a secret in the pod's namespace. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the Secret or its key must be defined 2.1.103. .spec.containers[].envFrom Description List of sources to populate environment variables in the container. The keys defined within a source must be a C_IDENTIFIER. All invalid keys will be reported as an event when the container is starting. When a key exists in multiple sources, the value associated with the last source will take precedence. Values defined by an Env with a duplicate key will take precedence. Cannot be updated. Type array 2.1.104. .spec.containers[].envFrom[] Description EnvFromSource represents the source of a set of ConfigMaps. Type object Property Type Description configMapRef object The ConfigMap to select from prefix string An optional identifier to prepend to each key in the ConfigMap. Must be a C_IDENTIFIER. secretRef object The Secret to select from 2.1.105. .spec.containers[].envFrom[].configMapRef Description The ConfigMap to select from Type object Property Type Description name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the ConfigMap must be defined 2.1.106. .spec.containers[].envFrom[].secretRef Description The Secret to select from Type object Property Type Description name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the Secret must be defined 2.1.107. .spec.containers[].lifecycle Description Actions that the management system should take in response to container lifecycle events. Cannot be updated. Type object Property Type Description postStart object PostStart is called immediately after a container is created. If the handler fails, the container is terminated and restarted according to its restart policy. Other management of the container blocks until the hook completes. More info: https://kubernetes.io/docs/concepts/containers/container-lifecycle-hooks/#container-hooks preStop object PreStop is called immediately before a container is terminated due to an API request or management event such as liveness/startup probe failure, preemption, resource contention, etc. The handler is not called if the container crashes or exits. The Pod's termination grace period countdown begins before the PreStop hook is executed. Regardless of the outcome of the handler, the container will eventually terminate within the Pod's termination grace period (unless delayed by finalizers). Other management of the container blocks until the hook completes or until the termination grace period is reached. More info: https://kubernetes.io/docs/concepts/containers/container-lifecycle-hooks/#container-hooks 2.1.108. .spec.containers[].lifecycle.postStart Description PostStart is called immediately after a container is created. If the handler fails, the container is terminated and restarted according to its restart policy.
Other management of the container blocks until the hook completes. More info: https://kubernetes.io/docs/concepts/containers/container-lifecycle-hooks/#container-hooks Type object Property Type Description exec object Exec specifies the action to take. httpGet object HTTPGet specifies the http request to perform. sleep object Sleep represents the duration that the container should sleep before being terminated. tcpSocket object Deprecated. TCPSocket is NOT supported as a LifecycleHandler and kept for the backward compatibility. There are no validation of this field and lifecycle hooks will fail in runtime when tcp handler is specified. 2.1.109. .spec.containers[].lifecycle.postStart.exec Description Exec specifies the action to take. Type object Property Type Description command array (string) Command is the command line to execute inside the container, the working directory for the command is root ('/') in the container's filesystem. The command is simply exec'd, it is not run inside a shell, so traditional shell instructions ('|', etc) won't work. To use a shell, you need to explicitly call out to that shell. Exit status of 0 is treated as live/healthy and non-zero is unhealthy. 2.1.110. .spec.containers[].lifecycle.postStart.httpGet Description HTTPGet specifies the http request to perform. Type object Required port Property Type Description host string Host name to connect to, defaults to the pod IP. You probably want to set "Host" in httpHeaders instead. httpHeaders array Custom headers to set in the request. HTTP allows repeated headers. httpHeaders[] object HTTPHeader describes a custom header to be used in HTTP probes path string Path to access on the HTTP server. port integer-or-string Name or number of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. scheme string Scheme to use for connecting to the host. Defaults to HTTP. 2.1.111. .spec.containers[].lifecycle.postStart.httpGet.httpHeaders Description Custom headers to set in the request. HTTP allows repeated headers. Type array 2.1.112. .spec.containers[].lifecycle.postStart.httpGet.httpHeaders[] Description HTTPHeader describes a custom header to be used in HTTP probes Type object Required name value Property Type Description name string The header field name. This will be canonicalized upon output, so case-variant names will be understood as the same header. value string The header field value 2.1.113. .spec.containers[].lifecycle.postStart.sleep Description Sleep represents the duration that the container should sleep before being terminated. Type object Required seconds Property Type Description seconds integer Seconds is the number of seconds to sleep. 2.1.114. .spec.containers[].lifecycle.postStart.tcpSocket Description Deprecated. TCPSocket is NOT supported as a LifecycleHandler and kept for the backward compatibility. There are no validation of this field and lifecycle hooks will fail in runtime when tcp handler is specified. Type object Required port Property Type Description host string Optional: Host name to connect to, defaults to the pod IP. port integer-or-string Number or name of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. 2.1.115. .spec.containers[].lifecycle.preStop Description PreStop is called immediately before a container is terminated due to an API request or management event such as liveness/startup probe failure, preemption, resource contention, etc. 
The handler is not called if the container crashes or exits. The Pod's termination grace period countdown begins before the PreStop hook is executed. Regardless of the outcome of the handler, the container will eventually terminate within the Pod's termination grace period (unless delayed by finalizers). Other management of the container blocks until the hook completes or until the termination grace period is reached. More info: https://kubernetes.io/docs/concepts/containers/container-lifecycle-hooks/#container-hooks Type object Property Type Description exec object Exec specifies the action to take. httpGet object HTTPGet specifies the http request to perform. sleep object Sleep represents the duration that the container should sleep before being terminated. tcpSocket object Deprecated. TCPSocket is NOT supported as a LifecycleHandler and kept for the backward compatibility. There are no validation of this field and lifecycle hooks will fail in runtime when tcp handler is specified. 2.1.116. .spec.containers[].lifecycle.preStop.exec Description Exec specifies the action to take. Type object Property Type Description command array (string) Command is the command line to execute inside the container, the working directory for the command is root ('/') in the container's filesystem. The command is simply exec'd, it is not run inside a shell, so traditional shell instructions ('|', etc) won't work. To use a shell, you need to explicitly call out to that shell. Exit status of 0 is treated as live/healthy and non-zero is unhealthy. 2.1.117. .spec.containers[].lifecycle.preStop.httpGet Description HTTPGet specifies the http request to perform. Type object Required port Property Type Description host string Host name to connect to, defaults to the pod IP. You probably want to set "Host" in httpHeaders instead. httpHeaders array Custom headers to set in the request. HTTP allows repeated headers. httpHeaders[] object HTTPHeader describes a custom header to be used in HTTP probes path string Path to access on the HTTP server. port integer-or-string Name or number of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. scheme string Scheme to use for connecting to the host. Defaults to HTTP. 2.1.118. .spec.containers[].lifecycle.preStop.httpGet.httpHeaders Description Custom headers to set in the request. HTTP allows repeated headers. Type array 2.1.119. .spec.containers[].lifecycle.preStop.httpGet.httpHeaders[] Description HTTPHeader describes a custom header to be used in HTTP probes Type object Required name value Property Type Description name string The header field name. This will be canonicalized upon output, so case-variant names will be understood as the same header. value string The header field value 2.1.120. .spec.containers[].lifecycle.preStop.sleep Description Sleep represents the duration that the container should sleep before being terminated. Type object Required seconds Property Type Description seconds integer Seconds is the number of seconds to sleep. 2.1.121. .spec.containers[].lifecycle.preStop.tcpSocket Description Deprecated. TCPSocket is NOT supported as a LifecycleHandler and kept for the backward compatibility. There are no validation of this field and lifecycle hooks will fail in runtime when tcp handler is specified. Type object Required port Property Type Description host string Optional: Host name to connect to, defaults to the pod IP. port integer-or-string Number or name of the port to access on the container. 
Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. 2.1.122. .spec.containers[].livenessProbe Description Periodic probe of container liveness. Container will be restarted if the probe fails. Cannot be updated. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes Type object Property Type Description exec object Exec specifies the action to take. failureThreshold integer Minimum consecutive failures for the probe to be considered failed after having succeeded. Defaults to 3. Minimum value is 1. grpc object GRPC specifies an action involving a GRPC port. httpGet object HTTPGet specifies the http request to perform. initialDelaySeconds integer Number of seconds after the container has started before liveness probes are initiated. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes periodSeconds integer How often (in seconds) to perform the probe. Default to 10 seconds. Minimum value is 1. successThreshold integer Minimum consecutive successes for the probe to be considered successful after having failed. Defaults to 1. Must be 1 for liveness and startup. Minimum value is 1. tcpSocket object TCPSocket specifies an action involving a TCP port. terminationGracePeriodSeconds integer Optional duration in seconds the pod needs to terminate gracefully upon probe failure. The grace period is the duration in seconds after the processes running in the pod are sent a termination signal and the time when the processes are forcibly halted with a kill signal. Set this value longer than the expected cleanup time for your process. If this value is nil, the pod's terminationGracePeriodSeconds will be used. Otherwise, this value overrides the value provided by the pod spec. Value must be non-negative integer. The value zero indicates stop immediately via the kill signal (no opportunity to shut down). This is a beta field and requires enabling ProbeTerminationGracePeriod feature gate. Minimum value is 1. spec.terminationGracePeriodSeconds is used if unset. timeoutSeconds integer Number of seconds after which the probe times out. Defaults to 1 second. Minimum value is 1. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes 2.1.123. .spec.containers[].livenessProbe.exec Description Exec specifies the action to take. Type object Property Type Description command array (string) Command is the command line to execute inside the container, the working directory for the command is root ('/') in the container's filesystem. The command is simply exec'd, it is not run inside a shell, so traditional shell instructions ('|', etc) won't work. To use a shell, you need to explicitly call out to that shell. Exit status of 0 is treated as live/healthy and non-zero is unhealthy. 2.1.124. .spec.containers[].livenessProbe.grpc Description GRPC specifies an action involving a GRPC port. Type object Required port Property Type Description port integer Port number of the gRPC service. Number must be in the range 1 to 65535. service string Service is the name of the service to place in the gRPC HealthCheckRequest (see https://github.com/grpc/grpc/blob/master/doc/health-checking.md ). If this is not specified, the default behavior is defined by gRPC. 2.1.125. .spec.containers[].livenessProbe.httpGet Description HTTPGet specifies the http request to perform. Type object Required port Property Type Description host string Host name to connect to, defaults to the pod IP. 
You probably want to set "Host" in httpHeaders instead. httpHeaders array Custom headers to set in the request. HTTP allows repeated headers. httpHeaders[] object HTTPHeader describes a custom header to be used in HTTP probes path string Path to access on the HTTP server. port integer-or-string Name or number of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. scheme string Scheme to use for connecting to the host. Defaults to HTTP. 2.1.126. .spec.containers[].livenessProbe.httpGet.httpHeaders Description Custom headers to set in the request. HTTP allows repeated headers. Type array 2.1.127. .spec.containers[].livenessProbe.httpGet.httpHeaders[] Description HTTPHeader describes a custom header to be used in HTTP probes Type object Required name value Property Type Description name string The header field name. This will be canonicalized upon output, so case-variant names will be understood as the same header. value string The header field value 2.1.128. .spec.containers[].livenessProbe.tcpSocket Description TCPSocket specifies an action involving a TCP port. Type object Required port Property Type Description host string Optional: Host name to connect to, defaults to the pod IP. port integer-or-string Number or name of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. 2.1.129. .spec.containers[].ports Description List of ports to expose from the container. Not specifying a port here DOES NOT prevent that port from being exposed. Any port which is listening on the default "0.0.0.0" address inside a container will be accessible from the network. Modifying this array with strategic merge patch may corrupt the data. For more information See https://github.com/kubernetes/kubernetes/issues/108255 . Cannot be updated. Type array 2.1.130. .spec.containers[].ports[] Description ContainerPort represents a network port in a single container. Type object Required containerPort Property Type Description containerPort integer Number of port to expose on the pod's IP address. This must be a valid port number, 0 < x < 65536. hostIP string What host IP to bind the external port to. hostPort integer Number of port to expose on the host. If specified, this must be a valid port number, 0 < x < 65536. If HostNetwork is specified, this must match ContainerPort. Most containers do not need this. name string If specified, this must be an IANA_SVC_NAME and unique within the pod. Each named port in a pod must have a unique name. Name for the port that can be referred to by services. protocol string Protocol for port. Must be UDP, TCP, or SCTP. Defaults to "TCP". 2.1.131. .spec.containers[].readinessProbe Description Periodic probe of container service readiness. Container will be removed from service endpoints if the probe fails. Cannot be updated. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes Type object Property Type Description exec object Exec specifies the action to take. failureThreshold integer Minimum consecutive failures for the probe to be considered failed after having succeeded. Defaults to 3. Minimum value is 1. grpc object GRPC specifies an action involving a GRPC port. httpGet object HTTPGet specifies the http request to perform. initialDelaySeconds integer Number of seconds after the container has started before liveness probes are initiated. 
More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes periodSeconds integer How often (in seconds) to perform the probe. Default to 10 seconds. Minimum value is 1. successThreshold integer Minimum consecutive successes for the probe to be considered successful after having failed. Defaults to 1. Must be 1 for liveness and startup. Minimum value is 1. tcpSocket object TCPSocket specifies an action involving a TCP port. terminationGracePeriodSeconds integer Optional duration in seconds the pod needs to terminate gracefully upon probe failure. The grace period is the duration in seconds after the processes running in the pod are sent a termination signal and the time when the processes are forcibly halted with a kill signal. Set this value longer than the expected cleanup time for your process. If this value is nil, the pod's terminationGracePeriodSeconds will be used. Otherwise, this value overrides the value provided by the pod spec. Value must be non-negative integer. The value zero indicates stop immediately via the kill signal (no opportunity to shut down). This is a beta field and requires enabling ProbeTerminationGracePeriod feature gate. Minimum value is 1. spec.terminationGracePeriodSeconds is used if unset. timeoutSeconds integer Number of seconds after which the probe times out. Defaults to 1 second. Minimum value is 1. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes 2.1.132. .spec.containers[].readinessProbe.exec Description Exec specifies the action to take. Type object Property Type Description command array (string) Command is the command line to execute inside the container, the working directory for the command is root ('/') in the container's filesystem. The command is simply exec'd, it is not run inside a shell, so traditional shell instructions ('|', etc) won't work. To use a shell, you need to explicitly call out to that shell. Exit status of 0 is treated as live/healthy and non-zero is unhealthy. 2.1.133. .spec.containers[].readinessProbe.grpc Description GRPC specifies an action involving a GRPC port. Type object Required port Property Type Description port integer Port number of the gRPC service. Number must be in the range 1 to 65535. service string Service is the name of the service to place in the gRPC HealthCheckRequest (see https://github.com/grpc/grpc/blob/master/doc/health-checking.md ). If this is not specified, the default behavior is defined by gRPC. 2.1.134. .spec.containers[].readinessProbe.httpGet Description HTTPGet specifies the http request to perform. Type object Required port Property Type Description host string Host name to connect to, defaults to the pod IP. You probably want to set "Host" in httpHeaders instead. httpHeaders array Custom headers to set in the request. HTTP allows repeated headers. httpHeaders[] object HTTPHeader describes a custom header to be used in HTTP probes path string Path to access on the HTTP server. port integer-or-string Name or number of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. scheme string Scheme to use for connecting to the host. Defaults to HTTP. 2.1.135. .spec.containers[].readinessProbe.httpGet.httpHeaders Description Custom headers to set in the request. HTTP allows repeated headers. Type array 2.1.136. 
.spec.containers[].readinessProbe.httpGet.httpHeaders[] Description HTTPHeader describes a custom header to be used in HTTP probes Type object Required name value Property Type Description name string The header field name. This will be canonicalized upon output, so case-variant names will be understood as the same header. value string The header field value 2.1.137. .spec.containers[].readinessProbe.tcpSocket Description TCPSocket specifies an action involving a TCP port. Type object Required port Property Type Description host string Optional: Host name to connect to, defaults to the pod IP. port integer-or-string Number or name of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. 2.1.138. .spec.containers[].resizePolicy Description Resources resize policy for the container. Type array 2.1.139. .spec.containers[].resizePolicy[] Description ContainerResizePolicy represents resource resize policy for the container. Type object Required resourceName restartPolicy Property Type Description resourceName string Name of the resource to which this resource resize policy applies. Supported values: cpu, memory. restartPolicy string Restart policy to apply when specified resource is resized. If not specified, it defaults to NotRequired. 2.1.140. .spec.containers[].resources Description Compute Resources required by this container. Cannot be updated. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ Type object Property Type Description claims array Claims lists the names of resources, defined in spec.resourceClaims, that are used by this container. This is an alpha field and requires enabling the DynamicResourceAllocation feature gate. This field is immutable. It can only be set for containers. claims[] object ResourceClaim references one entry in PodSpec.ResourceClaims. limits integer-or-string Limits describes the maximum amount of compute resources allowed. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ requests integer-or-string Requests describes the minimum amount of compute resources required. If Requests is omitted for a container, it defaults to Limits if that is explicitly specified, otherwise to an implementation-defined value. Requests cannot exceed Limits. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ 2.1.141. .spec.containers[].resources.claims Description Claims lists the names of resources, defined in spec.resourceClaims, that are used by this container. This is an alpha field and requires enabling the DynamicResourceAllocation feature gate. This field is immutable. It can only be set for containers. Type array 2.1.142. .spec.containers[].resources.claims[] Description ResourceClaim references one entry in PodSpec.ResourceClaims. Type object Required name Property Type Description name string Name must match the name of one entry in pod.spec.resourceClaims of the Pod where this field is used. It makes that resource available inside a container. 2.1.143. .spec.containers[].securityContext Description SecurityContext defines the security options the container should be run with. If set, the fields of SecurityContext override the equivalent fields of PodSecurityContext. 
More info: https://kubernetes.io/docs/tasks/configure-pod-container/security-context/ Type object Property Type Description allowPrivilegeEscalation boolean AllowPrivilegeEscalation controls whether a process can gain more privileges than its parent process. This bool directly controls if the no_new_privs flag will be set on the container process. AllowPrivilegeEscalation is always true when the container is: 1) run as Privileged 2) has CAP_SYS_ADMIN Note that this field cannot be set when spec.os.name is windows. capabilities object The capabilities to add/drop when running containers. Defaults to the default set of capabilities granted by the container runtime. Note that this field cannot be set when spec.os.name is windows. privileged boolean Run container in privileged mode. Processes in privileged containers are essentially equivalent to root on the host. Defaults to false. Note that this field cannot be set when spec.os.name is windows. procMount string procMount denotes the type of proc mount to use for the containers. The default is DefaultProcMount which uses the container runtime defaults for readonly paths and masked paths. This requires the ProcMountType feature flag to be enabled. Note that this field cannot be set when spec.os.name is windows. readOnlyRootFilesystem boolean Whether this container has a read-only root filesystem. Default is false. Note that this field cannot be set when spec.os.name is windows. runAsGroup integer The GID to run the entrypoint of the container process. Uses runtime default if unset. May also be set in PodSecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. Note that this field cannot be set when spec.os.name is windows. runAsNonRoot boolean Indicates that the container must run as a non-root user. If true, the Kubelet will validate the image at runtime to ensure that it does not run as UID 0 (root) and fail to start the container if it does. If unset or false, no such validation will be performed. May also be set in PodSecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. runAsUser integer The UID to run the entrypoint of the container process. Defaults to user specified in image metadata if unspecified. May also be set in PodSecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. Note that this field cannot be set when spec.os.name is windows. seLinuxOptions object The SELinux context to be applied to the container. If unspecified, the container runtime will allocate a random SELinux context for each container. May also be set in PodSecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. Note that this field cannot be set when spec.os.name is windows. seccompProfile object The seccomp options to use by this container. If seccomp options are provided at both the pod & container level, the container options override the pod options. Note that this field cannot be set when spec.os.name is windows. windowsOptions object The Windows specific settings applied to all containers. If unspecified, the options from the PodSecurityContext will be used. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. Note that this field cannot be set when spec.os.name is linux.
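To make these fields concrete, the following is a minimal sketch of a restrictive container-level securityContext for an injected sidecar, using only fields from sections 2.1.143 through 2.1.146. The container name and image are hypothetical.

apiVersion: monitoring.coreos.com/v1
kind: Alertmanager
metadata:
  name: example
spec:
  containers:
  - name: oauth-proxy                           # hypothetical injected sidecar
    image: quay.io/example/oauth-proxy:latest   # hypothetical image
    securityContext:
      allowPrivilegeEscalation: false
      readOnlyRootFilesystem: true
      runAsNonRoot: true
      capabilities:
        drop:
        - ALL
      seccompProfile:
        type: RuntimeDefault

Values set here override the equivalent pod-level settings, as noted in the SecurityContext description above.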
2.1.144. .spec.containers[].securityContext.capabilities Description The capabilities to add/drop when running containers. Defaults to the default set of capabilities granted by the container runtime. Note that this field cannot be set when spec.os.name is windows. Type object Property Type Description add array (string) Added capabilities drop array (string) Removed capabilities 2.1.145. .spec.containers[].securityContext.seLinuxOptions Description The SELinux context to be applied to the container. If unspecified, the container runtime will allocate a random SELinux context for each container. May also be set in PodSecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. Note that this field cannot be set when spec.os.name is windows. Type object Property Type Description level string Level is SELinux level label that applies to the container. role string Role is a SELinux role label that applies to the container. type string Type is a SELinux type label that applies to the container. user string User is a SELinux user label that applies to the container. 2.1.146. .spec.containers[].securityContext.seccompProfile Description The seccomp options to use by this container. If seccomp options are provided at both the pod & container level, the container options override the pod options. Note that this field cannot be set when spec.os.name is windows. Type object Required type Property Type Description localhostProfile string localhostProfile indicates a profile defined in a file on the node should be used. The profile must be preconfigured on the node to work. Must be a descending path, relative to the kubelet's configured seccomp profile location. Must be set if type is "Localhost". Must NOT be set for any other type. type string type indicates which kind of seccomp profile will be applied. Valid options are: Localhost - a profile defined in a file on the node should be used. RuntimeDefault - the container runtime default profile should be used. Unconfined - no profile should be applied. 2.1.147. .spec.containers[].securityContext.windowsOptions Description The Windows specific settings applied to all containers. If unspecified, the options from the PodSecurityContext will be used. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. Note that this field cannot be set when spec.os.name is linux. Type object Property Type Description gmsaCredentialSpec string GMSACredentialSpec is where the GMSA admission webhook ( https://github.com/kubernetes-sigs/windows-gmsa ) inlines the contents of the GMSA credential spec named by the GMSACredentialSpecName field. gmsaCredentialSpecName string GMSACredentialSpecName is the name of the GMSA credential spec to use. hostProcess boolean HostProcess determines if a container should be run as a 'Host Process' container. All of a Pod's containers must have the same effective HostProcess value (it is not allowed to have a mix of HostProcess containers and non-HostProcess containers). In addition, if HostProcess is true then HostNetwork must also be set to true. runAsUserName string The UserName in Windows to run the entrypoint of the container process. Defaults to the user specified in image metadata if unspecified. May also be set in PodSecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. 2.1.148.
.spec.containers[].startupProbe Description StartupProbe indicates that the Pod has successfully initialized. If specified, no other probes are executed until this completes successfully. If this probe fails, the Pod will be restarted, just as if the livenessProbe failed. This can be used to provide different probe parameters at the beginning of a Pod's lifecycle, when it might take a long time to load data or warm a cache, than during steady-state operation. This cannot be updated. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes Type object Property Type Description exec object Exec specifies the action to take. failureThreshold integer Minimum consecutive failures for the probe to be considered failed after having succeeded. Defaults to 3. Minimum value is 1. grpc object GRPC specifies an action involving a GRPC port. httpGet object HTTPGet specifies the http request to perform. initialDelaySeconds integer Number of seconds after the container has started before liveness probes are initiated. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes periodSeconds integer How often (in seconds) to perform the probe. Default to 10 seconds. Minimum value is 1. successThreshold integer Minimum consecutive successes for the probe to be considered successful after having failed. Defaults to 1. Must be 1 for liveness and startup. Minimum value is 1. tcpSocket object TCPSocket specifies an action involving a TCP port. terminationGracePeriodSeconds integer Optional duration in seconds the pod needs to terminate gracefully upon probe failure. The grace period is the duration in seconds after the processes running in the pod are sent a termination signal and the time when the processes are forcibly halted with a kill signal. Set this value longer than the expected cleanup time for your process. If this value is nil, the pod's terminationGracePeriodSeconds will be used. Otherwise, this value overrides the value provided by the pod spec. Value must be non-negative integer. The value zero indicates stop immediately via the kill signal (no opportunity to shut down). This is a beta field and requires enabling ProbeTerminationGracePeriod feature gate. Minimum value is 1. spec.terminationGracePeriodSeconds is used if unset. timeoutSeconds integer Number of seconds after which the probe times out. Defaults to 1 second. Minimum value is 1. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes 2.1.149. .spec.containers[].startupProbe.exec Description Exec specifies the action to take. Type object Property Type Description command array (string) Command is the command line to execute inside the container, the working directory for the command is root ('/') in the container's filesystem. The command is simply exec'd, it is not run inside a shell, so traditional shell instructions ('|', etc) won't work. To use a shell, you need to explicitly call out to that shell. Exit status of 0 is treated as live/healthy and non-zero is unhealthy. 2.1.150. .spec.containers[].startupProbe.grpc Description GRPC specifies an action involving a GRPC port. Type object Required port Property Type Description port integer Port number of the gRPC service. Number must be in the range 1 to 65535. service string Service is the name of the service to place in the gRPC HealthCheckRequest (see https://github.com/grpc/grpc/blob/master/doc/health-checking.md ). 
If this is not specified, the default behavior is defined by gRPC. 2.1.151. .spec.containers[].startupProbe.httpGet Description HTTPGet specifies the http request to perform. Type object Required port Property Type Description host string Host name to connect to, defaults to the pod IP. You probably want to set "Host" in httpHeaders instead. httpHeaders array Custom headers to set in the request. HTTP allows repeated headers. httpHeaders[] object HTTPHeader describes a custom header to be used in HTTP probes path string Path to access on the HTTP server. port integer-or-string Name or number of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. scheme string Scheme to use for connecting to the host. Defaults to HTTP. 2.1.152. .spec.containers[].startupProbe.httpGet.httpHeaders Description Custom headers to set in the request. HTTP allows repeated headers. Type array 2.1.153. .spec.containers[].startupProbe.httpGet.httpHeaders[] Description HTTPHeader describes a custom header to be used in HTTP probes Type object Required name value Property Type Description name string The header field name. This will be canonicalized upon output, so case-variant names will be understood as the same header. value string The header field value 2.1.154. .spec.containers[].startupProbe.tcpSocket Description TCPSocket specifies an action involving a TCP port. Type object Required port Property Type Description host string Optional: Host name to connect to, defaults to the pod IP. port integer-or-string Number or name of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. 2.1.155. .spec.containers[].volumeDevices Description volumeDevices is the list of block devices to be used by the container. Type array 2.1.156. .spec.containers[].volumeDevices[] Description volumeDevice describes a mapping of a raw block device within a container. Type object Required devicePath name Property Type Description devicePath string devicePath is the path inside of the container that the device will be mapped to. name string name must match the name of a persistentVolumeClaim in the pod 2.1.157. .spec.containers[].volumeMounts Description Pod volumes to mount into the container's filesystem. Cannot be updated. Type array 2.1.158. .spec.containers[].volumeMounts[] Description VolumeMount describes a mounting of a Volume within a container. Type object Required mountPath name Property Type Description mountPath string Path within the container at which the volume should be mounted. Must not contain ':'. mountPropagation string mountPropagation determines how mounts are propagated from the host to container and the other way around. When not set, MountPropagationNone is used. This field is beta in 1.10. name string This must match the Name of a Volume. readOnly boolean Mounted read-only if true, read-write otherwise (false or unspecified). Defaults to false. subPath string Path within the volume from which the container's volume should be mounted. Defaults to "" (volume's root). subPathExpr string Expanded path within the volume from which the container's volume should be mounted. Behaves similarly to SubPath but environment variable references USD(VAR_NAME) are expanded using the container's environment. Defaults to "" (volume's root). SubPathExpr and SubPath are mutually exclusive. 2.1.159. .spec.hostAliases Description Pods' hostAliases configuration Type array 2.1.160. 
.spec.hostAliases[] Description HostAlias holds the mapping between IP and hostnames that will be injected as an entry in the pod's hosts file. Type object Required hostnames ip Property Type Description hostnames array (string) Hostnames for the above IP address. ip string IP address of the host file entry. 2.1.161. .spec.imagePullSecrets Description An optional list of references to secrets in the same namespace to use for pulling prometheus and alertmanager images from registries, see http://kubernetes.io/docs/user-guide/images#specifying-imagepullsecrets-on-a-pod Type array 2.1.162. .spec.imagePullSecrets[] Description LocalObjectReference contains enough information to let you locate the referenced object inside the same namespace. Type object Property Type Description name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? 2.1.163. .spec.initContainers Description InitContainers allows adding initContainers to the pod definition. Those can be used to e.g. fetch secrets for injection into the Alertmanager configuration from external sources. Any errors during the execution of an initContainer will lead to a restart of the Pod. More info: https://kubernetes.io/docs/concepts/workloads/pods/init-containers/ InitContainers described here modify the operator-generated init containers if they share the same name and modifications are done via a strategic merge patch. The current init container name is: init-config-reloader . Overriding init containers is entirely outside the scope of what the maintainers will support and by doing so, you accept that this behaviour may break at any time without notice. Type array 2.1.164. .spec.initContainers[] Description A single application container that you want to run within a pod. Type object Required name Property Type Description args array (string) Arguments to the entrypoint. The container image's CMD is used if this is not provided. Variable references USD(VAR_NAME) are expanded using the container's environment. If a variable cannot be resolved, the reference in the input string will be unchanged. Double USDUSD are reduced to a single USD, which allows for escaping the USD(VAR_NAME) syntax: i.e. "USDUSD(VAR_NAME)" will produce the string literal "USD(VAR_NAME)". Escaped references will never be expanded, regardless of whether the variable exists or not. Cannot be updated. More info: https://kubernetes.io/docs/tasks/inject-data-application/define-command-argument-container/#running-a-command-in-a-shell command array (string) Entrypoint array. Not executed within a shell. The container image's ENTRYPOINT is used if this is not provided. Variable references USD(VAR_NAME) are expanded using the container's environment. If a variable cannot be resolved, the reference in the input string will be unchanged. Double USDUSD are reduced to a single USD, which allows for escaping the USD(VAR_NAME) syntax: i.e. "USDUSD(VAR_NAME)" will produce the string literal "USD(VAR_NAME)". Escaped references will never be expanded, regardless of whether the variable exists or not. Cannot be updated. More info: https://kubernetes.io/docs/tasks/inject-data-application/define-command-argument-container/#running-a-command-in-a-shell env array List of environment variables to set in the container. Cannot be updated. env[] object EnvVar represents an environment variable present in a Container.
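For example, the pod-level hostAliases and imagePullSecrets fields documented above can be set directly on the Alertmanager resource. The following is a minimal sketch; the resource name, namespace, IP address, hostname, and pull-secret name are hypothetical placeholders, not values required by the API:

apiVersion: monitoring.coreos.com/v1
kind: Alertmanager
metadata:
  name: example                     # hypothetical name
  namespace: example-monitoring     # hypothetical namespace
spec:
  # Inject an extra /etc/hosts entry into every Alertmanager pod.
  hostAliases:
  - ip: "192.0.2.10"                # example address from the TEST-NET-1 range
    hostnames:
    - "smtp.internal.example"       # hypothetical hostname
  # Pull images with an existing pull secret from the same namespace.
  imagePullSecrets:
  - name: my-registry-secret        # assumed to exist beforehand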
envFrom array List of sources to populate environment variables in the container. The keys defined within a source must be a C_IDENTIFIER. All invalid keys will be reported as an event when the container is starting. When a key exists in multiple sources, the value associated with the last source will take precedence. Values defined by an Env with a duplicate key will take precedence. Cannot be updated. envFrom[] object EnvFromSource represents the source of a set of ConfigMaps image string Container image name. More info: https://kubernetes.io/docs/concepts/containers/images This field is optional to allow higher level config management to default or override container images in workload controllers like Deployments and StatefulSets. imagePullPolicy string Image pull policy. One of Always, Never, IfNotPresent. Defaults to Always if :latest tag is specified, or IfNotPresent otherwise. Cannot be updated. More info: https://kubernetes.io/docs/concepts/containers/images#updating-images lifecycle object Actions that the management system should take in response to container lifecycle events. Cannot be updated. livenessProbe object Periodic probe of container liveness. Container will be restarted if the probe fails. Cannot be updated. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes name string Name of the container specified as a DNS_LABEL. Each container in a pod must have a unique name (DNS_LABEL). Cannot be updated. ports array List of ports to expose from the container. Not specifying a port here DOES NOT prevent that port from being exposed. Any port which is listening on the default "0.0.0.0" address inside a container will be accessible from the network. Modifying this array with strategic merge patch may corrupt the data. For more information See https://github.com/kubernetes/kubernetes/issues/108255 . Cannot be updated. ports[] object ContainerPort represents a network port in a single container. readinessProbe object Periodic probe of container service readiness. Container will be removed from service endpoints if the probe fails. Cannot be updated. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes resizePolicy array Resources resize policy for the container. resizePolicy[] object ContainerResizePolicy represents resource resize policy for the container. resources object Compute Resources required by this container. Cannot be updated. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ restartPolicy string RestartPolicy defines the restart behavior of individual containers in a pod. This field may only be set for init containers, and the only allowed value is "Always". For non-init containers or when this field is not specified, the restart behavior is defined by the Pod's restart policy and the container type. Setting the RestartPolicy as "Always" for the init container will have the following effect: this init container will be continually restarted on exit until all regular containers have terminated. Once all regular containers have completed, all init containers with restartPolicy "Always" will be shut down. This lifecycle differs from normal init containers and is often referred to as a "sidecar" container. Although this init container still starts in the init container sequence, it does not wait for the container to complete before proceeding to the next init container. Instead, the next init container starts immediately after this init container is started, or after any startupProbe has successfully completed.
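To illustrate the strategic-merge behaviour described for spec.initContainers above, the sketch below patches the resource requests of the operator-generated init-config-reloader container (which, as noted, is unsupported and may break without notice) and adds one extra init container. The image, command, and resource values are illustrative assumptions only:

apiVersion: monitoring.coreos.com/v1
kind: Alertmanager
metadata:
  name: example                                 # hypothetical name
spec:
  initContainers:
  # Same name as the operator-generated init container, so only the fields
  # listed here are merged into it via a strategic merge patch.
  - name: init-config-reloader
    resources:
      requests:
        cpu: 10m
        memory: 32Mi
  # A different name adds a new init container alongside the generated one.
  - name: my-extra-init                         # hypothetical helper container
    image: registry.example.com/busybox:1.36    # hypothetical image
    command: ["sh", "-c", "sleep 5"]            # placeholder command

Setting restartPolicy: Always on such an added entry would turn it into the "sidecar"-style init container described in the restartPolicy row above.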
securityContext object SecurityContext defines the security options the container should be run with. If set, the fields of SecurityContext override the equivalent fields of PodSecurityContext. More info: https://kubernetes.io/docs/tasks/configure-pod-container/security-context/ startupProbe object StartupProbe indicates that the Pod has successfully initialized. If specified, no other probes are executed until this completes successfully. If this probe fails, the Pod will be restarted, just as if the livenessProbe failed. This can be used to provide different probe parameters at the beginning of a Pod's lifecycle, when it might take a long time to load data or warm a cache, than during steady-state operation. This cannot be updated. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes stdin boolean Whether this container should allocate a buffer for stdin in the container runtime. If this is not set, reads from stdin in the container will always result in EOF. Default is false. stdinOnce boolean Whether the container runtime should close the stdin channel after it has been opened by a single attach. When stdin is true the stdin stream will remain open across multiple attach sessions. If stdinOnce is set to true, stdin is opened on container start, is empty until the first client attaches to stdin, and then remains open and accepts data until the client disconnects, at which time stdin is closed and remains closed until the container is restarted. If this flag is false, a container processes that reads from stdin will never receive an EOF. Default is false terminationMessagePath string Optional: Path at which the file to which the container's termination message will be written is mounted into the container's filesystem. Message written is intended to be brief final status, such as an assertion failure message. Will be truncated by the node if greater than 4096 bytes. The total message length across all containers will be limited to 12kb. Defaults to /dev/termination-log. Cannot be updated. terminationMessagePolicy string Indicate how the termination message should be populated. File will use the contents of terminationMessagePath to populate the container status message on both success and failure. FallbackToLogsOnError will use the last chunk of container log output if the termination message file is empty and the container exited with an error. The log output is limited to 2048 bytes or 80 lines, whichever is smaller. Defaults to File. Cannot be updated. tty boolean Whether this container should allocate a TTY for itself, also requires 'stdin' to be true. Default is false. volumeDevices array volumeDevices is the list of block devices to be used by the container. volumeDevices[] object volumeDevice describes a mapping of a raw block device within a container. volumeMounts array Pod volumes to mount into the container's filesystem. Cannot be updated. volumeMounts[] object VolumeMount describes a mounting of a Volume within a container. workingDir string Container's working directory. If not specified, the container runtime's default will be used, which might be configured in the container image. Cannot be updated. 2.1.165. .spec.initContainers[].env Description List of environment variables to set in the container. Cannot be updated. Type array 2.1.166. .spec.initContainers[].env[] Description EnvVar represents an environment variable present in a Container. 
Type object Required name Property Type Description name string Name of the environment variable. Must be a C_IDENTIFIER. value string Variable references USD(VAR_NAME) are expanded using the previously defined environment variables in the container and any service environment variables. If a variable cannot be resolved, the reference in the input string will be unchanged. Double USDUSD are reduced to a single USD, which allows for escaping the USD(VAR_NAME) syntax: i.e. "USDUSD(VAR_NAME)" will produce the string literal "USD(VAR_NAME)". Escaped references will never be expanded, regardless of whether the variable exists or not. Defaults to "". valueFrom object Source for the environment variable's value. Cannot be used if value is not empty. 2.1.167. .spec.initContainers[].env[].valueFrom Description Source for the environment variable's value. Cannot be used if value is not empty. Type object Property Type Description configMapKeyRef object Selects a key of a ConfigMap. fieldRef object Selects a field of the pod: supports metadata.name, metadata.namespace, metadata.labels['<KEY>'] , metadata.annotations['<KEY>'] , spec.nodeName, spec.serviceAccountName, status.hostIP, status.podIP, status.podIPs. resourceFieldRef object Selects a resource of the container: only resources limits and requests (limits.cpu, limits.memory, limits.ephemeral-storage, requests.cpu, requests.memory and requests.ephemeral-storage) are currently supported. secretKeyRef object Selects a key of a secret in the pod's namespace 2.1.168. .spec.initContainers[].env[].valueFrom.configMapKeyRef Description Selects a key of a ConfigMap. Type object Required key Property Type Description key string The key to select. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the ConfigMap or its key must be defined 2.1.169. .spec.initContainers[].env[].valueFrom.fieldRef Description Selects a field of the pod: supports metadata.name, metadata.namespace, metadata.labels['<KEY>'] , metadata.annotations['<KEY>'] , spec.nodeName, spec.serviceAccountName, status.hostIP, status.podIP, status.podIPs. Type object Required fieldPath Property Type Description apiVersion string Version of the schema the FieldPath is written in terms of, defaults to "v1". fieldPath string Path of the field to select in the specified API version. 2.1.170. .spec.initContainers[].env[].valueFrom.resourceFieldRef Description Selects a resource of the container: only resources limits and requests (limits.cpu, limits.memory, limits.ephemeral-storage, requests.cpu, requests.memory and requests.ephemeral-storage) are currently supported. Type object Required resource Property Type Description containerName string Container name: required for volumes, optional for env vars divisor integer-or-string Specifies the output format of the exposed resources, defaults to "1" resource string Required: resource to select 2.1.171. .spec.initContainers[].env[].valueFrom.secretKeyRef Description Selects a key of a secret in the pod's namespace Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the Secret or its key must be defined
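As a sketch of the env and valueFrom fields above, the fragment below injects a literal value, a Secret key, and a field of the pod itself into a hypothetical additional init container; the container name, image, Secret name, and key are placeholders:

spec:
  initContainers:
  - name: my-extra-init                       # hypothetical additional init container
    image: registry.example.com/tool:1.0      # hypothetical image
    command: ["sh", "-c", "env | sort"]       # placeholder command
    env:
    - name: LOG_LEVEL                         # plain literal value
      value: "debug"
    - name: SMTP_PASSWORD                     # read from a Secret key
      valueFrom:
        secretKeyRef:
          name: alertmanager-smtp             # assumed Secret in the same namespace
          key: password
    - name: POD_NAMESPACE                     # read from the pod via the downward API
      valueFrom:
        fieldRef:
          fieldPath: metadata.namespace
2.1.172.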
.spec.initContainers[].envFrom Description List of sources to populate environment variables in the container. The keys defined within a source must be a C_IDENTIFIER. All invalid keys will be reported as an event when the container is starting. When a key exists in multiple sources, the value associated with the last source will take precedence. Values defined by an Env with a duplicate key will take precedence. Cannot be updated. Type array 2.1.173. .spec.initContainers[].envFrom[] Description EnvFromSource represents the source of a set of ConfigMaps Type object Property Type Description configMapRef object The ConfigMap to select from prefix string An optional identifier to prepend to each key in the ConfigMap. Must be a C_IDENTIFIER. secretRef object The Secret to select from 2.1.174. .spec.initContainers[].envFrom[].configMapRef Description The ConfigMap to select from Type object Property Type Description name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the ConfigMap must be defined 2.1.175. .spec.initContainers[].envFrom[].secretRef Description The Secret to select from Type object Property Type Description name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the Secret must be defined 2.1.176. .spec.initContainers[].lifecycle Description Actions that the management system should take in response to container lifecycle events. Cannot be updated. Type object Property Type Description postStart object PostStart is called immediately after a container is created. If the handler fails, the container is terminated and restarted according to its restart policy. Other management of the container blocks until the hook completes. More info: https://kubernetes.io/docs/concepts/containers/container-lifecycle-hooks/#container-hooks preStop object PreStop is called immediately before a container is terminated due to an API request or management event such as liveness/startup probe failure, preemption, resource contention, etc. The handler is not called if the container crashes or exits. The Pod's termination grace period countdown begins before the PreStop hook is executed. Regardless of the outcome of the handler, the container will eventually terminate within the Pod's termination grace period (unless delayed by finalizers). Other management of the container blocks until the hook completes or until the termination grace period is reached. More info: https://kubernetes.io/docs/concepts/containers/container-lifecycle-hooks/#container-hooks 2.1.177. .spec.initContainers[].lifecycle.postStart Description PostStart is called immediately after a container is created. If the handler fails, the container is terminated and restarted according to its restart policy. Other management of the container blocks until the hook completes. More info: https://kubernetes.io/docs/concepts/containers/container-lifecycle-hooks/#container-hooks Type object Property Type Description exec object Exec specifies the action to take. httpGet object HTTPGet specifies the http request to perform. sleep object Sleep represents the duration that the container should sleep before being terminated. tcpSocket object Deprecated. 
TCPSocket is NOT supported as a LifecycleHandler and kept for the backward compatibility. There are no validation of this field and lifecycle hooks will fail in runtime when tcp handler is specified. 2.1.178. .spec.initContainers[].lifecycle.postStart.exec Description Exec specifies the action to take. Type object Property Type Description command array (string) Command is the command line to execute inside the container, the working directory for the command is root ('/') in the container's filesystem. The command is simply exec'd, it is not run inside a shell, so traditional shell instructions ('|', etc) won't work. To use a shell, you need to explicitly call out to that shell. Exit status of 0 is treated as live/healthy and non-zero is unhealthy. 2.1.179. .spec.initContainers[].lifecycle.postStart.httpGet Description HTTPGet specifies the http request to perform. Type object Required port Property Type Description host string Host name to connect to, defaults to the pod IP. You probably want to set "Host" in httpHeaders instead. httpHeaders array Custom headers to set in the request. HTTP allows repeated headers. httpHeaders[] object HTTPHeader describes a custom header to be used in HTTP probes path string Path to access on the HTTP server. port integer-or-string Name or number of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. scheme string Scheme to use for connecting to the host. Defaults to HTTP. 2.1.180. .spec.initContainers[].lifecycle.postStart.httpGet.httpHeaders Description Custom headers to set in the request. HTTP allows repeated headers. Type array 2.1.181. .spec.initContainers[].lifecycle.postStart.httpGet.httpHeaders[] Description HTTPHeader describes a custom header to be used in HTTP probes Type object Required name value Property Type Description name string The header field name. This will be canonicalized upon output, so case-variant names will be understood as the same header. value string The header field value 2.1.182. .spec.initContainers[].lifecycle.postStart.sleep Description Sleep represents the duration that the container should sleep before being terminated. Type object Required seconds Property Type Description seconds integer Seconds is the number of seconds to sleep. 2.1.183. .spec.initContainers[].lifecycle.postStart.tcpSocket Description Deprecated. TCPSocket is NOT supported as a LifecycleHandler and kept for the backward compatibility. There are no validation of this field and lifecycle hooks will fail in runtime when tcp handler is specified. Type object Required port Property Type Description host string Optional: Host name to connect to, defaults to the pod IP. port integer-or-string Number or name of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. 2.1.184. .spec.initContainers[].lifecycle.preStop Description PreStop is called immediately before a container is terminated due to an API request or management event such as liveness/startup probe failure, preemption, resource contention, etc. The handler is not called if the container crashes or exits. The Pod's termination grace period countdown begins before the PreStop hook is executed. Regardless of the outcome of the handler, the container will eventually terminate within the Pod's termination grace period (unless delayed by finalizers). Other management of the container blocks until the hook completes or until the termination grace period is reached. 
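A minimal sketch of the postStart and preStop handlers described in this lifecycle section, applied to a hypothetical additional container (the container name, image, commands, and delay are assumptions):

spec:
  containers:
  - name: my-proxy-sidecar                    # hypothetical additional container
    image: registry.example.com/proxy:1.0     # hypothetical image
    lifecycle:
      postStart:
        exec:
          # Runs immediately after the container is created; a failure here
          # causes the container to be restarted per its restart policy.
          command: ["sh", "-c", "echo started > /tmp/started"]
      preStop:
        exec:
          # Crude drain delay before termination proceeds.
          command: ["sleep", "10"]

On clusters where the sleep handler is available, preStop could instead use sleep.seconds to express the same delay without a shell.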
More info: https://kubernetes.io/docs/concepts/containers/container-lifecycle-hooks/#container-hooks Type object Property Type Description exec object Exec specifies the action to take. httpGet object HTTPGet specifies the http request to perform. sleep object Sleep represents the duration that the container should sleep before being terminated. tcpSocket object Deprecated. TCPSocket is NOT supported as a LifecycleHandler and kept for the backward compatibility. There are no validation of this field and lifecycle hooks will fail in runtime when tcp handler is specified. 2.1.185. .spec.initContainers[].lifecycle.preStop.exec Description Exec specifies the action to take. Type object Property Type Description command array (string) Command is the command line to execute inside the container, the working directory for the command is root ('/') in the container's filesystem. The command is simply exec'd, it is not run inside a shell, so traditional shell instructions ('|', etc) won't work. To use a shell, you need to explicitly call out to that shell. Exit status of 0 is treated as live/healthy and non-zero is unhealthy. 2.1.186. .spec.initContainers[].lifecycle.preStop.httpGet Description HTTPGet specifies the http request to perform. Type object Required port Property Type Description host string Host name to connect to, defaults to the pod IP. You probably want to set "Host" in httpHeaders instead. httpHeaders array Custom headers to set in the request. HTTP allows repeated headers. httpHeaders[] object HTTPHeader describes a custom header to be used in HTTP probes path string Path to access on the HTTP server. port integer-or-string Name or number of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. scheme string Scheme to use for connecting to the host. Defaults to HTTP. 2.1.187. .spec.initContainers[].lifecycle.preStop.httpGet.httpHeaders Description Custom headers to set in the request. HTTP allows repeated headers. Type array 2.1.188. .spec.initContainers[].lifecycle.preStop.httpGet.httpHeaders[] Description HTTPHeader describes a custom header to be used in HTTP probes Type object Required name value Property Type Description name string The header field name. This will be canonicalized upon output, so case-variant names will be understood as the same header. value string The header field value 2.1.189. .spec.initContainers[].lifecycle.preStop.sleep Description Sleep represents the duration that the container should sleep before being terminated. Type object Required seconds Property Type Description seconds integer Seconds is the number of seconds to sleep. 2.1.190. .spec.initContainers[].lifecycle.preStop.tcpSocket Description Deprecated. TCPSocket is NOT supported as a LifecycleHandler and kept for the backward compatibility. There are no validation of this field and lifecycle hooks will fail in runtime when tcp handler is specified. Type object Required port Property Type Description host string Optional: Host name to connect to, defaults to the pod IP. port integer-or-string Number or name of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. 2.1.191. .spec.initContainers[].livenessProbe Description Periodic probe of container liveness. Container will be restarted if the probe fails. Cannot be updated. 
More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes Type object Property Type Description exec object Exec specifies the action to take. failureThreshold integer Minimum consecutive failures for the probe to be considered failed after having succeeded. Defaults to 3. Minimum value is 1. grpc object GRPC specifies an action involving a GRPC port. httpGet object HTTPGet specifies the http request to perform. initialDelaySeconds integer Number of seconds after the container has started before liveness probes are initiated. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes periodSeconds integer How often (in seconds) to perform the probe. Default to 10 seconds. Minimum value is 1. successThreshold integer Minimum consecutive successes for the probe to be considered successful after having failed. Defaults to 1. Must be 1 for liveness and startup. Minimum value is 1. tcpSocket object TCPSocket specifies an action involving a TCP port. terminationGracePeriodSeconds integer Optional duration in seconds the pod needs to terminate gracefully upon probe failure. The grace period is the duration in seconds after the processes running in the pod are sent a termination signal and the time when the processes are forcibly halted with a kill signal. Set this value longer than the expected cleanup time for your process. If this value is nil, the pod's terminationGracePeriodSeconds will be used. Otherwise, this value overrides the value provided by the pod spec. Value must be non-negative integer. The value zero indicates stop immediately via the kill signal (no opportunity to shut down). This is a beta field and requires enabling ProbeTerminationGracePeriod feature gate. Minimum value is 1. spec.terminationGracePeriodSeconds is used if unset. timeoutSeconds integer Number of seconds after which the probe times out. Defaults to 1 second. Minimum value is 1. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes 2.1.192. .spec.initContainers[].livenessProbe.exec Description Exec specifies the action to take. Type object Property Type Description command array (string) Command is the command line to execute inside the container, the working directory for the command is root ('/') in the container's filesystem. The command is simply exec'd, it is not run inside a shell, so traditional shell instructions ('|', etc) won't work. To use a shell, you need to explicitly call out to that shell. Exit status of 0 is treated as live/healthy and non-zero is unhealthy. 2.1.193. .spec.initContainers[].livenessProbe.grpc Description GRPC specifies an action involving a GRPC port. Type object Required port Property Type Description port integer Port number of the gRPC service. Number must be in the range 1 to 65535. service string Service is the name of the service to place in the gRPC HealthCheckRequest (see https://github.com/grpc/grpc/blob/master/doc/health-checking.md ). If this is not specified, the default behavior is defined by gRPC. 2.1.194. .spec.initContainers[].livenessProbe.httpGet Description HTTPGet specifies the http request to perform. Type object Required port Property Type Description host string Host name to connect to, defaults to the pod IP. You probably want to set "Host" in httpHeaders instead. httpHeaders array Custom headers to set in the request. HTTP allows repeated headers. 
httpHeaders[] object HTTPHeader describes a custom header to be used in HTTP probes path string Path to access on the HTTP server. port integer-or-string Name or number of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. scheme string Scheme to use for connecting to the host. Defaults to HTTP. 2.1.195. .spec.initContainers[].livenessProbe.httpGet.httpHeaders Description Custom headers to set in the request. HTTP allows repeated headers. Type array 2.1.196. .spec.initContainers[].livenessProbe.httpGet.httpHeaders[] Description HTTPHeader describes a custom header to be used in HTTP probes Type object Required name value Property Type Description name string The header field name. This will be canonicalized upon output, so case-variant names will be understood as the same header. value string The header field value 2.1.197. .spec.initContainers[].livenessProbe.tcpSocket Description TCPSocket specifies an action involving a TCP port. Type object Required port Property Type Description host string Optional: Host name to connect to, defaults to the pod IP. port integer-or-string Number or name of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. 2.1.198. .spec.initContainers[].ports Description List of ports to expose from the container. Not specifying a port here DOES NOT prevent that port from being exposed. Any port which is listening on the default "0.0.0.0" address inside a container will be accessible from the network. Modifying this array with strategic merge patch may corrupt the data. For more information See https://github.com/kubernetes/kubernetes/issues/108255 . Cannot be updated. Type array 2.1.199. .spec.initContainers[].ports[] Description ContainerPort represents a network port in a single container. Type object Required containerPort Property Type Description containerPort integer Number of port to expose on the pod's IP address. This must be a valid port number, 0 < x < 65536. hostIP string What host IP to bind the external port to. hostPort integer Number of port to expose on the host. If specified, this must be a valid port number, 0 < x < 65536. If HostNetwork is specified, this must match ContainerPort. Most containers do not need this. name string If specified, this must be an IANA_SVC_NAME and unique within the pod. Each named port in a pod must have a unique name. Name for the port that can be referred to by services. protocol string Protocol for port. Must be UDP, TCP, or SCTP. Defaults to "TCP". 2.1.200. .spec.initContainers[].readinessProbe Description Periodic probe of container service readiness. Container will be removed from service endpoints if the probe fails. Cannot be updated. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes Type object Property Type Description exec object Exec specifies the action to take. failureThreshold integer Minimum consecutive failures for the probe to be considered failed after having succeeded. Defaults to 3. Minimum value is 1. grpc object GRPC specifies an action involving a GRPC port. httpGet object HTTPGet specifies the http request to perform. initialDelaySeconds integer Number of seconds after the container has started before liveness probes are initiated. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes periodSeconds integer How often (in seconds) to perform the probe. Default to 10 seconds. Minimum value is 1. 
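For the probe fields above, a useful rule of thumb is that a probe tolerates roughly failureThreshold x periodSeconds seconds of consecutive failures. The hedged sketch below (an additional container with hypothetical name, image, port, and path; the same fields apply to the init-container probes documented in this section) therefore allows about 3 x 10 = 30 seconds of failed checks before the container is removed from service endpoints:

spec:
  containers:
  - name: my-sidecar                          # hypothetical additional container
    image: registry.example.com/sidecar:1.0   # hypothetical image
    readinessProbe:
      httpGet:
        path: /healthz                        # placeholder path
        port: 8080                            # placeholder port
        scheme: HTTP
      initialDelaySeconds: 5
      periodSeconds: 10
      failureThreshold: 3
      successThreshold: 1
      timeoutSeconds: 2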
successThreshold integer Minimum consecutive successes for the probe to be considered successful after having failed. Defaults to 1. Must be 1 for liveness and startup. Minimum value is 1. tcpSocket object TCPSocket specifies an action involving a TCP port. terminationGracePeriodSeconds integer Optional duration in seconds the pod needs to terminate gracefully upon probe failure. The grace period is the duration in seconds after the processes running in the pod are sent a termination signal and the time when the processes are forcibly halted with a kill signal. Set this value longer than the expected cleanup time for your process. If this value is nil, the pod's terminationGracePeriodSeconds will be used. Otherwise, this value overrides the value provided by the pod spec. Value must be non-negative integer. The value zero indicates stop immediately via the kill signal (no opportunity to shut down). This is a beta field and requires enabling ProbeTerminationGracePeriod feature gate. Minimum value is 1. spec.terminationGracePeriodSeconds is used if unset. timeoutSeconds integer Number of seconds after which the probe times out. Defaults to 1 second. Minimum value is 1. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes 2.1.201. .spec.initContainers[].readinessProbe.exec Description Exec specifies the action to take. Type object Property Type Description command array (string) Command is the command line to execute inside the container, the working directory for the command is root ('/') in the container's filesystem. The command is simply exec'd, it is not run inside a shell, so traditional shell instructions ('|', etc) won't work. To use a shell, you need to explicitly call out to that shell. Exit status of 0 is treated as live/healthy and non-zero is unhealthy. 2.1.202. .spec.initContainers[].readinessProbe.grpc Description GRPC specifies an action involving a GRPC port. Type object Required port Property Type Description port integer Port number of the gRPC service. Number must be in the range 1 to 65535. service string Service is the name of the service to place in the gRPC HealthCheckRequest (see https://github.com/grpc/grpc/blob/master/doc/health-checking.md ). If this is not specified, the default behavior is defined by gRPC. 2.1.203. .spec.initContainers[].readinessProbe.httpGet Description HTTPGet specifies the http request to perform. Type object Required port Property Type Description host string Host name to connect to, defaults to the pod IP. You probably want to set "Host" in httpHeaders instead. httpHeaders array Custom headers to set in the request. HTTP allows repeated headers. httpHeaders[] object HTTPHeader describes a custom header to be used in HTTP probes path string Path to access on the HTTP server. port integer-or-string Name or number of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. scheme string Scheme to use for connecting to the host. Defaults to HTTP. 2.1.204. .spec.initContainers[].readinessProbe.httpGet.httpHeaders Description Custom headers to set in the request. HTTP allows repeated headers. Type array 2.1.205. .spec.initContainers[].readinessProbe.httpGet.httpHeaders[] Description HTTPHeader describes a custom header to be used in HTTP probes Type object Required name value Property Type Description name string The header field name. This will be canonicalized upon output, so case-variant names will be understood as the same header. 
value string The header field value 2.1.206. .spec.initContainers[].readinessProbe.tcpSocket Description TCPSocket specifies an action involving a TCP port. Type object Required port Property Type Description host string Optional: Host name to connect to, defaults to the pod IP. port integer-or-string Number or name of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. 2.1.207. .spec.initContainers[].resizePolicy Description Resources resize policy for the container. Type array 2.1.208. .spec.initContainers[].resizePolicy[] Description ContainerResizePolicy represents resource resize policy for the container. Type object Required resourceName restartPolicy Property Type Description resourceName string Name of the resource to which this resource resize policy applies. Supported values: cpu, memory. restartPolicy string Restart policy to apply when specified resource is resized. If not specified, it defaults to NotRequired. 2.1.209. .spec.initContainers[].resources Description Compute Resources required by this container. Cannot be updated. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ Type object Property Type Description claims array Claims lists the names of resources, defined in spec.resourceClaims, that are used by this container. This is an alpha field and requires enabling the DynamicResourceAllocation feature gate. This field is immutable. It can only be set for containers. claims[] object ResourceClaim references one entry in PodSpec.ResourceClaims. limits integer-or-string Limits describes the maximum amount of compute resources allowed. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ requests integer-or-string Requests describes the minimum amount of compute resources required. If Requests is omitted for a container, it defaults to Limits if that is explicitly specified, otherwise to an implementation-defined value. Requests cannot exceed Limits. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ 2.1.210. .spec.initContainers[].resources.claims Description Claims lists the names of resources, defined in spec.resourceClaims, that are used by this container. This is an alpha field and requires enabling the DynamicResourceAllocation feature gate. This field is immutable. It can only be set for containers. Type array 2.1.211. .spec.initContainers[].resources.claims[] Description ResourceClaim references one entry in PodSpec.ResourceClaims. Type object Required name Property Type Description name string Name must match the name of one entry in pod.spec.resourceClaims of the Pod where this field is used. It makes that resource available inside a container. 2.1.212. .spec.initContainers[].securityContext Description SecurityContext defines the security options the container should be run with. If set, the fields of SecurityContext override the equivalent fields of PodSecurityContext. More info: https://kubernetes.io/docs/tasks/configure-pod-container/security-context/ Type object Property Type Description allowPrivilegeEscalation boolean AllowPrivilegeEscalation controls whether a process can gain more privileges than its parent process. This bool directly controls if the no_new_privs flag will be set on the container process. AllowPrivilegeEscalation is true always when the container is: 1) run as Privileged 2) has CAP_SYS_ADMIN Note that this field cannot be set when spec.os.name is windows. 
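The securityContext fields in this section can be combined into a restrictive profile for an injected container. This is a sketch of common hardening settings, not something the operator requires; the container name, image, and command are placeholders:

spec:
  initContainers:
  - name: my-extra-init                       # hypothetical additional init container
    image: registry.example.com/tool:1.0      # hypothetical image
    command: ["sh", "-c", "true"]             # placeholder command
    securityContext:
      allowPrivilegeEscalation: false
      readOnlyRootFilesystem: true
      runAsNonRoot: true
      capabilities:
        drop:
        - ALL
      seccompProfile:
        type: RuntimeDefault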
capabilities object The capabilities to add/drop when running containers. Defaults to the default set of capabilities granted by the container runtime. Note that this field cannot be set when spec.os.name is windows. privileged boolean Run container in privileged mode. Processes in privileged containers are essentially equivalent to root on the host. Defaults to false. Note that this field cannot be set when spec.os.name is windows. procMount string procMount denotes the type of proc mount to use for the containers. The default is DefaultProcMount which uses the container runtime defaults for readonly paths and masked paths. This requires the ProcMountType feature flag to be enabled. Note that this field cannot be set when spec.os.name is windows. readOnlyRootFilesystem boolean Whether this container has a read-only root filesystem. Default is false. Note that this field cannot be set when spec.os.name is windows. runAsGroup integer The GID to run the entrypoint of the container process. Uses runtime default if unset. May also be set in PodSecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. Note that this field cannot be set when spec.os.name is windows. runAsNonRoot boolean Indicates that the container must run as a non-root user. If true, the Kubelet will validate the image at runtime to ensure that it does not run as UID 0 (root) and fail to start the container if it does. If unset or false, no such validation will be performed. May also be set in PodSecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. runAsUser integer The UID to run the entrypoint of the container process. Defaults to user specified in image metadata if unspecified. May also be set in PodSecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. Note that this field cannot be set when spec.os.name is windows. seLinuxOptions object The SELinux context to be applied to the container. If unspecified, the container runtime will allocate a random SELinux context for each container. May also be set in PodSecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. Note that this field cannot be set when spec.os.name is windows. seccompProfile object The seccomp options to use by this container. If seccomp options are provided at both the pod & container level, the container options override the pod options. Note that this field cannot be set when spec.os.name is windows. windowsOptions object The Windows specific settings applied to all containers. If unspecified, the options from the PodSecurityContext will be used. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. Note that this field cannot be set when spec.os.name is linux. 2.1.213. .spec.initContainers[].securityContext.capabilities Description The capabilities to add/drop when running containers. Defaults to the default set of capabilities granted by the container runtime. Note that this field cannot be set when spec.os.name is windows. Type object Property Type Description add array (string) Added capabilities drop array (string) Removed capabilities 2.1.214. .spec.initContainers[].securityContext.seLinuxOptions Description The SELinux context to be applied to the container. 
If unspecified, the container runtime will allocate a random SELinux context for each container. May also be set in PodSecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. Note that this field cannot be set when spec.os.name is windows. Type object Property Type Description level string Level is SELinux level label that applies to the container. role string Role is a SELinux role label that applies to the container. type string Type is a SELinux type label that applies to the container. user string User is a SELinux user label that applies to the container. 2.1.215. .spec.initContainers[].securityContext.seccompProfile Description The seccomp options to use by this container. If seccomp options are provided at both the pod & container level, the container options override the pod options. Note that this field cannot be set when spec.os.name is windows. Type object Required type Property Type Description localhostProfile string localhostProfile indicates a profile defined in a file on the node should be used. The profile must be preconfigured on the node to work. Must be a descending path, relative to the kubelet's configured seccomp profile location. Must be set if type is "Localhost". Must NOT be set for any other type. type string type indicates which kind of seccomp profile will be applied. Valid options are: Localhost - a profile defined in a file on the node should be used. RuntimeDefault - the container runtime default profile should be used. Unconfined - no profile should be applied. 2.1.216. .spec.initContainers[].securityContext.windowsOptions Description The Windows specific settings applied to all containers. If unspecified, the options from the PodSecurityContext will be used. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. Note that this field cannot be set when spec.os.name is linux. Type object Property Type Description gmsaCredentialSpec string GMSACredentialSpec is where the GMSA admission webhook ( https://github.com/kubernetes-sigs/windows-gmsa ) inlines the contents of the GMSA credential spec named by the GMSACredentialSpecName field. gmsaCredentialSpecName string GMSACredentialSpecName is the name of the GMSA credential spec to use. hostProcess boolean HostProcess determines if a container should be run as a 'Host Process' container. All of a Pod's containers must have the same effective HostProcess value (it is not allowed to have a mix of HostProcess containers and non-HostProcess containers). In addition, if HostProcess is true then HostNetwork must also be set to true. runAsUserName string The UserName in Windows to run the entrypoint of the container process. Defaults to the user specified in image metadata if unspecified. May also be set in PodSecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. 2.1.217. .spec.initContainers[].startupProbe Description StartupProbe indicates that the Pod has successfully initialized. If specified, no other probes are executed until this completes successfully. If this probe fails, the Pod will be restarted, just as if the livenessProbe failed. This can be used to provide different probe parameters at the beginning of a Pod's lifecycle, when it might take a long time to load data or warm a cache, than during steady-state operation. This cannot be updated. 
More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes Type object Property Type Description exec object Exec specifies the action to take. failureThreshold integer Minimum consecutive failures for the probe to be considered failed after having succeeded. Defaults to 3. Minimum value is 1. grpc object GRPC specifies an action involving a GRPC port. httpGet object HTTPGet specifies the http request to perform. initialDelaySeconds integer Number of seconds after the container has started before liveness probes are initiated. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes periodSeconds integer How often (in seconds) to perform the probe. Default to 10 seconds. Minimum value is 1. successThreshold integer Minimum consecutive successes for the probe to be considered successful after having failed. Defaults to 1. Must be 1 for liveness and startup. Minimum value is 1. tcpSocket object TCPSocket specifies an action involving a TCP port. terminationGracePeriodSeconds integer Optional duration in seconds the pod needs to terminate gracefully upon probe failure. The grace period is the duration in seconds after the processes running in the pod are sent a termination signal and the time when the processes are forcibly halted with a kill signal. Set this value longer than the expected cleanup time for your process. If this value is nil, the pod's terminationGracePeriodSeconds will be used. Otherwise, this value overrides the value provided by the pod spec. Value must be non-negative integer. The value zero indicates stop immediately via the kill signal (no opportunity to shut down). This is a beta field and requires enabling ProbeTerminationGracePeriod feature gate. Minimum value is 1. spec.terminationGracePeriodSeconds is used if unset. timeoutSeconds integer Number of seconds after which the probe times out. Defaults to 1 second. Minimum value is 1. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes 2.1.218. .spec.initContainers[].startupProbe.exec Description Exec specifies the action to take. Type object Property Type Description command array (string) Command is the command line to execute inside the container, the working directory for the command is root ('/') in the container's filesystem. The command is simply exec'd, it is not run inside a shell, so traditional shell instructions ('|', etc) won't work. To use a shell, you need to explicitly call out to that shell. Exit status of 0 is treated as live/healthy and non-zero is unhealthy. 2.1.219. .spec.initContainers[].startupProbe.grpc Description GRPC specifies an action involving a GRPC port. Type object Required port Property Type Description port integer Port number of the gRPC service. Number must be in the range 1 to 65535. service string Service is the name of the service to place in the gRPC HealthCheckRequest (see https://github.com/grpc/grpc/blob/master/doc/health-checking.md ). If this is not specified, the default behavior is defined by gRPC. 2.1.220. .spec.initContainers[].startupProbe.httpGet Description HTTPGet specifies the http request to perform. Type object Required port Property Type Description host string Host name to connect to, defaults to the pod IP. You probably want to set "Host" in httpHeaders instead. httpHeaders array Custom headers to set in the request. HTTP allows repeated headers. 
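A gRPC-based startup probe, as documented above, might look like the following sketch. It assumes a cluster where sidecar-style init containers (restartPolicy: Always) are enabled, since probes are only accepted on init containers of that form, and it assumes the container exposes the standard gRPC health-checking service on the placeholder port. With failureThreshold 30 and periodSeconds 10, the container gets roughly 300 seconds to start before other probes take over:

spec:
  initContainers:
  - name: my-grpc-sidecar                     # hypothetical sidecar-style init container
    image: registry.example.com/grpc-app:1.0  # hypothetical image
    restartPolicy: Always                     # required for probes on an init container
    startupProbe:
      grpc:
        port: 9090                            # placeholder gRPC health-checking port
      periodSeconds: 10
      failureThreshold: 30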
httpHeaders[] object HTTPHeader describes a custom header to be used in HTTP probes path string Path to access on the HTTP server. port integer-or-string Name or number of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. scheme string Scheme to use for connecting to the host. Defaults to HTTP. 2.1.221. .spec.initContainers[].startupProbe.httpGet.httpHeaders Description Custom headers to set in the request. HTTP allows repeated headers. Type array 2.1.222. .spec.initContainers[].startupProbe.httpGet.httpHeaders[] Description HTTPHeader describes a custom header to be used in HTTP probes Type object Required name value Property Type Description name string The header field name. This will be canonicalized upon output, so case-variant names will be understood as the same header. value string The header field value 2.1.223. .spec.initContainers[].startupProbe.tcpSocket Description TCPSocket specifies an action involving a TCP port. Type object Required port Property Type Description host string Optional: Host name to connect to, defaults to the pod IP. port integer-or-string Number or name of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. 2.1.224. .spec.initContainers[].volumeDevices Description volumeDevices is the list of block devices to be used by the container. Type array 2.1.225. .spec.initContainers[].volumeDevices[] Description volumeDevice describes a mapping of a raw block device within a container. Type object Required devicePath name Property Type Description devicePath string devicePath is the path inside of the container that the device will be mapped to. name string name must match the name of a persistentVolumeClaim in the pod 2.1.226. .spec.initContainers[].volumeMounts Description Pod volumes to mount into the container's filesystem. Cannot be updated. Type array 2.1.227. .spec.initContainers[].volumeMounts[] Description VolumeMount describes a mounting of a Volume within a container. Type object Required mountPath name Property Type Description mountPath string Path within the container at which the volume should be mounted. Must not contain ':'. mountPropagation string mountPropagation determines how mounts are propagated from the host to container and the other way around. When not set, MountPropagationNone is used. This field is beta in 1.10. name string This must match the Name of a Volume. readOnly boolean Mounted read-only if true, read-write otherwise (false or unspecified). Defaults to false. subPath string Path within the volume from which the container's volume should be mounted. Defaults to "" (volume's root). subPathExpr string Expanded path within the volume from which the container's volume should be mounted. Behaves similarly to SubPath but environment variable references USD(VAR_NAME) are expanded using the container's environment. Defaults to "" (volume's root). SubPathExpr and SubPath are mutually exclusive. 2.1.228. .spec.podMetadata Description PodMetadata configures labels and annotations which are propagated to the Alertmanager pods. The following items are reserved and cannot be overridden: * "alertmanager" label, set to the name of the Alertmanager instance. * "app.kubernetes.io/instance" label, set to the name of the Alertmanager instance. * "app.kubernetes.io/managed-by" label, set to "prometheus-operator". * "app.kubernetes.io/name" label, set to "alertmanager". * "app.kubernetes.io/version" label, set to the Alertmanager version. 
* "kubectl.kubernetes.io/default-container" annotation, set to "alertmanager". Type object Property Type Description annotations object (string) Annotations is an unstructured key value map stored with a resource that may be set by external tools to store and retrieve arbitrary metadata. They are not queryable and should be preserved when modifying objects. More info: http://kubernetes.io/docs/user-guide/annotations labels object (string) Map of string keys and values that can be used to organize and categorize (scope and select) objects. May match selectors of replication controllers and services. More info: http://kubernetes.io/docs/user-guide/labels name string Name must be unique within a namespace. Is required when creating resources, although some resources may allow a client to request the generation of an appropriate name automatically. Name is primarily intended for creation idempotence and configuration definition. Cannot be updated. More info: http://kubernetes.io/docs/user-guide/identifiers#names 2.1.229. .spec.resources Description Define resources requests and limits for single Pods. Type object Property Type Description claims array Claims lists the names of resources, defined in spec.resourceClaims, that are used by this container. This is an alpha field and requires enabling the DynamicResourceAllocation feature gate. This field is immutable. It can only be set for containers. claims[] object ResourceClaim references one entry in PodSpec.ResourceClaims. limits integer-or-string Limits describes the maximum amount of compute resources allowed. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ requests integer-or-string Requests describes the minimum amount of compute resources required. If Requests is omitted for a container, it defaults to Limits if that is explicitly specified, otherwise to an implementation-defined value. Requests cannot exceed Limits. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ 2.1.230. .spec.resources.claims Description Claims lists the names of resources, defined in spec.resourceClaims, that are used by this container. This is an alpha field and requires enabling the DynamicResourceAllocation feature gate. This field is immutable. It can only be set for containers. Type array 2.1.231. .spec.resources.claims[] Description ResourceClaim references one entry in PodSpec.ResourceClaims. Type object Required name Property Type Description name string Name must match the name of one entry in pod.spec.resourceClaims of the Pod where this field is used. It makes that resource available inside a container. 2.1.232. .spec.securityContext Description SecurityContext holds pod-level security attributes and common container settings. This defaults to the default PodSecurityContext. Type object Property Type Description fsGroup integer A special supplemental group that applies to all containers in a pod. Some volume types allow the Kubelet to change the ownership of that volume to be owned by the pod: 1. The owning GID will be the FSGroup 2. The setgid bit is set (new files created in the volume will be owned by FSGroup) 3. The permission bits are OR'd with rw-rw---- If unset, the Kubelet will not modify the ownership and permissions of any volume. Note that this field cannot be set when spec.os.name is windows. fsGroupChangePolicy string fsGroupChangePolicy defines behavior of changing ownership and permission of the volume before being exposed inside Pod. 
This field will only apply to volume types which support fsGroup based ownership(and permissions). It will have no effect on ephemeral volume types such as: secret, configmaps and emptydir. Valid values are "OnRootMismatch" and "Always". If not specified, "Always" is used. Note that this field cannot be set when spec.os.name is windows. runAsGroup integer The GID to run the entrypoint of the container process. Uses runtime default if unset. May also be set in SecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence for that container. Note that this field cannot be set when spec.os.name is windows. runAsNonRoot boolean Indicates that the container must run as a non-root user. If true, the Kubelet will validate the image at runtime to ensure that it does not run as UID 0 (root) and fail to start the container if it does. If unset or false, no such validation will be performed. May also be set in SecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. runAsUser integer The UID to run the entrypoint of the container process. Defaults to user specified in image metadata if unspecified. May also be set in SecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence for that container. Note that this field cannot be set when spec.os.name is windows. seLinuxOptions object The SELinux context to be applied to all containers. If unspecified, the container runtime will allocate a random SELinux context for each container. May also be set in SecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence for that container. Note that this field cannot be set when spec.os.name is windows. seccompProfile object The seccomp options to use by the containers in this pod. Note that this field cannot be set when spec.os.name is windows. supplementalGroups array (integer) A list of groups applied to the first process run in each container, in addition to the container's primary GID, the fsGroup (if specified), and group memberships defined in the container image for the uid of the container process. If unspecified, no additional groups are added to any container. Note that group memberships defined in the container image for the uid of the container process are still effective, even if they are not included in this list. Note that this field cannot be set when spec.os.name is windows. sysctls array Sysctls hold a list of namespaced sysctls used for the pod. Pods with unsupported sysctls (by the container runtime) might fail to launch. Note that this field cannot be set when spec.os.name is windows. sysctls[] object Sysctl defines a kernel parameter to be set windowsOptions object The Windows specific settings applied to all containers. If unspecified, the options within a container's SecurityContext will be used. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. Note that this field cannot be set when spec.os.name is linux. 2.1.233. .spec.securityContext.seLinuxOptions Description The SELinux context to be applied to all containers. If unspecified, the container runtime will allocate a random SELinux context for each container. May also be set in SecurityContext. 
If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence for that container. Note that this field cannot be set when spec.os.name is windows. Type object Property Type Description level string Level is SELinux level label that applies to the container. role string Role is a SELinux role label that applies to the container. type string Type is a SELinux type label that applies to the container. user string User is a SELinux user label that applies to the container. 2.1.234. .spec.securityContext.seccompProfile Description The seccomp options to use by the containers in this pod. Note that this field cannot be set when spec.os.name is windows. Type object Required type Property Type Description localhostProfile string localhostProfile indicates a profile defined in a file on the node should be used. The profile must be preconfigured on the node to work. Must be a descending path, relative to the kubelet's configured seccomp profile location. Must be set if type is "Localhost". Must NOT be set for any other type. type string type indicates which kind of seccomp profile will be applied. Valid options are: Localhost - a profile defined in a file on the node should be used. RuntimeDefault - the container runtime default profile should be used. Unconfined - no profile should be applied. 2.1.235. .spec.securityContext.sysctls Description Sysctls hold a list of namespaced sysctls used for the pod. Pods with unsupported sysctls (by the container runtime) might fail to launch. Note that this field cannot be set when spec.os.name is windows. Type array 2.1.236. .spec.securityContext.sysctls[] Description Sysctl defines a kernel parameter to be set Type object Required name value Property Type Description name string Name of a property to set value string Value of a property to set 2.1.237. .spec.securityContext.windowsOptions Description The Windows specific settings applied to all containers. If unspecified, the options within a container's SecurityContext will be used. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. Note that this field cannot be set when spec.os.name is linux. Type object Property Type Description gmsaCredentialSpec string GMSACredentialSpec is where the GMSA admission webhook ( https://github.com/kubernetes-sigs/windows-gmsa ) inlines the contents of the GMSA credential spec named by the GMSACredentialSpecName field. gmsaCredentialSpecName string GMSACredentialSpecName is the name of the GMSA credential spec to use. hostProcess boolean HostProcess determines if a container should be run as a 'Host Process' container. All of a Pod's containers must have the same effective HostProcess value (it is not allowed to have a mix of HostProcess containers and non-HostProcess containers). In addition, if HostProcess is true then HostNetwork must also be set to true. runAsUserName string The UserName in Windows to run the entrypoint of the container process. Defaults to the user specified in image metadata if unspecified. May also be set in PodSecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. 2.1.238. .spec.storage Description Storage is the definition of how storage will be used by the Alertmanager instances. Type object Property Type Description disableMountSubPath boolean Deprecated: subPath usage will be removed in a future release. 
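Taken together, the podMetadata, resources, and securityContext fields described in the preceding sections are set directly under the Alertmanager spec. The following is a minimal, illustrative sketch only; the instance name, namespace, custom label, annotation, and resource values are assumptions, not documented defaults:

```yaml
apiVersion: monitoring.coreos.com/v1
kind: Alertmanager
metadata:
  name: example            # hypothetical instance name
  namespace: monitoring    # hypothetical namespace
spec:
  replicas: 3
  podMetadata:
    labels:
      team: sre            # custom label; the reserved labels listed above cannot be overridden
    annotations:
      example.com/owner: "observability"   # arbitrary, non-reserved annotation
  resources:
    requests:
      cpu: 100m
      memory: 200Mi
    limits:
      memory: 400Mi
  securityContext:
    runAsNonRoot: true
    runAsUser: 1000
    fsGroup: 2000
    seccompProfile:
      type: RuntimeDefault
```

Because securityContext here is pod-level, it applies to all containers in the generated StatefulSet; as noted in the field descriptions above, a value set in an individual container's own SecurityContext takes precedence for that container.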
emptyDir object EmptyDirVolumeSource to be used by the StatefulSet. If specified, it takes precedence over ephemeral and volumeClaimTemplate . More info: https://kubernetes.io/docs/concepts/storage/volumes/#emptydir ephemeral object EphemeralVolumeSource to be used by the StatefulSet. This is a beta field in k8s 1.21 and GA in 1.15. For lower versions, starting with k8s 1.19, it requires enabling the GenericEphemeralVolume feature gate. More info: https://kubernetes.io/docs/concepts/storage/ephemeral-volumes/#generic-ephemeral-volumes volumeClaimTemplate object Defines the PVC spec to be used by the Prometheus StatefulSets. The easiest way to use a volume that cannot be automatically provisioned is to use a label selector alongside manually created PersistentVolumes. 2.1.239. .spec.storage.emptyDir Description EmptyDirVolumeSource to be used by the StatefulSet. If specified, it takes precedence over ephemeral and volumeClaimTemplate . More info: https://kubernetes.io/docs/concepts/storage/volumes/#emptydir Type object Property Type Description medium string medium represents what type of storage medium should back this directory. The default is "" which means to use the node's default medium. Must be an empty string (default) or Memory. More info: https://kubernetes.io/docs/concepts/storage/volumes#emptydir sizeLimit integer-or-string sizeLimit is the total amount of local storage required for this EmptyDir volume. The size limit is also applicable for memory medium. The maximum usage on memory medium EmptyDir would be the minimum value between the SizeLimit specified here and the sum of memory limits of all containers in a pod. The default is nil which means that the limit is undefined. More info: https://kubernetes.io/docs/concepts/storage/volumes#emptydir 2.1.240. .spec.storage.ephemeral Description EphemeralVolumeSource to be used by the StatefulSet. This is a beta field in k8s 1.21 and GA in 1.15. For lower versions, starting with k8s 1.19, it requires enabling the GenericEphemeralVolume feature gate. More info: https://kubernetes.io/docs/concepts/storage/ephemeral-volumes/#generic-ephemeral-volumes Type object Property Type Description volumeClaimTemplate object Will be used to create a stand-alone PVC to provision the volume. The pod in which this EphemeralVolumeSource is embedded will be the owner of the PVC, i.e. the PVC will be deleted together with the pod. The name of the PVC will be <pod name>-<volume name> where <volume name> is the name from the PodSpec.Volumes array entry. Pod validation will reject the pod if the concatenated name is not valid for a PVC (for example, too long). An existing PVC with that name that is not owned by the pod will not be used for the pod to avoid using an unrelated volume by mistake. Starting the pod is then blocked until the unrelated PVC is removed. If such a pre-created PVC is meant to be used by the pod, the PVC has to updated with an owner reference to the pod once the pod exists. Normally this should not be necessary, but it may be useful when manually reconstructing a broken cluster. This field is read-only and no changes will be made by Kubernetes to the PVC after it has been created. Required, must not be nil. 2.1.241. .spec.storage.ephemeral.volumeClaimTemplate Description Will be used to create a stand-alone PVC to provision the volume. The pod in which this EphemeralVolumeSource is embedded will be the owner of the PVC, i.e. the PVC will be deleted together with the pod. 
The name of the PVC will be <pod name>-<volume name> where <volume name> is the name from the PodSpec.Volumes array entry. Pod validation will reject the pod if the concatenated name is not valid for a PVC (for example, too long). An existing PVC with that name that is not owned by the pod will not be used for the pod to avoid using an unrelated volume by mistake. Starting the pod is then blocked until the unrelated PVC is removed. If such a pre-created PVC is meant to be used by the pod, the PVC has to updated with an owner reference to the pod once the pod exists. Normally this should not be necessary, but it may be useful when manually reconstructing a broken cluster. This field is read-only and no changes will be made by Kubernetes to the PVC after it has been created. Required, must not be nil. Type object Required spec Property Type Description metadata object May contain labels and annotations that will be copied into the PVC when creating it. No other fields are allowed and will be rejected during validation. spec object The specification for the PersistentVolumeClaim. The entire content is copied unchanged into the PVC that gets created from this template. The same fields as in a PersistentVolumeClaim are also valid here. 2.1.242. .spec.storage.ephemeral.volumeClaimTemplate.metadata Description May contain labels and annotations that will be copied into the PVC when creating it. No other fields are allowed and will be rejected during validation. Type object 2.1.243. .spec.storage.ephemeral.volumeClaimTemplate.spec Description The specification for the PersistentVolumeClaim. The entire content is copied unchanged into the PVC that gets created from this template. The same fields as in a PersistentVolumeClaim are also valid here. Type object Property Type Description accessModes array (string) accessModes contains the desired access modes the volume should have. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#access-modes-1 dataSource object dataSource field can be used to specify either: * An existing VolumeSnapshot object (snapshot.storage.k8s.io/VolumeSnapshot) * An existing PVC (PersistentVolumeClaim) If the provisioner or an external controller can support the specified data source, it will create a new volume based on the contents of the specified data source. When the AnyVolumeDataSource feature gate is enabled, dataSource contents will be copied to dataSourceRef, and dataSourceRef contents will be copied to dataSource when dataSourceRef.namespace is not specified. If the namespace is specified, then dataSourceRef will not be copied to dataSource. dataSourceRef object dataSourceRef specifies the object from which to populate the volume with data, if a non-empty volume is desired. This may be any object from a non-empty API group (non core object) or a PersistentVolumeClaim object. When this field is specified, volume binding will only succeed if the type of the specified object matches some installed volume populator or dynamic provisioner. This field will replace the functionality of the dataSource field and as such if both fields are non-empty, they must have the same value. For backwards compatibility, when namespace isn't specified in dataSourceRef, both fields (dataSource and dataSourceRef) will be set to the same value automatically if one of them is empty and the other is non-empty. When namespace is specified in dataSourceRef, dataSource isn't set to the same value and must be empty. 
There are three important differences between dataSource and dataSourceRef: * While dataSource only allows two specific types of objects, dataSourceRef allows any non-core object, as well as PersistentVolumeClaim objects. * While dataSource ignores disallowed values (dropping them), dataSourceRef preserves all values, and generates an error if a disallowed value is specified. * While dataSource only allows local objects, dataSourceRef allows objects in any namespaces. (Beta) Using this field requires the AnyVolumeDataSource feature gate to be enabled. (Alpha) Using the namespace field of dataSourceRef requires the CrossNamespaceVolumeDataSource feature gate to be enabled. resources object resources represents the minimum resources the volume should have. If RecoverVolumeExpansionFailure feature is enabled users are allowed to specify resource requirements that are lower than value but must still be higher than capacity recorded in the status field of the claim. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#resources selector object selector is a label query over volumes to consider for binding. storageClassName string storageClassName is the name of the StorageClass required by the claim. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#class-1 volumeAttributesClassName string volumeAttributesClassName may be used to set the VolumeAttributesClass used by this claim. If specified, the CSI driver will create or update the volume with the attributes defined in the corresponding VolumeAttributesClass. This has a different purpose than storageClassName, it can be changed after the claim is created. An empty string value means that no VolumeAttributesClass will be applied to the claim but it's not allowed to reset this field to empty string once it is set. If unspecified and the PersistentVolumeClaim is unbound, the default VolumeAttributesClass will be set by the persistentvolume controller if it exists. If the resource referred to by volumeAttributesClass does not exist, this PersistentVolumeClaim will be set to a Pending state, as reflected by the modifyVolumeStatus field, until such as a resource exists. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#volumeattributesclass (Alpha) Using this field requires the VolumeAttributesClass feature gate to be enabled. volumeMode string volumeMode defines what type of volume is required by the claim. Value of Filesystem is implied when not included in claim spec. volumeName string volumeName is the binding reference to the PersistentVolume backing this claim. 2.1.244. .spec.storage.ephemeral.volumeClaimTemplate.spec.dataSource Description dataSource field can be used to specify either: * An existing VolumeSnapshot object (snapshot.storage.k8s.io/VolumeSnapshot) * An existing PVC (PersistentVolumeClaim) If the provisioner or an external controller can support the specified data source, it will create a new volume based on the contents of the specified data source. When the AnyVolumeDataSource feature gate is enabled, dataSource contents will be copied to dataSourceRef, and dataSourceRef contents will be copied to dataSource when dataSourceRef.namespace is not specified. If the namespace is specified, then dataSourceRef will not be copied to dataSource. Type object Required kind name Property Type Description apiGroup string APIGroup is the group for the resource being referenced. If APIGroup is not specified, the specified Kind must be in the core API group. 
For any other third-party types, APIGroup is required. kind string Kind is the type of resource being referenced name string Name is the name of resource being referenced 2.1.245. .spec.storage.ephemeral.volumeClaimTemplate.spec.dataSourceRef Description dataSourceRef specifies the object from which to populate the volume with data, if a non-empty volume is desired. This may be any object from a non-empty API group (non core object) or a PersistentVolumeClaim object. When this field is specified, volume binding will only succeed if the type of the specified object matches some installed volume populator or dynamic provisioner. This field will replace the functionality of the dataSource field and as such if both fields are non-empty, they must have the same value. For backwards compatibility, when namespace isn't specified in dataSourceRef, both fields (dataSource and dataSourceRef) will be set to the same value automatically if one of them is empty and the other is non-empty. When namespace is specified in dataSourceRef, dataSource isn't set to the same value and must be empty. There are three important differences between dataSource and dataSourceRef: * While dataSource only allows two specific types of objects, dataSourceRef allows any non-core object, as well as PersistentVolumeClaim objects. * While dataSource ignores disallowed values (dropping them), dataSourceRef preserves all values, and generates an error if a disallowed value is specified. * While dataSource only allows local objects, dataSourceRef allows objects in any namespaces. (Beta) Using this field requires the AnyVolumeDataSource feature gate to be enabled. (Alpha) Using the namespace field of dataSourceRef requires the CrossNamespaceVolumeDataSource feature gate to be enabled. Type object Required kind name Property Type Description apiGroup string APIGroup is the group for the resource being referenced. If APIGroup is not specified, the specified Kind must be in the core API group. For any other third-party types, APIGroup is required. kind string Kind is the type of resource being referenced name string Name is the name of resource being referenced namespace string Namespace is the namespace of resource being referenced Note that when a namespace is specified, a gateway.networking.k8s.io/ReferenceGrant object is required in the referent namespace to allow that namespace's owner to accept the reference. See the ReferenceGrant documentation for details. (Alpha) This field requires the CrossNamespaceVolumeDataSource feature gate to be enabled. 2.1.246. .spec.storage.ephemeral.volumeClaimTemplate.spec.resources Description resources represents the minimum resources the volume should have. If RecoverVolumeExpansionFailure feature is enabled users are allowed to specify resource requirements that are lower than value but must still be higher than capacity recorded in the status field of the claim. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#resources Type object Property Type Description limits integer-or-string Limits describes the maximum amount of compute resources allowed. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ requests integer-or-string Requests describes the minimum amount of compute resources required. If Requests is omitted for a container, it defaults to Limits if that is explicitly specified, otherwise to an implementation-defined value. Requests cannot exceed Limits. 
More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ 2.1.247. .spec.storage.ephemeral.volumeClaimTemplate.spec.selector Description selector is a label query over volumes to consider for binding. Type object Property Type Description matchExpressions array matchExpressions is a list of label selector requirements. The requirements are ANDed. matchExpressions[] object A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. matchLabels object (string) matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed. 2.1.248. .spec.storage.ephemeral.volumeClaimTemplate.spec.selector.matchExpressions Description matchExpressions is a list of label selector requirements. The requirements are ANDed. Type array 2.1.249. .spec.storage.ephemeral.volumeClaimTemplate.spec.selector.matchExpressions[] Description A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Type object Required key operator Property Type Description key string key is the label key that the selector applies to. operator string operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. values array (string) values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch. 2.1.250. .spec.storage.volumeClaimTemplate Description Defines the PVC spec to be used by the Prometheus StatefulSets. The easiest way to use a volume that cannot be automatically provisioned is to use a label selector alongside manually created PersistentVolumes. Type object Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata object EmbeddedMetadata contains metadata relevant to an EmbeddedResource. spec object Defines the desired characteristics of a volume requested by a pod author. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#persistentvolumeclaims status object Deprecated: this field is never set. 2.1.251. .spec.storage.volumeClaimTemplate.metadata Description EmbeddedMetadata contains metadata relevant to an EmbeddedResource. Type object Property Type Description annotations object (string) Annotations is an unstructured key value map stored with a resource that may be set by external tools to store and retrieve arbitrary metadata. They are not queryable and should be preserved when modifying objects. 
More info: http://kubernetes.io/docs/user-guide/annotations labels object (string) Map of string keys and values that can be used to organize and categorize (scope and select) objects. May match selectors of replication controllers and services. More info: http://kubernetes.io/docs/user-guide/labels name string Name must be unique within a namespace. Is required when creating resources, although some resources may allow a client to request the generation of an appropriate name automatically. Name is primarily intended for creation idempotence and configuration definition. Cannot be updated. More info: http://kubernetes.io/docs/user-guide/identifiers#names 2.1.252. .spec.storage.volumeClaimTemplate.spec Description Defines the desired characteristics of a volume requested by a pod author. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#persistentvolumeclaims Type object Property Type Description accessModes array (string) accessModes contains the desired access modes the volume should have. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#access-modes-1 dataSource object dataSource field can be used to specify either: * An existing VolumeSnapshot object (snapshot.storage.k8s.io/VolumeSnapshot) * An existing PVC (PersistentVolumeClaim) If the provisioner or an external controller can support the specified data source, it will create a new volume based on the contents of the specified data source. When the AnyVolumeDataSource feature gate is enabled, dataSource contents will be copied to dataSourceRef, and dataSourceRef contents will be copied to dataSource when dataSourceRef.namespace is not specified. If the namespace is specified, then dataSourceRef will not be copied to dataSource. dataSourceRef object dataSourceRef specifies the object from which to populate the volume with data, if a non-empty volume is desired. This may be any object from a non-empty API group (non core object) or a PersistentVolumeClaim object. When this field is specified, volume binding will only succeed if the type of the specified object matches some installed volume populator or dynamic provisioner. This field will replace the functionality of the dataSource field and as such if both fields are non-empty, they must have the same value. For backwards compatibility, when namespace isn't specified in dataSourceRef, both fields (dataSource and dataSourceRef) will be set to the same value automatically if one of them is empty and the other is non-empty. When namespace is specified in dataSourceRef, dataSource isn't set to the same value and must be empty. There are three important differences between dataSource and dataSourceRef: * While dataSource only allows two specific types of objects, dataSourceRef allows any non-core object, as well as PersistentVolumeClaim objects. * While dataSource ignores disallowed values (dropping them), dataSourceRef preserves all values, and generates an error if a disallowed value is specified. * While dataSource only allows local objects, dataSourceRef allows objects in any namespaces. (Beta) Using this field requires the AnyVolumeDataSource feature gate to be enabled. (Alpha) Using the namespace field of dataSourceRef requires the CrossNamespaceVolumeDataSource feature gate to be enabled. resources object resources represents the minimum resources the volume should have. 
If RecoverVolumeExpansionFailure feature is enabled users are allowed to specify resource requirements that are lower than value but must still be higher than capacity recorded in the status field of the claim. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#resources selector object selector is a label query over volumes to consider for binding. storageClassName string storageClassName is the name of the StorageClass required by the claim. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#class-1 volumeAttributesClassName string volumeAttributesClassName may be used to set the VolumeAttributesClass used by this claim. If specified, the CSI driver will create or update the volume with the attributes defined in the corresponding VolumeAttributesClass. This has a different purpose than storageClassName, it can be changed after the claim is created. An empty string value means that no VolumeAttributesClass will be applied to the claim but it's not allowed to reset this field to empty string once it is set. If unspecified and the PersistentVolumeClaim is unbound, the default VolumeAttributesClass will be set by the persistentvolume controller if it exists. If the resource referred to by volumeAttributesClass does not exist, this PersistentVolumeClaim will be set to a Pending state, as reflected by the modifyVolumeStatus field, until such as a resource exists. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#volumeattributesclass (Alpha) Using this field requires the VolumeAttributesClass feature gate to be enabled. volumeMode string volumeMode defines what type of volume is required by the claim. Value of Filesystem is implied when not included in claim spec. volumeName string volumeName is the binding reference to the PersistentVolume backing this claim. 2.1.253. .spec.storage.volumeClaimTemplate.spec.dataSource Description dataSource field can be used to specify either: * An existing VolumeSnapshot object (snapshot.storage.k8s.io/VolumeSnapshot) * An existing PVC (PersistentVolumeClaim) If the provisioner or an external controller can support the specified data source, it will create a new volume based on the contents of the specified data source. When the AnyVolumeDataSource feature gate is enabled, dataSource contents will be copied to dataSourceRef, and dataSourceRef contents will be copied to dataSource when dataSourceRef.namespace is not specified. If the namespace is specified, then dataSourceRef will not be copied to dataSource. Type object Required kind name Property Type Description apiGroup string APIGroup is the group for the resource being referenced. If APIGroup is not specified, the specified Kind must be in the core API group. For any other third-party types, APIGroup is required. kind string Kind is the type of resource being referenced name string Name is the name of resource being referenced 2.1.254. .spec.storage.volumeClaimTemplate.spec.dataSourceRef Description dataSourceRef specifies the object from which to populate the volume with data, if a non-empty volume is desired. This may be any object from a non-empty API group (non core object) or a PersistentVolumeClaim object. When this field is specified, volume binding will only succeed if the type of the specified object matches some installed volume populator or dynamic provisioner. This field will replace the functionality of the dataSource field and as such if both fields are non-empty, they must have the same value. 
For backwards compatibility, when namespace isn't specified in dataSourceRef, both fields (dataSource and dataSourceRef) will be set to the same value automatically if one of them is empty and the other is non-empty. When namespace is specified in dataSourceRef, dataSource isn't set to the same value and must be empty. There are three important differences between dataSource and dataSourceRef: * While dataSource only allows two specific types of objects, dataSourceRef allows any non-core object, as well as PersistentVolumeClaim objects. * While dataSource ignores disallowed values (dropping them), dataSourceRef preserves all values, and generates an error if a disallowed value is specified. * While dataSource only allows local objects, dataSourceRef allows objects in any namespaces. (Beta) Using this field requires the AnyVolumeDataSource feature gate to be enabled. (Alpha) Using the namespace field of dataSourceRef requires the CrossNamespaceVolumeDataSource feature gate to be enabled. Type object Required kind name Property Type Description apiGroup string APIGroup is the group for the resource being referenced. If APIGroup is not specified, the specified Kind must be in the core API group. For any other third-party types, APIGroup is required. kind string Kind is the type of resource being referenced name string Name is the name of resource being referenced namespace string Namespace is the namespace of resource being referenced Note that when a namespace is specified, a gateway.networking.k8s.io/ReferenceGrant object is required in the referent namespace to allow that namespace's owner to accept the reference. See the ReferenceGrant documentation for details. (Alpha) This field requires the CrossNamespaceVolumeDataSource feature gate to be enabled. 2.1.255. .spec.storage.volumeClaimTemplate.spec.resources Description resources represents the minimum resources the volume should have. If RecoverVolumeExpansionFailure feature is enabled users are allowed to specify resource requirements that are lower than value but must still be higher than capacity recorded in the status field of the claim. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#resources Type object Property Type Description limits integer-or-string Limits describes the maximum amount of compute resources allowed. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ requests integer-or-string Requests describes the minimum amount of compute resources required. If Requests is omitted for a container, it defaults to Limits if that is explicitly specified, otherwise to an implementation-defined value. Requests cannot exceed Limits. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ 2.1.256. .spec.storage.volumeClaimTemplate.spec.selector Description selector is a label query over volumes to consider for binding. Type object Property Type Description matchExpressions array matchExpressions is a list of label selector requirements. The requirements are ANDed. matchExpressions[] object A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. matchLabels object (string) matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed. 2.1.257. 
.spec.storage.volumeClaimTemplate.spec.selector.matchExpressions Description matchExpressions is a list of label selector requirements. The requirements are ANDed. Type array 2.1.258. .spec.storage.volumeClaimTemplate.spec.selector.matchExpressions[] Description A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Type object Required key operator Property Type Description key string key is the label key that the selector applies to. operator string operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. values array (string) values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch. 2.1.259. .spec.storage.volumeClaimTemplate.status Description Deprecated: this field is never set. Type object Property Type Description accessModes array (string) accessModes contains the actual access modes the volume backing the PVC has. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#access-modes-1 allocatedResourceStatuses object (string) allocatedResourceStatuses stores status of resource being resized for the given PVC. Key names follow standard Kubernetes label syntax. Valid values are either: * Un-prefixed keys: - storage - the capacity of the volume. * Custom resources must use implementation-defined prefixed names such as "example.com/my-custom-resource" Apart from above values - keys that are unprefixed or have kubernetes.io prefix are considered reserved and hence may not be used. ClaimResourceStatus can be in any of following states: - ControllerResizeInProgress: State set when resize controller starts resizing the volume in control-plane. - ControllerResizeFailed: State set when resize has failed in resize controller with a terminal error. - NodeResizePending: State set when resize controller has finished resizing the volume but further resizing of volume is needed on the node. - NodeResizeInProgress: State set when kubelet starts resizing the volume. - NodeResizeFailed: State set when resizing has failed in kubelet with a terminal error. Transient errors don't set NodeResizeFailed. For example: if expanding a PVC for more capacity - this field can be one of the following states: - pvc.status.allocatedResourceStatus['storage'] = "ControllerResizeInProgress" - pvc.status.allocatedResourceStatus['storage'] = "ControllerResizeFailed" - pvc.status.allocatedResourceStatus['storage'] = "NodeResizePending" - pvc.status.allocatedResourceStatus['storage'] = "NodeResizeInProgress" - pvc.status.allocatedResourceStatus['storage'] = "NodeResizeFailed" When this field is not set, it means that no resize operation is in progress for the given PVC. A controller that receives PVC update with previously unknown resourceName or ClaimResourceStatus should ignore the update for the purpose it was designed. For example - a controller that only is responsible for resizing capacity of the volume, should ignore PVC updates that change other valid resources associated with PVC. This is an alpha field and requires enabling RecoverVolumeExpansionFailure feature. allocatedResources integer-or-string allocatedResources tracks the resources allocated to a PVC including its capacity. Key names follow standard Kubernetes label syntax. 
Valid values are either: * Un-prefixed keys: - storage - the capacity of the volume. * Custom resources must use implementation-defined prefixed names such as "example.com/my-custom-resource" Apart from above values - keys that are unprefixed or have kubernetes.io prefix are considered reserved and hence may not be used. Capacity reported here may be larger than the actual capacity when a volume expansion operation is requested. For storage quota, the larger value from allocatedResources and PVC.spec.resources is used. If allocatedResources is not set, PVC.spec.resources alone is used for quota calculation. If a volume expansion capacity request is lowered, allocatedResources is only lowered if there are no expansion operations in progress and if the actual volume capacity is equal or lower than the requested capacity. A controller that receives PVC update with previously unknown resourceName should ignore the update for the purpose it was designed. For example - a controller that only is responsible for resizing capacity of the volume, should ignore PVC updates that change other valid resources associated with PVC. This is an alpha field and requires enabling RecoverVolumeExpansionFailure feature. capacity integer-or-string capacity represents the actual resources of the underlying volume. conditions array conditions is the current Condition of persistent volume claim. If underlying persistent volume is being resized then the Condition will be set to 'ResizeStarted'. conditions[] object PersistentVolumeClaimCondition contains details about state of pvc currentVolumeAttributesClassName string currentVolumeAttributesClassName is the current name of the VolumeAttributesClass the PVC is using. When unset, there is no VolumeAttributeClass applied to this PersistentVolumeClaim This is an alpha field and requires enabling VolumeAttributesClass feature. modifyVolumeStatus object ModifyVolumeStatus represents the status object of ControllerModifyVolume operation. When this is unset, there is no ModifyVolume operation being attempted. This is an alpha field and requires enabling VolumeAttributesClass feature. phase string phase represents the current phase of PersistentVolumeClaim. 2.1.260. .spec.storage.volumeClaimTemplate.status.conditions Description conditions is the current Condition of persistent volume claim. If underlying persistent volume is being resized then the Condition will be set to 'ResizeStarted'. Type array 2.1.261. .spec.storage.volumeClaimTemplate.status.conditions[] Description PersistentVolumeClaimCondition contains details about state of pvc Type object Required status type Property Type Description lastProbeTime string lastProbeTime is the time we probed the condition. lastTransitionTime string lastTransitionTime is the time the condition transitioned from one status to another. message string message is the human-readable message indicating details about last transition. reason string reason is a unique, this should be a short, machine understandable string that gives the reason for condition's last transition. If it reports "ResizeStarted" that means the underlying persistent volume is being resized. status string type string PersistentVolumeClaimConditionType is a valid value of PersistentVolumeClaimCondition.Type 2.1.262. .spec.storage.volumeClaimTemplate.status.modifyVolumeStatus Description ModifyVolumeStatus represents the status object of ControllerModifyVolume operation. When this is unset, there is no ModifyVolume operation being attempted. 
This is an alpha field and requires enabling VolumeAttributesClass feature. Type object Required status Property Type Description status string status is the status of the ControllerModifyVolume operation. It can be in any of following states: - Pending Pending indicates that the PersistentVolumeClaim cannot be modified due to unmet requirements, such as the specified VolumeAttributesClass not existing. - InProgress InProgress indicates that the volume is being modified. - Infeasible Infeasible indicates that the request has been rejected as invalid by the CSI driver. To resolve the error, a valid VolumeAttributesClass needs to be specified. Note: New statuses can be added in the future. Consumers should check for unknown statuses and fail appropriately. targetVolumeAttributesClassName string targetVolumeAttributesClassName is the name of the VolumeAttributesClass the PVC currently being reconciled 2.1.263. .spec.tolerations Description If specified, the pod's tolerations. Type array 2.1.264. .spec.tolerations[] Description The pod this Toleration is attached to tolerates any taint that matches the triple <key,value,effect> using the matching operator <operator>. Type object Property Type Description effect string Effect indicates the taint effect to match. Empty means match all taint effects. When specified, allowed values are NoSchedule, PreferNoSchedule and NoExecute. key string Key is the taint key that the toleration applies to. Empty means match all taint keys. If the key is empty, operator must be Exists; this combination means to match all values and all keys. operator string Operator represents a key's relationship to the value. Valid operators are Exists and Equal. Defaults to Equal. Exists is equivalent to wildcard for value, so that a pod can tolerate all taints of a particular category. tolerationSeconds integer TolerationSeconds represents the period of time the toleration (which must be of effect NoExecute, otherwise this field is ignored) tolerates the taint. By default, it is not set, which means tolerate the taint forever (do not evict). Zero and negative values will be treated as 0 (evict immediately) by the system. value string Value is the taint value the toleration matches to. If the operator is Exists, the value should be empty, otherwise just a regular string. 2.1.265. .spec.topologySpreadConstraints Description If specified, the pod's topology spread constraints. Type array 2.1.266. .spec.topologySpreadConstraints[] Description TopologySpreadConstraint specifies how to spread matching pods among the given topology. Type object Required maxSkew topologyKey whenUnsatisfiable Property Type Description labelSelector object LabelSelector is used to find matching pods. Pods that match this label selector are counted to determine the number of pods in their corresponding topology domain. matchLabelKeys array (string) MatchLabelKeys is a set of pod label keys to select the pods over which spreading will be calculated. The keys are used to lookup values from the incoming pod labels, those key-value labels are ANDed with labelSelector to select the group of existing pods over which spreading will be calculated for the incoming pod. The same key is forbidden to exist in both MatchLabelKeys and LabelSelector. MatchLabelKeys cannot be set when LabelSelector isn't set. Keys that don't exist in the incoming pod labels will be ignored. A null or empty list means only match against labelSelector. 
This is a beta field and requires the MatchLabelKeysInPodTopologySpread feature gate to be enabled (enabled by default). maxSkew integer MaxSkew describes the degree to which pods may be unevenly distributed. When whenUnsatisfiable=DoNotSchedule , it is the maximum permitted difference between the number of matching pods in the target topology and the global minimum. The global minimum is the minimum number of matching pods in an eligible domain or zero if the number of eligible domains is less than MinDomains. For example, in a 3-zone cluster, MaxSkew is set to 1, and pods with the same labelSelector spread as 2/2/1: In this case, the global minimum is 1. | zone1 | zone2 | zone3 | | P P | P P | P | - if MaxSkew is 1, incoming pod can only be scheduled to zone3 to become 2/2/2; scheduling it onto zone1(zone2) would make the ActualSkew(3-1) on zone1(zone2) violate MaxSkew(1). - if MaxSkew is 2, incoming pod can be scheduled onto any zone. When whenUnsatisfiable=ScheduleAnyway , it is used to give higher precedence to topologies that satisfy it. It's a required field. Default value is 1 and 0 is not allowed. minDomains integer MinDomains indicates a minimum number of eligible domains. When the number of eligible domains with matching topology keys is less than minDomains, Pod Topology Spread treats "global minimum" as 0, and then the calculation of Skew is performed. And when the number of eligible domains with matching topology keys equals or greater than minDomains, this value has no effect on scheduling. As a result, when the number of eligible domains is less than minDomains, scheduler won't schedule more than maxSkew Pods to those domains. If value is nil, the constraint behaves as if MinDomains is equal to 1. Valid values are integers greater than 0. When value is not nil, WhenUnsatisfiable must be DoNotSchedule. For example, in a 3-zone cluster, MaxSkew is set to 2, MinDomains is set to 5 and pods with the same labelSelector spread as 2/2/2: | zone1 | zone2 | zone3 | | P P | P P | P P | The number of domains is less than 5(MinDomains), so "global minimum" is treated as 0. In this situation, new pod with the same labelSelector cannot be scheduled, because computed skew will be 3(3 - 0) if new Pod is scheduled to any of the three zones, it will violate MaxSkew. This is a beta field and requires the MinDomainsInPodTopologySpread feature gate to be enabled (enabled by default). nodeAffinityPolicy string NodeAffinityPolicy indicates how we will treat Pod's nodeAffinity/nodeSelector when calculating pod topology spread skew. Options are: - Honor: only nodes matching nodeAffinity/nodeSelector are included in the calculations. - Ignore: nodeAffinity/nodeSelector are ignored. All nodes are included in the calculations. If this value is nil, the behavior is equivalent to the Honor policy. This is a beta-level feature default enabled by the NodeInclusionPolicyInPodTopologySpread feature flag. nodeTaintsPolicy string NodeTaintsPolicy indicates how we will treat node taints when calculating pod topology spread skew. Options are: - Honor: nodes without taints, along with tainted nodes for which the incoming pod has a toleration, are included. - Ignore: node taints are ignored. All nodes are included. If this value is nil, the behavior is equivalent to the Ignore policy. This is a beta-level feature default enabled by the NodeInclusionPolicyInPodTopologySpread feature flag. topologyKey string TopologyKey is the key of node labels. 
Nodes that have a label with this key and identical values are considered to be in the same topology. We consider each <key, value> as a "bucket", and try to put balanced number of pods into each bucket. We define a domain as a particular instance of a topology. Also, we define an eligible domain as a domain whose nodes meet the requirements of nodeAffinityPolicy and nodeTaintsPolicy. e.g. If TopologyKey is "kubernetes.io/hostname", each Node is a domain of that topology. And, if TopologyKey is "topology.kubernetes.io/zone", each zone is a domain of that topology. It's a required field. whenUnsatisfiable string WhenUnsatisfiable indicates how to deal with a pod if it doesn't satisfy the spread constraint. - DoNotSchedule (default) tells the scheduler not to schedule it. - ScheduleAnyway tells the scheduler to schedule the pod in any location, but giving higher precedence to topologies that would help reduce the skew. A constraint is considered "Unsatisfiable" for an incoming pod if and only if every possible node assignment for that pod would violate "MaxSkew" on some topology. For example, in a 3-zone cluster, MaxSkew is set to 1, and pods with the same labelSelector spread as 3/1/1: | zone1 | zone2 | zone3 | | P P P | P | P | If WhenUnsatisfiable is set to DoNotSchedule, incoming pod can only be scheduled to zone2(zone3) to become 3/2/1(3/1/2) as ActualSkew(2-1) on zone2(zone3) satisfies MaxSkew(1). In other words, the cluster can still be imbalanced, but scheduler won't make it more imbalanced. It's a required field. 2.1.267. .spec.topologySpreadConstraints[].labelSelector Description LabelSelector is used to find matching pods. Pods that match this label selector are counted to determine the number of pods in their corresponding topology domain. Type object Property Type Description matchExpressions array matchExpressions is a list of label selector requirements. The requirements are ANDed. matchExpressions[] object A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. matchLabels object (string) matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed. 2.1.268. .spec.topologySpreadConstraints[].labelSelector.matchExpressions Description matchExpressions is a list of label selector requirements. The requirements are ANDed. Type array 2.1.269. .spec.topologySpreadConstraints[].labelSelector.matchExpressions[] Description A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Type object Required key operator Property Type Description key string key is the label key that the selector applies to. operator string operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. values array (string) values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch. 2.1.270. .spec.volumeMounts Description VolumeMounts allows configuration of additional VolumeMounts on the output StatefulSet definition. VolumeMounts specified will be appended to other VolumeMounts in the alertmanager container, that are generated as a result of StorageSpec objects. 
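A hedged sketch combining the storage, tolerations, and topologySpreadConstraints fields from the preceding sections on a single Alertmanager resource; the StorageClass name, taint key and value, and storage size are assumptions, not recommendations:

```yaml
apiVersion: monitoring.coreos.com/v1
kind: Alertmanager
metadata:
  name: example
spec:
  storage:
    volumeClaimTemplate:
      spec:
        storageClassName: standard     # assumed StorageClass name
        resources:
          requests:
            storage: 10Gi
  tolerations:
  - key: dedicated                     # hypothetical taint key
    operator: Equal
    value: monitoring
    effect: NoSchedule
  topologySpreadConstraints:
  - maxSkew: 1
    topologyKey: topology.kubernetes.io/zone
    whenUnsatisfiable: DoNotSchedule
    labelSelector:
      matchLabels:
        alertmanager: example          # reserved label set by the operator to the instance name
```

If persistence is not required, storage.emptyDir can be used instead; as described above, emptyDir takes precedence over ephemeral and volumeClaimTemplate when specified.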
Type array 2.1.271. .spec.volumeMounts[] Description VolumeMount describes a mounting of a Volume within a container. Type object Required mountPath name Property Type Description mountPath string Path within the container at which the volume should be mounted. Must not contain ':'. mountPropagation string mountPropagation determines how mounts are propagated from the host to container and the other way around. When not set, MountPropagationNone is used. This field is beta in 1.10. name string This must match the Name of a Volume. readOnly boolean Mounted read-only if true, read-write otherwise (false or unspecified). Defaults to false. subPath string Path within the volume from which the container's volume should be mounted. Defaults to "" (volume's root). subPathExpr string Expanded path within the volume from which the container's volume should be mounted. Behaves similarly to SubPath but environment variable references USD(VAR_NAME) are expanded using the container's environment. Defaults to "" (volume's root). SubPathExpr and SubPath are mutually exclusive. 2.1.272. .spec.volumes Description Volumes allows configuration of additional volumes on the output StatefulSet definition. Volumes specified will be appended to other volumes that are generated as a result of StorageSpec objects. Type array 2.1.273. .spec.volumes[] Description Volume represents a named volume in a pod that may be accessed by any container in the pod. Type object Required name Property Type Description awsElasticBlockStore object awsElasticBlockStore represents an AWS Disk resource that is attached to a kubelet's host machine and then exposed to the pod. More info: https://kubernetes.io/docs/concepts/storage/volumes#awselasticblockstore azureDisk object azureDisk represents an Azure Data Disk mount on the host and bind mount to the pod. azureFile object azureFile represents an Azure File Service mount on the host and bind mount to the pod. cephfs object cephFS represents a Ceph FS mount on the host that shares a pod's lifetime cinder object cinder represents a cinder volume attached and mounted on kubelets host machine. More info: https://examples.k8s.io/mysql-cinder-pd/README.md configMap object configMap represents a configMap that should populate this volume csi object csi (Container Storage Interface) represents ephemeral storage that is handled by certain external CSI drivers (Beta feature). downwardAPI object downwardAPI represents downward API about the pod that should populate this volume emptyDir object emptyDir represents a temporary directory that shares a pod's lifetime. More info: https://kubernetes.io/docs/concepts/storage/volumes#emptydir ephemeral object ephemeral represents a volume that is handled by a cluster storage driver. The volume's lifecycle is tied to the pod that defines it - it will be created before the pod starts, and deleted when the pod is removed. Use this if: a) the volume is only needed while the pod runs, b) features of normal volumes like restoring from snapshot or capacity tracking are needed, c) the storage driver is specified through a storage class, and d) the storage driver supports dynamic volume provisioning through a PersistentVolumeClaim (see EphemeralVolumeSource for more information on the connection between this volume type and PersistentVolumeClaim). Use PersistentVolumeClaim or one of the vendor-specific APIs for volumes that persist for longer than the lifecycle of an individual pod. 
Use CSI for light-weight local ephemeral volumes if the CSI driver is meant to be used that way - see the documentation of the driver for more information. A pod can use both types of ephemeral volumes and persistent volumes at the same time. fc object fc represents a Fibre Channel resource that is attached to a kubelet's host machine and then exposed to the pod. flexVolume object flexVolume represents a generic volume resource that is provisioned/attached using an exec based plugin. flocker object flocker represents a Flocker volume attached to a kubelet's host machine. This depends on the Flocker control service being running gcePersistentDisk object gcePersistentDisk represents a GCE Disk resource that is attached to a kubelet's host machine and then exposed to the pod. More info: https://kubernetes.io/docs/concepts/storage/volumes#gcepersistentdisk gitRepo object gitRepo represents a git repository at a particular revision. DEPRECATED: GitRepo is deprecated. To provision a container with a git repo, mount an EmptyDir into an InitContainer that clones the repo using git, then mount the EmptyDir into the Pod's container. glusterfs object glusterfs represents a Glusterfs mount on the host that shares a pod's lifetime. More info: https://examples.k8s.io/volumes/glusterfs/README.md hostPath object hostPath represents a pre-existing file or directory on the host machine that is directly exposed to the container. This is generally used for system agents or other privileged things that are allowed to see the host machine. Most containers will NOT need this. More info: https://kubernetes.io/docs/concepts/storage/volumes#hostpath --- TODO(jonesdl) We need to restrict who can use host directory mounts and who can/can not mount host directories as read/write. iscsi object iscsi represents an ISCSI Disk resource that is attached to a kubelet's host machine and then exposed to the pod. More info: https://examples.k8s.io/volumes/iscsi/README.md name string name of the volume. Must be a DNS_LABEL and unique within the pod. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names nfs object nfs represents an NFS mount on the host that shares a pod's lifetime More info: https://kubernetes.io/docs/concepts/storage/volumes#nfs persistentVolumeClaim object persistentVolumeClaimVolumeSource represents a reference to a PersistentVolumeClaim in the same namespace. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#persistentvolumeclaims photonPersistentDisk object photonPersistentDisk represents a PhotonController persistent disk attached and mounted on kubelets host machine portworxVolume object portworxVolume represents a portworx volume attached and mounted on kubelets host machine projected object projected items for all in one resources secrets, configmaps, and downward API quobyte object quobyte represents a Quobyte mount on the host that shares a pod's lifetime rbd object rbd represents a Rados Block Device mount on the host that shares a pod's lifetime. More info: https://examples.k8s.io/volumes/rbd/README.md scaleIO object scaleIO represents a ScaleIO persistent volume attached and mounted on Kubernetes nodes. secret object secret represents a secret that should populate this volume. More info: https://kubernetes.io/docs/concepts/storage/volumes#secret storageos object storageOS represents a StorageOS volume attached and mounted on Kubernetes nodes. 
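The volumes and volumeMounts fields are typically used together: an entry in spec.volumes defines the volume source, and a matching entry in spec.volumeMounts makes it visible inside the alertmanager container. A minimal sketch; the volume name, ConfigMap name, and mount path are hypothetical:

```yaml
apiVersion: monitoring.coreos.com/v1
kind: Alertmanager
metadata:
  name: example
spec:
  volumes:
  - name: notification-templates        # hypothetical volume name
    configMap:
      name: alertmanager-templates      # hypothetical ConfigMap holding template files
  volumeMounts:
  - name: notification-templates        # must match the volume name above
    mountPath: /etc/alertmanager/templates
    readOnly: true
```

As stated in the field descriptions, both lists are appended to the volumes and volume mounts that the operator already generates for the StatefulSet, so names must not collide with generated entries.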
vsphereVolume object vsphereVolume represents a vSphere volume attached and mounted on kubelets host machine 2.1.274. .spec.volumes[].awsElasticBlockStore Description awsElasticBlockStore represents an AWS Disk resource that is attached to a kubelet's host machine and then exposed to the pod. More info: https://kubernetes.io/docs/concepts/storage/volumes#awselasticblockstore Type object Required volumeID Property Type Description fsType string fsType is the filesystem type of the volume that you want to mount. Tip: Ensure that the filesystem type is supported by the host operating system. Examples: "ext4", "xfs", "ntfs". Implicitly inferred to be "ext4" if unspecified. More info: https://kubernetes.io/docs/concepts/storage/volumes#awselasticblockstore TODO: how do we prevent errors in the filesystem from compromising the machine partition integer partition is the partition in the volume that you want to mount. If omitted, the default is to mount by volume name. Examples: For volume /dev/sda1, you specify the partition as "1". Similarly, the volume partition for /dev/sda is "0" (or you can leave the property empty). readOnly boolean readOnly value true will force the readOnly setting in VolumeMounts. More info: https://kubernetes.io/docs/concepts/storage/volumes#awselasticblockstore volumeID string volumeID is unique ID of the persistent disk resource in AWS (Amazon EBS volume). More info: https://kubernetes.io/docs/concepts/storage/volumes#awselasticblockstore 2.1.275. .spec.volumes[].azureDisk Description azureDisk represents an Azure Data Disk mount on the host and bind mount to the pod. Type object Required diskName diskURI Property Type Description cachingMode string cachingMode is the Host Caching mode: None, Read Only, Read Write. diskName string diskName is the Name of the data disk in the blob storage diskURI string diskURI is the URI of data disk in the blob storage fsType string fsType is Filesystem type to mount. Must be a filesystem type supported by the host operating system. Ex. "ext4", "xfs", "ntfs". Implicitly inferred to be "ext4" if unspecified. kind string kind expected values are Shared: multiple blob disks per storage account Dedicated: single blob disk per storage account Managed: azure managed data disk (only in managed availability set). defaults to shared readOnly boolean readOnly Defaults to false (read/write). ReadOnly here will force the ReadOnly setting in VolumeMounts. 2.1.276. .spec.volumes[].azureFile Description azureFile represents an Azure File Service mount on the host and bind mount to the pod. Type object Required secretName shareName Property Type Description readOnly boolean readOnly defaults to false (read/write). ReadOnly here will force the ReadOnly setting in VolumeMounts. secretName string secretName is the name of secret that contains Azure Storage Account Name and Key shareName string shareName is the azure share Name 2.1.277. .spec.volumes[].cephfs Description cephFS represents a Ceph FS mount on the host that shares a pod's lifetime Type object Required monitors Property Type Description monitors array (string) monitors is Required: Monitors is a collection of Ceph monitors More info: https://examples.k8s.io/volumes/cephfs/README.md#how-to-use-it path string path is Optional: Used as the mounted root, rather than the full Ceph tree, default is / readOnly boolean readOnly is Optional: Defaults to false (read/write). ReadOnly here will force the ReadOnly setting in VolumeMounts. 
More info: https://examples.k8s.io/volumes/cephfs/README.md#how-to-use-it secretFile string secretFile is Optional: SecretFile is the path to key ring for User, default is /etc/ceph/user.secret More info: https://examples.k8s.io/volumes/cephfs/README.md#how-to-use-it secretRef object secretRef is Optional: SecretRef is reference to the authentication secret for User, default is empty. More info: https://examples.k8s.io/volumes/cephfs/README.md#how-to-use-it user string user is optional: User is the rados user name, default is admin More info: https://examples.k8s.io/volumes/cephfs/README.md#how-to-use-it 2.1.278. .spec.volumes[].cephfs.secretRef Description secretRef is Optional: SecretRef is reference to the authentication secret for User, default is empty. More info: https://examples.k8s.io/volumes/cephfs/README.md#how-to-use-it Type object Property Type Description name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? 2.1.279. .spec.volumes[].cinder Description cinder represents a cinder volume attached and mounted on kubelets host machine. More info: https://examples.k8s.io/mysql-cinder-pd/README.md Type object Required volumeID Property Type Description fsType string fsType is the filesystem type to mount. Must be a filesystem type supported by the host operating system. Examples: "ext4", "xfs", "ntfs". Implicitly inferred to be "ext4" if unspecified. More info: https://examples.k8s.io/mysql-cinder-pd/README.md readOnly boolean readOnly defaults to false (read/write). ReadOnly here will force the ReadOnly setting in VolumeMounts. More info: https://examples.k8s.io/mysql-cinder-pd/README.md secretRef object secretRef is optional: points to a secret object containing parameters used to connect to OpenStack. volumeID string volumeID used to identify the volume in cinder. More info: https://examples.k8s.io/mysql-cinder-pd/README.md 2.1.280. .spec.volumes[].cinder.secretRef Description secretRef is optional: points to a secret object containing parameters used to connect to OpenStack. Type object Property Type Description name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? 2.1.281. .spec.volumes[].configMap Description configMap represents a configMap that should populate this volume Type object Property Type Description defaultMode integer defaultMode is optional: mode bits used to set permissions on created files by default. Must be an octal value between 0000 and 0777 or a decimal value between 0 and 511. YAML accepts both octal and decimal values, JSON requires decimal values for mode bits. Defaults to 0644. Directories within the path are not affected by this setting. This might be in conflict with other options that affect the file mode, like fsGroup, and the result can be other mode bits set. items array items if unspecified, each key-value pair in the Data field of the referenced ConfigMap will be projected into the volume as a file whose name is the key and content is the value. If specified, the listed keys will be projected into the specified paths, and unlisted keys will not be present. If a key is specified which is not present in the ConfigMap, the volume setup will error unless it is marked optional. Paths must be relative and may not contain the '..' path or start with '..'. 
items[] object Maps a string key to a path within a volume. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean optional specify whether the ConfigMap or its keys must be defined 2.1.282. .spec.volumes[].configMap.items Description items if unspecified, each key-value pair in the Data field of the referenced ConfigMap will be projected into the volume as a file whose name is the key and content is the value. If specified, the listed keys will be projected into the specified paths, and unlisted keys will not be present. If a key is specified which is not present in the ConfigMap, the volume setup will error unless it is marked optional. Paths must be relative and may not contain the '..' path or start with '..'. Type array 2.1.283. .spec.volumes[].configMap.items[] Description Maps a string key to a path within a volume. Type object Required key path Property Type Description key string key is the key to project. mode integer mode is Optional: mode bits used to set permissions on this file. Must be an octal value between 0000 and 0777 or a decimal value between 0 and 511. YAML accepts both octal and decimal values, JSON requires decimal values for mode bits. If not specified, the volume defaultMode will be used. This might be in conflict with other options that affect the file mode, like fsGroup, and the result can be other mode bits set. path string path is the relative path of the file to map the key to. May not be an absolute path. May not contain the path element '..'. May not start with the string '..'. 2.1.284. .spec.volumes[].csi Description csi (Container Storage Interface) represents ephemeral storage that is handled by certain external CSI drivers (Beta feature). Type object Required driver Property Type Description driver string driver is the name of the CSI driver that handles this volume. Consult with your admin for the correct name as registered in the cluster. fsType string fsType to mount. Ex. "ext4", "xfs", "ntfs". If not provided, the empty value is passed to the associated CSI driver which will determine the default filesystem to apply. nodePublishSecretRef object nodePublishSecretRef is a reference to the secret object containing sensitive information to pass to the CSI driver to complete the CSI NodePublishVolume and NodeUnpublishVolume calls. This field is optional, and may be empty if no secret is required. If the secret object contains more than one secret, all secret references are passed. readOnly boolean readOnly specifies a read-only configuration for the volume. Defaults to false (read/write). volumeAttributes object (string) volumeAttributes stores driver-specific properties that are passed to the CSI driver. Consult your driver's documentation for supported values. 2.1.285. .spec.volumes[].csi.nodePublishSecretRef Description nodePublishSecretRef is a reference to the secret object containing sensitive information to pass to the CSI driver to complete the CSI NodePublishVolume and NodeUnpublishVolume calls. This field is optional, and may be empty if no secret is required. If the secret object contains more than one secret, all secret references are passed. Type object Property Type Description name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? 2.1.286. 
.spec.volumes[].downwardAPI Description downwardAPI represents downward API about the pod that should populate this volume Type object Property Type Description defaultMode integer Optional: mode bits used to set permissions on created files by default. Must be an octal value between 0000 and 0777 or a decimal value between 0 and 511. YAML accepts both octal and decimal values, JSON requires decimal values for mode bits. Defaults to 0644. Directories within the path are not affected by this setting. This might be in conflict with other options that affect the file mode, like fsGroup, and the result can be other mode bits set. items array Items is a list of downward API volume file items[] object DownwardAPIVolumeFile represents information to create the file containing the pod field 2.1.287. .spec.volumes[].downwardAPI.items Description Items is a list of downward API volume file Type array 2.1.288. .spec.volumes[].downwardAPI.items[] Description DownwardAPIVolumeFile represents information to create the file containing the pod field Type object Required path Property Type Description fieldRef object Required: Selects a field of the pod: only annotations, labels, name and namespace are supported. mode integer Optional: mode bits used to set permissions on this file, must be an octal value between 0000 and 0777 or a decimal value between 0 and 511. YAML accepts both octal and decimal values, JSON requires decimal values for mode bits. If not specified, the volume defaultMode will be used. This might be in conflict with other options that affect the file mode, like fsGroup, and the result can be other mode bits set. path string Required: Path is the relative path name of the file to be created. Must not be absolute or contain the '..' path. Must be utf-8 encoded. The first item of the relative path must not start with '..' resourceFieldRef object Selects a resource of the container: only resources limits and requests (limits.cpu, limits.memory, requests.cpu and requests.memory) are currently supported. 2.1.289. .spec.volumes[].downwardAPI.items[].fieldRef Description Required: Selects a field of the pod: only annotations, labels, name and namespace are supported. Type object Required fieldPath Property Type Description apiVersion string Version of the schema the FieldPath is written in terms of, defaults to "v1". fieldPath string Path of the field to select in the specified API version. 2.1.290. .spec.volumes[].downwardAPI.items[].resourceFieldRef Description Selects a resource of the container: only resources limits and requests (limits.cpu, limits.memory, requests.cpu and requests.memory) are currently supported. Type object Required resource Property Type Description containerName string Container name: required for volumes, optional for env vars divisor integer-or-string Specifies the output format of the exposed resources, defaults to "1" resource string Required: resource to select 2.1.291. .spec.volumes[].emptyDir Description emptyDir represents a temporary directory that shares a pod's lifetime. More info: https://kubernetes.io/docs/concepts/storage/volumes#emptydir Type object Property Type Description medium string medium represents what type of storage medium should back this directory. The default is "" which means to use the node's default medium. Must be an empty string (default) or Memory.
More info: https://kubernetes.io/docs/concepts/storage/volumes#emptydir sizeLimit integer-or-string sizeLimit is the total amount of local storage required for this EmptyDir volume. The size limit is also applicable for memory medium. The maximum usage on memory medium EmptyDir would be the minimum value between the SizeLimit specified here and the sum of memory limits of all containers in a pod. The default is nil which means that the limit is undefined. More info: https://kubernetes.io/docs/concepts/storage/volumes#emptydir 2.1.292. .spec.volumes[].ephemeral Description ephemeral represents a volume that is handled by a cluster storage driver. The volume's lifecycle is tied to the pod that defines it - it will be created before the pod starts, and deleted when the pod is removed. Use this if: a) the volume is only needed while the pod runs, b) features of normal volumes like restoring from snapshot or capacity tracking are needed, c) the storage driver is specified through a storage class, and d) the storage driver supports dynamic volume provisioning through a PersistentVolumeClaim (see EphemeralVolumeSource for more information on the connection between this volume type and PersistentVolumeClaim). Use PersistentVolumeClaim or one of the vendor-specific APIs for volumes that persist for longer than the lifecycle of an individual pod. Use CSI for light-weight local ephemeral volumes if the CSI driver is meant to be used that way - see the documentation of the driver for more information. A pod can use both types of ephemeral volumes and persistent volumes at the same time. Type object Property Type Description volumeClaimTemplate object Will be used to create a stand-alone PVC to provision the volume. The pod in which this EphemeralVolumeSource is embedded will be the owner of the PVC, i.e. the PVC will be deleted together with the pod. The name of the PVC will be <pod name>-<volume name> where <volume name> is the name from the PodSpec.Volumes array entry. Pod validation will reject the pod if the concatenated name is not valid for a PVC (for example, too long). An existing PVC with that name that is not owned by the pod will not be used for the pod to avoid using an unrelated volume by mistake. Starting the pod is then blocked until the unrelated PVC is removed. If such a pre-created PVC is meant to be used by the pod, the PVC has to be updated with an owner reference to the pod once the pod exists. Normally this should not be necessary, but it may be useful when manually reconstructing a broken cluster. This field is read-only and no changes will be made by Kubernetes to the PVC after it has been created. Required, must not be nil. 2.1.293. .spec.volumes[].ephemeral.volumeClaimTemplate Description Will be used to create a stand-alone PVC to provision the volume. The pod in which this EphemeralVolumeSource is embedded will be the owner of the PVC, i.e. the PVC will be deleted together with the pod. The name of the PVC will be <pod name>-<volume name> where <volume name> is the name from the PodSpec.Volumes array entry. Pod validation will reject the pod if the concatenated name is not valid for a PVC (for example, too long). An existing PVC with that name that is not owned by the pod will not be used for the pod to avoid using an unrelated volume by mistake. Starting the pod is then blocked until the unrelated PVC is removed. If such a pre-created PVC is meant to be used by the pod, the PVC has to be updated with an owner reference to the pod once the pod exists.
Normally this should not be necessary, but it may be useful when manually reconstructing a broken cluster. This field is read-only and no changes will be made by Kubernetes to the PVC after it has been created. Required, must not be nil. Type object Required spec Property Type Description metadata object May contain labels and annotations that will be copied into the PVC when creating it. No other fields are allowed and will be rejected during validation. spec object The specification for the PersistentVolumeClaim. The entire content is copied unchanged into the PVC that gets created from this template. The same fields as in a PersistentVolumeClaim are also valid here. 2.1.294. .spec.volumes[].ephemeral.volumeClaimTemplate.metadata Description May contain labels and annotations that will be copied into the PVC when creating it. No other fields are allowed and will be rejected during validation. Type object 2.1.295. .spec.volumes[].ephemeral.volumeClaimTemplate.spec Description The specification for the PersistentVolumeClaim. The entire content is copied unchanged into the PVC that gets created from this template. The same fields as in a PersistentVolumeClaim are also valid here. Type object Property Type Description accessModes array (string) accessModes contains the desired access modes the volume should have. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#access-modes-1 dataSource object dataSource field can be used to specify either: * An existing VolumeSnapshot object (snapshot.storage.k8s.io/VolumeSnapshot) * An existing PVC (PersistentVolumeClaim) If the provisioner or an external controller can support the specified data source, it will create a new volume based on the contents of the specified data source. When the AnyVolumeDataSource feature gate is enabled, dataSource contents will be copied to dataSourceRef, and dataSourceRef contents will be copied to dataSource when dataSourceRef.namespace is not specified. If the namespace is specified, then dataSourceRef will not be copied to dataSource. dataSourceRef object dataSourceRef specifies the object from which to populate the volume with data, if a non-empty volume is desired. This may be any object from a non-empty API group (non core object) or a PersistentVolumeClaim object. When this field is specified, volume binding will only succeed if the type of the specified object matches some installed volume populator or dynamic provisioner. This field will replace the functionality of the dataSource field and as such if both fields are non-empty, they must have the same value. For backwards compatibility, when namespace isn't specified in dataSourceRef, both fields (dataSource and dataSourceRef) will be set to the same value automatically if one of them is empty and the other is non-empty. When namespace is specified in dataSourceRef, dataSource isn't set to the same value and must be empty. There are three important differences between dataSource and dataSourceRef: * While dataSource only allows two specific types of objects, dataSourceRef allows any non-core object, as well as PersistentVolumeClaim objects. * While dataSource ignores disallowed values (dropping them), dataSourceRef preserves all values, and generates an error if a disallowed value is specified. * While dataSource only allows local objects, dataSourceRef allows objects in any namespaces. (Beta) Using this field requires the AnyVolumeDataSource feature gate to be enabled. 
(Alpha) Using the namespace field of dataSourceRef requires the CrossNamespaceVolumeDataSource feature gate to be enabled. resources object resources represents the minimum resources the volume should have. If the RecoverVolumeExpansionFailure feature is enabled, users are allowed to specify resource requirements that are lower than the previous value but must still be higher than the capacity recorded in the status field of the claim. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#resources selector object selector is a label query over volumes to consider for binding. storageClassName string storageClassName is the name of the StorageClass required by the claim. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#class-1 volumeAttributesClassName string volumeAttributesClassName may be used to set the VolumeAttributesClass used by this claim. If specified, the CSI driver will create or update the volume with the attributes defined in the corresponding VolumeAttributesClass. This has a different purpose than storageClassName; it can be changed after the claim is created. An empty string value means that no VolumeAttributesClass will be applied to the claim but it's not allowed to reset this field to empty string once it is set. If unspecified and the PersistentVolumeClaim is unbound, the default VolumeAttributesClass will be set by the persistentvolume controller if it exists. If the resource referred to by volumeAttributesClass does not exist, this PersistentVolumeClaim will be set to a Pending state, as reflected by the modifyVolumeStatus field, until such a resource exists. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#volumeattributesclass (Alpha) Using this field requires the VolumeAttributesClass feature gate to be enabled. volumeMode string volumeMode defines what type of volume is required by the claim. Value of Filesystem is implied when not included in claim spec. volumeName string volumeName is the binding reference to the PersistentVolume backing this claim. 2.1.296. .spec.volumes[].ephemeral.volumeClaimTemplate.spec.dataSource Description dataSource field can be used to specify either: * An existing VolumeSnapshot object (snapshot.storage.k8s.io/VolumeSnapshot) * An existing PVC (PersistentVolumeClaim) If the provisioner or an external controller can support the specified data source, it will create a new volume based on the contents of the specified data source. When the AnyVolumeDataSource feature gate is enabled, dataSource contents will be copied to dataSourceRef, and dataSourceRef contents will be copied to dataSource when dataSourceRef.namespace is not specified. If the namespace is specified, then dataSourceRef will not be copied to dataSource. Type object Required kind name Property Type Description apiGroup string APIGroup is the group for the resource being referenced. If APIGroup is not specified, the specified Kind must be in the core API group. For any other third-party types, APIGroup is required. kind string Kind is the type of resource being referenced name string Name is the name of resource being referenced 2.1.297. .spec.volumes[].ephemeral.volumeClaimTemplate.spec.dataSourceRef Description dataSourceRef specifies the object from which to populate the volume with data, if a non-empty volume is desired. This may be any object from a non-empty API group (non core object) or a PersistentVolumeClaim object.
When this field is specified, volume binding will only succeed if the type of the specified object matches some installed volume populator or dynamic provisioner. This field will replace the functionality of the dataSource field and as such if both fields are non-empty, they must have the same value. For backwards compatibility, when namespace isn't specified in dataSourceRef, both fields (dataSource and dataSourceRef) will be set to the same value automatically if one of them is empty and the other is non-empty. When namespace is specified in dataSourceRef, dataSource isn't set to the same value and must be empty. There are three important differences between dataSource and dataSourceRef: * While dataSource only allows two specific types of objects, dataSourceRef allows any non-core object, as well as PersistentVolumeClaim objects. * While dataSource ignores disallowed values (dropping them), dataSourceRef preserves all values, and generates an error if a disallowed value is specified. * While dataSource only allows local objects, dataSourceRef allows objects in any namespaces. (Beta) Using this field requires the AnyVolumeDataSource feature gate to be enabled. (Alpha) Using the namespace field of dataSourceRef requires the CrossNamespaceVolumeDataSource feature gate to be enabled. Type object Required kind name Property Type Description apiGroup string APIGroup is the group for the resource being referenced. If APIGroup is not specified, the specified Kind must be in the core API group. For any other third-party types, APIGroup is required. kind string Kind is the type of resource being referenced name string Name is the name of resource being referenced namespace string Namespace is the namespace of resource being referenced. Note that when a namespace is specified, a gateway.networking.k8s.io/ReferenceGrant object is required in the referent namespace to allow that namespace's owner to accept the reference. See the ReferenceGrant documentation for details. (Alpha) This field requires the CrossNamespaceVolumeDataSource feature gate to be enabled. 2.1.298. .spec.volumes[].ephemeral.volumeClaimTemplate.spec.resources Description resources represents the minimum resources the volume should have. If the RecoverVolumeExpansionFailure feature is enabled, users are allowed to specify resource requirements that are lower than the previous value but must still be higher than the capacity recorded in the status field of the claim. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#resources Type object Property Type Description limits integer-or-string Limits describes the maximum amount of compute resources allowed. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ requests integer-or-string Requests describes the minimum amount of compute resources required. If Requests is omitted for a container, it defaults to Limits if that is explicitly specified, otherwise to an implementation-defined value. Requests cannot exceed Limits. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ 2.1.299. .spec.volumes[].ephemeral.volumeClaimTemplate.spec.selector Description selector is a label query over volumes to consider for binding. Type object Property Type Description matchExpressions array matchExpressions is a list of label selector requirements. The requirements are ANDed. matchExpressions[] object A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values.
matchLabels object (string) matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed. 2.1.300. .spec.volumes[].ephemeral.volumeClaimTemplate.spec.selector.matchExpressions Description matchExpressions is a list of label selector requirements. The requirements are ANDed. Type array 2.1.301. .spec.volumes[].ephemeral.volumeClaimTemplate.spec.selector.matchExpressions[] Description A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Type object Required key operator Property Type Description key string key is the label key that the selector applies to. operator string operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. values array (string) values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch. 2.1.302. .spec.volumes[].fc Description fc represents a Fibre Channel resource that is attached to a kubelet's host machine and then exposed to the pod. Type object Property Type Description fsType string fsType is the filesystem type to mount. Must be a filesystem type supported by the host operating system. Ex. "ext4", "xfs", "ntfs". Implicitly inferred to be "ext4" if unspecified. TODO: how do we prevent errors in the filesystem from compromising the machine lun integer lun is Optional: FC target lun number readOnly boolean readOnly is Optional: Defaults to false (read/write). ReadOnly here will force the ReadOnly setting in VolumeMounts. targetWWNs array (string) targetWWNs is Optional: FC target worldwide names (WWNs) wwids array (string) wwids Optional: FC volume world wide identifiers (wwids) Either wwids or combination of targetWWNs and lun must be set, but not both simultaneously. 2.1.303. .spec.volumes[].flexVolume Description flexVolume represents a generic volume resource that is provisioned/attached using an exec based plugin. Type object Required driver Property Type Description driver string driver is the name of the driver to use for this volume. fsType string fsType is the filesystem type to mount. Must be a filesystem type supported by the host operating system. Ex. "ext4", "xfs", "ntfs". The default filesystem depends on FlexVolume script. options object (string) options is Optional: this field holds extra command options if any. readOnly boolean readOnly is Optional: defaults to false (read/write). ReadOnly here will force the ReadOnly setting in VolumeMounts. secretRef object secretRef is Optional: secretRef is reference to the secret object containing sensitive information to pass to the plugin scripts. This may be empty if no secret object is specified. If the secret object contains more than one secret, all secrets are passed to the plugin scripts. 2.1.304. .spec.volumes[].flexVolume.secretRef Description secretRef is Optional: secretRef is reference to the secret object containing sensitive information to pass to the plugin scripts. This may be empty if no secret object is specified. If the secret object contains more than one secret, all secrets are passed to the plugin scripts. Type object Property Type Description name string Name of the referent. 
More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? 2.1.305. .spec.volumes[].flocker Description flocker represents a Flocker volume attached to a kubelet's host machine. This depends on the Flocker control service being running Type object Property Type Description datasetName string datasetName is Name of the dataset stored as metadata name on the dataset for Flocker should be considered as deprecated datasetUUID string datasetUUID is the UUID of the dataset. This is unique identifier of a Flocker dataset 2.1.306. .spec.volumes[].gcePersistentDisk Description gcePersistentDisk represents a GCE Disk resource that is attached to a kubelet's host machine and then exposed to the pod. More info: https://kubernetes.io/docs/concepts/storage/volumes#gcepersistentdisk Type object Required pdName Property Type Description fsType string fsType is filesystem type of the volume that you want to mount. Tip: Ensure that the filesystem type is supported by the host operating system. Examples: "ext4", "xfs", "ntfs". Implicitly inferred to be "ext4" if unspecified. More info: https://kubernetes.io/docs/concepts/storage/volumes#gcepersistentdisk TODO: how do we prevent errors in the filesystem from compromising the machine partition integer partition is the partition in the volume that you want to mount. If omitted, the default is to mount by volume name. Examples: For volume /dev/sda1, you specify the partition as "1". Similarly, the volume partition for /dev/sda is "0" (or you can leave the property empty). More info: https://kubernetes.io/docs/concepts/storage/volumes#gcepersistentdisk pdName string pdName is unique name of the PD resource in GCE. Used to identify the disk in GCE. More info: https://kubernetes.io/docs/concepts/storage/volumes#gcepersistentdisk readOnly boolean readOnly here will force the ReadOnly setting in VolumeMounts. Defaults to false. More info: https://kubernetes.io/docs/concepts/storage/volumes#gcepersistentdisk 2.1.307. .spec.volumes[].gitRepo Description gitRepo represents a git repository at a particular revision. DEPRECATED: GitRepo is deprecated. To provision a container with a git repo, mount an EmptyDir into an InitContainer that clones the repo using git, then mount the EmptyDir into the Pod's container. Type object Required repository Property Type Description directory string directory is the target directory name. Must not contain or start with '..'. If '.' is supplied, the volume directory will be the git repository. Otherwise, if specified, the volume will contain the git repository in the subdirectory with the given name. repository string repository is the URL revision string revision is the commit hash for the specified revision. 2.1.308. .spec.volumes[].glusterfs Description glusterfs represents a Glusterfs mount on the host that shares a pod's lifetime. More info: https://examples.k8s.io/volumes/glusterfs/README.md Type object Required endpoints path Property Type Description endpoints string endpoints is the endpoint name that details Glusterfs topology. More info: https://examples.k8s.io/volumes/glusterfs/README.md#create-a-pod path string path is the Glusterfs volume path. More info: https://examples.k8s.io/volumes/glusterfs/README.md#create-a-pod readOnly boolean readOnly here will force the Glusterfs volume to be mounted with read-only permissions. Defaults to false. More info: https://examples.k8s.io/volumes/glusterfs/README.md#create-a-pod 2.1.309. 
.spec.volumes[].hostPath Description hostPath represents a pre-existing file or directory on the host machine that is directly exposed to the container. This is generally used for system agents or other privileged things that are allowed to see the host machine. Most containers will NOT need this. More info: https://kubernetes.io/docs/concepts/storage/volumes#hostpath --- TODO(jonesdl) We need to restrict who can use host directory mounts and who can/can not mount host directories as read/write. Type object Required path Property Type Description path string path of the directory on the host. If the path is a symlink, it will follow the link to the real path. More info: https://kubernetes.io/docs/concepts/storage/volumes#hostpath type string type for HostPath Volume Defaults to "" More info: https://kubernetes.io/docs/concepts/storage/volumes#hostpath 2.1.310. .spec.volumes[].iscsi Description iscsi represents an ISCSI Disk resource that is attached to a kubelet's host machine and then exposed to the pod. More info: https://examples.k8s.io/volumes/iscsi/README.md Type object Required iqn lun targetPortal Property Type Description chapAuthDiscovery boolean chapAuthDiscovery defines whether support iSCSI Discovery CHAP authentication chapAuthSession boolean chapAuthSession defines whether support iSCSI Session CHAP authentication fsType string fsType is the filesystem type of the volume that you want to mount. Tip: Ensure that the filesystem type is supported by the host operating system. Examples: "ext4", "xfs", "ntfs". Implicitly inferred to be "ext4" if unspecified. More info: https://kubernetes.io/docs/concepts/storage/volumes#iscsi TODO: how do we prevent errors in the filesystem from compromising the machine initiatorName string initiatorName is the custom iSCSI Initiator Name. If initiatorName is specified with iscsiInterface simultaneously, new iSCSI interface <target portal>:<volume name> will be created for the connection. iqn string iqn is the target iSCSI Qualified Name. iscsiInterface string iscsiInterface is the interface Name that uses an iSCSI transport. Defaults to 'default' (tcp). lun integer lun represents iSCSI Target Lun number. portals array (string) portals is the iSCSI Target Portal List. The portal is either an IP or ip_addr:port if the port is other than default (typically TCP ports 860 and 3260). readOnly boolean readOnly here will force the ReadOnly setting in VolumeMounts. Defaults to false. secretRef object secretRef is the CHAP Secret for iSCSI target and initiator authentication targetPortal string targetPortal is iSCSI Target Portal. The Portal is either an IP or ip_addr:port if the port is other than default (typically TCP ports 860 and 3260). 2.1.311. .spec.volumes[].iscsi.secretRef Description secretRef is the CHAP Secret for iSCSI target and initiator authentication Type object Property Type Description name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? 2.1.312. .spec.volumes[].nfs Description nfs represents an NFS mount on the host that shares a pod's lifetime More info: https://kubernetes.io/docs/concepts/storage/volumes#nfs Type object Required path server Property Type Description path string path that is exported by the NFS server. More info: https://kubernetes.io/docs/concepts/storage/volumes#nfs readOnly boolean readOnly here will force the NFS export to be mounted with read-only permissions. Defaults to false. 
More info: https://kubernetes.io/docs/concepts/storage/volumes#nfs server string server is the hostname or IP address of the NFS server. More info: https://kubernetes.io/docs/concepts/storage/volumes#nfs 2.1.313. .spec.volumes[].persistentVolumeClaim Description persistentVolumeClaimVolumeSource represents a reference to a PersistentVolumeClaim in the same namespace. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#persistentvolumeclaims Type object Required claimName Property Type Description claimName string claimName is the name of a PersistentVolumeClaim in the same namespace as the pod using this volume. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#persistentvolumeclaims readOnly boolean readOnly Will force the ReadOnly setting in VolumeMounts. Default false. 2.1.314. .spec.volumes[].photonPersistentDisk Description photonPersistentDisk represents a PhotonController persistent disk attached and mounted on kubelets host machine Type object Required pdID Property Type Description fsType string fsType is the filesystem type to mount. Must be a filesystem type supported by the host operating system. Ex. "ext4", "xfs", "ntfs". Implicitly inferred to be "ext4" if unspecified. pdID string pdID is the ID that identifies Photon Controller persistent disk 2.1.315. .spec.volumes[].portworxVolume Description portworxVolume represents a portworx volume attached and mounted on kubelets host machine Type object Required volumeID Property Type Description fsType string fSType represents the filesystem type to mount Must be a filesystem type supported by the host operating system. Ex. "ext4", "xfs". Implicitly inferred to be "ext4" if unspecified. readOnly boolean readOnly defaults to false (read/write). ReadOnly here will force the ReadOnly setting in VolumeMounts. volumeID string volumeID uniquely identifies a Portworx volume 2.1.316. .spec.volumes[].projected Description projected items for all in one resources secrets, configmaps, and downward API Type object Property Type Description defaultMode integer defaultMode are the mode bits used to set permissions on created files by default. Must be an octal value between 0000 and 0777 or a decimal value between 0 and 511. YAML accepts both octal and decimal values, JSON requires decimal values for mode bits. Directories within the path are not affected by this setting. This might be in conflict with other options that affect the file mode, like fsGroup, and the result can be other mode bits set. sources array sources is the list of volume projections sources[] object Projection that may be projected along with other supported volume types 2.1.317. .spec.volumes[].projected.sources Description sources is the list of volume projections Type array 2.1.318. .spec.volumes[].projected.sources[] Description Projection that may be projected along with other supported volume types Type object Property Type Description clusterTrustBundle object ClusterTrustBundle allows a pod to access the .spec.trustBundle field of ClusterTrustBundle objects in an auto-updating file. Alpha, gated by the ClusterTrustBundleProjection feature gate. ClusterTrustBundle objects can either be selected by name, or by the combination of signer name and a label selector. Kubelet performs aggressive normalization of the PEM contents written into the pod filesystem. Esoteric PEM features such as inter-block comments and block headers are stripped. Certificates are deduplicated. 
The ordering of certificates within the file is arbitrary, and Kubelet may change the order over time. configMap object configMap information about the configMap data to project downwardAPI object downwardAPI information about the downwardAPI data to project secret object secret information about the secret data to project serviceAccountToken object serviceAccountToken is information about the serviceAccountToken data to project 2.1.319. .spec.volumes[].projected.sources[].clusterTrustBundle Description ClusterTrustBundle allows a pod to access the .spec.trustBundle field of ClusterTrustBundle objects in an auto-updating file. Alpha, gated by the ClusterTrustBundleProjection feature gate. ClusterTrustBundle objects can either be selected by name, or by the combination of signer name and a label selector. Kubelet performs aggressive normalization of the PEM contents written into the pod filesystem. Esoteric PEM features such as inter-block comments and block headers are stripped. Certificates are deduplicated. The ordering of certificates within the file is arbitrary, and Kubelet may change the order over time. Type object Required path Property Type Description labelSelector object Select all ClusterTrustBundles that match this label selector. Only has effect if signerName is set. Mutually-exclusive with name. If unset, interpreted as "match nothing". If set but empty, interpreted as "match everything". name string Select a single ClusterTrustBundle by object name. Mutually-exclusive with signerName and labelSelector. optional boolean If true, don't block pod startup if the referenced ClusterTrustBundle(s) aren't available. If using name, then the named ClusterTrustBundle is allowed not to exist. If using signerName, then the combination of signerName and labelSelector is allowed to match zero ClusterTrustBundles. path string Relative path from the volume root to write the bundle. signerName string Select all ClusterTrustBundles that match this signer name. Mutually-exclusive with name. The contents of all selected ClusterTrustBundles will be unified and deduplicated. 2.1.320. .spec.volumes[].projected.sources[].clusterTrustBundle.labelSelector Description Select all ClusterTrustBundles that match this label selector. Only has effect if signerName is set. Mutually-exclusive with name. If unset, interpreted as "match nothing". If set but empty, interpreted as "match everything". Type object Property Type Description matchExpressions array matchExpressions is a list of label selector requirements. The requirements are ANDed. matchExpressions[] object A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. matchLabels object (string) matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed. 2.1.321. .spec.volumes[].projected.sources[].clusterTrustBundle.labelSelector.matchExpressions Description matchExpressions is a list of label selector requirements. The requirements are ANDed. Type array 2.1.322. .spec.volumes[].projected.sources[].clusterTrustBundle.labelSelector.matchExpressions[] Description A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. 
Type object Required key operator Property Type Description key string key is the label key that the selector applies to. operator string operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. values array (string) values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch. 2.1.323. .spec.volumes[].projected.sources[].configMap Description configMap information about the configMap data to project Type object Property Type Description items array items if unspecified, each key-value pair in the Data field of the referenced ConfigMap will be projected into the volume as a file whose name is the key and content is the value. If specified, the listed keys will be projected into the specified paths, and unlisted keys will not be present. If a key is specified which is not present in the ConfigMap, the volume setup will error unless it is marked optional. Paths must be relative and may not contain the '..' path or start with '..'. items[] object Maps a string key to a path within a volume. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean optional specify whether the ConfigMap or its keys must be defined 2.1.324. .spec.volumes[].projected.sources[].configMap.items Description items if unspecified, each key-value pair in the Data field of the referenced ConfigMap will be projected into the volume as a file whose name is the key and content is the value. If specified, the listed keys will be projected into the specified paths, and unlisted keys will not be present. If a key is specified which is not present in the ConfigMap, the volume setup will error unless it is marked optional. Paths must be relative and may not contain the '..' path or start with '..'. Type array 2.1.325. .spec.volumes[].projected.sources[].configMap.items[] Description Maps a string key to a path within a volume. Type object Required key path Property Type Description key string key is the key to project. mode integer mode is Optional: mode bits used to set permissions on this file. Must be an octal value between 0000 and 0777 or a decimal value between 0 and 511. YAML accepts both octal and decimal values, JSON requires decimal values for mode bits. If not specified, the volume defaultMode will be used. This might be in conflict with other options that affect the file mode, like fsGroup, and the result can be other mode bits set. path string path is the relative path of the file to map the key to. May not be an absolute path. May not contain the path element '..'. May not start with the string '..'. 2.1.326. .spec.volumes[].projected.sources[].downwardAPI Description downwardAPI information about the downwardAPI data to project Type object Property Type Description items array Items is a list of DownwardAPIVolume file items[] object DownwardAPIVolumeFile represents information to create the file containing the pod field 2.1.327. .spec.volumes[].projected.sources[].downwardAPI.items Description Items is a list of DownwardAPIVolume file Type array 2.1.328. 
.spec.volumes[].projected.sources[].downwardAPI.items[] Description DownwardAPIVolumeFile represents information to create the file containing the pod field Type object Required path Property Type Description fieldRef object Required: Selects a field of the pod: only annotations, labels, name and namespace are supported. mode integer Optional: mode bits used to set permissions on this file, must be an octal value between 0000 and 0777 or a decimal value between 0 and 511. YAML accepts both octal and decimal values, JSON requires decimal values for mode bits. If not specified, the volume defaultMode will be used. This might be in conflict with other options that affect the file mode, like fsGroup, and the result can be other mode bits set. path string Required: Path is the relative path name of the file to be created. Must not be absolute or contain the '..' path. Must be utf-8 encoded. The first item of the relative path must not start with '..' resourceFieldRef object Selects a resource of the container: only resources limits and requests (limits.cpu, limits.memory, requests.cpu and requests.memory) are currently supported. 2.1.329. .spec.volumes[].projected.sources[].downwardAPI.items[].fieldRef Description Required: Selects a field of the pod: only annotations, labels, name and namespace are supported. Type object Required fieldPath Property Type Description apiVersion string Version of the schema the FieldPath is written in terms of, defaults to "v1". fieldPath string Path of the field to select in the specified API version. 2.1.330. .spec.volumes[].projected.sources[].downwardAPI.items[].resourceFieldRef Description Selects a resource of the container: only resources limits and requests (limits.cpu, limits.memory, requests.cpu and requests.memory) are currently supported. Type object Required resource Property Type Description containerName string Container name: required for volumes, optional for env vars divisor integer-or-string Specifies the output format of the exposed resources, defaults to "1" resource string Required: resource to select 2.1.331. .spec.volumes[].projected.sources[].secret Description secret information about the secret data to project Type object Property Type Description items array items if unspecified, each key-value pair in the Data field of the referenced Secret will be projected into the volume as a file whose name is the key and content is the value. If specified, the listed keys will be projected into the specified paths, and unlisted keys will not be present. If a key is specified which is not present in the Secret, the volume setup will error unless it is marked optional. Paths must be relative and may not contain the '..' path or start with '..'. items[] object Maps a string key to a path within a volume. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean optional field specify whether the Secret or its key must be defined 2.1.332. .spec.volumes[].projected.sources[].secret.items Description items if unspecified, each key-value pair in the Data field of the referenced Secret will be projected into the volume as a file whose name is the key and content is the value. If specified, the listed keys will be projected into the specified paths, and unlisted keys will not be present. If a key is specified which is not present in the Secret, the volume setup will error unless it is marked optional. 
Paths must be relative and may not contain the '..' path or start with '..'. Type array 2.1.333. .spec.volumes[].projected.sources[].secret.items[] Description Maps a string key to a path within a volume. Type object Required key path Property Type Description key string key is the key to project. mode integer mode is Optional: mode bits used to set permissions on this file. Must be an octal value between 0000 and 0777 or a decimal value between 0 and 511. YAML accepts both octal and decimal values, JSON requires decimal values for mode bits. If not specified, the volume defaultMode will be used. This might be in conflict with other options that affect the file mode, like fsGroup, and the result can be other mode bits set. path string path is the relative path of the file to map the key to. May not be an absolute path. May not contain the path element '..'. May not start with the string '..'. 2.1.334. .spec.volumes[].projected.sources[].serviceAccountToken Description serviceAccountToken is information about the serviceAccountToken data to project Type object Required path Property Type Description audience string audience is the intended audience of the token. A recipient of a token must identify itself with an identifier specified in the audience of the token, and otherwise should reject the token. The audience defaults to the identifier of the apiserver. expirationSeconds integer expirationSeconds is the requested duration of validity of the service account token. As the token approaches expiration, the kubelet volume plugin will proactively rotate the service account token. The kubelet will start trying to rotate the token if the token is older than 80 percent of its time to live or if the token is older than 24 hours. Defaults to 1 hour and must be at least 10 minutes. path string path is the path relative to the mount point of the file to project the token into. 2.1.335. .spec.volumes[].quobyte Description quobyte represents a Quobyte mount on the host that shares a pod's lifetime Type object Required registry volume Property Type Description group string group to map volume access to. Default is no group readOnly boolean readOnly here will force the Quobyte volume to be mounted with read-only permissions. Defaults to false. registry string registry represents a single or multiple Quobyte Registry services specified as a string as host:port pair (multiple entries are separated with commas) which acts as the central registry for volumes tenant string tenant owning the given Quobyte volume in the Backend. Used with dynamically provisioned Quobyte volumes, value is set by the plugin user string user to map volume access to. Defaults to serviceaccount user volume string volume is a string that references an already created Quobyte volume by name. 2.1.336. .spec.volumes[].rbd Description rbd represents a Rados Block Device mount on the host that shares a pod's lifetime. More info: https://examples.k8s.io/volumes/rbd/README.md Type object Required image monitors Property Type Description fsType string fsType is the filesystem type of the volume that you want to mount. Tip: Ensure that the filesystem type is supported by the host operating system. Examples: "ext4", "xfs", "ntfs". Implicitly inferred to be "ext4" if unspecified. More info: https://kubernetes.io/docs/concepts/storage/volumes#rbd TODO: how do we prevent errors in the filesystem from compromising the machine image string image is the rados image name.
More info: https://examples.k8s.io/volumes/rbd/README.md#how-to-use-it keyring string keyring is the path to key ring for RBDUser. Default is /etc/ceph/keyring. More info: https://examples.k8s.io/volumes/rbd/README.md#how-to-use-it monitors array (string) monitors is a collection of Ceph monitors. More info: https://examples.k8s.io/volumes/rbd/README.md#how-to-use-it pool string pool is the rados pool name. Default is rbd. More info: https://examples.k8s.io/volumes/rbd/README.md#how-to-use-it readOnly boolean readOnly here will force the ReadOnly setting in VolumeMounts. Defaults to false. More info: https://examples.k8s.io/volumes/rbd/README.md#how-to-use-it secretRef object secretRef is name of the authentication secret for RBDUser. If provided overrides keyring. Default is nil. More info: https://examples.k8s.io/volumes/rbd/README.md#how-to-use-it user string user is the rados user name. Default is admin. More info: https://examples.k8s.io/volumes/rbd/README.md#how-to-use-it 2.1.337. .spec.volumes[].rbd.secretRef Description secretRef is name of the authentication secret for RBDUser. If provided overrides keyring. Default is nil. More info: https://examples.k8s.io/volumes/rbd/README.md#how-to-use-it Type object Property Type Description name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? 2.1.338. .spec.volumes[].scaleIO Description scaleIO represents a ScaleIO persistent volume attached and mounted on Kubernetes nodes. Type object Required gateway secretRef system Property Type Description fsType string fsType is the filesystem type to mount. Must be a filesystem type supported by the host operating system. Ex. "ext4", "xfs", "ntfs". Default is "xfs". gateway string gateway is the host address of the ScaleIO API Gateway. protectionDomain string protectionDomain is the name of the ScaleIO Protection Domain for the configured storage. readOnly boolean readOnly Defaults to false (read/write). ReadOnly here will force the ReadOnly setting in VolumeMounts. secretRef object secretRef references to the secret for ScaleIO user and other sensitive information. If this is not provided, Login operation will fail. sslEnabled boolean sslEnabled Flag enable/disable SSL communication with Gateway, default false storageMode string storageMode indicates whether the storage for a volume should be ThickProvisioned or ThinProvisioned. Default is ThinProvisioned. storagePool string storagePool is the ScaleIO Storage Pool associated with the protection domain. system string system is the name of the storage system as configured in ScaleIO. volumeName string volumeName is the name of a volume already created in the ScaleIO system that is associated with this volume source. 2.1.339. .spec.volumes[].scaleIO.secretRef Description secretRef references to the secret for ScaleIO user and other sensitive information. If this is not provided, Login operation will fail. Type object Property Type Description name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? 2.1.340. .spec.volumes[].secret Description secret represents a secret that should populate this volume. 
More info: https://kubernetes.io/docs/concepts/storage/volumes#secret Type object Property Type Description defaultMode integer defaultMode is Optional: mode bits used to set permissions on created files by default. Must be an octal value between 0000 and 0777 or a decimal value between 0 and 511. YAML accepts both octal and decimal values, JSON requires decimal values for mode bits. Defaults to 0644. Directories within the path are not affected by this setting. This might be in conflict with other options that affect the file mode, like fsGroup, and the result can be other mode bits set. items array items If unspecified, each key-value pair in the Data field of the referenced Secret will be projected into the volume as a file whose name is the key and content is the value. If specified, the listed keys will be projected into the specified paths, and unlisted keys will not be present. If a key is specified which is not present in the Secret, the volume setup will error unless it is marked optional. Paths must be relative and may not contain the '..' path or start with '..'. items[] object Maps a string key to a path within a volume. optional boolean optional field specify whether the Secret or its keys must be defined secretName string secretName is the name of the secret in the pod's namespace to use. More info: https://kubernetes.io/docs/concepts/storage/volumes#secret 2.1.341. .spec.volumes[].secret.items Description items If unspecified, each key-value pair in the Data field of the referenced Secret will be projected into the volume as a file whose name is the key and content is the value. If specified, the listed keys will be projected into the specified paths, and unlisted keys will not be present. If a key is specified which is not present in the Secret, the volume setup will error unless it is marked optional. Paths must be relative and may not contain the '..' path or start with '..'. Type array 2.1.342. .spec.volumes[].secret.items[] Description Maps a string key to a path within a volume. Type object Required key path Property Type Description key string key is the key to project. mode integer mode is Optional: mode bits used to set permissions on this file. Must be an octal value between 0000 and 0777 or a decimal value between 0 and 511. YAML accepts both octal and decimal values, JSON requires decimal values for mode bits. If not specified, the volume defaultMode will be used. This might be in conflict with other options that affect the file mode, like fsGroup, and the result can be other mode bits set. path string path is the relative path of the file to map the key to. May not be an absolute path. May not contain the path element '..'. May not start with the string '..'. 2.1.343. .spec.volumes[].storageos Description storageOS represents a StorageOS volume attached and mounted on Kubernetes nodes. Type object Property Type Description fsType string fsType is the filesystem type to mount. Must be a filesystem type supported by the host operating system. Ex. "ext4", "xfs", "ntfs". Implicitly inferred to be "ext4" if unspecified. readOnly boolean readOnly defaults to false (read/write). ReadOnly here will force the ReadOnly setting in VolumeMounts. secretRef object secretRef specifies the secret to use for obtaining the StorageOS API credentials. If not specified, default values will be attempted. volumeName string volumeName is the human-readable name of the StorageOS volume. Volume names are only unique within a namespace. 
volumeNamespace string volumeNamespace specifies the scope of the volume within StorageOS. If no namespace is specified then the Pod's namespace will be used. This allows the Kubernetes name scoping to be mirrored within StorageOS for tighter integration. Set VolumeName to any name to override the default behaviour. Set to "default" if you are not using namespaces within StorageOS. Namespaces that do not pre-exist within StorageOS will be created. 2.1.344. .spec.volumes[].storageos.secretRef Description secretRef specifies the secret to use for obtaining the StorageOS API credentials. If not specified, default values will be attempted. Type object Property Type Description name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? 2.1.345. .spec.volumes[].vsphereVolume Description vsphereVolume represents a vSphere volume attached and mounted on kubelets host machine Type object Required volumePath Property Type Description fsType string fsType is filesystem type to mount. Must be a filesystem type supported by the host operating system. Ex. "ext4", "xfs", "ntfs". Implicitly inferred to be "ext4" if unspecified. storagePolicyID string storagePolicyID is the storage Policy Based Management (SPBM) profile ID associated with the StoragePolicyName. storagePolicyName string storagePolicyName is the storage Policy Based Management (SPBM) profile name. volumePath string volumePath is the path that identifies vSphere volume vmdk 2.1.346. .spec.web Description Defines the web command line flags when starting Alertmanager. Type object Property Type Description getConcurrency integer Maximum number of GET requests processed concurrently. This corresponds to the Alertmanager's --web.get-concurrency flag. httpConfig object Defines HTTP parameters for web server. timeout integer Timeout for HTTP requests. This corresponds to the Alertmanager's --web.timeout flag. tlsConfig object Defines the TLS parameters for HTTPS. 2.1.347. .spec.web.httpConfig Description Defines HTTP parameters for web server. Type object Property Type Description headers object List of headers that can be added to HTTP responses. http2 boolean Enable HTTP/2 support. Note that HTTP/2 is only supported with TLS. When TLSConfig is not configured, HTTP/2 will be disabled. Whenever the value of the field changes, a rolling update will be triggered. 2.1.348. .spec.web.httpConfig.headers Description List of headers that can be added to HTTP responses. Type object Property Type Description contentSecurityPolicy string Set the Content-Security-Policy header to HTTP responses. Unset if blank. strictTransportSecurity string Set the Strict-Transport-Security header to HTTP responses. Unset if blank. Please make sure that you use this with care as this header might force browsers to load Prometheus and the other applications hosted on the same domain and subdomains over HTTPS. https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Strict-Transport-Security xContentTypeOptions string Set the X-Content-Type-Options header to HTTP responses. Unset if blank. Accepted value is nosniff. https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/X-Content-Type-Options xFrameOptions string Set the X-Frame-Options header to HTTP responses. Unset if blank. Accepted values are deny and sameorigin. 
https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/X-Frame-Options xXSSProtection string Set the X-XSS-Protection header to all responses. Unset if blank. https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/X-XSS-Protection 2.1.349. .spec.web.tlsConfig Description Defines the TLS parameters for HTTPS. Type object Required cert keySecret Property Type Description cert object Contains the TLS certificate for the server. cipherSuites array (string) List of supported cipher suites for TLS versions up to TLS 1.2. If empty, Go default cipher suites are used. Available cipher suites are documented in the go documentation: https://golang.org/pkg/crypto/tls/#pkg-constants clientAuthType string Server policy for client authentication. Maps to ClientAuth Policies. For more detail on clientAuth options: https://golang.org/pkg/crypto/tls/#ClientAuthType client_ca object Contains the CA certificate for client certificate authentication to the server. curvePreferences array (string) Elliptic curves that will be used in an ECDHE handshake, in preference order. Available curves are documented in the go documentation: https://golang.org/pkg/crypto/tls/#CurveID keySecret object Secret containing the TLS key for the server. maxVersion string Maximum TLS version that is acceptable. Defaults to TLS13. minVersion string Minimum TLS version that is acceptable. Defaults to TLS12. preferServerCipherSuites boolean Controls whether the server selects the client's most preferred cipher suite, or the server's most preferred cipher suite. If true then the server's preference, as expressed in the order of elements in cipherSuites, is used. 2.1.350. .spec.web.tlsConfig.cert Description Contains the TLS certificate for the server. Type object Property Type Description configMap object ConfigMap containing data to use for the targets. secret object Secret containing data to use for the targets. 2.1.351. .spec.web.tlsConfig.cert.configMap Description ConfigMap containing data to use for the targets. Type object Required key Property Type Description key string The key to select. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the ConfigMap or its key must be defined 2.1.352. .spec.web.tlsConfig.cert.secret Description Secret containing data to use for the targets. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the Secret or its key must be defined 2.1.353. .spec.web.tlsConfig.client_ca Description Contains the CA certificate for client certificate authentication to the server. Type object Property Type Description configMap object ConfigMap containing data to use for the targets. secret object Secret containing data to use for the targets. 2.1.354. .spec.web.tlsConfig.client_ca.configMap Description ConfigMap containing data to use for the targets. Type object Required key Property Type Description key string The key to select. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? 
optional boolean Specify whether the ConfigMap or its key must be defined 2.1.355. .spec.web.tlsConfig.client_ca.secret Description Secret containing data to use for the targets. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the Secret or its key must be defined 2.1.356. .spec.web.tlsConfig.keySecret Description Secret containing the TLS key for the server. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the Secret or its key must be defined 2.1.357. .status Description Most recent observed status of the Alertmanager cluster. Read-only. More info: https://github.com/kubernetes/community/blob/master/contributors/devel/sig-architecture/api-conventions.md#spec-and-status Type object Required availableReplicas paused replicas unavailableReplicas updatedReplicas Property Type Description availableReplicas integer Total number of available pods (ready for at least minReadySeconds) targeted by this Alertmanager cluster. conditions array The current state of the Alertmanager object. conditions[] object Condition represents the state of the resources associated with the Prometheus, Alertmanager or ThanosRuler resource. paused boolean Represents whether any actions on the underlying managed objects are being performed. Only delete actions will be performed. replicas integer Total number of non-terminated pods targeted by this Alertmanager object (their labels match the selector). unavailableReplicas integer Total number of unavailable pods targeted by this Alertmanager object. updatedReplicas integer Total number of non-terminated pods targeted by this Alertmanager object that have the desired version spec. 2.1.358. .status.conditions Description The current state of the Alertmanager object. Type array 2.1.359. .status.conditions[] Description Condition represents the state of the resources associated with the Prometheus, Alertmanager or ThanosRuler resource. Type object Required lastTransitionTime status type Property Type Description lastTransitionTime string lastTransitionTime is the time of the last update to the current status property. message string Human-readable message indicating details for the condition's last transition. observedGeneration integer ObservedGeneration represents the .metadata.generation that the condition was set based upon. For instance, if .metadata.generation is currently 12, but the .status.conditions[].observedGeneration is 9, the condition is out of date with respect to the current state of the instance. reason string Reason for the condition's last transition. status string Status of the condition. type string Type of the condition being reported. 2.2. 
API endpoints The following API endpoints are available: /apis/monitoring.coreos.com/v1/alertmanagers GET : list objects of kind Alertmanager /apis/monitoring.coreos.com/v1/namespaces/{namespace}/alertmanagers DELETE : delete collection of Alertmanager GET : list objects of kind Alertmanager POST : create an Alertmanager /apis/monitoring.coreos.com/v1/namespaces/{namespace}/alertmanagers/{name} DELETE : delete an Alertmanager GET : read the specified Alertmanager PATCH : partially update the specified Alertmanager PUT : replace the specified Alertmanager /apis/monitoring.coreos.com/v1/namespaces/{namespace}/alertmanagers/{name}/status GET : read status of the specified Alertmanager PATCH : partially update status of the specified Alertmanager PUT : replace status of the specified Alertmanager 2.2.1. /apis/monitoring.coreos.com/v1/alertmanagers HTTP method GET Description list objects of kind Alertmanager Table 2.1. HTTP responses HTTP code Reponse body 200 - OK AlertmanagerList schema 401 - Unauthorized Empty 2.2.2. /apis/monitoring.coreos.com/v1/namespaces/{namespace}/alertmanagers HTTP method DELETE Description delete collection of Alertmanager Table 2.2. HTTP responses HTTP code Reponse body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list objects of kind Alertmanager Table 2.3. HTTP responses HTTP code Reponse body 200 - OK AlertmanagerList schema 401 - Unauthorized Empty HTTP method POST Description create an Alertmanager Table 2.4. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 2.5. Body parameters Parameter Type Description body Alertmanager schema Table 2.6. HTTP responses HTTP code Reponse body 200 - OK Alertmanager schema 201 - Created Alertmanager schema 202 - Accepted Alertmanager schema 401 - Unauthorized Empty 2.2.3. /apis/monitoring.coreos.com/v1/namespaces/{namespace}/alertmanagers/{name} Table 2.7. Global path parameters Parameter Type Description name string name of the Alertmanager HTTP method DELETE Description delete an Alertmanager Table 2.8. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. 
Valid values are: - All: all dry run stages will be processed Table 2.9. HTTP responses HTTP code Reponse body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified Alertmanager Table 2.10. HTTP responses HTTP code Reponse body 200 - OK Alertmanager schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified Alertmanager Table 2.11. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 2.12. HTTP responses HTTP code Reponse body 200 - OK Alertmanager schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified Alertmanager Table 2.13. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 2.14. Body parameters Parameter Type Description body Alertmanager schema Table 2.15. HTTP responses HTTP code Reponse body 200 - OK Alertmanager schema 201 - Created Alertmanager schema 401 - Unauthorized Empty 2.2.4. /apis/monitoring.coreos.com/v1/namespaces/{namespace}/alertmanagers/{name}/status Table 2.16. 
Global path parameters Parameter Type Description name string name of the Alertmanager HTTP method GET Description read status of the specified Alertmanager Table 2.17. HTTP responses HTTP code Reponse body 200 - OK Alertmanager schema 401 - Unauthorized Empty HTTP method PATCH Description partially update status of the specified Alertmanager Table 2.18. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 2.19. HTTP responses HTTP code Reponse body 200 - OK Alertmanager schema 401 - Unauthorized Empty HTTP method PUT Description replace status of the specified Alertmanager Table 2.20. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 2.21. Body parameters Parameter Type Description body Alertmanager schema Table 2.22. HTTP responses HTTP code Reponse body 200 - OK Alertmanager schema 201 - Created Alertmanager schema 401 - Unauthorized Empty | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.16/html/monitoring_apis/alertmanager-monitoring-coreos-com-v1 |
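The field tables above are easier to follow next to a concrete manifest. The following is an illustrative sketch only, not taken from the reference above: it combines the documented .spec.web.tlsConfig and .spec.volumes[].secret fields in a single Alertmanager resource. The namespace, Secret names, key names, and mount path are hypothetical placeholders that must match objects that already exist in your cluster. Example (illustrative)
apiVersion: monitoring.coreos.com/v1
kind: Alertmanager
metadata:
  name: example
  namespace: monitoring                # hypothetical namespace
spec:
  # Serve the Alertmanager web server over HTTPS using the documented
  # .spec.web.tlsConfig fields; cert and keySecret are the required properties.
  web:
    tlsConfig:
      cert:
        secret:
          name: alertmanager-web-tls   # hypothetical Secret holding tls.crt
          key: tls.crt
      keySecret:
        name: alertmanager-web-tls     # hypothetical Secret holding tls.key
        key: tls.key
      minVersion: TLS12
  # Project selected keys of a Secret into the pods using the documented
  # .spec.volumes[].secret.items fields, then mount the volume read-only.
  volumes:
  - name: extra-templates
    secret:
      secretName: alertmanager-extra-templates   # hypothetical Secret
      items:
      - key: custom.tmpl
        path: custom.tmpl
        mode: 0444
  volumeMounts:
  - name: extra-templates
    mountPath: /etc/alertmanager/custom-templates
    readOnly: true
A manifest like this can be created or updated through the POST, PUT, and PATCH endpoints listed above, for example with oc apply -f alertmanager.yaml, and its rollout can be followed through the /status subresource.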
Chapter 7. Ceph Monitor and OSD interaction configuration | Chapter 7. Ceph Monitor and OSD interaction configuration As a storage administrator, you must properly configure the interactions between the Ceph Monitors and OSDs to ensure a stable working environment. Prerequisites Installation of the Red Hat Ceph Storage software. 7.1. Ceph Monitor and OSD interaction After you have completed your initial Ceph configuration, you can deploy and run Ceph. When you execute a command such as ceph health or ceph -s , the Ceph Monitor reports on the current state of the Ceph storage cluster. The Ceph Monitor knows about the Ceph storage cluster by requiring reports from each Ceph OSD daemon, and by receiving reports from Ceph OSD daemons about the status of their neighboring Ceph OSD daemons. If the Ceph Monitor does not receive reports, or if it receives reports of changes in the Ceph storage cluster, the Ceph Monitor updates the status of the Ceph cluster map. Ceph provides reasonable default settings for Ceph Monitor and OSD interaction. However, you can override the defaults. The following sections describe how Ceph Monitors and Ceph OSD daemons interact for the purposes of monitoring the Ceph storage cluster. 7.2. OSD heartbeat Each Ceph OSD daemon checks the heartbeat of other Ceph OSD daemons every 6 seconds. To change the heartbeat interval, change the value at runtime: Syntax Example If a neighboring Ceph OSD daemon does not send heartbeat packets within a 20 second grace period, the Ceph OSD daemon might consider the neighboring Ceph OSD daemon down . It can report it back to a Ceph Monitor, which updates the Ceph cluster map. To change the grace period, set the value at runtime: Syntax Example 7.3. Reporting an OSD as down By default, two Ceph OSD Daemons from different hosts must report to the Ceph Monitors that another Ceph OSD Daemon is down before the Ceph Monitors acknowledge that the reported Ceph OSD Daemon is down . However, there is the chance that all the OSDs reporting the failure are in different hosts in a rack with a bad switch that causes connection problems between OSDs. To avoid a "false alarm," Ceph considers the peers reporting the failure as a proxy for a "subcluster" that is similarly laggy. While this is not always the case, it may help administrators localize the grace correction to a subset of the system that is performing poorly. Ceph uses the mon_osd_reporter_subtree_level setting to group the peers into the "subcluster" by their common ancestor type in the CRUSH map. By default, only two reports from a different subtree are required to report another Ceph OSD Daemon down . Administrators can change the number of reporters from unique subtrees and the common ancestor type required to report a Ceph OSD Daemon down to a Ceph Monitor by setting the mon_osd_min_down_reporters and mon_osd_reporter_subtree_level values at runtime: Syntax Example Syntax Example 7.4. Reporting a peering failure If a Ceph OSD daemon cannot peer with any of the Ceph OSD daemons defined in its Ceph configuration file or the cluster map, it pings a Ceph Monitor for the most recent copy of the cluster map every 30 seconds. You can change the Ceph Monitor heartbeat interval by setting the value at runtime: Syntax Example 7.5. OSD reporting status If a Ceph OSD Daemon does not report to a Ceph Monitor, the Ceph Monitor marks the Ceph OSD Daemon down after the mon_osd_report_timeout , which is 900 seconds, elapses. 
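As an illustrative sketch that follows the same ceph config pattern used elsewhere in this chapter (the value shown is only an example), the timeout can be inspected and adjusted at runtime:
# Check the current OSD report timeout (default is 900 seconds)
ceph config get mon mon_osd_report_timeout
# Raise the timeout, for example to 1200 seconds
ceph config set mon mon_osd_report_timeout 1200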
A Ceph OSD Daemon sends a report to a Ceph Monitor when a reportable event such as a failure, a change in placement group stats, a change in up_thru or when it boots within 5 seconds. You can change the Ceph OSD Daemon minimum report interval by setting the osd_mon_report_interval value at runtime: Syntax To get, set, and verify the config you can use the following example: Example Additional resources See all the Red Hat Ceph Storage Ceph Monitor and OSD configuration options in Ceph Monitor and OSD configuration options for specific option descriptions and usage. | [
"ceph config set osd osd_heartbeat_interval TIME_IN_SECONDS",
"ceph config set osd osd_heartbeat_interval 60",
"ceph config set osd osd_heartbeat_grace TIME_IN_SECONDS",
"ceph config set osd osd_heartbeat_grace 30",
"ceph config set mon mon_osd_min_down_reporters NUMBER",
"ceph config set mon mon_osd_min_down_reporters 4",
"ceph config set mon mon_osd_reporter_subtree_level CRUSH_ITEM",
"ceph config set mon mon_osd_reporter_subtree_level host ceph config set mon mon_osd_reporter_subtree_level rack ceph config set mon mon_osd_reporter_subtree_level osd",
"ceph config set osd osd_mon_heartbeat_interval TIME_IN_SECONDS",
"ceph config set osd osd_mon_heartbeat_interval 60",
"ceph config set osd osd_mon_report_interval TIME_IN_SECONDS",
"ceph config get osd osd_mon_report_interval 5 ceph config set osd osd_mon_report_interval 20 ceph config dump | grep osd global advanced osd_pool_default_crush_rule -1 osd basic osd_memory_target 4294967296 osd advanced osd_mon_report_interval 20"
]
| https://docs.redhat.com/en/documentation/red_hat_ceph_storage/7/html/configuration_guide/ceph-monitor-and-osd-interaction-configuration |
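To confirm that the heartbeat values set earlier in this chapter took effect, a quick check such as the following can be used. This is an illustrative sketch that reuses the ceph config commands shown above:
# Verify the heartbeat interval and grace period currently in effect
ceph config get osd osd_heartbeat_interval
ceph config get osd osd_heartbeat_grace
# List non-default settings and filter for the heartbeat options
ceph config dump | grep osd_heartbeat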
Providing feedback on Red Hat documentation | Providing feedback on Red Hat documentation We appreciate your input on our documentation. Let us know how we can make it better. To give feedback, create a Bugzilla ticket: Go to the Bugzilla website. In the Component section, choose documentation . Fill in the Description field with your suggestion for improvement. Include a link to the relevant part(s) of the documentation. Click Submit Bug . | null | https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.16/html/configuring_openshift_data_foundation_disaster_recovery_for_openshift_workloads/providing-feedback-on-red-hat-documentation_common
Installing GitOps | Installing GitOps Red Hat OpenShift GitOps 1.12 Installing the OpenShift GitOps Operator, logging in to the Argo CD instance, and installing the GitOps CLI Red Hat OpenShift Documentation Team | [
"edit argocd <name of argo cd> -n namespace",
"oc label namespace <namespace> openshift.io/cluster-monitoring=true",
"namespace/<namespace> labeled",
"oc create ns openshift-gitops-operator",
"namespace/openshift-gitops-operator created",
"oc label namespace <namespace> openshift.io/cluster-monitoring=true",
"namespace/<namespace> labeled",
"apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: openshift-gitops-operator namespace: openshift-gitops-operator spec: upgradeStrategy: Default",
"oc apply -f gitops-operator-group.yaml",
"operatorgroup.operators.coreos.com/openshift-gitops-operator created",
"apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: openshift-gitops-operator namespace: openshift-gitops-operator spec: channel: latest 1 installPlanApproval: Automatic name: openshift-gitops-operator 2 source: redhat-operators 3 sourceNamespace: openshift-marketplace 4",
"oc apply -f openshift-gitops-sub.yaml",
"subscription.operators.coreos.com/openshift-gitops-operator created",
"oc get pods -n openshift-gitops",
"NAME READY STATUS RESTARTS AGE cluster-b5798d6f9-zr576 1/1 Running 0 65m kam-69866d7c48-8nsjv 1/1 Running 0 65m openshift-gitops-application-controller-0 1/1 Running 0 53m openshift-gitops-applicationset-controller-6447b8dfdd-5ckgh 1/1 Running 0 65m openshift-gitops-dex-server-569b498bd9-vf6mr 1/1 Running 0 65m openshift-gitops-redis-74bd8d7d96-49bjf 1/1 Running 0 65m openshift-gitops-repo-server-c999f75d5-l4rsg 1/1 Running 0 65m openshift-gitops-server-5785f7668b-wj57t 1/1 Running 0 53m",
"oc get pods -n openshift-gitops-operator",
"NAME READY STATUS RESTARTS AGE openshift-gitops-operator-controller-manager-664966d547-vr4vb 2/2 Running 0 65m",
"tar xvzf <file>",
"sudo mv argocd /usr/local/bin/argocd",
"sudo chmod +x /usr/local/bin/argocd",
"argocd version --client",
"argocd: v2.9.5+f943664 BuildDate: 2024-02-15T05:19:27Z GitCommit: f9436641a616d277ab1f98694e5ce4c986d4ea05 GitTreeState: clean GoVersion: go1.20.10 Compiler: gc Platform: linux/amd64 ExtraBuildInfo: openshift-gitops-version: 1.12.0, release: 0015022024 1",
"subscription-manager register",
"subscription-manager refresh",
"subscription-manager list --available --matches '*gitops*'",
"subscription-manager attach --pool=<pool_id>",
"subscription-manager repos --enable=\"gitops-<gitops_version>-for-rhel-<rhel_version>-x86_64-rpms\"",
"subscription-manager repos --enable=\"gitops-1.12-for-rhel-8-x86_64-rpms\"",
"subscription-manager repos --enable=\"gitops-<gitops_version>-for-rhel-<rhel_version>-s390x-rpms\"",
"subscription-manager repos --enable=\"gitops-1.12-for-rhel-8-s390x-rpms\"",
"subscription-manager repos --enable=\"gitops-<gitops_version>-for-rhel-<rhel_version>-ppc64le-rpms\"",
"subscription-manager repos --enable=\"gitops-1.12-for-rhel-8-ppc64le-rpms\"",
"subscription-manager repos --enable=\"gitops-<gitops_version>-for-rhel-<rhel_version>-aarch64-rpms\"",
"subscription-manager repos --enable=\"gitops-1.12-for-rhel-8-aarch64-rpms\"",
"yum install openshift-gitops-argocd-cli",
"argocd version --client",
"argocd: v2.9.5+f943664 BuildDate: 2024-02-15T05:19:27Z GitCommit: f9436641a616d277ab1f98694e5ce4c986d4ea05 GitTreeState: clean GoVersion: go1.20.10 Compiler: gc Platform: linux/amd64 ExtraBuildInfo: openshift-gitops-version: 1.12.0, release: 0015022024 1",
"C:\\> move argocd.exe <directory>",
"argocd version --client",
"argocd: v2.9.5+f943664 BuildDate: 2024-02-15T05:19:27Z GitCommit: f9436641a616d277ab1f98694e5ce4c986d4ea05 GitTreeState: clean GoVersion: go1.20.10 Compiler: gc Platform: linux/amd64 ExtraBuildInfo: openshift-gitops-version: 1.12.0, release: 0015022024 1",
"tar xvzf <file>",
"sudo mv argocd /usr/local/bin/argocd",
"sudo chmod +x /usr/local/bin/argocd",
"argocd version --client",
"argocd: v2.9.5+f943664 BuildDate: 2024-02-15T05:19:27Z GitCommit: f9436641a616d277ab1f98694e5ce4c986d4ea05 GitTreeState: clean GoVersion: go1.20.10 Compiler: gc Platform: linux/amd64 ExtraBuildInfo: openshift-gitops-version: 1.12.0, release: 0015022024 1"
]
| https://docs.redhat.com/en/documentation/red_hat_openshift_gitops/1.12/html-single/installing_gitops/index |
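The steps above install the Operator and the argocd CLI. As an illustrative sketch, not part of the original document, the following shows one way to log in to the bundled Argo CD instance from the CLI; the route and secret names assume the default openshift-gitops instance created by the Operator:
# Look up the route of the default Argo CD server
ARGOCD_HOST=$(oc get route openshift-gitops-server -n openshift-gitops -o jsonpath='{.spec.host}')
# Retrieve the initial admin password stored by the Operator
ARGOCD_PASSWORD=$(oc extract secret/openshift-gitops-cluster -n openshift-gitops --to=- --keys=admin.password)
# Log in with the argocd CLI installed earlier
argocd login "$ARGOCD_HOST" --username admin --password "$ARGOCD_PASSWORD" --insecure
The --insecure flag is only appropriate while the route uses a certificate your workstation does not trust; drop it once a trusted certificate is in place.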
Chapter 6. Configuring metrics for the monitoring stack | Chapter 6. Configuring metrics for the monitoring stack As a cluster administrator, you can configure the OpenTelemetry Collector custom resource (CR) to perform the following tasks: Create a Prometheus ServiceMonitor CR for scraping the Collector's pipeline metrics and the enabled Prometheus exporters. Configure the Prometheus receiver to scrape metrics from the in-cluster monitoring stack. 6.1. Configuration for sending metrics to the monitoring stack You can configure the OpenTelemetryCollector custom resource (CR) to create a Prometheus ServiceMonitor CR or a PodMonitor CR for a sidecar deployment. A ServiceMonitor can scrape Collector's internal metrics endpoint and Prometheus exporter metrics endpoints. Example of the OpenTelemetry Collector CR with the Prometheus exporter apiVersion: opentelemetry.io/v1beta1 kind: OpenTelemetryCollector spec: mode: deployment observability: metrics: enableMetrics: true 1 config: exporters: prometheus: endpoint: 0.0.0.0:8889 resource_to_telemetry_conversion: enabled: true # by default resource attributes are dropped service: telemetry: metrics: address: ":8888" pipelines: metrics: exporters: [prometheus] 1 Configures the Red Hat build of OpenTelemetry Operator to create the Prometheus ServiceMonitor CR or PodMonitor CR to scrape the Collector's internal metrics endpoint and the Prometheus exporter metrics endpoints. Note Setting enableMetrics to true creates the following two ServiceMonitor instances: One ServiceMonitor instance for the <instance_name>-collector-monitoring service. This ServiceMonitor instance scrapes the Collector's internal metrics. One ServiceMonitor instance for the <instance_name>-collector service. This ServiceMonitor instance scrapes the metrics exposed by the Prometheus exporter instances. Alternatively, a manually created Prometheus PodMonitor CR can provide fine control, for example removing duplicated labels added during Prometheus scraping. Example of the PodMonitor CR that configures the monitoring stack to scrape the Collector metrics apiVersion: monitoring.coreos.com/v1 kind: PodMonitor metadata: name: otel-collector spec: selector: matchLabels: app.kubernetes.io/name: <cr_name>-collector 1 podMetricsEndpoints: - port: metrics 2 - port: promexporter 3 relabelings: - action: labeldrop regex: pod - action: labeldrop regex: container - action: labeldrop regex: endpoint metricRelabelings: - action: labeldrop regex: instance - action: labeldrop regex: job 1 The name of the OpenTelemetry Collector CR. 2 The name of the internal metrics port for the OpenTelemetry Collector. This port name is always metrics . 3 The name of the Prometheus exporter port for the OpenTelemetry Collector. 6.2. Configuration for receiving metrics from the monitoring stack A configured OpenTelemetry Collector custom resource (CR) can set up the Prometheus receiver to scrape metrics from the in-cluster monitoring stack. 
Example of the OpenTelemetry Collector CR for scraping metrics from the in-cluster monitoring stack apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: otel-collector roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: cluster-monitoring-view 1 subjects: - kind: ServiceAccount name: otel-collector namespace: observability --- kind: ConfigMap apiVersion: v1 metadata: name: cabundle namespace: observability annotations: service.beta.openshift.io/inject-cabundle: "true" 2 --- apiVersion: opentelemetry.io/v1beta1 kind: OpenTelemetryCollector metadata: name: otel namespace: observability spec: volumeMounts: - name: cabundle-volume mountPath: /etc/pki/ca-trust/source/service-ca readOnly: true volumes: - name: cabundle-volume configMap: name: cabundle mode: deployment config: receivers: prometheus: 3 config: scrape_configs: - job_name: 'federate' scrape_interval: 15s scheme: https tls_config: ca_file: /etc/pki/ca-trust/source/service-ca/service-ca.crt bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token honor_labels: false params: 'match[]': - '{__name__="<metric_name>"}' 4 metrics_path: '/federate' static_configs: - targets: - "prometheus-k8s.openshift-monitoring.svc.cluster.local:9091" exporters: debug: 5 verbosity: detailed service: pipelines: metrics: receivers: [prometheus] processors: [] exporters: [debug] 1 Assigns the cluster-monitoring-view cluster role to the service account of the OpenTelemetry Collector so that it can access the metrics data. 2 Injects the OpenShift service CA for configuring the TLS in the Prometheus receiver. 3 Configures the Prometheus receiver to scrape the federate endpoint from the in-cluster monitoring stack. 4 Uses the Prometheus query language to select the metrics to be scraped. See the in-cluster monitoring documentation for more details and limitations of the federate endpoint. 5 Configures the debug exporter to print the metrics to the standard output. 6.3. Additional resources Querying metrics by using the federation endpoint for Prometheus | [
"apiVersion: opentelemetry.io/v1beta1 kind: OpenTelemetryCollector spec: mode: deployment observability: metrics: enableMetrics: true 1 config: exporters: prometheus: endpoint: 0.0.0.0:8889 resource_to_telemetry_conversion: enabled: true # by default resource attributes are dropped service: telemetry: metrics: address: \":8888\" pipelines: metrics: exporters: [prometheus]",
"apiVersion: monitoring.coreos.com/v1 kind: PodMonitor metadata: name: otel-collector spec: selector: matchLabels: app.kubernetes.io/name: <cr_name>-collector 1 podMetricsEndpoints: - port: metrics 2 - port: promexporter 3 relabelings: - action: labeldrop regex: pod - action: labeldrop regex: container - action: labeldrop regex: endpoint metricRelabelings: - action: labeldrop regex: instance - action: labeldrop regex: job",
"apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: otel-collector roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: cluster-monitoring-view 1 subjects: - kind: ServiceAccount name: otel-collector namespace: observability --- kind: ConfigMap apiVersion: v1 metadata: name: cabundle namespce: observability annotations: service.beta.openshift.io/inject-cabundle: \"true\" 2 --- apiVersion: opentelemetry.io/v1beta1 kind: OpenTelemetryCollector metadata: name: otel namespace: observability spec: volumeMounts: - name: cabundle-volume mountPath: /etc/pki/ca-trust/source/service-ca readOnly: true volumes: - name: cabundle-volume configMap: name: cabundle mode: deployment config: receivers: prometheus: 3 config: scrape_configs: - job_name: 'federate' scrape_interval: 15s scheme: https tls_config: ca_file: /etc/pki/ca-trust/source/service-ca/service-ca.crt bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token honor_labels: false params: 'match[]': - '{__name__=\"<metric_name>\"}' 4 metrics_path: '/federate' static_configs: - targets: - \"prometheus-k8s.openshift-monitoring.svc.cluster.local:9091\" exporters: debug: 5 verbosity: detailed service: pipelines: metrics: receivers: [prometheus] processors: [] exporters: [debug]"
]
| https://docs.redhat.com/en/documentation/openshift_container_platform/4.12/html/red_hat_build_of_opentelemetry/otel-configuring-metrics-for-monitoring-stack |
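After applying the examples above with enableMetrics: true, a quick way to confirm that the expected monitors and metrics endpoints exist is a check such as the following. This is an illustrative sketch that assumes the otel Collector instance in the observability namespace used in the examples:
# Monitors created by the Red Hat build of OpenTelemetry Operator
oc get servicemonitor,podmonitor -n observability
# Inspect the Collector's internal metrics, exposed on port 8888 of the
# <instance_name>-collector-monitoring service
oc port-forward -n observability svc/otel-collector-monitoring 8888:8888 &
curl -s http://localhost:8888/metrics | head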
Chapter 4. Handling a data center failure | Chapter 4. Handling a data center failure As a storage administrator, you can take preventive measures to avoid a data center failure. These preventive measures include: Configuring the data center infrastructure. Setting up failure domains within the CRUSH map hierarchy. Designating failure nodes within the domains. 4.1. Prerequisites A healthy running Red Hat Ceph Storage cluster. Root-level access to all nodes in the storage cluster. 4.2. Avoiding a data center failure Configuring the data center infrastructure Each data center within a stretch cluster can have a different storage cluster configuration to reflect local capabilities and dependencies. Set up replication between the data centers to help preserve the data. If one data center fails, the other data centers in the storage cluster contain copies of the data. Setting up failure domains within the CRUSH map hierarchy Failure, or failover, domains are redundant copies of domains within the storage cluster. If an active domain fails, the failure domain becomes the active domain. By default, the CRUSH map lists all nodes in a storage cluster within a flat hierarchy. However, for best results, create a logical hierarchical structure within the CRUSH map. The hierarchy designates the domains to which each node belongs and the relationships among those domains within the storage cluster, including the failure domains. Defining the failure domains for each domain within the hierarchy improves the reliability of the storage cluster. When planning a storage cluster that contains multiple data centers, place the nodes within the CRUSH map hierarchy so that if one data center goes down, the rest of the storage cluster stays up and running. Designating failure nodes within the domains If you plan to use three-way replication for data within the storage cluster, consider the location of the nodes within the failure domain. If an outage occurs within a data center, it is possible that some data might reside in only one copy. When this scenario happens, there are two options: Leave the data in read-only status with the standard settings. Live with only one copy for the duration of the outage. With the standard settings, and because of the randomness of data placement across the nodes, not all the data will be affected, but some data can have only one copy and the storage cluster would revert to read-only mode. However, if some data exist in only one copy, the storage cluster reverts to read-only mode. 4.3. Handling a data center failure Red Hat Ceph Storage can withstand catastrophic failures to the infrastructure, such as losing one of the data centers in a stretch cluster. For the standard object store use case, configuring all three data centers can be done independently with replication set up between them. In this scenario, the storage cluster configuration in each of the data centers might be different, reflecting the local capabilities and dependencies. A logical structure of the placement hierarchy should be considered. A proper CRUSH map can be used, reflecting the hierarchical structure of the failure domains within the infrastructure. Using logical hierarchical definitions improves the reliability of the storage cluster, versus using the standard hierarchical definitions. Failure domains are defined in the CRUSH map. The default CRUSH map contains all nodes in a flat hierarchy. 
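One way to review the failure domains that are currently defined, before restructuring the hierarchy as described below, is to decompile the CRUSH map. This is an illustrative sketch and the file names are placeholders:
# Export the binary CRUSH map from the cluster
ceph osd getcrushmap -o crushmap.bin
# Decompile it into a readable text file that lists the buckets and rules
crushtool -d crushmap.bin -o crushmap.txt
The decompiled file shows the bucket types (root, datacenter, rack, host, and so on) and the rules that reference them, which makes it easier to verify where the failure domains sit before and after the changes that follow.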
In a three data center environment, such as a stretch cluster, the placement of nodes should be managed in a way that one data center can go down, but the storage cluster stays up and running. Consider which failure domain a node resides in when using 3-way replication for the data. In the example below, the resulting map is derived from the initial setup of the storage cluster with 6 OSD nodes. In this example, all nodes have only one disk and hence one OSD. All of the nodes are arranged under the default root , that is the standard root of the hierarchy tree. Because there is a weight assigned to two of the OSDs, these OSDs receive fewer chunks of data than the other OSDs. These nodes were introduced later with bigger disks than the initial OSD disks. This does not affect the data placement to withstand a failure of a group of nodes. Example Using logical hierarchical definitions to group the nodes into the same data center can achieve data placement maturity. The available definition types root , datacenter , rack , row and host allow the failure domains of the three data center stretch cluster to be reflected: Nodes ceph-node1 and ceph-node2 reside in data center 1 (DC1) Nodes ceph-node3 and ceph-node5 reside in data center 2 (DC2) Nodes ceph-node4 and ceph-node6 reside in data center 3 (DC3) All data centers belong to the same structure (allDC) Since all OSDs in a host belong to the host definition, there is no change needed. All the other assignments can be adjusted during runtime of the storage cluster by: Defining the bucket structure with the following commands: Moving the nodes into the appropriate place within this structure by modifying the CRUSH map: Within this structure, any new hosts can be added, as well as new disks. Placing the OSDs at the right place in the hierarchy causes the CRUSH algorithm to place redundant copies into different failure domains within the structure. The above example results in the following: Example The listing from above shows the resulting CRUSH map by displaying the osd tree. It is now easy to see how the hosts belong to a data center and how all data centers belong to the same top-level structure, while still clearly distinguishing between locations. Note Placing the data in the proper locations according to the map only works properly within a healthy cluster. Misplacement might happen in circumstances when some OSDs are not available. Those misplacements will be corrected automatically once it is possible to do so. Additional Resources See the CRUSH administration chapter in the Red Hat Ceph Storage Storage Strategies Guide for more information. | [
"ceph osd tree ID WEIGHT TYPE NAME UP/DOWN REWEIGHT PRIMARY-AFFINITY -1 0.33554 root default -2 0.04779 host ceph-node3 0 0.04779 osd.0 up 1.00000 1.00000 -3 0.04779 host ceph-node2 1 0.04779 osd.1 up 1.00000 1.00000 -4 0.04779 host ceph-node1 2 0.04779 osd.2 up 1.00000 1.00000 -5 0.04779 host ceph-node4 3 0.04779 osd.3 up 1.00000 1.00000 -6 0.07219 host ceph-node6 4 0.07219 osd.4 up 0.79999 1.00000 -7 0.07219 host ceph-node5 5 0.07219 osd.5 up 0.79999 1.00000",
"ceph osd crush add-bucket allDC root ceph osd crush add-bucket DC1 datacenter ceph osd crush add-bucket DC2 datacenter ceph osd crush add-bucket DC3 datacenter",
"ceph osd crush move DC1 root=allDC ceph osd crush move DC2 root=allDC ceph osd crush move DC3 root=allDC ceph osd crush move ceph-node1 datacenter=DC1 ceph osd crush move ceph-node2 datacenter=DC1 ceph osd crush move ceph-node3 datacenter=DC2 ceph osd crush move ceph-node5 datacenter=DC2 ceph osd crush move ceph-node4 datacenter=DC3 ceph osd crush move ceph-node6 datacenter=DC3",
"ceph osd tree ID WEIGHT TYPE NAME UP/DOWN REWEIGHT PRIMARY-AFFINITY -8 6.00000 root allDC -9 2.00000 datacenter DC1 -4 1.00000 host ceph-node1 2 1.00000 osd.2 up 1.00000 1.00000 -3 1.00000 host ceph-node2 1 1.00000 osd.1 up 1.00000 1.00000 -10 2.00000 datacenter DC2 -2 1.00000 host ceph-node3 0 1.00000 osd.0 up 1.00000 1.00000 -7 1.00000 host ceph-node5 5 1.00000 osd.5 up 0.79999 1.00000 -11 2.00000 datacenter DC3 -6 1.00000 host ceph-node6 4 1.00000 osd.4 up 0.79999 1.00000 -5 1.00000 host ceph-node4 3 1.00000 osd.3 up 1.00000 1.00000 -1 0 root default"
]
| https://docs.redhat.com/en/documentation/red_hat_ceph_storage/4/html/operations_guide/handling-a-data-center-failure |
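The chapter above builds the allDC hierarchy but does not show a CRUSH rule that consumes it. The following is an illustrative sketch, not taken from the original, of how a replicated rule could use datacenter as the failure domain and be assigned to a pool; the rule and pool names are placeholders:
# Create a replicated rule that starts at the allDC root and separates
# replicas across the datacenter buckets defined above
ceph osd crush rule create-replicated replicated_3dc allDC datacenter
# Assign the rule to an existing pool and keep three replicas
ceph osd pool set mypool crush_rule replicated_3dc
ceph osd pool set mypool size 3
With a pool size of 3 and datacenter as the failure domain, each data center holds one copy of the data, which helps the storage cluster continue operating when a single data center is lost.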
Chapter 12. Scalability and performance optimization | Chapter 12. Scalability and performance optimization 12.1. Optimizing storage Optimizing storage helps to minimize storage use across all resources. By optimizing storage, administrators help ensure that existing storage resources are working in an efficient manner. 12.1.1. Available persistent storage options Understand your persistent storage options so that you can optimize your OpenShift Container Platform environment. Table 12.1. Available storage options Storage type Description Examples Block Presented to the operating system (OS) as a block device Suitable for applications that need full control of storage and operate at a low level on files bypassing the file system Also referred to as a Storage Area Network (SAN) Non-shareable, which means that only one client at a time can mount an endpoint of this type AWS EBS and VMware vSphere support dynamic persistent volume (PV) provisioning natively in the OpenShift Container Platform. File Presented to the OS as a file system export to be mounted Also referred to as Network Attached Storage (NAS) Concurrency, latency, file locking mechanisms, and other capabilities vary widely between protocols, implementations, vendors, and scales. RHEL NFS, NetApp NFS [1] , and Vendor NFS Object Accessible through a REST API endpoint Configurable for use in the OpenShift image registry Applications must build their drivers into the application and/or container. AWS S3 NetApp NFS supports dynamic PV provisioning when using the Trident plugin. 12.1.2. Recommended configurable storage technology The following table summarizes the recommended and configurable storage technologies for the given OpenShift Container Platform cluster application. Table 12.2. Recommended and configurable storage technology Storage type Block File Object 1 ReadOnlyMany 2 ReadWriteMany 3 Prometheus is the underlying technology used for metrics. 4 This does not apply to physical disk, VM physical disk, VMDK, loopback over NFS, AWS EBS, and Azure Disk. 5 For metrics, using file storage with the ReadWriteMany (RWX) access mode is unreliable. If you use file storage, do not configure the RWX access mode on any persistent volume claims (PVCs) that are configured for use with metrics. 6 For logging, review the recommended storage solution in Configuring persistent storage for the log store section. Using NFS storage as a persistent volume or through NAS, such as Gluster, can corrupt the data. Hence, NFS is not supported for Elasticsearch storage and LokiStack log store in OpenShift Container Platform Logging. You must use one persistent volume type per log store. 7 Object storage is not consumed through OpenShift Container Platform's PVs or PVCs. Apps must integrate with the object storage REST API. ROX 1 Yes 4 Yes 4 Yes RWX 2 No Yes Yes Registry Configurable Configurable Recommended Scaled registry Not configurable Configurable Recommended Metrics 3 Recommended Configurable 5 Not configurable Elasticsearch Logging Recommended Configurable 6 Not supported 6 Loki Logging Not configurable Not configurable Recommended Apps Recommended Recommended Not configurable 7 Note A scaled registry is an OpenShift image registry where two or more pod replicas are running. 12.1.2.1. Specific application storage recommendations Important Testing shows issues with using the NFS server on Red Hat Enterprise Linux (RHEL) as a storage backend for core services. 
This includes the OpenShift Container Registry and Quay, Prometheus for monitoring storage, and Elasticsearch for logging storage. Therefore, using RHEL NFS to back PVs used by core services is not recommended. Other NFS implementations in the marketplace might not have these issues. Contact the individual NFS implementation vendor for more information on any testing that was possibly completed against these OpenShift Container Platform core components. 12.1.2.1.1. Registry In a non-scaled/high-availability (HA) OpenShift image registry cluster deployment: The storage technology does not have to support RWX access mode. The storage technology must ensure read-after-write consistency. The preferred storage technology is object storage followed by block storage. File storage is not recommended for OpenShift image registry cluster deployment with production workloads. 12.1.2.1.2. Scaled registry In a scaled/HA OpenShift image registry cluster deployment: The storage technology must support RWX access mode. The storage technology must ensure read-after-write consistency. The preferred storage technology is object storage. Red Hat OpenShift Data Foundation (ODF), Amazon Simple Storage Service (Amazon S3), Google Cloud Storage (GCS), Microsoft Azure Blob Storage, and OpenStack Swift are supported. Object storage should be S3 or Swift compliant. For non-cloud platforms, such as vSphere and bare metal installations, the only configurable technology is file storage. Block storage is not configurable. The use of Network File System (NFS) storage with OpenShift Container Platform is supported. However, the use of NFS storage with a scaled registry can cause known issues. For more information, see the Red Hat Knowledgebase solution, Is NFS supported for OpenShift cluster internal components in Production? . 12.1.2.1.3. Metrics In an OpenShift Container Platform hosted metrics cluster deployment: The preferred storage technology is block storage. Object storage is not configurable. Important It is not recommended to use file storage for a hosted metrics cluster deployment with production workloads. 12.1.2.1.4. Logging In an OpenShift Container Platform hosted logging cluster deployment: Loki Operator: The preferred storage technology is S3 compatible Object storage. Block storage is not configurable. OpenShift Elasticsearch Operator: The preferred storage technology is block storage. Object storage is not supported. Note As of logging version 5.4.3 the OpenShift Elasticsearch Operator is deprecated and is planned to be removed in a future release. Red Hat will provide bug fixes and support for this feature during the current release lifecycle, but this feature will no longer receive enhancements and will be removed. As an alternative to using the OpenShift Elasticsearch Operator to manage the default log storage, you can use the Loki Operator. 12.1.2.1.5. Applications Application use cases vary from application to application, as described in the following examples: Storage technologies that support dynamic PV provisioning have low mount time latencies, and are not tied to nodes to support a healthy cluster. Application developers are responsible for knowing and understanding the storage requirements for their application, and how it works with the provided storage to ensure that issues do not occur when an application scales or interacts with the storage layer. 12.1.2.2. 
Other specific application storage recommendations Important It is not recommended to use RAID configurations on Write intensive workloads, such as etcd . If you are running etcd with a RAID configuration, you might be at risk of encountering performance issues with your workloads. Red Hat OpenStack Platform (RHOSP) Cinder: RHOSP Cinder tends to be adept in ROX access mode use cases. Databases: Databases (RDBMSs, NoSQL DBs, etc.) tend to perform best with dedicated block storage. The etcd database must have enough storage and adequate performance capacity to enable a large cluster. Information about monitoring and benchmarking tools to establish ample storage and a high-performance environment is described in Recommended etcd practices . 12.1.3. Data storage management The following table summarizes the main directories that OpenShift Container Platform components write data to. Table 12.3. Main directories for storing OpenShift Container Platform data Directory Notes Sizing Expected growth /var/log Log files for all components. 10 to 30 GB. Log files can grow quickly; size can be managed by growing disks or by using log rotate. /var/lib/etcd Used for etcd storage when storing the database. Less than 20 GB. Database can grow up to 8 GB. Will grow slowly with the environment. Only storing metadata. Additional 20-25 GB for every additional 8 GB of memory. /var/lib/containers This is the mount point for the CRI-O runtime. Storage used for active container runtimes, including pods, and storage of local images. Not used for registry storage. 50 GB for a node with 16 GB memory. Note that this sizing should not be used to determine minimum cluster requirements. Additional 20-25 GB for every additional 8 GB of memory. Growth is limited by capacity for running containers. /var/lib/kubelet Ephemeral volume storage for pods. This includes anything external that is mounted into a container at runtime. Includes environment variables, kube secrets, and data volumes not backed by persistent volumes. Varies Minimal if pods requiring storage are using persistent volumes. If using ephemeral storage, this can grow quickly. 12.1.4. Optimizing storage performance for Microsoft Azure OpenShift Container Platform and Kubernetes are sensitive to disk performance, and faster storage is recommended, particularly for etcd on the control plane nodes. For production Azure clusters and clusters with intensive workloads, the virtual machine operating system disk for control plane machines should be able to sustain a tested and recommended minimum throughput of 5000 IOPS / 200MBps. This throughput can be provided by having a minimum of 1 TiB Premium SSD (P30). In Azure and Azure Stack Hub, disk performance is directly dependent on SSD disk sizes. To achieve the throughput supported by a Standard_D8s_v3 virtual machine, or other similar machine types, and the target of 5000 IOPS, at least a P30 disk is required. Host caching must be set to ReadOnly for low latency and high IOPS and throughput when reading data. Reading data from the cache, which is present either in the VM memory or in the local SSD disk, is much faster than reading from the disk, which is in the blob storage. 12.2. Optimizing routing The OpenShift Container Platform HAProxy router can be scaled or configured to optimize performance. 12.2.1. 
Baseline Ingress Controller (router) performance The OpenShift Container Platform Ingress Controller, or router, is the ingress point for ingress traffic for applications and services that are configured using routes and ingresses. When evaluating a single HAProxy router performance in terms of HTTP requests handled per second, the performance varies depending on many factors. In particular: HTTP keep-alive/close mode Route type TLS session resumption client support Number of concurrent connections per target route Number of target routes Back end server page size Underlying infrastructure (network, CPU, and so on) While performance in your specific environment will vary, Red Hat lab tests on a public cloud instance of size 4 vCPU/16GB RAM. A single HAProxy router handling 100 routes terminated by backends serving 1kB static pages is able to handle the following number of transactions per second. In HTTP keep-alive mode scenarios: Encryption LoadBalancerService HostNetwork none 21515 29622 edge 16743 22913 passthrough 36786 53295 re-encrypt 21583 25198 In HTTP close (no keep-alive) scenarios: Encryption LoadBalancerService HostNetwork none 5719 8273 edge 2729 4069 passthrough 4121 5344 re-encrypt 2320 2941 The default Ingress Controller configuration was used with the spec.tuningOptions.threadCount field set to 4 . Two different endpoint publishing strategies were tested: Load Balancer Service and Host Network. TLS session resumption was used for encrypted routes. With HTTP keep-alive, a single HAProxy router is capable of saturating a 1 Gbit NIC at page sizes as small as 8 kB. When running on bare metal with modern processors, you can expect roughly twice the performance of the public cloud instance above. This overhead is introduced by the virtualization layer in place on public clouds and holds mostly true for private cloud-based virtualization as well. The following table is a guide to how many applications to use behind the router: Number of applications Application type 5-10 static file/web server or caching proxy 100-1000 applications generating dynamic content In general, HAProxy can support routes for up to 1000 applications, depending on the technology in use. Ingress Controller performance might be limited by the capabilities and performance of the applications behind it, such as language or static versus dynamic content. Ingress, or router, sharding should be used to serve more routes towards applications and help horizontally scale the routing tier. For more information on Ingress sharding, see Configuring Ingress Controller sharding by using route labels and Configuring Ingress Controller sharding by using namespace labels . You can modify the Ingress Controller deployment by using the information provided in Setting Ingress Controller thread count for threads and Ingress Controller configuration parameters for timeouts, and other tuning configurations in the Ingress Controller specification. 12.2.2. Configuring Ingress Controller liveness, readiness, and startup probes Cluster administrators can configure the timeout values for the kubelet's liveness, readiness, and startup probes for router deployments that are managed by the OpenShift Container Platform Ingress Controller (router). The liveness and readiness probes of the router use the default timeout value of 1 second, which is too brief when networking or runtime performance is severely degraded. Probe timeouts can cause unwanted router restarts that interrupt application connections. 
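Before tuning these timeouts, it can help to review the probe settings that are currently applied to the default router deployment. The following command is a minimal sketch that uses only standard oc tooling; the jsonpath expression and the router-default deployment name reflect a default installation and may differ in customized clusters:
# Print the liveness, readiness, and startup probe settings of the router container
oc -n openshift-ingress get deployment/router-default -o jsonpath='{range .spec.template.spec.containers[?(@.name=="router")]}{.livenessProbe}{"\n"}{.readinessProbe}{"\n"}{.startupProbe}{"\n"}{end}'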
The ability to set larger timeout values can reduce the risk of unnecessary and unwanted restarts. You can update the timeoutSeconds value on the livenessProbe , readinessProbe , and startupProbe parameters of the router container. Parameter Description livenessProbe The livenessProbe reports to the kubelet whether a pod is dead and needs to be restarted. readinessProbe The readinessProbe reports whether a pod is healthy or unhealthy. When the readiness probe reports an unhealthy pod, then the kubelet marks the pod as not ready to accept traffic. Subsequently, the endpoints for that pod are marked as not ready, and this status propagates to the kube-proxy. On cloud platforms with a configured load balancer, the kube-proxy communicates to the cloud load-balancer not to send traffic to the node with that pod. startupProbe The startupProbe gives the router pod up to 2 minutes to initialize before the kubelet begins sending the router liveness and readiness probes. This initialization time can prevent routers with many routes or endpoints from prematurely restarting. Important The timeout configuration option is an advanced tuning technique that can be used to work around issues. However, these issues should eventually be diagnosed and possibly a support case or Jira issue opened for any issues that cause probes to time out. The following example demonstrates how you can directly patch the default router deployment to set a 5-second timeout for the liveness and readiness probes: USD oc -n openshift-ingress patch deploy/router-default --type=strategic --patch='{"spec":{"template":{"spec":{"containers":[{"name":"router","livenessProbe":{"timeoutSeconds":5},"readinessProbe":{"timeoutSeconds":5}}]}}}}' Verification USD oc -n openshift-ingress describe deploy/router-default | grep -e Liveness: -e Readiness: Liveness: http-get http://:1936/healthz delay=0s timeout=5s period=10s #success=1 #failure=3 Readiness: http-get http://:1936/healthz/ready delay=0s timeout=5s period=10s #success=1 #failure=3 12.2.3. Configuring HAProxy reload interval When you update a route or an endpoint associated with a route, the OpenShift Container Platform router updates the configuration for HAProxy. Then, HAProxy reloads the updated configuration for those changes to take effect. When HAProxy reloads, it generates a new process that handles new connections using the updated configuration. HAProxy keeps the old process running to handle existing connections until those connections are all closed. When old processes have long-lived connections, these processes can accumulate and consume resources. The default minimum HAProxy reload interval is five seconds. You can configure an Ingress Controller using its spec.tuningOptions.reloadInterval field to set a longer minimum reload interval. Warning Setting a large value for the minimum HAProxy reload interval can cause latency in observing updates to routes and their endpoints. To lessen the risk, avoid setting a value larger than the tolerable latency for updates. Procedure Change the minimum HAProxy reload interval of the default Ingress Controller to 15 seconds by running the following command: USD oc -n openshift-ingress-operator patch ingresscontrollers/default --type=merge --patch='{"spec":{"tuningOptions":{"reloadInterval":"15s"}}}' 12.3. Optimizing networking OVN-Kubernetes uses Generic Network Virtualization Encapsulation (Geneve) to tunnel traffic between nodes.
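If you are not sure which network plugin a cluster is running before applying the tuning described below, you can read it from the cluster Network configuration. This is a minimal sketch that uses only standard oc tooling; on current default installations it typically prints OVNKubernetes:
# Show the configured cluster network plugin
oc get network.config/cluster -o jsonpath='{.spec.networkType}{"\n"}'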
This network can be tuned by using network interface controller (NIC) offloads. Geneve provides benefits over VLANs, such as an increase in networks from 4096 to over 16 million, and layer 2 connectivity across physical networks. This allows for all pods behind a service to communicate with each other, even if they are running on different systems. Geneve encapsulates all tunneled traffic in user datagram protocol (UDP) packets. However, this leads to increased CPU utilization. Both these outer- and inner-packets are subject to normal checksumming rules to guarantee data is not corrupted during transit. Depending on CPU performance, this additional processing overhead can cause a reduction in throughput and increased latency when compared to traditional, non-overlay networks. Cloud, VM, and bare metal CPU performance can be capable of handling much more than one Gbps network throughput. When using higher bandwidth links such as 10 or 40 Gbps, reduced performance can occur. This is a known issue in Geneve-based environments and is not specific to containers or OpenShift Container Platform. Any network that relies on Geneve or VXLAN tunnels will perform similarly because of the tunnel implementation. If you are looking to push beyond one Gbps, you can: Evaluate network plugins that implement different routing techniques, such as border gateway protocol (BGP). Use Geneve-offload capable network adapters. Geneve-offload moves the packet checksum calculation and associated CPU overhead off of the system CPU and onto dedicated hardware on the network adapter. This frees up CPU cycles for use by pods and applications, and allows users to utilize the full bandwidth of their network infrastructure. Geneve-offload does not reduce latency. However, CPU utilization is reduced even in latency tests. 12.3.1. Optimizing the MTU for your network There are two important maximum transmission units (MTUs): the network interface controller (NIC) MTU and the cluster network MTU. The NIC MTU is configured at the time of OpenShift Container Platform installation, and you can also change the cluster's MTU as a Day 2 operation. See "Changing cluster network MTU" for more information. The MTU must be less than or equal to the maximum supported value of the NIC of your network. If you are optimizing for throughput, choose the largest possible value. If you are optimizing for lowest latency, choose a lower value. For OVN and Geneve, the MTU must be less than the NIC MTU by 100 bytes at a minimum. Additional resources Changing cluster network MTU 12.3.2. Recommended practices for installing large scale clusters When installing large clusters or scaling the cluster to larger node counts, set the cluster network cidr accordingly in your install-config.yaml file before you install the cluster: networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 networkType: OVNKubernetes serviceNetwork: - 172.30.0.0/16 The default cluster network cidr 10.128.0.0/14 cannot be used if the cluster size is more than 500 nodes. It must be set to 10.128.0.0/12 or 10.128.0.0/10 to get to larger node counts beyond 500 nodes. 12.3.3. Impact of IPsec Because encrypting and decrypting node hosts uses CPU power, performance is affected both in throughput and CPU usage on the nodes when encryption is enabled, regardless of the IP security system being used. IPSec encrypts traffic at the IP payload level, before it hits the NIC, protecting fields that would otherwise be used for NIC offloading. 
This means that some NIC acceleration features might not be usable when IPSec is enabled and will lead to decreased throughput and increased CPU usage. 12.3.4. Additional resources Specifying advanced network configuration Cluster Network Operator configuration Improving cluster stability in high latency environments using worker latency profiles 12.4. Optimizing CPU usage with mount namespace encapsulation You can optimize CPU usage in OpenShift Container Platform clusters by using mount namespace encapsulation to provide a private namespace for kubelet and CRI-O processes. This reduces the cluster CPU resources used by systemd with no difference in functionality. Important Mount namespace encapsulation is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . 12.4.1. Encapsulating mount namespaces Mount namespaces are used to isolate mount points so that processes in different namespaces cannot view each others' files. Encapsulation is the process of moving Kubernetes mount namespaces to an alternative location where they will not be constantly scanned by the host operating system. The host operating system uses systemd to constantly scan all mount namespaces: both the standard Linux mounts and the numerous mounts that Kubernetes uses to operate. The current implementation of kubelet and CRI-O both use the top-level namespace for all container runtime and kubelet mount points. However, encapsulating these container-specific mount points in a private namespace reduces systemd overhead with no difference in functionality. Using a separate mount namespace for both CRI-O and kubelet can encapsulate container-specific mounts from any systemd or other host operating system interaction. This ability to potentially achieve major CPU optimization is now available to all OpenShift Container Platform administrators. Encapsulation can also improve security by storing Kubernetes-specific mount points in a location safe from inspection by unprivileged users. The following diagrams illustrate a Kubernetes installation before and after encapsulation. Both scenarios show example containers which have mount propagation settings of bidirectional, host-to-container, and none. Here we see systemd, host operating system processes, kubelet, and the container runtime sharing a single mount namespace. systemd, host operating system processes, kubelet, and the container runtime each have access to and visibility of all mount points. Container 1, configured with bidirectional mount propagation, can access systemd and host mounts, kubelet and CRI-O mounts. A mount originating in Container 1, such as /run/a is visible to systemd, host operating system processes, kubelet, container runtime, and other containers with host-to-container or bidirectional mount propagation configured (as in Container 2). Container 2, configured with host-to-container mount propagation, can access systemd and host mounts, kubelet and CRI-O mounts. A mount originating in Container 2, such as /run/b , is not visible to any other context. 
Container 3, configured with no mount propagation, has no visibility of external mount points. A mount originating in Container 3, such as /run/c , is not visible to any other context. The following diagram illustrates the system state after encapsulation. The main systemd process is no longer devoted to unnecessary scanning of Kubernetes-specific mount points. It only monitors systemd-specific and host mount points. The host operating system processes can access only the systemd and host mount points. Using a separate mount namespace for both CRI-O and kubelet completely separates all container-specific mounts away from any systemd or other host operating system interaction whatsoever. The behavior of Container 1 is unchanged, except a mount it creates such as /run/a is no longer visible to systemd or host operating system processes. It is still visible to kubelet, CRI-O, and other containers with host-to-container or bidirectional mount propagation configured (like Container 2). The behavior of Container 2 and Container 3 is unchanged. 12.4.2. Configuring mount namespace encapsulation You can configure mount namespace encapsulation so that a cluster runs with less resource overhead. Note Mount namespace encapsulation is a Technology Preview feature and it is disabled by default. To use it, you must enable the feature manually. Prerequisites You have installed the OpenShift CLI ( oc ). You have logged in as a user with cluster-admin privileges. Procedure Create a file called mount_namespace_config.yaml with the following YAML: apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: master name: 99-kubens-master spec: config: ignition: version: 3.2.0 systemd: units: - enabled: true name: kubens.service --- apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: worker name: 99-kubens-worker spec: config: ignition: version: 3.2.0 systemd: units: - enabled: true name: kubens.service Apply the mount namespace MachineConfig CR by running the following command: USD oc apply -f mount_namespace_config.yaml Example output machineconfig.machineconfiguration.openshift.io/99-kubens-master created machineconfig.machineconfiguration.openshift.io/99-kubens-worker created The MachineConfig CR can take up to 30 minutes to finish being applied in the cluster. 
You can check the status of the MachineConfig CR by running the following command: USD oc get mcp Example output NAME CONFIG UPDATED UPDATING DEGRADED MACHINECOUNT READYMACHINECOUNT UPDATEDMACHINECOUNT DEGRADEDMACHINECOUNT AGE master rendered-master-03d4bc4befb0f4ed3566a2c8f7636751 False True False 3 0 0 0 45m worker rendered-worker-10577f6ab0117ed1825f8af2ac687ddf False True False 3 1 1 Wait for the MachineConfig CR to be applied successfully across all control plane and worker nodes after running the following command: USD oc wait --for=condition=Updated mcp --all --timeout=30m Example output machineconfigpool.machineconfiguration.openshift.io/master condition met machineconfigpool.machineconfiguration.openshift.io/worker condition met Verification To verify encapsulation for a cluster host, run the following commands: Open a debug shell to the cluster host: USD oc debug node/<node_name> Open a chroot session: sh-4.4# chroot /host Check the systemd mount namespace: sh-4.4# readlink /proc/1/ns/mnt Example output mnt:[4026531953] Check kubelet mount namespace: sh-4.4# readlink /proc/USD(pgrep kubelet)/ns/mnt Example output mnt:[4026531840] Check the CRI-O mount namespace: sh-4.4# readlink /proc/USD(pgrep crio)/ns/mnt Example output mnt:[4026531840] These commands return the mount namespaces associated with systemd, kubelet, and the container runtime. In OpenShift Container Platform, the container runtime is CRI-O. Encapsulation is in effect if systemd is in a different mount namespace to kubelet and CRI-O as in the above example. Encapsulation is not in effect if all three processes are in the same mount namespace. 12.4.3. Inspecting encapsulated namespaces You can inspect Kubernetes-specific mount points in the cluster host operating system for debugging or auditing purposes by using the kubensenter script that is available in Red Hat Enterprise Linux CoreOS (RHCOS). SSH shell sessions to the cluster host are in the default namespace. To inspect Kubernetes-specific mount points in an SSH shell prompt, you need to run the kubensenter script as root. The kubensenter script is aware of the state of the mount encapsulation, and is safe to run even if encapsulation is not enabled. Note oc debug remote shell sessions start inside the Kubernetes namespace by default. You do not need to run kubensenter to inspect mount points when you use oc debug . If the encapsulation feature is not enabled, the kubensenter findmnt and findmnt commands return the same output, regardless of whether they are run in an oc debug session or in an SSH shell prompt. Prerequisites You have installed the OpenShift CLI ( oc ). You have logged in as a user with cluster-admin privileges. You have configured SSH access to the cluster host. Procedure Open a remote SSH shell to the cluster host. For example: USD ssh core@<node_name> Run commands using the provided kubensenter script as the root user. To run a single command inside the Kubernetes namespace, provide the command and any arguments to the kubensenter script. For example, to run the findmnt command inside the Kubernetes namespace, run the following command: [core@control-plane-1 ~]USD sudo kubensenter findmnt Example output kubensenter: Autodetect: kubens.service namespace found at /run/kubens/mnt TARGET SOURCE FSTYPE OPTIONS / /dev/sda4[/ostree/deploy/rhcos/deploy/32074f0e8e5ec453e56f5a8a7bc9347eaa4172349ceab9c22b709d9d71a3f4b0.0] | xfs rw,relatime,seclabel,attr2,inode64,logbufs=8,logbsize=32k,prjquota shm tmpfs ... 
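The kubensenter script passes any arguments through to the wrapped command, so you can narrow the output in the same way as with a plain findmnt call. The following sketch assumes the default CRI-O overlay storage driver and lists only the overlay filesystems that back running containers:
# Run findmnt inside the Kubernetes mount namespace, filtered to overlay mounts
sudo kubensenter findmnt -t overlay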
To start a new interactive shell inside the Kubernetes namespace, run the kubensenter script without any arguments: [core@control-plane-1 ~]USD sudo kubensenter Example output kubensenter: Autodetect: kubens.service namespace found at /run/kubens/mnt 12.4.4. Running additional services in the encapsulated namespace Any monitoring tool that relies on the ability to run in the host operating system and have visibility of mount points created by kubelet, CRI-O, or containers themselves, must enter the container mount namespace to see these mount points. The kubensenter script that is provided with OpenShift Container Platform executes another command inside the Kubernetes mount point and can be used to adapt any existing tools. The kubensenter script is aware of the state of the mount encapsulation feature status, and is safe to run even if encapsulation is not enabled. In that case the script executes the provided command in the default mount namespace. For example, if a systemd service needs to run inside the new Kubernetes mount namespace, edit the service file and use the ExecStart= command line with kubensenter . [Unit] Description=Example service [Service] ExecStart=/usr/bin/kubensenter /path/to/original/command arg1 arg2 12.4.5. Additional resources What are namespaces Manage containers in namespaces by using nsenter MachineConfig | [
"oc -n openshift-ingress patch deploy/router-default --type=strategic --patch='{\"spec\":{\"template\":{\"spec\":{\"containers\":[{\"name\":\"router\",\"livenessProbe\":{\"timeoutSeconds\":5},\"readinessProbe\":{\"timeoutSeconds\":5}}]}}}}'",
"oc -n openshift-ingress describe deploy/router-default | grep -e Liveness: -e Readiness: Liveness: http-get http://:1936/healthz delay=0s timeout=5s period=10s #success=1 #failure=3 Readiness: http-get http://:1936/healthz/ready delay=0s timeout=5s period=10s #success=1 #failure=3",
"oc -n openshift-ingress-operator patch ingresscontrollers/default --type=merge --patch='{\"spec\":{\"tuningOptions\":{\"reloadInterval\":\"15s\"}}}'",
"networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 networkType: OVNKubernetes serviceNetwork: - 172.30.0.0/16",
"apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: master name: 99-kubens-master spec: config: ignition: version: 3.2.0 systemd: units: - enabled: true name: kubens.service --- apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: worker name: 99-kubens-worker spec: config: ignition: version: 3.2.0 systemd: units: - enabled: true name: kubens.service",
"oc apply -f mount_namespace_config.yaml",
"machineconfig.machineconfiguration.openshift.io/99-kubens-master created machineconfig.machineconfiguration.openshift.io/99-kubens-worker created",
"oc get mcp",
"NAME CONFIG UPDATED UPDATING DEGRADED MACHINECOUNT READYMACHINECOUNT UPDATEDMACHINECOUNT DEGRADEDMACHINECOUNT AGE master rendered-master-03d4bc4befb0f4ed3566a2c8f7636751 False True False 3 0 0 0 45m worker rendered-worker-10577f6ab0117ed1825f8af2ac687ddf False True False 3 1 1",
"oc wait --for=condition=Updated mcp --all --timeout=30m",
"machineconfigpool.machineconfiguration.openshift.io/master condition met machineconfigpool.machineconfiguration.openshift.io/worker condition met",
"oc debug node/<node_name>",
"sh-4.4# chroot /host",
"sh-4.4# readlink /proc/1/ns/mnt",
"mnt:[4026531953]",
"sh-4.4# readlink /proc/USD(pgrep kubelet)/ns/mnt",
"mnt:[4026531840]",
"sh-4.4# readlink /proc/USD(pgrep crio)/ns/mnt",
"mnt:[4026531840]",
"ssh core@<node_name>",
"[core@control-plane-1 ~]USD sudo kubensenter findmnt",
"kubensenter: Autodetect: kubens.service namespace found at /run/kubens/mnt TARGET SOURCE FSTYPE OPTIONS / /dev/sda4[/ostree/deploy/rhcos/deploy/32074f0e8e5ec453e56f5a8a7bc9347eaa4172349ceab9c22b709d9d71a3f4b0.0] | xfs rw,relatime,seclabel,attr2,inode64,logbufs=8,logbsize=32k,prjquota shm tmpfs",
"[core@control-plane-1 ~]USD sudo kubensenter",
"kubensenter: Autodetect: kubens.service namespace found at /run/kubens/mnt",
"[Unit] Description=Example service [Service] ExecStart=/usr/bin/kubensenter /path/to/original/command arg1 arg2"
]
| https://docs.redhat.com/en/documentation/openshift_container_platform/4.18/html/scalability_and_performance/scalability-and-performance-optimization |
Making open source more inclusive | Making open source more inclusive Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright's message . | null | https://docs.redhat.com/en/documentation/red_hat_software_collections/3/html/3.8_release_notes/making_open_source_more_inclusive |
Chapter 4. Accessing the registry | Chapter 4. Accessing the registry Use the following sections for instructions on accessing the registry, including viewing logs and metrics, as well as securing and exposing the registry. You can access the registry directly to invoke podman commands. This allows you to push images to or pull them from the integrated registry directly using operations like podman push or podman pull . To do so, you must be logged in to the registry using the podman login command. The operations you can perform depend on your user permissions, as described in the following sections. 4.1. Prerequisites You have access to the cluster as a user with the cluster-admin role. You must have configured an identity provider (IDP). For pulling images, for example when using the podman pull command, the user must have the registry-viewer role. To add this role, run the following command: USD oc policy add-role-to-user registry-viewer <user_name> For writing or pushing images, for example when using the podman push command: The user must have the registry-editor role. To add this role, run the following command: USD oc policy add-role-to-user registry-editor <user_name> Your cluster must have an existing project where the images can be pushed to. 4.2. Accessing the registry directly from the cluster You can access the registry from inside the cluster. Procedure Access the registry from the cluster by using internal routes: Access the node by getting the node's name: USD oc get nodes USD oc debug nodes/<node_name> To enable access to tools such as oc and podman on the node, change your root directory to /host : sh-4.2# chroot /host Log in to the container image registry by using your access token: sh-4.2# oc login -u kubeadmin -p <password_from_install_log> https://api-int.<cluster_name>.<base_domain>:6443 sh-4.2# podman login -u kubeadmin -p USD(oc whoami -t) image-registry.openshift-image-registry.svc:5000 You should see a message confirming login, such as: Login Succeeded! Note You can pass any value for the user name; the token contains all necessary information. Passing a user name that contains colons will result in a login failure. Since the Image Registry Operator creates the route, it will likely be similar to default-route-openshift-image-registry.<cluster_name> . Perform podman pull and podman push operations against your registry: Important You can pull arbitrary images, but if you have the system:registry role added, you can only push images to the registry in your project. In the following examples, use: Component Value <registry_ip> 172.30.124.220 <port> 5000 <project> openshift <image> image <tag> omitted (defaults to latest ) Pull an arbitrary image: sh-4.2# podman pull <name.io>/<image> Tag the new image with the form <registry_ip>:<port>/<project>/<image> . The project name must appear in this pull specification for OpenShift Container Platform to correctly place and later access the image in the registry: sh-4.2# podman tag <name.io>/<image> image-registry.openshift-image-registry.svc:5000/openshift/<image> Note You must have the system:image-builder role for the specified project, which allows the user to write or push an image. Otherwise, the podman push in the step will fail. To test, you can create a new project to push the image. 
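For example, the following sketch creates a throwaway project for the push test; the project name is an arbitrary placeholder, and creating the project normally grants your user permission to push images into it:
# Create a scratch project to receive the test push
oc new-project image-push-test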
Push the newly tagged image to your registry: sh-4.2# podman push image-registry.openshift-image-registry.svc:5000/openshift/<image> Note When pushing images to the internal registry, the repository name must use the <project>/<name> format. Using multiple project levels in the repository name results in an authentication error. 4.3. Checking the status of the registry pods As a cluster administrator, you can list the image registry pods running in the openshift-image-registry project and check their status. Prerequisites You have access to the cluster as a user with the cluster-admin role. Procedure List the pods in the openshift-image-registry project and view their status: USD oc get pods -n openshift-image-registry Example output NAME READY STATUS RESTARTS AGE image-registry-79fb4469f6-llrln 1/1 Running 0 77m node-ca-hjksc 1/1 Running 0 73m node-ca-tftj6 1/1 Running 0 77m node-ca-wb6ht 1/1 Running 0 77m node-ca-zvt9q 1/1 Running 0 74m 4.4. Viewing registry logs You can view the logs for the registry by using the oc logs command. Procedure Use the oc logs command with deployments to view the logs for the container image registry: USD oc logs deployments/image-registry -n openshift-image-registry Example output 2015-05-01T19:48:36.300593110Z time="2015-05-01T19:48:36Z" level=info msg="version=v2.0.0+unknown" 2015-05-01T19:48:36.303294724Z time="2015-05-01T19:48:36Z" level=info msg="redis not configured" instance.id=9ed6c43d-23ee-453f-9a4b-031fea646002 2015-05-01T19:48:36.303422845Z time="2015-05-01T19:48:36Z" level=info msg="using inmemory layerinfo cache" instance.id=9ed6c43d-23ee-453f-9a4b-031fea646002 2015-05-01T19:48:36.303433991Z time="2015-05-01T19:48:36Z" level=info msg="Using OpenShift Auth handler" 2015-05-01T19:48:36.303439084Z time="2015-05-01T19:48:36Z" level=info msg="listening on :5000" instance.id=9ed6c43d-23ee-453f-9a4b-031fea646002 4.5. Accessing registry metrics The OpenShift Container Registry provides an endpoint for Prometheus metrics . Prometheus is a stand-alone, open source systems monitoring and alerting toolkit. The metrics are exposed at the /extensions/v2/metrics path of the registry endpoint. Procedure You can access the metrics by running a metrics query using a cluster role. Cluster role Create a cluster role if you do not already have one to access the metrics: USD cat <<EOF | oc create -f - apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: prometheus-scraper rules: - apiGroups: - image.openshift.io resources: - registry/metrics verbs: - get EOF Add this role to a user, run the following command: USD oc adm policy add-cluster-role-to-user prometheus-scraper <username> Metrics query Get the user token. openshift: USD oc whoami -t Run a metrics query in node or inside a pod, for example: USD curl --insecure -s -u <user>:<secret> \ 1 https://image-registry.openshift-image-registry.svc:5000/extensions/v2/metrics | grep imageregistry | head -n 20 Example output # HELP imageregistry_build_info A metric with a constant '1' value labeled by major, minor, git commit & git version from which the image registry was built. # TYPE imageregistry_build_info gauge imageregistry_build_info{gitCommit="9f72191",gitVersion="v3.11.0+9f72191-135-dirty",major="3",minor="11+"} 1 # HELP imageregistry_digest_cache_requests_total Total number of requests without scope to the digest cache. 
# TYPE imageregistry_digest_cache_requests_total counter imageregistry_digest_cache_requests_total{type="Hit"} 5 imageregistry_digest_cache_requests_total{type="Miss"} 24 # HELP imageregistry_digest_cache_scoped_requests_total Total number of scoped requests to the digest cache. # TYPE imageregistry_digest_cache_scoped_requests_total counter imageregistry_digest_cache_scoped_requests_total{type="Hit"} 33 imageregistry_digest_cache_scoped_requests_total{type="Miss"} 44 # HELP imageregistry_http_in_flight_requests A gauge of requests currently being served by the registry. # TYPE imageregistry_http_in_flight_requests gauge imageregistry_http_in_flight_requests 1 # HELP imageregistry_http_request_duration_seconds A histogram of latencies for requests to the registry. # TYPE imageregistry_http_request_duration_seconds summary imageregistry_http_request_duration_seconds{method="get",quantile="0.5"} 0.01296087 imageregistry_http_request_duration_seconds{method="get",quantile="0.9"} 0.014847248 imageregistry_http_request_duration_seconds{method="get",quantile="0.99"} 0.015981195 imageregistry_http_request_duration_seconds_sum{method="get"} 12.260727916000022 1 The <user> object can be arbitrary, but <secret> tag must use the user token. 4.6. Additional resources For more information on allowing pods in a project to reference images in another project, see Allowing pods to reference images across projects . A kubeadmin can access the registry until deleted. See Removing the kubeadmin user for more information. For more information on configuring an identity provider, see Understanding identity provider configuration . | [
"oc policy add-role-to-user registry-viewer <user_name>",
"oc policy add-role-to-user registry-editor <user_name>",
"oc get nodes",
"oc debug nodes/<node_name>",
"sh-4.2# chroot /host",
"sh-4.2# oc login -u kubeadmin -p <password_from_install_log> https://api-int.<cluster_name>.<base_domain>:6443",
"sh-4.2# podman login -u kubeadmin -p USD(oc whoami -t) image-registry.openshift-image-registry.svc:5000",
"Login Succeeded!",
"sh-4.2# podman pull <name.io>/<image>",
"sh-4.2# podman tag <name.io>/<image> image-registry.openshift-image-registry.svc:5000/openshift/<image>",
"sh-4.2# podman push image-registry.openshift-image-registry.svc:5000/openshift/<image>",
"oc get pods -n openshift-image-registry",
"NAME READY STATUS RESTARTS AGE image-registry-79fb4469f6-llrln 1/1 Running 0 77m node-ca-hjksc 1/1 Running 0 73m node-ca-tftj6 1/1 Running 0 77m node-ca-wb6ht 1/1 Running 0 77m node-ca-zvt9q 1/1 Running 0 74m",
"oc logs deployments/image-registry -n openshift-image-registry",
"2015-05-01T19:48:36.300593110Z time=\"2015-05-01T19:48:36Z\" level=info msg=\"version=v2.0.0+unknown\" 2015-05-01T19:48:36.303294724Z time=\"2015-05-01T19:48:36Z\" level=info msg=\"redis not configured\" instance.id=9ed6c43d-23ee-453f-9a4b-031fea646002 2015-05-01T19:48:36.303422845Z time=\"2015-05-01T19:48:36Z\" level=info msg=\"using inmemory layerinfo cache\" instance.id=9ed6c43d-23ee-453f-9a4b-031fea646002 2015-05-01T19:48:36.303433991Z time=\"2015-05-01T19:48:36Z\" level=info msg=\"Using OpenShift Auth handler\" 2015-05-01T19:48:36.303439084Z time=\"2015-05-01T19:48:36Z\" level=info msg=\"listening on :5000\" instance.id=9ed6c43d-23ee-453f-9a4b-031fea646002",
"cat <<EOF | oc create -f - apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: prometheus-scraper rules: - apiGroups: - image.openshift.io resources: - registry/metrics verbs: - get EOF",
"oc adm policy add-cluster-role-to-user prometheus-scraper <username>",
"openshift: oc whoami -t",
"curl --insecure -s -u <user>:<secret> \\ 1 https://image-registry.openshift-image-registry.svc:5000/extensions/v2/metrics | grep imageregistry | head -n 20",
"HELP imageregistry_build_info A metric with a constant '1' value labeled by major, minor, git commit & git version from which the image registry was built. TYPE imageregistry_build_info gauge imageregistry_build_info{gitCommit=\"9f72191\",gitVersion=\"v3.11.0+9f72191-135-dirty\",major=\"3\",minor=\"11+\"} 1 HELP imageregistry_digest_cache_requests_total Total number of requests without scope to the digest cache. TYPE imageregistry_digest_cache_requests_total counter imageregistry_digest_cache_requests_total{type=\"Hit\"} 5 imageregistry_digest_cache_requests_total{type=\"Miss\"} 24 HELP imageregistry_digest_cache_scoped_requests_total Total number of scoped requests to the digest cache. TYPE imageregistry_digest_cache_scoped_requests_total counter imageregistry_digest_cache_scoped_requests_total{type=\"Hit\"} 33 imageregistry_digest_cache_scoped_requests_total{type=\"Miss\"} 44 HELP imageregistry_http_in_flight_requests A gauge of requests currently being served by the registry. TYPE imageregistry_http_in_flight_requests gauge imageregistry_http_in_flight_requests 1 HELP imageregistry_http_request_duration_seconds A histogram of latencies for requests to the registry. TYPE imageregistry_http_request_duration_seconds summary imageregistry_http_request_duration_seconds{method=\"get\",quantile=\"0.5\"} 0.01296087 imageregistry_http_request_duration_seconds{method=\"get\",quantile=\"0.9\"} 0.014847248 imageregistry_http_request_duration_seconds{method=\"get\",quantile=\"0.99\"} 0.015981195 imageregistry_http_request_duration_seconds_sum{method=\"get\"} 12.260727916000022"
]
| https://docs.redhat.com/en/documentation/openshift_container_platform/4.17/html/registry/accessing-the-registry |
Appendix A. Using your subscription | Appendix A. Using your subscription AMQ is provided through a software subscription. To manage your subscriptions, access your account at the Red Hat Customer Portal. A.1. Accessing your account Procedure Go to access.redhat.com . If you do not already have an account, create one. Log in to your account. A.2. Activating a subscription Procedure Go to access.redhat.com . Navigate to My Subscriptions . Navigate to Activate a subscription and enter your 16-digit activation number. A.3. Downloading release files To access .zip, .tar.gz, and other release files, use the customer portal to find the relevant files for download. If you are using RPM packages or the Red Hat Maven repository, this step is not required. Procedure Open a browser and log in to the Red Hat Customer Portal Product Downloads page at access.redhat.com/downloads . Locate the Red Hat AMQ entries in the INTEGRATION AND AUTOMATION category. Select the desired AMQ product. The Software Downloads page opens. Click the Download link for your component. A.4. Registering your system for packages To install RPM packages for this product on Red Hat Enterprise Linux, your system must be registered. If you are using downloaded release files, this step is not required. Procedure Go to access.redhat.com . Navigate to Registration Assistant . Select your OS version and continue to the page. Use the listed command in your system terminal to complete the registration. For more information about registering your system, see one of the following resources: Red Hat Enterprise Linux 7 - Registering the system and managing subscriptions Red Hat Enterprise Linux 8 - Registering the system and managing subscriptions | null | https://docs.redhat.com/en/documentation/red_hat_amq/2021.q1/html/using_the_amq_cpp_client/using_your_subscription |
Chapter 4. Staggered upgrade | Chapter 4. Staggered upgrade As a storage administrator, you can upgrade Red Hat Ceph Storage components in phases rather than all at once. The ceph orch upgrade command enables you to specify options to limit which daemons are upgraded by a single upgrade command. Note If you want to upgrade from a version that does not support staggered upgrades, you must first manually upgrade the Ceph Manager ( ceph-mgr ) daemons. For more information on performing a staggered upgrade from releases, see Performing a staggered upgrade from releases . 4.1. Staggered upgrade options The ceph orch upgrade command supports several options to upgrade cluster components in phases. The staggered upgrade options include: --daemon_types : The --daemon_types option takes a comma-separated list of daemon types and will only upgrade daemons of those types. Valid daemon types for this option include mgr , mon , crash , osd , mds , rgw , rbd-mirror , cephfs-mirror , and nfs . --services : The --services option is mutually exclusive with --daemon-types , only takes services of one type at a time, and will only upgrade daemons belonging to those services. For example, you cannot provide an OSD and RGW service simultaneously. --hosts : You can combine the --hosts option with --daemon_types , --services , or use it on its own. The --hosts option parameter follows the same format as the command line options for orchestrator CLI placement specification. --limit : The --limit option takes an integer greater than zero and provides a numerical limit on the number of daemons cephadm will upgrade. You can combine the --limit option with --daemon_types , --services , or --hosts . For example, if you specify to upgrade daemons of type osd on host01 with a limit set to 3 , cephadm will upgrade up to three OSD daemons on host01. 4.1.1. Performing a staggered upgrade As a storage administrator, you can use the ceph orch upgrade options to limit which daemons are upgraded by a single upgrade command. Cephadm strictly enforces an order for the upgrade of daemons that is still present in staggered upgrade scenarios. The current upgrade order is: Ceph Manager nodes Ceph Monitor nodes Ceph-crash daemons Ceph OSD nodes Ceph Metadata Server (MDS) nodes Ceph Object Gateway (RGW) nodes Ceph RBD-mirror node CephFS-mirror node Ceph NFS nodes Note If you specify parameters that upgrade daemons out of order, the upgrade command blocks and notes which daemons you need to upgrade before you proceed. Example Note There is no required order for restarting the instances. Red Hat recommends restarting the instance pointing to the pool with primary images followed by the instance pointing to the mirrored pool. Prerequisites A cluster running Red Hat Ceph Storage 5.3 or 6.1. Root-level access to all the nodes. At least two Ceph Manager nodes in the storage cluster: one active and one standby. 
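Before starting the procedure, it can help to see how the limiting options described above combine on a single command line. The following is a sketch only; the image name is a placeholder that must match the release you are upgrading to, and the command restricts one upgrade run to at most three OSD daemons on host01:
# Upgrade up to three OSD daemons on host01 in this run
ceph orch upgrade start --image registry.redhat.io/rhceph/rhceph-7-rhel9:latest --daemon-types osd --hosts host01 --limit 3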
Procedure Log into the cephadm shell: Example Ensure all the hosts are online and that the storage cluster is healthy: Example Set the OSD noout , noscrub , and nodeep-scrub flags to prevent OSDs from getting marked out during upgrade and to avoid unnecessary load on the cluster: Example Check service versions and the available target containers: Syntax Example Upgrade the storage cluster: To upgrade specific daemon types on specific hosts: Syntax Example To specify specific services and limit the number of daemons to upgrade: Syntax Example Note In staggered upgrade scenarios, if using a limiting parameter, the monitoring stack daemons, including Prometheus and node-exporter , are refreshed after the upgrade of the Ceph Manager daemons. As a result of the limiting parameter, Ceph Manager upgrades take longer to complete. The versions of monitoring stack daemons might not change between Ceph releases, in which case, they are only redeployed. Note Upgrade commands with limiting parameters validate the options before beginning the upgrade, which can require pulling the new container image. As a result, the upgrade start command might take a while to return when you provide limiting parameters. To see which daemons you still need to upgrade, run the ceph orch upgrade check or ceph versions command: Example To complete the staggered upgrade, verify the upgrade of all remaining services: Syntax Example Verification Verify the new IMAGE_ID and VERSION of the Ceph cluster: Example When the upgrade is complete, unset the noout , noscrub , and nodeep-scrub flags: Example 4.1.2. Performing a staggered upgrade from releases You can perform a staggered upgrade on your storage cluster by providing the necessary arguments. If you want to upgrade from a version that does not support staggered upgrades, you must first manually upgrade the Ceph Manager ( ceph-mgr ) daemons. Once you have upgraded the Ceph Manager daemons, you can pass the limiting parameters to complete the staggered upgrade. Important Verify you have at least two running Ceph Manager daemons before attempting this procedure. Prerequisites A cluster running Red Hat Ceph Storage 5.2 or earlier. At least two Ceph Manager nodes in the storage cluster: one active and one standby. Procedure Log into the Cephadm shell: Example Determine which Ceph Manager is active and which are standby: Example Manually upgrade each standby Ceph Manager daemon: Syntax Example Fail over to the upgraded standby Ceph Manager: Example Check that the standby Ceph Manager is now active: Example Verify that the active Ceph Manager is upgraded to the new version: Syntax Example Repeat steps 2 - 6 to upgrade the remaining Ceph Managers to the new version. Check that all Ceph Managers are upgraded to the new version: Example Once you upgrade all your Ceph Managers, you can specify the limiting parameters and complete the remainder of the staggered upgrade. Additional Resources For more information about performing a staggered upgrade and staggered upgrade options, see Performing a staggered upgrade . | [
"ceph orch upgrade start --image registry.redhat.io/rhceph/rhceph-7-rhel9:latest --hosts host02 Error EINVAL: Cannot start upgrade. Daemons with types earlier in upgrade order than daemons on given host need upgrading. Please first upgrade mon.ceph-host01",
"cephadm shell",
"ceph -s",
"ceph osd set noout ceph osd set noscrub ceph osd set nodeep-scrub",
"ceph orch upgrade check IMAGE_NAME",
"ceph orch upgrade check registry.redhat.io/rhceph/rhceph-7-rhel9:latest",
"ceph orch upgrade start --image IMAGE_NAME --daemon-types DAEMON_TYPE1 , DAEMON_TYPE2 --hosts HOST1 , HOST2",
"ceph orch upgrade start --image registry.redhat.io/rhceph/rhceph-7-rhel9:latest --daemon-types mgr,mon --hosts host02,host03",
"ceph orch upgrade start --image IMAGE_NAME --services SERVICE1 , SERVICE2 --limit LIMIT_NUMBER",
"ceph orch upgrade start --image registry.redhat.io/rhceph/rhceph-7-rhel9:latest --services rgw.example1,rgw1.example2 --limit 2",
"ceph orch upgrade check --image registry.redhat.io/rhceph/rhceph-7-rhel9:latest",
"ceph orch upgrade start --image IMAGE_NAME",
"ceph orch upgrade start --image registry.redhat.io/rhceph/rhceph-7-rhel9:latest",
"ceph versions ceph orch ps",
"ceph osd unset noout ceph osd unset noscrub ceph osd unset nodeep-scrub",
"cephadm shell",
"ceph -s cluster: id: 266ee7a8-2a05-11eb-b846-5254002d4916 health: HEALTH_OK services: mon: 2 daemons, quorum host01,host02 (age 92s) mgr: host01.ndtpjh(active, since 16h), standbys: host02.pzgrhz",
"ceph orch daemon redeploy mgr.ceph- HOST . MANAGER_ID --image IMAGE_ID",
"ceph orch daemon redeploy mgr.ceph-host02.pzgrhz --image registry.redhat.io/rhceph/rhceph-7-rhel9:latest",
"ceph mgr fail",
"ceph -s cluster: id: 266ee7a8-2a05-11eb-b846-5254002d4916 health: HEALTH_OK services: mon: 2 daemons, quorum host01,host02 (age 1h) mgr: host02.pzgrhz(active, since 25s), standbys: host01.ndtpjh",
"ceph tell mgr.ceph- HOST . MANAGER_ID version",
"ceph tell mgr.host02.pzgrhz version { \"version\": \"18.2.0-128.el8cp\", \"release\": \"reef\", \"release_type\": \"stable\" }",
"ceph mgr versions { \"ceph version 18.2.0-128.el8cp (600e227816517e2da53d85f2fab3cd40a7483372) pacific (stable)\": 2 }"
]
| https://docs.redhat.com/en/documentation/red_hat_ceph_storage/7/html/upgrade_guide/staggered-upgrade |
Chapter 3. Managing externally signed certificates for IdM users, hosts, and services | Chapter 3. Managing externally signed certificates for IdM users, hosts, and services This chapter describes how to use the Identity Management (IdM) command-line interface (CLI) and the IdM Web UI to add or remove user, host, or service certificates that were issued by an external certificate authority (CA). 3.1. Adding a certificate issued by an external CA to an IdM user, host, or service by using the IdM CLI As an Identity Management (IdM) administrator, you can add an externally signed certificate to the account of an IdM user, host, or service by using the Identity Management (IdM) CLI. Prerequisites You have obtained the ticket-granting ticket of an administrative user. Procedure To add a certificate to an IdM user, enter: The command requires you to specify the following information: The name of the user The Base64-encoded DER certificate Note Instead of copying and pasting the certificate contents into the command line, you can convert the certificate to the DER format and then re-encode it to Base64. For example, to add the user_cert.pem certificate to user , enter: You can run the ipa user-add-cert command interactively by executing it without adding any options. To add a certificate to an IdM host, enter: ipa host-add-cert To add a certificate to an IdM service, enter: ipa service-add-cert Additional resources Managing certificates for users, hosts, and services using the integrated IdM CA 3.2. Adding a certificate issued by an external CA to an IdM user, host, or service by using the IdM Web UI As an Identity Management (IdM) administrator, you can add an externally signed certificate to the account of an IdM user, host, or service by using the Identity Management (IdM) Web UI. Prerequisites You are logged in to the Identity Management (IdM) Web UI as an administrative user. Procedure Open the Identity tab, and select the Users , Hosts , or Services subtab. Click the name of the user, host, or service to open its configuration page. Click Add to the Certificates entry. Figure 3.1. Adding a certificate to a user account Paste the certificate in Base64 or PEM encoded format into the text field, and click Add . Click Save to store the changes. 3.3. Removing a certificate issued by an external CA from an IdM user, host, or service account by using the IdM CLI As an Identity Management (IdM) administrator, you can remove an externally signed certificate from the account of an IdM user, host, or service by using the Identity Management (IdM) CLI . Prerequisites You have obtained the ticket-granting ticket of an administrative user. Procedure To remove a certificate from an IdM user, enter: The command requires you to specify the following information: The name of the user The Base64-encoded DER certificate Note Instead of copying and pasting the certificate contents into the command line, you can convert the certificate to the DER format and then re-encode it to Base64. For example, to remove the user_cert.pem certificate from user , enter: You can run the ipa user-remove-cert command interactively by executing it without adding any options. To remove a certificate from an IdM host, enter: ipa host-remove-cert To remove a certificate from an IdM service, enter: ipa service-remove-cert Additional resources Managing certificates for users, hosts, and services using the integrated IdM CA 3.4. 
Removing a certificate issued by an external CA from an IdM user, host, or service account by using the IdM Web UI As an Identity Management (IdM) administrator, you can remove an externally signed certificate from the account of an IdM user, host, or service by using the Identity Management (IdM) Web UI. Prerequisites You are logged in to the Identity Management (IdM) Web UI as an administrative user. Procedure Open the Identity tab, and select the Users , Hosts , or Services subtab. Click the name of the user, host, or service to open its configuration page. Click the Actions to the certificate to delete, and select Delete . Click Save to store the changes. 3.5. Additional resources Ensuring the presence of an externally signed certificate in an IdM service entry using an Ansible playbook | [
"ipa user-add-cert user --certificate= MIQTPrajQAwg",
"ipa user-add-cert user --certificate=\"USD(openssl x509 -outform der -in user_cert.pem | base64 -w 0)\"",
"ipa user-remove-cert user --certificate= MIQTPrajQAwg",
"ipa user-remove-cert user --certificate=\"USD(openssl x509 -outform der -in user_cert.pem | base64 -w 0)\""
]
| https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/managing_certificates_in_idm/managing-externally-signed-certificates-for-idm-users-hosts-and-services_working-with-idm-certificates |
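For the host and service variants mentioned above, the command shape mirrors the documented user example. The following sketch adds an externally signed certificate to an IdM host entry; the host name and PEM file name are placeholders:
# Convert the certificate to DER, Base64-encode it, and attach it to the host entry
ipa host-add-cert host01.idm.example.com --certificate="$(openssl x509 -outform der -in host_cert.pem | base64 -w 0)"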
probe::scheduler.migrate | probe::scheduler.migrate Name probe::scheduler.migrate - Task migrating across cpus Synopsis scheduler.migrate Values priority priority of the task being migrated cpu_to the destination cpu cpu_from the original cpu task the process that is being migrated name name of the probe point pid PID of the task being migrated | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/systemtap_tapset_reference/api-scheduler-migrate |
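A minimal SystemTap one-liner that prints these values each time a task migrates between CPUs; this is a sketch and assumes the systemtap package and matching kernel debuginfo are installed on the host:
# Trace task migrations using the values exposed by this probe
stap -e 'probe scheduler.migrate { printf("%s pid=%d prio=%d cpu%d->cpu%d\n", name, pid, priority, cpu_from, cpu_to) }'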
Chapter 29. Graphics Driver and Miscellaneous Driver Updates | Chapter 29. Graphics Driver and Miscellaneous Driver Updates The HDA driver has been updated to the latest upstream version to use the new jack kctls method. The HPI driver has been updated to version 4.14. The Realtek HD-audio codec driver has been updated to include the update of EAPD init codes. The IPMI driver has been updated to replace the timespec usage with timespec64. The i915 driver has been updated to include the rebase of the ACPI Video Extensions driver in Red Hat Enterprise Linux 7.2. The ACPI Fan driver has been updated to version 0.25. The NVM-Express driver has been updated to version 3.19. The rtsx driver has been updated to version 4.0 to support the rtl8402, rts524A, and rts525A chips. The Generic WorkQueue Engine device driver has been updated to the latest upstream version. The PCI driver has been updated to version 3.16. The EDAC kernel module has been updated to provide support for Intel Xeon v4 processors. The pstate driver has been updated to support 6th Generation Intel Core processors. The intel_idle driver has been updated to support 6th Generation Intel Core processors. | null | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/7.2_release_notes/misc_drivers
Distributed tracing | Distributed tracing OpenShift Container Platform 4.14 Configuring and using distributed tracing in OpenShift Container Platform Red Hat OpenShift Documentation Team | [
"oc edit configmap tempo-operator-manager-config -n openshift-tempo-operator 1",
"data: controller_manager_config.yaml: | featureGates: httpEncryption: false grpcEncryption: false builtInCertManagement: enabled: false",
"oc rollout restart deployment.apps/tempo-operator-controller -n openshift-tempo-operator",
"kind: ImageSetConfiguration apiVersion: mirror.openshift.io/v1alpha2 archiveSize: 20 storageConfig: local: path: /home/user/images mirror: operators: - catalog: registry.redhat.io/redhat/redhat-operator-index:v4.13 packages: - name: tempo-product channels: - name: stable additionalImages: - name: registry.redhat.io/rhosdt/tempo-rhel8@sha256:e4295f837066efb05bcc5897f31eb2bdbd81684a8c59d6f9498dd3590c62c12a - name: registry.redhat.io/rhosdt/tempo-gateway-rhel8@sha256:b62f5cedfeb5907b638f14ca6aaeea50f41642980a8a6f87b7061e88d90fac23 - name: registry.redhat.io/rhosdt/tempo-gateway-opa-rhel8@sha256:8cd134deca47d6817b26566e272e6c3f75367653d589f5c90855c59b2fab01e9 - name: registry.redhat.io/rhosdt/tempo-query-rhel8@sha256:0da43034f440b8258a48a0697ba643b5643d48b615cdb882ac7f4f1f80aad08e",
"oc edit configmap tempo-operator-manager-config -n openshift-tempo-operator 1",
"data: controller_manager_config.yaml: | featureGates: httpEncryption: false grpcEncryption: false builtInCertManagement: enabled: false",
"oc rollout restart deployment.apps/tempo-operator-controller -n openshift-tempo-operator",
"kind: ImageSetConfiguration apiVersion: mirror.openshift.io/v1alpha2 archiveSize: 20 storageConfig: local: path: /home/user/images mirror: operators: - catalog: registry.redhat.io/redhat/redhat-operator-index:v4.13 packages: - name: tempo-product channels: - name: stable additionalImages: - name: registry.redhat.io/rhosdt/tempo-rhel8@sha256:e4295f837066efb05bcc5897f31eb2bdbd81684a8c59d6f9498dd3590c62c12a - name: registry.redhat.io/rhosdt/tempo-gateway-rhel8@sha256:b62f5cedfeb5907b638f14ca6aaeea50f41642980a8a6f87b7061e88d90fac23 - name: registry.redhat.io/rhosdt/tempo-gateway-opa-rhel8@sha256:8cd134deca47d6817b26566e272e6c3f75367653d589f5c90855c59b2fab01e9 - name: registry.redhat.io/rhosdt/tempo-query-rhel8@sha256:0da43034f440b8258a48a0697ba643b5643d48b615cdb882ac7f4f1f80aad08e",
"oc edit configmap tempo-operator-manager-config -n openshift-tempo-operator 1",
"data: controller_manager_config.yaml: | featureGates: httpEncryption: false grpcEncryption: false builtInCertManagement: enabled: false",
"oc rollout restart deployment.apps/tempo-operator-controller -n openshift-tempo-operator",
"kind: ImageSetConfiguration apiVersion: mirror.openshift.io/v1alpha2 archiveSize: 20 storageConfig: local: path: /home/user/images mirror: operators: - catalog: registry.redhat.io/redhat/redhat-operator-index:v4.13 packages: - name: tempo-product channels: - name: stable additionalImages: - name: registry.redhat.io/rhosdt/tempo-rhel8@sha256:e4295f837066efb05bcc5897f31eb2bdbd81684a8c59d6f9498dd3590c62c12a - name: registry.redhat.io/rhosdt/tempo-gateway-rhel8@sha256:b62f5cedfeb5907b638f14ca6aaeea50f41642980a8a6f87b7061e88d90fac23 - name: registry.redhat.io/rhosdt/tempo-gateway-opa-rhel8@sha256:8cd134deca47d6817b26566e272e6c3f75367653d589f5c90855c59b2fab01e9 - name: registry.redhat.io/rhosdt/tempo-query-rhel8@sha256:0da43034f440b8258a48a0697ba643b5643d48b615cdb882ac7f4f1f80aad08e",
"spec: mode: deployment config: | exporters: jaeger: endpoint: jaeger-production-collector-headless.tracing-system.svc:14250 ca_file: \"/var/run/secrets/kubernetes.io/serviceaccount/service-ca.crt\"",
"spec: mode: deployment config: | exporters: jaeger: endpoint: jaeger-production-collector-headless.tracing-system.svc:14250 tls: ca_file: \"/var/run/secrets/kubernetes.io/serviceaccount/service-ca.crt\"",
"oc login --username=<your_username>",
"oc apply -f - << EOF apiVersion: project.openshift.io/v1 kind: Project metadata: labels: kubernetes.io/metadata.name: openshift-tempo-operator openshift.io/cluster-monitoring: \"true\" name: openshift-tempo-operator EOF",
"oc apply -f - << EOF apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: openshift-tempo-operator namespace: openshift-tempo-operator spec: upgradeStrategy: Default EOF",
"oc apply -f - << EOF apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: tempo-product namespace: openshift-tempo-operator spec: channel: stable installPlanApproval: Automatic name: tempo-product source: redhat-operators sourceNamespace: openshift-marketplace EOF",
"oc get csv -n openshift-tempo-operator",
"apiVersion: v1 kind: Secret metadata: name: minio-test stringData: endpoint: http://minio.minio.svc:9000 bucket: tempo access_key_id: tempo access_key_secret: <secret> type: Opaque",
"apiVersion: tempo.grafana.com/v1alpha1 kind: TempoStack metadata: name: sample namespace: <project_of_tempostack_instance> spec: storageSize: <value>Gi 1 storage: secret: 2 name: <secret_name> 3 type: <secret_provider> 4 tls: 5 enabled: true caName: <ca_certificate_configmap_name> 6 template: queryFrontend: jaegerQuery: enabled: true ingress: route: termination: edge type: route resources: 7 total: limits: memory: <value>Gi cpu: <value>m",
"apiVersion: tempo.grafana.com/v1alpha1 kind: TempoStack metadata: name: simplest namespace: <project_of_tempostack_instance> spec: storageSize: 1Gi storage: 1 secret: name: minio-test type: s3 resources: total: limits: memory: 2Gi cpu: 2000m template: queryFrontend: jaegerQuery: 2 enabled: true ingress: route: termination: edge type: route",
"oc login --username=<your_username>",
"oc apply -f - << EOF apiVersion: project.openshift.io/v1 kind: Project metadata: name: <project_of_tempostack_instance> EOF",
"oc apply -f - << EOF <object_storage_secret> EOF",
"apiVersion: v1 kind: Secret metadata: name: minio-test stringData: endpoint: http://minio.minio.svc:9000 bucket: tempo access_key_id: tempo access_key_secret: <secret> type: Opaque",
"apiVersion: tempo.grafana.com/v1alpha1 kind: TempoStack metadata: name: sample namespace: <project_of_tempostack_instance> spec: storageSize: <value>Gi 1 storage: secret: 2 name: <secret_name> 3 type: <secret_provider> 4 tls: 5 enabled: true caName: <ca_certificate_configmap_name> 6 template: queryFrontend: jaegerQuery: enabled: true ingress: route: termination: edge type: route resources: 7 total: limits: memory: <value>Gi cpu: <value>m",
"apiVersion: tempo.grafana.com/v1alpha1 kind: TempoStack metadata: name: simplest namespace: <project_of_tempostack_instance> spec: storageSize: 1Gi storage: 1 secret: name: minio-test type: s3 resources: total: limits: memory: 2Gi cpu: 2000m template: queryFrontend: jaegerQuery: 2 enabled: true ingress: route: termination: edge type: route",
"oc apply -f - << EOF <tempostack_cr> EOF",
"oc get tempostacks.tempo.grafana.com simplest -o yaml",
"oc get pods",
"oc get route",
"apiVersion: v1 kind: Secret metadata: name: minio-test stringData: endpoint: http://minio.minio.svc:9000 bucket: tempo access_key_id: tempo access_key_secret: <secret> type: Opaque",
"apiVersion: tempo.grafana.com/v1alpha1 kind: TempoMonolithic metadata: name: <metadata_name> namespace: <project_of_tempomonolithic_instance> spec: storage: traces: backend: <supported_storage_type> 1 size: <value>Gi 2 s3: 3 secret: <secret_name> 4 tls: 5 enabled: true caName: <ca_certificate_configmap_name> 6 jaegerui: enabled: true 7 route: enabled: true 8 resources: 9 total: limits: memory: <value>Gi cpu: <value>m",
"oc login --username=<your_username>",
"oc apply -f - << EOF apiVersion: project.openshift.io/v1 kind: Project metadata: name: <project_of_tempomonolithic_instance> EOF",
"oc apply -f - << EOF <object_storage_secret> EOF",
"apiVersion: v1 kind: Secret metadata: name: minio-test stringData: endpoint: http://minio.minio.svc:9000 bucket: tempo access_key_id: tempo access_key_secret: <secret> type: Opaque",
"apiVersion: tempo.grafana.com/v1alpha1 kind: TempoMonolithic metadata: name: <metadata_name> namespace: <project_of_tempomonolithic_instance> spec: storage: traces: backend: <supported_storage_type> 1 size: <value>Gi 2 s3: 3 secret: <secret_name> 4 tls: 5 enabled: true caName: <ca_certificate_configmap_name> 6 jaegerui: enabled: true 7 route: enabled: true 8 resources: 9 total: limits: memory: <value>Gi cpu: <value>m",
"oc apply -f - << EOF <tempomonolithic_cr> EOF",
"oc get tempomonolithic.tempo.grafana.com <metadata_name_of_tempomonolithic_cr> -o yaml",
"oc get pods",
"oc get route",
"{ \"Version\": \"2012-10-17\", \"Statement\": [ { \"Effect\": \"Allow\", \"Principal\": { \"Federated\": \"arn:aws:iam::USD{<aws_account_id>}:oidc-provider/USD{<oidc_provider>}\" 1 }, \"Action\": \"sts:AssumeRoleWithWebIdentity\", \"Condition\": { \"StringEquals\": { \"USD{OIDC_PROVIDER}:sub\": [ \"system:serviceaccount:USD{<openshift_project_for_tempostack>}:tempo-USD{<tempostack_cr_name>}\" 2 \"system:serviceaccount:USD{<openshift_project_for_tempostack>}:tempo-USD{<tempostack_cr_name>}-query-frontend\" ] } } } ] }",
"aws iam create-role --role-name \"tempo-s3-access\" --assume-role-policy-document \"file:///tmp/trust.json\" --query Role.Arn --output text",
"aws iam attach-role-policy --role-name \"tempo-s3-access\" --policy-arn \"arn:aws:iam::aws:policy/AmazonS3FullAccess\"",
"apiVersion: v1 kind: Secret metadata: name: minio-test stringData: bucket: <s3_bucket_name> region: <s3_region> role_arn: <s3_role_arn> type: Opaque",
"ibmcloud resource service-key-create <tempo_bucket> Writer --instance-name <tempo_bucket> --parameters '{\"HMAC\":true}'",
"oc -n <namespace> create secret generic <ibm_cos_secret> --from-literal=bucket=\"<tempo_bucket>\" --from-literal=endpoint=\"<ibm_bucket_endpoint>\" --from-literal=access_key_id=\"<ibm_bucket_access_key>\" --from-literal=access_key_secret=\"<ibm_bucket_secret_key>\"",
"apiVersion: v1 kind: Secret metadata: name: <ibm_cos_secret> stringData: bucket: <tempo_bucket> endpoint: <ibm_bucket_endpoint> access_key_id: <ibm_bucket_access_key> access_key_secret: <ibm_bucket_secret_key> type: Opaque",
"apiVersion: tempo.grafana.com/v1alpha1 kind: TempoStack spec: storage: secret: name: <ibm_cos_secret> 1 type: s3",
"apiVersion: tempo.grafana.com/v1alpha1 1 kind: TempoStack 2 metadata: 3 name: <name> 4 spec: 5 storage: {} 6 resources: {} 7 replicationFactor: 1 8 retention: {} 9 template: distributor: {} 10 ingester: {} 11 compactor: {} 12 querier: {} 13 queryFrontend: {} 14 gateway: {} 15 limits: 16 global: ingestion: {} 17 query: {} 18 observability: 19 grafana: {} metrics: {} tracing: {} search: {} 20 managementState: managed 21",
"apiVersion: tempo.grafana.com/v1alpha1 kind: TempoStack metadata: name: simplest spec: storage: secret: name: minio type: s3 storageSize: 200M resources: total: limits: memory: 2Gi cpu: 2000m template: queryFrontend: jaegerQuery: enabled: true ingress: route: termination: edge type: route",
"kind: OpenTelemetryCollector apiVersion: opentelemetry.io/v1alpha1 metadata: name: otel spec: mode: deployment observability: metrics: enableMetrics: true 1 config: | connectors: spanmetrics: 2 metrics_flush_interval: 15s receivers: otlp: 3 protocols: grpc: http: exporters: prometheus: 4 endpoint: 0.0.0.0:8889 add_metric_suffixes: false resource_to_telemetry_conversion: enabled: true # by default resource attributes are dropped otlp: endpoint: \"tempo-simplest-distributor:4317\" tls: insecure: true service: pipelines: traces: receivers: [otlp] exporters: [otlp, spanmetrics] 5 metrics: receivers: [spanmetrics] 6 exporters: [prometheus]",
"apiVersion: tempo.grafana.com/v1alpha1 kind: TempoStack metadata: name: redmetrics spec: storage: secret: name: minio-test type: s3 storageSize: 1Gi template: gateway: enabled: false queryFrontend: jaegerQuery: enabled: true monitorTab: enabled: true 1 prometheusEndpoint: https://thanos-querier.openshift-monitoring.svc.cluster.local:9091 2 redMetricsNamespace: \"\" 3 ingress: type: route",
"apiVersion: monitoring.coreos.com/v1 kind: PrometheusRule metadata: name: span-red spec: groups: - name: server-side-latency rules: - alert: SpanREDFrontendAPIRequestLatency expr: histogram_quantile(0.95, sum(rate(duration_bucket{service_name=\"frontend\", span_kind=\"SPAN_KIND_SERVER\"}[5m])) by (le, service_name, span_name)) > 2000 1 labels: severity: Warning annotations: summary: \"High request latency on {{USDlabels.service_name}} and {{USDlabels.span_name}}\" description: \"{{USDlabels.instance}} has 95th request latency above 2s (current value: {{USDvalue}}s)\"",
"apiVersion: tempo.grafana.com/v1alpha1 kind: TempoStack spec: template: distributor: tls: enabled: true 1 certName: <tls_secret> 2 caName: <ca_name> 3",
"apiVersion: tempo.grafana.com/v1alpha1 kind: TempoStack spec: template: distributor: tls: enabled: true 1",
"apiVersion: tempo.grafana.com/v1alpha1 kind: TempoMonolithic spec: ingestion: otlp: grpc: tls: enabled: true 1 certName: <tls_secret> 2 caName: <ca_name> 3",
"apiVersion: tempo.grafana.com/v1alpha1 kind: TempoMonolithic spec: ingestion: otlp: grpc: tls: enabled: true http: tls: enabled: true 1",
"apiVersion: tempo.grafana.com/v1alpha1 kind: TempoStack metadata: name: simplest namespace: chainsaw-multitenancy spec: storage: secret: name: minio type: s3 storageSize: 1Gi resources: total: limits: memory: 2Gi cpu: 2000m tenants: mode: openshift 1 authentication: 2 - tenantName: dev 3 tenantId: \"1610b0c3-c509-4592-a256-a1871353dbfa\" 4 - tenantName: prod tenantId: \"1610b0c3-c509-4592-a256-a1871353dbfb\" template: gateway: enabled: true 5 queryFrontend: jaegerQuery: enabled: true",
"apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: tempostack-traces-reader rules: - apiGroups: - 'tempo.grafana.com' resources: 1 - dev - prod resourceNames: - traces verbs: - 'get' 2 --- apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: tempostack-traces-reader roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: tempostack-traces-reader subjects: - kind: Group apiGroup: rbac.authorization.k8s.io name: system:authenticated 3",
"apiVersion: v1 kind: ServiceAccount metadata: name: otel-collector 1 namespace: otel --- apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: tempostack-traces-write rules: - apiGroups: - 'tempo.grafana.com' resources: 2 - dev resourceNames: - traces verbs: - 'create' 3 --- apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: tempostack-traces roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: tempostack-traces-write subjects: - kind: ServiceAccount name: otel-collector namespace: otel",
"apiVersion: opentelemetry.io/v1alpha1 kind: OpenTelemetryCollector metadata: name: cluster-collector namespace: tracing-system spec: mode: deployment serviceAccount: otel-collector config: | extensions: bearertokenauth: filename: \"/var/run/secrets/kubernetes.io/serviceaccount/token\" exporters: otlp/dev: 1 endpoint: tempo-simplest-gateway.tempo.svc.cluster.local:8090 tls: insecure: false ca_file: \"/var/run/secrets/kubernetes.io/serviceaccount/service-ca.crt\" auth: authenticator: bearertokenauth headers: X-Scope-OrgID: \"dev\" otlphttp/dev: 2 endpoint: https://tempo-simplest-gateway.chainsaw-multitenancy.svc.cluster.local:8080/api/traces/v1/dev tls: insecure: false ca_file: \"/var/run/secrets/kubernetes.io/serviceaccount/service-ca.crt\" auth: authenticator: bearertokenauth headers: X-Scope-OrgID: \"dev\" service: extensions: [bearertokenauth] pipelines: traces: exporters: [otlp/dev] 3",
"apiVersion: tempo.grafana.com/v1alpha1 kind: TempoStack metadata: name: <name> spec: observability: metrics: createServiceMonitors: true",
"apiVersion: tempo.grafana.com/v1alpha1 kind: TempoStack metadata: name: <name> spec: observability: metrics: createPrometheusRules: true",
"oc adm must-gather --image=ghcr.io/grafana/tempo-operator/must-gather -- /usr/bin/must-gather --operator-namespace <operator_namespace> 1",
"oc login --username=<your_username>",
"oc get deployments -n <project_of_tempostack_instance>",
"oc delete tempo <tempostack_instance_name> -n <project_of_tempostack_instance>",
"oc get deployments -n <project_of_tempostack_instance>",
"apiVersion: jaegertracing.io/v1 kind: Jaeger metadata: name: MyConfigFile spec: strategy: production 1",
"oc login --username=<NAMEOFUSER> https://<HOSTNAME>:8443",
"oc new-project tracing-system",
"apiVersion: jaegertracing.io/v1 kind: Jaeger metadata: name: jaeger-all-in-one-inmemory",
"oc create -n tracing-system -f jaeger.yaml",
"oc get pods -n tracing-system -w",
"NAME READY STATUS RESTARTS AGE jaeger-all-in-one-inmemory-cdff7897b-qhfdx 2/2 Running 0 24s",
"apiVersion: jaegertracing.io/v1 kind: Jaeger metadata: name: jaeger-production namespace: spec: strategy: production ingress: security: oauth-proxy storage: type: elasticsearch elasticsearch: nodeCount: 3 redundancyPolicy: SingleRedundancy esIndexCleaner: enabled: true numberOfDays: 7 schedule: 55 23 * * * esRollover: schedule: '*/30 * * * *'",
"oc login --username=<NAMEOFUSER> https://<HOSTNAME>:8443",
"oc new-project tracing-system",
"oc create -n tracing-system -f jaeger-production.yaml",
"oc get pods -n tracing-system -w",
"NAME READY STATUS RESTARTS AGE elasticsearch-cdm-jaegersystemjaegerproduction-1-6676cf568gwhlw 2/2 Running 0 10m elasticsearch-cdm-jaegersystemjaegerproduction-2-bcd4c8bf5l6g6w 2/2 Running 0 10m elasticsearch-cdm-jaegersystemjaegerproduction-3-844d6d9694hhst 2/2 Running 0 10m jaeger-production-collector-94cd847d-jwjlj 1/1 Running 3 8m32s jaeger-production-query-5cbfbd499d-tv8zf 3/3 Running 3 8m32s",
"apiVersion: jaegertracing.io/v1 kind: Jaeger metadata: name: jaeger-streaming spec: strategy: streaming collector: options: kafka: producer: topic: jaeger-spans brokers: my-cluster-kafka-brokers.kafka:9092 1 storage: type: elasticsearch ingester: options: kafka: consumer: topic: jaeger-spans brokers: my-cluster-kafka-brokers.kafka:9092",
"oc login --username=<NAMEOFUSER> https://<HOSTNAME>:8443",
"oc new-project tracing-system",
"oc create -n tracing-system -f jaeger-streaming.yaml",
"oc get pods -n tracing-system -w",
"NAME READY STATUS RESTARTS AGE elasticsearch-cdm-jaegersystemjaegerstreaming-1-697b66d6fcztcnn 2/2 Running 0 5m40s elasticsearch-cdm-jaegersystemjaegerstreaming-2-5f4b95c78b9gckz 2/2 Running 0 5m37s elasticsearch-cdm-jaegersystemjaegerstreaming-3-7b6d964576nnz97 2/2 Running 0 5m5s jaeger-streaming-collector-6f6db7f99f-rtcfm 1/1 Running 0 80s jaeger-streaming-entity-operator-6b6d67cc99-4lm9q 3/3 Running 2 2m18s jaeger-streaming-ingester-7d479847f8-5h8kc 1/1 Running 0 80s jaeger-streaming-kafka-0 2/2 Running 0 3m1s jaeger-streaming-query-65bf5bb854-ncnc7 3/3 Running 0 80s jaeger-streaming-zookeeper-0 2/2 Running 0 3m39s",
"oc login --username=<NAMEOFUSER> https://<HOSTNAME>:6443",
"export JAEGER_URL=USD(oc get route -n tracing-system jaeger -o jsonpath='{.spec.host}')",
"apiVersion: jaegertracing.io/v1 kind: Jaeger metadata: name: name spec: strategy: <deployment_strategy> allInOne: options: {} resources: {} agent: options: {} resources: {} collector: options: {} resources: {} sampling: options: {} storage: type: options: {} query: options: {} resources: {} ingester: options: {} resources: {} options: {}",
"apiVersion: jaegertracing.io/v1 kind: Jaeger metadata: name: jaeger-all-in-one-inmemory",
"collector: replicas:",
"spec: collector: options: {}",
"options: collector: num-workers:",
"options: collector: queue-size:",
"options: kafka: producer: topic: jaeger-spans",
"options: kafka: producer: brokers: my-cluster-kafka-brokers.kafka:9092",
"options: log-level:",
"options: otlp: enabled: true grpc: host-port: 4317 max-connection-age: 0s max-connection-age-grace: 0s max-message-size: 4194304 tls: enabled: false cert: /path/to/cert.crt cipher-suites: \"TLS_AES_256_GCM_SHA384,TLS_CHACHA20_POLY1305_SHA256\" client-ca: /path/to/cert.ca reload-interval: 0s min-version: 1.2 max-version: 1.3",
"options: otlp: enabled: true http: cors: allowed-headers: [<header-name>[, <header-name>]*] allowed-origins: * host-port: 4318 max-connection-age: 0s max-connection-age-grace: 0s max-message-size: 4194304 read-timeout: 0s read-header-timeout: 2s idle-timeout: 0s tls: enabled: false cert: /path/to/cert.crt cipher-suites: \"TLS_AES_256_GCM_SHA384,TLS_CHACHA20_POLY1305_SHA256\" client-ca: /path/to/cert.ca reload-interval: 0s min-version: 1.2 max-version: 1.3",
"spec: sampling: options: {} default_strategy: service_strategy:",
"default_strategy: type: service_strategy: type:",
"default_strategy: param: service_strategy: param:",
"apiVersion: jaegertracing.io/v1 kind: Jaeger metadata: name: with-sampling spec: sampling: options: default_strategy: type: probabilistic param: 0.5 service_strategies: - service: alpha type: probabilistic param: 0.8 operation_strategies: - operation: op1 type: probabilistic param: 0.2 - operation: op2 type: probabilistic param: 0.4 - service: beta type: ratelimiting param: 5",
"spec: sampling: options: default_strategy: type: probabilistic param: 1",
"spec: storage: type:",
"storage: secretname:",
"storage: options: {}",
"storage: esIndexCleaner: enabled:",
"storage: esIndexCleaner: numberOfDays:",
"storage: esIndexCleaner: schedule:",
"elasticsearch: properties: doNotProvision:",
"elasticsearch: properties: name:",
"elasticsearch: nodeCount:",
"elasticsearch: resources: requests: cpu:",
"elasticsearch: resources: requests: memory:",
"elasticsearch: resources: limits: cpu:",
"elasticsearch: resources: limits: memory:",
"elasticsearch: redundancyPolicy:",
"elasticsearch: useCertManagement:",
"apiVersion: jaegertracing.io/v1 kind: Jaeger metadata: name: simple-prod spec: strategy: production storage: type: elasticsearch elasticsearch: nodeCount: 3 resources: requests: cpu: 1 memory: 16Gi limits: memory: 16Gi",
"apiVersion: jaegertracing.io/v1 kind: Jaeger metadata: name: simple-prod spec: strategy: production storage: type: elasticsearch elasticsearch: nodeCount: 1 storage: 1 storageClassName: gp2 size: 5Gi resources: requests: cpu: 200m memory: 4Gi limits: memory: 4Gi redundancyPolicy: ZeroRedundancy",
"es: server-urls:",
"es: max-doc-count:",
"es: max-num-spans:",
"es: max-span-age:",
"es: sniffer:",
"es: sniffer-tls-enabled:",
"es: timeout:",
"es: username:",
"es: password:",
"es: version:",
"es: num-replicas:",
"es: num-shards:",
"es: create-index-templates:",
"es: index-prefix:",
"es: bulk: actions:",
"es: bulk: flush-interval:",
"es: bulk: size:",
"es: bulk: workers:",
"es: tls: ca:",
"es: tls: cert:",
"es: tls: enabled:",
"es: tls: key:",
"es: tls: server-name:",
"es: token-file:",
"es-archive: bulk: actions:",
"es-archive: bulk: flush-interval:",
"es-archive: bulk: size:",
"es-archive: bulk: workers:",
"es-archive: create-index-templates:",
"es-archive: enabled:",
"es-archive: index-prefix:",
"es-archive: max-doc-count:",
"es-archive: max-num-spans:",
"es-archive: max-span-age:",
"es-archive: num-replicas:",
"es-archive: num-shards:",
"es-archive: password:",
"es-archive: server-urls:",
"es-archive: sniffer:",
"es-archive: sniffer-tls-enabled:",
"es-archive: timeout:",
"es-archive: tls: ca:",
"es-archive: tls: cert:",
"es-archive: tls: enabled:",
"es-archive: tls: key:",
"es-archive: tls: server-name:",
"es-archive: token-file:",
"es-archive: username:",
"es-archive: version:",
"apiVersion: jaegertracing.io/v1 kind: Jaeger metadata: name: simple-prod spec: strategy: production storage: type: elasticsearch options: es: server-urls: https://quickstart-es-http.default.svc:9200 index-prefix: my-prefix tls: ca: /es/certificates/ca.crt secretName: tracing-secret volumeMounts: - name: certificates mountPath: /es/certificates/ readOnly: true volumes: - name: certificates secret: secretName: quickstart-es-http-certs-public",
"apiVersion: jaegertracing.io/v1 kind: Jaeger metadata: name: simple-prod spec: strategy: production storage: type: elasticsearch options: es: server-urls: https://quickstart-es-http.default.svc:9200 1 index-prefix: my-prefix tls: 2 ca: /es/certificates/ca.crt secretName: tracing-secret 3 volumeMounts: 4 - name: certificates mountPath: /es/certificates/ readOnly: true volumes: - name: certificates secret: secretName: quickstart-es-http-certs-public",
"apiVersion: logging.openshift.io/v1 kind: Elasticsearch metadata: annotations: logging.openshift.io/elasticsearch-cert-management: \"true\" logging.openshift.io/elasticsearch-cert.jaeger-custom-es: \"user.jaeger\" logging.openshift.io/elasticsearch-cert.curator-custom-es: \"system.logging.curator\" name: custom-es spec: managementState: Managed nodeSpec: resources: limits: memory: 16Gi requests: cpu: 1 memory: 16Gi nodes: - nodeCount: 3 proxyResources: {} resources: {} roles: - master - client - data storage: {} redundancyPolicy: ZeroRedundancy",
"apiVersion: jaegertracing.io/v1 kind: Jaeger metadata: name: jaeger-prod spec: strategy: production storage: type: elasticsearch elasticsearch: name: custom-es doNotProvision: true useCertManagement: true",
"spec: query: replicas:",
"spec: query: options: {}",
"options: log-level:",
"options: query: base-path:",
"apiVersion: jaegertracing.io/v1 kind: \"Jaeger\" metadata: name: \"my-jaeger\" spec: strategy: allInOne allInOne: options: log-level: debug query: base-path: /jaeger",
"spec: ingester: options: {}",
"options: deadlockInterval:",
"options: kafka: consumer: topic:",
"options: kafka: consumer: brokers:",
"options: log-level:",
"apiVersion: jaegertracing.io/v1 kind: Jaeger metadata: name: simple-streaming spec: strategy: streaming collector: options: kafka: producer: topic: jaeger-spans brokers: my-cluster-kafka-brokers.kafka:9092 ingester: options: kafka: consumer: topic: jaeger-spans brokers: my-cluster-kafka-brokers.kafka:9092 ingester: deadlockInterval: 5 storage: type: elasticsearch options: es: server-urls: http://elasticsearch:9200",
"apiVersion: apps/v1 kind: Deployment metadata: name: myapp annotations: \"sidecar.jaegertracing.io/inject\": \"true\" 1 spec: selector: matchLabels: app: myapp template: metadata: labels: app: myapp spec: containers: - name: myapp image: acme/myapp:myversion",
"apiVersion: apps/v1 kind: StatefulSet metadata: name: example-statefulset namespace: example-ns labels: app: example-app spec: spec: containers: - name: example-app image: acme/myapp:myversion ports: - containerPort: 8080 protocol: TCP - name: jaeger-agent image: registry.redhat.io/distributed-tracing/jaeger-agent-rhel7:<version> # The agent version must match the Operator version imagePullPolicy: IfNotPresent ports: - containerPort: 5775 name: zk-compact-trft protocol: UDP - containerPort: 5778 name: config-rest protocol: TCP - containerPort: 6831 name: jg-compact-trft protocol: UDP - containerPort: 6832 name: jg-binary-trft protocol: UDP - containerPort: 14271 name: admin-http protocol: TCP args: - --reporter.grpc.host-port=dns:///jaeger-collector-headless.example-ns:14250 - --reporter.type=grpc",
"oc login --username=<your_username>",
"oc login --username=<NAMEOFUSER>",
"oc get deployments -n <jaeger-project>",
"oc get deployments -n openshift-operators",
"oc get deployments -n openshift-operators",
"NAME READY UP-TO-DATE AVAILABLE AGE elasticsearch-operator 1/1 1 1 93m jaeger-operator 1/1 1 1 49m jaeger-test 1/1 1 1 7m23s jaeger-test2 1/1 1 1 6m48s tracing1 1/1 1 1 7m8s tracing2 1/1 1 1 35m",
"oc delete jaeger <deployment-name> -n <jaeger-project>",
"oc delete jaeger tracing2 -n openshift-operators",
"oc get deployments -n <jaeger-project>",
"oc get deployments -n openshift-operators",
"NAME READY UP-TO-DATE AVAILABLE AGE elasticsearch-operator 1/1 1 1 94m jaeger-operator 1/1 1 1 50m jaeger-test 1/1 1 1 8m14s jaeger-test2 1/1 1 1 7m39s tracing1 1/1 1 1 7m59s"
]
| https://docs.redhat.com/en/documentation/openshift_container_platform/4.14/html-single/distributed_tracing/index |
Chapter 16. InsightsOperator [operator.openshift.io/v1] | Chapter 16. InsightsOperator [operator.openshift.io/v1] Description InsightsOperator holds cluster-wide information about the Insights Operator. Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object Required spec 16.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec object spec is the specification of the desired behavior of the Insights. status object status is the most recently observed status of the Insights operator. 16.1.1. .spec Description spec is the specification of the desired behavior of the Insights. Type object Property Type Description logLevel string logLevel is an intent based logging for an overall component. It does not give fine grained control, but it is a simple way to manage coarse grained logging choices that operators have to interpret for their operands. Valid values are: "Normal", "Debug", "Trace", "TraceAll". Defaults to "Normal". managementState string managementState indicates whether and how the operator should manage the component observedConfig `` observedConfig holds a sparse config that controller has observed from the cluster state. It exists in spec because it is an input to the level for the operator operatorLogLevel string operatorLogLevel is an intent based logging for the operator itself. It does not give fine grained control, but it is a simple way to manage coarse grained logging choices that operators have to interpret for themselves. Valid values are: "Normal", "Debug", "Trace", "TraceAll". Defaults to "Normal". unsupportedConfigOverrides `` unsupportedConfigOverrides holds a sparse config that will override any previously set options. It only needs to be the fields to override it will end up overlaying in the following order: 1. hardcoded defaults 2. observedConfig 3. unsupportedConfigOverrides 16.1.2. .status Description status is the most recently observed status of the Insights operator. Type object Property Type Description conditions array conditions is a list of conditions and their status conditions[] object OperatorCondition is just the standard condition fields. gatherStatus object gatherStatus provides basic information about the last Insights data gathering. When omitted, this means no data gathering has taken place yet. generations array generations are used to determine when an item needs to be reconciled or has changed in a way that needs a reaction. generations[] object GenerationStatus keeps track of the generation for a given resource so that decisions about forced updates can be made. insightsReport object insightsReport provides general Insights analysis results. When omitted, this means no data gathering has taken place yet. 
observedGeneration integer observedGeneration is the last generation change you've dealt with readyReplicas integer readyReplicas indicates how many replicas are ready and at the desired state version string version is the level this availability applies to 16.1.3. .status.conditions Description conditions is a list of conditions and their status Type array 16.1.4. .status.conditions[] Description OperatorCondition is just the standard condition fields. Type object Property Type Description lastTransitionTime string message string reason string status string type string 16.1.5. .status.gatherStatus Description gatherStatus provides basic information about the last Insights data gathering. When omitted, this means no data gathering has taken place yet. Type object Property Type Description gatherers array gatherers is a list of active gatherers (and their statuses) in the last gathering. gatherers[] object gathererStatus represents information about a particular data gatherer. lastGatherDuration string lastGatherDuration is the total time taken to process all gatherers during the last gather event. lastGatherTime string lastGatherTime is the last time when Insights data gathering finished. An empty value means that no data has been gathered yet. 16.1.6. .status.gatherStatus.gatherers Description gatherers is a list of active gatherers (and their statuses) in the last gathering. Type array 16.1.7. .status.gatherStatus.gatherers[] Description gathererStatus represents information about a particular data gatherer. Type object Required conditions lastGatherDuration name Property Type Description conditions array conditions provide details on the status of each gatherer. conditions[] object Condition contains details for one aspect of the current state of this API Resource. --- This struct is intended for direct use as an array at the field path .status.conditions. For example, type FooStatus struct{ // Represents the observations of a foo's current state. // Known .status.conditions.type are: "Available", "Progressing", and "Degraded" // +patchMergeKey=type // +patchStrategy=merge // +listType=map // +listMapKey=type Conditions []metav1.Condition json:"conditions,omitempty" patchStrategy:"merge" patchMergeKey:"type" protobuf:"bytes,1,rep,name=conditions" // other fields } lastGatherDuration string lastGatherDuration represents the time spent gathering. name string name is the name of the gatherer. 16.1.8. .status.gatherStatus.gatherers[].conditions Description conditions provide details on the status of each gatherer. Type array 16.1.9. .status.gatherStatus.gatherers[].conditions[] Description Condition contains details for one aspect of the current state of this API Resource. --- This struct is intended for direct use as an array at the field path .status.conditions. For example, type FooStatus struct{ // Represents the observations of a foo's current state. // Known .status.conditions.type are: "Available", "Progressing", and "Degraded" // +patchMergeKey=type // +patchStrategy=merge // +listType=map // +listMapKey=type Conditions []metav1.Condition json:"conditions,omitempty" patchStrategy:"merge" patchMergeKey:"type" protobuf:"bytes,1,rep,name=conditions" // other fields } Type object Required lastTransitionTime message reason status type Property Type Description lastTransitionTime string lastTransitionTime is the last time the condition transitioned from one status to another. This should be when the underlying condition changed. 
If that is not known, then using the time when the API field changed is acceptable. message string message is a human readable message indicating details about the transition. This may be an empty string. observedGeneration integer observedGeneration represents the .metadata.generation that the condition was set based upon. For instance, if .metadata.generation is currently 12, but the .status.conditions[x].observedGeneration is 9, the condition is out of date with respect to the current state of the instance. reason string reason contains a programmatic identifier indicating the reason for the condition's last transition. Producers of specific condition types may define expected values and meanings for this field, and whether the values are considered a guaranteed API. The value should be a CamelCase string. This field may not be empty. status string status of the condition, one of True, False, Unknown. type string type of condition in CamelCase or in foo.example.com/CamelCase. --- Many .condition.type values are consistent across resources like Available, but because arbitrary conditions can be useful (see .node.status.conditions), the ability to deconflict is important. The regex it matches is (dns1123SubdomainFmt/)?(qualifiedNameFmt) 16.1.10. .status.generations Description generations are used to determine when an item needs to be reconciled or has changed in a way that needs a reaction. Type array 16.1.11. .status.generations[] Description GenerationStatus keeps track of the generation for a given resource so that decisions about forced updates can be made. Type object Property Type Description group string group is the group of the thing you're tracking hash string hash is an optional field set for resources without generation that are content sensitive like secrets and configmaps lastGeneration integer lastGeneration is the last generation of the workload controller involved name string name is the name of the thing you're tracking namespace string namespace is where the thing you're tracking is resource string resource is the resource type of the thing you're tracking 16.1.12. .status.insightsReport Description insightsReport provides general Insights analysis results. When omitted, this means no data gathering has taken place yet. Type object Property Type Description downloadedAt string downloadedAt is the time when the last Insights report was downloaded. An empty value means that there has not been any Insights report downloaded yet and it usually appears in disconnected clusters (or clusters when the Insights data gathering is disabled). healthChecks array healthChecks provides basic information about active Insights health checks in a cluster. healthChecks[] object healthCheck represents an Insights health check attributes. 16.1.13. .status.insightsReport.healthChecks Description healthChecks provides basic information about active Insights health checks in a cluster. Type array 16.1.14. .status.insightsReport.healthChecks[] Description healthCheck represents an Insights health check attributes. Type object Required advisorURI description state totalRisk Property Type Description advisorURI string advisorURI provides the URL link to the Insights Advisor. description string description provides basic description of the healtcheck. state string state determines what the current state of the health check is. Health check is enabled by default and can be disabled by the user in the Insights advisor user interface. totalRisk integer totalRisk of the healthcheck. 
Indicator of the total risk posed by the detected issue; combination of impact and likelihood. The values can be from 1 to 4, and the higher the number, the more important the issue. 16.2. API endpoints The following API endpoints are available: /apis/operator.openshift.io/v1/insightsoperators DELETE : delete collection of InsightsOperator GET : list objects of kind InsightsOperator POST : create an InsightsOperator /apis/operator.openshift.io/v1/insightsoperators/{name} DELETE : delete an InsightsOperator GET : read the specified InsightsOperator PATCH : partially update the specified InsightsOperator PUT : replace the specified InsightsOperator /apis/operator.openshift.io/v1/insightsoperators/{name}/scale GET : read scale of the specified InsightsOperator PATCH : partially update scale of the specified InsightsOperator PUT : replace scale of the specified InsightsOperator /apis/operator.openshift.io/v1/insightsoperators/{name}/status GET : read status of the specified InsightsOperator PATCH : partially update status of the specified InsightsOperator PUT : replace status of the specified InsightsOperator 16.2.1. /apis/operator.openshift.io/v1/insightsoperators HTTP method DELETE Description delete collection of InsightsOperator Table 16.1. HTTP responses HTTP code Reponse body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list objects of kind InsightsOperator Table 16.2. HTTP responses HTTP code Reponse body 200 - OK InsightsOperatorList schema 401 - Unauthorized Empty HTTP method POST Description create an InsightsOperator Table 16.3. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 16.4. Body parameters Parameter Type Description body InsightsOperator schema Table 16.5. HTTP responses HTTP code Reponse body 200 - OK InsightsOperator schema 201 - Created InsightsOperator schema 202 - Accepted InsightsOperator schema 401 - Unauthorized Empty 16.2.2. /apis/operator.openshift.io/v1/insightsoperators/{name} Table 16.6. Global path parameters Parameter Type Description name string name of the InsightsOperator HTTP method DELETE Description delete an InsightsOperator Table 16.7. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. 
An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed Table 16.8. HTTP responses HTTP code Reponse body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified InsightsOperator Table 16.9. HTTP responses HTTP code Reponse body 200 - OK InsightsOperator schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified InsightsOperator Table 16.10. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 16.11. HTTP responses HTTP code Reponse body 200 - OK InsightsOperator schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified InsightsOperator Table 16.12. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 16.13. Body parameters Parameter Type Description body InsightsOperator schema Table 16.14. HTTP responses HTTP code Reponse body 200 - OK InsightsOperator schema 201 - Created InsightsOperator schema 401 - Unauthorized Empty 16.2.3. 
/apis/operator.openshift.io/v1/insightsoperators/{name}/scale Table 16.15. Global path parameters Parameter Type Description name string name of the InsightsOperator HTTP method GET Description read scale of the specified InsightsOperator Table 16.16. HTTP responses HTTP code Reponse body 200 - OK Scale schema 401 - Unauthorized Empty HTTP method PATCH Description partially update scale of the specified InsightsOperator Table 16.17. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 16.18. HTTP responses HTTP code Reponse body 200 - OK Scale schema 401 - Unauthorized Empty HTTP method PUT Description replace scale of the specified InsightsOperator Table 16.19. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 16.20. Body parameters Parameter Type Description body Scale schema Table 16.21. HTTP responses HTTP code Reponse body 200 - OK Scale schema 201 - Created Scale schema 401 - Unauthorized Empty 16.2.4. /apis/operator.openshift.io/v1/insightsoperators/{name}/status Table 16.22. 
Global path parameters Parameter Type Description name string name of the InsightsOperator HTTP method GET Description read status of the specified InsightsOperator Table 16.23. HTTP responses HTTP code Reponse body 200 - OK InsightsOperator schema 401 - Unauthorized Empty HTTP method PATCH Description partially update status of the specified InsightsOperator Table 16.24. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 16.25. HTTP responses HTTP code Reponse body 200 - OK InsightsOperator schema 401 - Unauthorized Empty HTTP method PUT Description replace status of the specified InsightsOperator Table 16.26. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 16.27. Body parameters Parameter Type Description body InsightsOperator schema Table 16.28. HTTP responses HTTP code Reponse body 200 - OK InsightsOperator schema 201 - Created InsightsOperator schema 401 - Unauthorized Empty | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.15/html/operator_apis/insightsoperator-operator-openshift-io-v1 |
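For orientation, the schema above can be exercised with a minimal custom resource. The following sketch uses only the spec fields documented in this chapter (logLevel and managementState); the instance name cluster is an assumption rather than something stated here, so confirm the actual resource name before applying anything:

apiVersion: operator.openshift.io/v1
kind: InsightsOperator
metadata:
  name: cluster            # assumed singleton instance name
spec:
  logLevel: Normal         # one of Normal, Debug, Trace, TraceAll
  managementState: Managed

Under the same assumption, the status endpoint described in section 16.2.4 corresponds to a read such as oc get insightsoperator cluster -o yaml.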
Chapter 5. Installing AMQ Interconnect | Chapter 5. Installing AMQ Interconnect You can deploy AMQ Interconnect as a single standalone router, or as multiple routers connected together in a router network. Router networks may represent any arbitrary topology, enabling you to design the network to best fit your requirements. With AMQ Interconnect, the router network topology is independent of the message routing. This means that messaging clients always experience the same message routing behavior regardless of the underlying network topology. Even in a multi-site or hybrid cloud router network, the connected endpoints behave as if they were connected to a single, logical router. To create the router network topology, complete the following: Review the deployment guidelines . You should understand the different router operating modes you can deploy in your topology, and be aware of security requirements for the interior portion of the router network. Install AMQ Interconnect on the host . If you are creating a router network with multiple routers, repeat this step on each host. Prepare the router configurations . After installing AMQ Interconnect, configure it to define how it should connect to other routers and endpoints, and how it should operate. Start the routers . After the routers are configured, start them so that they can connect to each other and begin routing messages. 5.1. Installing AMQ Interconnect on Red Hat Enterprise Linux AMQ Interconnect is distributed as a set of RPM packages, which are available through your Red Hat subscription. Procedure Ensure your subscription has been activated and your system is registered. For more information about using the Customer Portal to activate your Red Hat subscription and register your system for packages, see Appendix A, Using your subscription . Subscribe to the required repositories: Red Hat Enterprise Linux 6 Red Hat Enterprise Linux 7 Red Hat Enterprise Linux 8 Use the yum or dnf command to install the qpid-dispatch-router , qpid-dispatch-tools , and qpid-dispatch-console packages and their dependencies: Use the which command to verify that the qdrouterd executable is present. The qdrouterd executable should be located at /usr/sbin/qdrouterd . 5.2. Preparing router configurations After installing AMQ Interconnect, configure it to define how it should connect to other routers and endpoints, and how it should operate. If you are creating a router network, complete this workflow for each router in the network. Prerequisites AMQ Interconnect is installed on the host. Procedure Configure essential router properties . To participate in a router network, a router must be configured with a unique ID and an operating mode (a minimal configuration sketch follows the command listing below). Configure network connections . Connect the router to any other routers in the router network. Repeat this step for each additional router to which you want to connect this router. If the router should connect with an AMQP client, configure a client connection. If the router should connect to an external AMQP container (such as a message broker), configure the connection. Secure each of the connections that you configured in the previous step. (Optional) Configure any additional properties. These properties should be configured the same way on each router. Therefore, you should only configure each one once, and then copy the configuration to each additional router in the router network. Authorization If necessary, configure policies to control which messaging resources clients are able to access on the router network.
Routing AMQ Interconnect automatically routes messages without any configuration: clients can send messages to the router network, and the router automatically routes them to their destinations. However, you can configure the routing to meet your exact requirements. You can configure the routing patterns to be used for certain addresses, create waypoints and autolinks to route messages through broker queues, and create link routes to connect clients to brokers. Logging You can set the default logging configuration to ensure that events are logged at the correct level for your environment. Repeat this workflow for each additional router that you want to add to the router network. 5.3. Starting a router You use the qdrouterd command to start a router. You can start a router in the foreground, the background, or as a service. Procedure Do one of the following: To... Enter this command... Start the router in the foreground USD qdrouterd Start the router in the background as a daemon USD qdrouterd -d Start the router as a service Red Hat Enterprise Linux 6 USD sudo service qdrouterd start Red Hat Enterprise Linux 7 and later versions USD systemctl start qdrouterd.service Note If you start the router as a service, the systemd LimitNOFILE limit affects the number of connections that can be open for the router. If you reach the limit, the router is not able to accept any more connections, and an error message is logged indicating "Too many open files". To avoid reaching this limit, increase the LimitNOFILE value for the systemd process. For more information, see How to set limits for services in RHEL 7 and systemd . | [
"sudo subscription-manager repos --enable=amq-interconnect-1-for-rhel-6-server-rpms --enable=amq-clients-2-for-rhel-6-server-rpms",
"sudo subscription-manager repos --enable=amq-interconnect-1-for-rhel-7-server-rpms --enable=amq-clients-2-for-rhel-7-server-rpms",
"sudo subscription-manager repos --enable=amq-interconnect-1-for-rhel-8-x86_64-rpms --enable=amq-clients-2-for-rhel-8-x86_64-rpms",
"sudo yum install qpid-dispatch-router qpid-dispatch-tools qpid-dispatch-console",
"which qdrouterd /usr/sbin/qdrouterd",
"qdrouterd",
"qdrouterd -d",
"sudo service qdrouterd start",
"systemctl start qdrouterd.service"
]
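The configuration steps in section 5.2 are easier to picture with a concrete file. The following is a minimal sketch of /etc/qpid-dispatch/qdrouterd.conf for a single interior router; the router ID, host names, and ports are placeholder assumptions, and a real deployment would also apply the security settings described above:

router {
    mode: interior             # standalone is the other common operating mode
    id: Router.A               # unique ID within the router network
}

listener {                     # accepts incoming AMQP client connections
    host: 0.0.0.0
    port: amqp
    role: normal
}

connector {                    # connects this router to another interior router
    host: router-b.example.com
    port: 5672
    role: inter-router
}

With a file like this in place, starting the router as described in section 5.3 picks up the configuration from the default path.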
| https://docs.redhat.com/en/documentation/red_hat_amq/2020.q4/html/using_amq_interconnect/installing-router-router-rhel |
14.7. Samba with CUPS Printing Support | 14.7. Samba with CUPS Printing Support Samba allows client machines to share printers connected to the Samba server, as well as send Linux documents to Windows printer shares. Although there are other printing systems that function with Red Hat Enterprise Linux, CUPS (Common UNIX Printing System) is the recommended printing system due to its close integration with Samba. 14.7.1. Simple smb.conf Settings The following example shows a very basic smb.conf configuration for CUPS support: More complicated printing configurations are possible. To add additional security and privacy for printing confidential documents, users can have their own print spooler that is not located in a public path; if a job fails, other users do not have access to the file. The print$ share contains printer drivers for clients to access if they are not available locally. The print$ share is optional and may not be required depending on the organization. Setting browseable to Yes enables the printer to be viewed in the Windows Network Neighborhood, provided the Samba server is set up correctly in the domain/workgroup. | [
"[global] load printers = Yes printing = cups printcap name = cups [printers] comment = All Printers path = /var/spool/samba/print printer = IBMInfoP browseable = No public = Yes guest ok = Yes writable = No printable = Yes printer admin = @ntadmins [printUSD] comment = Printer Drivers Share path = /var/lib/samba/drivers write list = ed, john printer admin = ed, john"
]
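To illustrate the private spooler mentioned above, the [printers] share can be given a per-user path with the %U substitution variable. This is a sketch under the assumption that the per-user directories already exist with suitable permissions; it is not a drop-in replacement for the example configuration:

[printers]
    comment = Per-user print spool
    path = /var/spool/samba/%U
    printable = Yes
    browseable = No
    guest ok = No

With this layout, a failed job's spool file remains readable only by its owner, which matches the privacy motivation described in this section.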
| https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/reference_guide/s1-samba-CUPS |
Red Hat Quay Release Notes | Red Hat Quay Release Notes Red Hat Quay 3.9 Red Hat Quay Red Hat OpenShift Documentation Team | null | https://docs.redhat.com/en/documentation/red_hat_quay/3.9/html/red_hat_quay_release_notes/index |
8.11. Using PMU to Monitor Guest Virtual Machine Performance | 8.11. Using PMU to Monitor Guest Virtual Machine Performance In Red Hat Enterprise Linux 6.4, vPMU (virtual PMU) was introduced as a Technology Preview. vPMU is based on Intel's PMU (Performance Monitoring Unit) and may only be used on Intel machines. PMU allows the tracking of statistics that indicate how a guest virtual machine is functioning. Performance monitoring allows developers to use the CPU's PMU counters with the perf tool for profiling. The virtual performance monitoring unit feature allows virtual machine users to identify sources of possible performance problems in their guest virtual machines, thereby improving the ability to profile a KVM guest virtual machine. To enable the feature, the -cpu host flag must be set. This feature is only supported with guest virtual machines running Red Hat Enterprise Linux 6 and is disabled by default. This feature only works using the Linux perf tool. Make sure the perf package is installed using the command: | [
"yum install perf ."
]
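Once perf is installed, a quick in-guest check of the vPMU might look like the following; -cpu host must be appended to the guest's normal qemu-kvm command line (with libvirt-managed guests this typically corresponds to <cpu mode='host-passthrough'/> in the domain XML), and the exact counter output varies by processor, so treat this as an illustrative sketch rather than a prescribed procedure:

# Inside the guest: read basic PMU counters for a short workload
perf stat -e cycles,instructions,cache-misses sleep 5

# Record and inspect a profile of the same workload
perf record -e cycles sleep 5
perf report

If the counters report <not supported>, the guest was most likely not started with -cpu host.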
| https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/virtualization_administration_guide/sect-perf-mon |