4.11. audit
4.11. audit 4.11.1. RHBA-2011:1739 - audit bug fix and enhancement update

Updated audit packages that fix various bugs and add several enhancements are now available for Red Hat Enterprise Linux 6. The audit packages contain the user space utilities for storing and searching the audit records which have been generated by the audit subsystem in the Linux 2.6 kernel. The audit package has been upgraded to upstream version 2.1.3, which provides a number of bug fixes and enhancements over the previous version. (BZ#731723)

Bug Fixes

BZ#715279 Previously, the audit daemon was logging messages even when configured to ignore "disk full" and "disk error" actions. With this update, audit now does nothing if it is set to ignore these actions, and no messages are logged in the described scenario.

BZ#715315 Previously, the Audit remote logging client received a "disk error" event instead of a "disk full" event from a server when the server's disk space ran out. This bug has been fixed and the logging client now returns the correct event in the described scenario.

BZ#748124 Prior to this update, the audit system was identifying the accept4() system call as the now deprecated paccept() system call. Now, the code has been fixed and audit uses the correct identifier for the accept4() system call.

BZ#709345 Previously, the "auditctl -l" command returned 0 even if it failed because of dropped capabilities. This bug has been fixed and a non-zero value is now returned if the operation is not permitted.

BZ#728475 When Kerberos support was disabled, some configuration options in the audisp-remote.conf file related to Kerberos 5 generated warning messages about GSSAPI support during boot. With this update, the options are now commented out in the described scenario and the messages are no longer returned.

BZ#700005 On i386 and IBM System z architectures, the "autrace -r /bin/ls" command returned error messages even though all relevant rules were added correctly. This bug has been fixed and no error messages about sending add rule data requests are now returned in the described scenario.

All audit users are advised to upgrade to these updated packages, which fix these bugs and add these enhancements.
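To illustrate the exit-status fix from BZ#709345, the following is a minimal shell sketch, not taken from the advisory itself. It assumes a host running the updated audit packages and a shell whose capabilities do not permit reading the audit rule set (for example, an unprivileged user); the exact error text printed by auditctl may vary by version.

```bash
#!/bin/sh
# Attempt to list the audit rules without the required capabilities.
# Before this update, auditctl printed an error yet still exited with 0;
# with the fixed packages it exits non-zero when the operation is not
# permitted, so scripts can detect the failure.
auditctl -l
status=$?

if [ "$status" -ne 0 ]; then
    echo "auditctl -l reported failure (exit status $status)"
else
    echo "auditctl -l succeeded (exit status 0)"
fi
```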
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.2_technical_notes/audit
4.373. Red Hat Enterprise Linux 6.2 Extended Update Support 6-Month Notice
4.373. Red Hat Enterprise Linux 6.2 Extended Update Support 6-Month Notice 4.373.1. RHSA-2013:1001 - Low: Red Hat Enterprise Linux 6.2 Extended Update Support 6-Month Notice

This is the 6-month notification for the retirement of Red Hat Enterprise Linux 6.2 Extended Update Support (EUS). In accordance with the Red Hat Enterprise Linux Errata Support Policy, Extended Update Support for Red Hat Enterprise Linux 6.2 will be retired on December 31, 2013, and support will no longer be provided. Accordingly, Red Hat will no longer provide updated packages, including critical impact security patches or urgent priority bug fixes, for Red Hat Enterprise Linux 6.2 EUS after that date. In addition, after December 31, 2013, technical support through Red Hat's Global Support Services will no longer be provided.

Note: This notification applies only to those customers subscribed to the Extended Update Support (EUS) channel for Red Hat Enterprise Linux 6.2.

We encourage customers to plan their migration from Red Hat Enterprise Linux 6.2 to a more recent version of Red Hat Enterprise Linux 6. As a benefit of the Red Hat subscription model, customers can use their active subscriptions to entitle any system on a currently supported Red Hat Enterprise Linux 6 release (6.3 or 6.4, for which EUS is available). Details of the Red Hat Enterprise Linux life cycle can be found here: https://access.redhat.com/support/policy/updates/errata/
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.2_technical_notes/red-hat-enterprise-linux-6.2-eus-notice
Chapter 4. ControlPlaneMachineSet [machine.openshift.io/v1]
Chapter 4. ControlPlaneMachineSet [machine.openshift.io/v1] Description ControlPlaneMachineSet ensures that a specified number of control plane machine replicas are running at any given time. Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object

4.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec object ControlPlaneMachineSet represents the configuration of the ControlPlaneMachineSet. status object ControlPlaneMachineSetStatus represents the status of the ControlPlaneMachineSet CRD.

4.1.1. .spec Description ControlPlaneMachineSet represents the configuration of the ControlPlaneMachineSet. Type object Required replicas selector template Property Type Description replicas integer Replicas defines how many Control Plane Machines should be created by this ControlPlaneMachineSet. This field is immutable and cannot be changed after cluster installation. The ControlPlaneMachineSet only operates with 3 or 5 node control planes; 3 and 5 are the only valid values for this field. selector object Label selector for Machines. Existing Machines selected by this selector will be the ones affected by this ControlPlaneMachineSet. It must match the template's labels. This field is considered immutable after creation of the resource. state string State defines whether the ControlPlaneMachineSet is Active or Inactive. When Inactive, the ControlPlaneMachineSet will not take any action on the state of the Machines within the cluster. When Active, the ControlPlaneMachineSet will reconcile the Machines and will update the Machines as necessary. Once Active, a ControlPlaneMachineSet cannot be made Inactive. To prevent further action, please remove the ControlPlaneMachineSet. strategy object Strategy defines how the ControlPlaneMachineSet will update Machines when it detects a change to the ProviderSpec. template object Template describes the Control Plane Machines that will be created by this ControlPlaneMachineSet.

4.1.2. .spec.selector Description Label selector for Machines. Existing Machines selected by this selector will be the ones affected by this ControlPlaneMachineSet. It must match the template's labels. This field is considered immutable after creation of the resource. Type object Property Type Description matchExpressions array matchExpressions is a list of label selector requirements. The requirements are ANDed. matchExpressions[] object A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. matchLabels object (string) matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed.

4.1.3. .spec.selector.matchExpressions Description matchExpressions is a list of label selector requirements. The requirements are ANDed. Type array

4.1.4. .spec.selector.matchExpressions[] Description A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Type object Required key operator Property Type Description key string key is the label key that the selector applies to. operator string operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. values array (string) values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch.

4.1.5. .spec.strategy Description Strategy defines how the ControlPlaneMachineSet will update Machines when it detects a change to the ProviderSpec. Type object Property Type Description type string Type defines the type of update strategy that should be used when updating Machines owned by the ControlPlaneMachineSet. Valid values are "RollingUpdate" and "OnDelete". The current default value is "RollingUpdate".

4.1.6. .spec.template Description Template describes the Control Plane Machines that will be created by this ControlPlaneMachineSet. Type object Required machineType Property Type Description machineType string MachineType determines the type of Machines that should be managed by the ControlPlaneMachineSet. Currently, the only valid value is machines_v1beta1_machine_openshift_io. machines_v1beta1_machine_openshift_io object OpenShiftMachineV1Beta1Machine defines the template for creating Machines from the v1beta1.machine.openshift.io API group.

4.1.7. .spec.template.machines_v1beta1_machine_openshift_io Description OpenShiftMachineV1Beta1Machine defines the template for creating Machines from the v1beta1.machine.openshift.io API group. Type object Required metadata spec Property Type Description failureDomains object FailureDomains is the list of failure domains (sometimes called availability zones) in which the ControlPlaneMachineSet should balance the Control Plane Machines. This will be merged into the ProviderSpec given in the template. This field is optional on platforms that do not require placement information. metadata object ObjectMeta is the standard object metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata Labels are required to match the ControlPlaneMachineSet selector. spec object Spec contains the desired configuration of the Control Plane Machines. The ProviderSpec within contains platform specific details for creating the Control Plane Machines. The ProviderSpec should be complete apart from the platform specific failure domain field. This will be overridden when the Machines are created based on the FailureDomains field.

4.1.8. .spec.template.machines_v1beta1_machine_openshift_io.failureDomains Description FailureDomains is the list of failure domains (sometimes called availability zones) in which the ControlPlaneMachineSet should balance the Control Plane Machines. This will be merged into the ProviderSpec given in the template. This field is optional on platforms that do not require placement information. Type object Required platform Property Type Description aws array AWS configures failure domain information for the AWS platform. aws[] object AWSFailureDomain configures failure domain information for the AWS platform. azure array Azure configures failure domain information for the Azure platform. azure[] object AzureFailureDomain configures failure domain information for the Azure platform. gcp array GCP configures failure domain information for the GCP platform. gcp[] object GCPFailureDomain configures failure domain information for the GCP platform. nutanix array nutanix configures failure domain information for the Nutanix platform. nutanix[] object NutanixFailureDomainReference refers to the failure domain of the Nutanix platform. openstack array OpenStack configures failure domain information for the OpenStack platform. openstack[] object OpenStackFailureDomain configures failure domain information for the OpenStack platform. platform string Platform identifies the platform that the FailureDomain represents. Currently supported values are AWS, Azure, GCP, OpenStack, VSphere and Nutanix.

4.1.9. .spec.template.machines_v1beta1_machine_openshift_io.failureDomains.aws Description AWS configures failure domain information for the AWS platform. Type array

4.1.10. .spec.template.machines_v1beta1_machine_openshift_io.failureDomains.aws[] Description AWSFailureDomain configures failure domain information for the AWS platform. Type object Property Type Description placement object Placement configures the placement information for this instance. subnet object Subnet is a reference to the subnet to use for this instance.

4.1.11. .spec.template.machines_v1beta1_machine_openshift_io.failureDomains.aws[].placement Description Placement configures the placement information for this instance. Type object Required availabilityZone Property Type Description availabilityZone string AvailabilityZone is the availability zone of the instance.

4.1.12. .spec.template.machines_v1beta1_machine_openshift_io.failureDomains.aws[].subnet Description Subnet is a reference to the subnet to use for this instance. Type object Required type Property Type Description arn string ARN of resource. filters array Filters is a set of filters used to identify a resource. filters[] object AWSResourceFilter is a filter used to identify an AWS resource. id string ID of resource. type string Type determines how the reference will fetch the AWS resource.

4.1.13. .spec.template.machines_v1beta1_machine_openshift_io.failureDomains.aws[].subnet.filters Description Filters is a set of filters used to identify a resource. Type array

4.1.14. .spec.template.machines_v1beta1_machine_openshift_io.failureDomains.aws[].subnet.filters[] Description AWSResourceFilter is a filter used to identify an AWS resource. Type object Required name Property Type Description name string Name of the filter. Filter names are case-sensitive. values array (string) Values includes one or more filter values. Filter values are case-sensitive.

4.1.15. .spec.template.machines_v1beta1_machine_openshift_io.failureDomains.azure Description Azure configures failure domain information for the Azure platform. Type array

4.1.16. .spec.template.machines_v1beta1_machine_openshift_io.failureDomains.azure[] Description AzureFailureDomain configures failure domain information for the Azure platform. Type object Required zone Property Type Description subnet string subnet is the name of the network subnet in which the VM will be created. When omitted, the subnet value from the machine providerSpec template will be used. zone string Availability Zone for the virtual machine. If nil, the virtual machine should be deployed to no zone.

4.1.17. .spec.template.machines_v1beta1_machine_openshift_io.failureDomains.gcp Description GCP configures failure domain information for the GCP platform. Type array

4.1.18. .spec.template.machines_v1beta1_machine_openshift_io.failureDomains.gcp[] Description GCPFailureDomain configures failure domain information for the GCP platform. Type object Required zone Property Type Description zone string Zone is the zone in which the GCP machine provider will create the VM.

4.1.19. .spec.template.machines_v1beta1_machine_openshift_io.failureDomains.nutanix Description nutanix configures failure domain information for the Nutanix platform. Type array

4.1.20. .spec.template.machines_v1beta1_machine_openshift_io.failureDomains.nutanix[] Description NutanixFailureDomainReference refers to the failure domain of the Nutanix platform. Type object Required name Property Type Description name string name of the failure domain in which the nutanix machine provider will create the VM. Failure domains are defined in a cluster's config.openshift.io/Infrastructure resource.

4.1.21. .spec.template.machines_v1beta1_machine_openshift_io.failureDomains.openstack Description OpenStack configures failure domain information for the OpenStack platform. Type array

4.1.22. .spec.template.machines_v1beta1_machine_openshift_io.failureDomains.openstack[] Description OpenStackFailureDomain configures failure domain information for the OpenStack platform. Type object Property Type Description availabilityZone string availabilityZone is the nova availability zone in which the OpenStack machine provider will create the VM. If not specified, the VM will be created in the default availability zone specified in the nova configuration. Availability zone names must NOT contain ':' since it is used by admin users to specify hosts where instances are launched in server creation. Also, they must not contain spaces; otherwise nodes that belong to this availability zone will fail to register (see kubernetes/cloud-provider-openstack#1379 for further information). The maximum length of an availability zone name is 63 characters, as per label limits. rootVolume object rootVolume contains settings that will be used by the OpenStack machine provider to create the root volume attached to the VM. If not specified, no root volume will be created.

4.1.23. .spec.template.machines_v1beta1_machine_openshift_io.failureDomains.openstack[].rootVolume Description rootVolume contains settings that will be used by the OpenStack machine provider to create the root volume attached to the VM. If not specified, no root volume will be created. Type object Required volumeType Property Type Description availabilityZone string availabilityZone specifies the Cinder availability zone where the root volume will be created. If not specified, the root volume will be created in the availability zone specified by the volume type in the cinder configuration. If the volume type (configured in the OpenStack cluster) does not specify an availability zone, the root volume will be created in the default availability zone specified in the cinder configuration. See https://docs.openstack.org/cinder/latest/admin/availability-zone-type.html for more details. If the OpenStack cluster is deployed with the cross_az_attach configuration option set to false, the root volume will have to be in the same availability zone as the VM (defined by OpenStackFailureDomain.AvailabilityZone). Availability zone names must NOT contain spaces; otherwise volumes that belong to this availability zone will fail to register (see kubernetes/cloud-provider-openstack#1379 for further information). The maximum length of an availability zone name is 63 characters, as per label limits. volumeType string volumeType specifies the type of the root volume that will be provisioned. The maximum length of a volume type name is 255 characters, as per the OpenStack limit.

4.1.24. .spec.template.machines_v1beta1_machine_openshift_io.metadata Description ObjectMeta is the standard object metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata Labels are required to match the ControlPlaneMachineSet selector. Type object Required labels Property Type Description annotations object (string) Annotations is an unstructured key value map stored with a resource that may be set by external tools to store and retrieve arbitrary metadata. They are not queryable and should be preserved when modifying objects. More info: http://kubernetes.io/docs/user-guide/annotations labels object (string) Map of string keys and values that can be used to organize and categorize (scope and select) objects. May match selectors of replication controllers and services. More info: http://kubernetes.io/docs/user-guide/labels . This field must contain both the 'machine.openshift.io/cluster-api-machine-role' and 'machine.openshift.io/cluster-api-machine-type' labels, both with a value of 'master'. It must also contain a label with the key 'machine.openshift.io/cluster-api-cluster'.

4.1.25. .spec.template.machines_v1beta1_machine_openshift_io.spec Description Spec contains the desired configuration of the Control Plane Machines. The ProviderSpec within contains platform specific details for creating the Control Plane Machines. The ProviderSpec should be complete apart from the platform specific failure domain field. This will be overridden when the Machines are created based on the FailureDomains field. Type object Property Type Description lifecycleHooks object LifecycleHooks allow users to pause operations on the machine at certain predefined points within the machine lifecycle. metadata object ObjectMeta will autopopulate the Node created. Use this to indicate what labels, annotations, name prefix, etc., should be used when creating the Node. providerID string ProviderID is the identification ID of the machine provided by the provider. This field must match the provider ID as seen on the node object corresponding to this machine. This field is required by higher level consumers of cluster-api. Example use case is cluster autoscaler with cluster-api as provider. Clean-up logic in the autoscaler compares machines to nodes to find out machines at provider which could not get registered as Kubernetes nodes. With cluster-api as a generic out-of-tree provider for autoscaler, this field is required by autoscaler to be able to have a provider view of the list of machines. Another list of nodes is queried from the k8s apiserver and then a comparison is done to find out unregistered machines, which are marked for delete. This field will be set by the actuators and consumed by higher level entities like autoscaler that will be interfacing with cluster-api as generic provider. providerSpec object ProviderSpec details Provider-specific configuration to use during node creation. taints array The list of the taints to be applied to the corresponding Node in an additive manner. This list will not overwrite any other taints added to the Node on an ongoing basis by other entities. These taints should be actively reconciled (e.g. if you ask the machine controller to apply a taint and then manually remove the taint, the machine controller will put it back), but the machine controller will not remove any taints. taints[] object The node this Taint is attached to has the "effect" on any pod that does not tolerate the Taint.

4.1.26. .spec.template.machines_v1beta1_machine_openshift_io.spec.lifecycleHooks Description LifecycleHooks allow users to pause operations on the machine at certain predefined points within the machine lifecycle. Type object Property Type Description preDrain array PreDrain hooks prevent the machine from being drained. This also blocks further lifecycle events, such as termination. preDrain[] object LifecycleHook represents a single instance of a lifecycle hook. preTerminate array PreTerminate hooks prevent the machine from being terminated. PreTerminate hooks are actioned after the Machine has been drained. preTerminate[] object LifecycleHook represents a single instance of a lifecycle hook.

4.1.27. .spec.template.machines_v1beta1_machine_openshift_io.spec.lifecycleHooks.preDrain Description PreDrain hooks prevent the machine from being drained. This also blocks further lifecycle events, such as termination. Type array

4.1.28. .spec.template.machines_v1beta1_machine_openshift_io.spec.lifecycleHooks.preDrain[] Description LifecycleHook represents a single instance of a lifecycle hook. Type object Required name owner Property Type Description name string Name defines a unique name for the lifecycle hook. The name should be unique and descriptive, ideally 1-3 words, in CamelCase, or it may be namespaced, e.g. foo.example.com/CamelCase. Names must be unique and should only be managed by a single entity. owner string Owner defines the owner of the lifecycle hook. This should be descriptive enough so that users can identify who/what is responsible for blocking the lifecycle. This could be the name of a controller (e.g. clusteroperator/etcd) or an administrator managing the hook.

4.1.29. .spec.template.machines_v1beta1_machine_openshift_io.spec.lifecycleHooks.preTerminate Description PreTerminate hooks prevent the machine from being terminated. PreTerminate hooks are actioned after the Machine has been drained. Type array

4.1.30. .spec.template.machines_v1beta1_machine_openshift_io.spec.lifecycleHooks.preTerminate[] Description LifecycleHook represents a single instance of a lifecycle hook. Type object Required name owner Property Type Description name string Name defines a unique name for the lifecycle hook. The name should be unique and descriptive, ideally 1-3 words, in CamelCase, or it may be namespaced, e.g. foo.example.com/CamelCase. Names must be unique and should only be managed by a single entity. owner string Owner defines the owner of the lifecycle hook. This should be descriptive enough so that users can identify who/what is responsible for blocking the lifecycle. This could be the name of a controller (e.g. clusteroperator/etcd) or an administrator managing the hook.

4.1.31. .spec.template.machines_v1beta1_machine_openshift_io.spec.metadata Description ObjectMeta will autopopulate the Node created. Use this to indicate what labels, annotations, name prefix, etc., should be used when creating the Node. Type object Property Type Description annotations object (string) Annotations is an unstructured key value map stored with a resource that may be set by external tools to store and retrieve arbitrary metadata. They are not queryable and should be preserved when modifying objects. More info: http://kubernetes.io/docs/user-guide/annotations generateName string GenerateName is an optional prefix, used by the server, to generate a unique name ONLY IF the Name field has not been provided. If this field is used, the name returned to the client will be different than the name passed. This value will also be combined with a unique suffix. The provided value has the same validation rules as the Name field, and may be truncated by the length of the suffix required to make the value unique on the server. If this field is specified and the generated name exists, the server will NOT return a 409 - instead, it will either return 201 Created or 500 with Reason ServerTimeout indicating a unique name could not be found in the time allotted, and the client should retry (optionally after the time indicated in the Retry-After header). Applied only if Name is not specified. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#idempotency labels object (string) Map of string keys and values that can be used to organize and categorize (scope and select) objects. May match selectors of replication controllers and services. More info: http://kubernetes.io/docs/user-guide/labels name string Name must be unique within a namespace. Is required when creating resources, although some resources may allow a client to request the generation of an appropriate name automatically. Name is primarily intended for creation idempotence and configuration definition. Cannot be updated. More info: http://kubernetes.io/docs/user-guide/identifiers#names namespace string Namespace defines the space within which each name must be unique. An empty namespace is equivalent to the "default" namespace, but "default" is the canonical representation. Not all objects are required to be scoped to a namespace - the value of this field for those objects will be empty. Must be a DNS_LABEL. Cannot be updated. More info: http://kubernetes.io/docs/user-guide/namespaces ownerReferences array List of objects depended by this object. If ALL objects in the list have been deleted, this object will be garbage collected. If this object is managed by a controller, then an entry in this list will point to this controller, with the controller field set to true. There cannot be more than one managing controller. ownerReferences[] object OwnerReference contains enough information to let you identify an owning object. An owning object must be in the same namespace as the dependent, or be cluster-scoped, so there is no namespace field.

4.1.32. .spec.template.machines_v1beta1_machine_openshift_io.spec.metadata.ownerReferences Description List of objects depended by this object. If ALL objects in the list have been deleted, this object will be garbage collected. If this object is managed by a controller, then an entry in this list will point to this controller, with the controller field set to true. There cannot be more than one managing controller. Type array

4.1.33. .spec.template.machines_v1beta1_machine_openshift_io.spec.metadata.ownerReferences[] Description OwnerReference contains enough information to let you identify an owning object. An owning object must be in the same namespace as the dependent, or be cluster-scoped, so there is no namespace field. Type object Required apiVersion kind name uid Property Type Description apiVersion string API version of the referent. blockOwnerDeletion boolean If true, AND if the owner has the "foregroundDeletion" finalizer, then the owner cannot be deleted from the key-value store until this reference is removed. See https://kubernetes.io/docs/concepts/architecture/garbage-collection/#foreground-deletion for how the garbage collector interacts with this field and enforces the foreground deletion. Defaults to false. To set this field, a user needs "delete" permission of the owner, otherwise 422 (Unprocessable Entity) will be returned. controller boolean If true, this reference points to the managing controller. kind string Kind of the referent. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names#names uid string UID of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names#uids

4.1.34. .spec.template.machines_v1beta1_machine_openshift_io.spec.providerSpec Description ProviderSpec details Provider-specific configuration to use during node creation. Type object Property Type Description value `` Value is an inlined, serialized representation of the resource configuration. It is recommended that providers maintain their own versioned API types that should be serialized/deserialized from this field, akin to component config.

4.1.35. .spec.template.machines_v1beta1_machine_openshift_io.spec.taints Description The list of the taints to be applied to the corresponding Node in an additive manner. This list will not overwrite any other taints added to the Node on an ongoing basis by other entities. These taints should be actively reconciled (e.g. if you ask the machine controller to apply a taint and then manually remove the taint, the machine controller will put it back), but the machine controller will not remove any taints. Type array

4.1.36. .spec.template.machines_v1beta1_machine_openshift_io.spec.taints[] Description The node this Taint is attached to has the "effect" on any pod that does not tolerate the Taint. Type object Required effect key Property Type Description effect string Required. The effect of the taint on pods that do not tolerate the taint. Valid effects are NoSchedule, PreferNoSchedule and NoExecute. key string Required. The taint key to be applied to a node. timeAdded string TimeAdded represents the time at which the taint was added. It is only written for NoExecute taints. value string The taint value corresponding to the taint key.

4.1.37. .status Description ControlPlaneMachineSetStatus represents the status of the ControlPlaneMachineSet CRD. Type object Property Type Description conditions array Conditions represents the observations of the ControlPlaneMachineSet's current state. Known .status.conditions.type are: Available, Degraded and Progressing. conditions[] object Condition contains details for one aspect of the current state of this API Resource. --- This struct is intended for direct use as an array at the field path .status.conditions. For example, type FooStatus struct{ // Represents the observations of a foo's current state. // Known .status.conditions.type are: "Available", "Progressing", and "Degraded" // +patchMergeKey=type // +patchStrategy=merge // +listType=map // +listMapKey=type Conditions []metav1.Condition json:"conditions,omitempty" patchStrategy:"merge" patchMergeKey:"type" protobuf:"bytes,1,rep,name=conditions" // other fields } observedGeneration integer ObservedGeneration is the most recent generation observed for this ControlPlaneMachineSet. It corresponds to the ControlPlaneMachineSet's generation, which is updated on mutation by the API Server. readyReplicas integer ReadyReplicas is the number of Control Plane Machines created by the ControlPlaneMachineSet controller which are ready. Note that this value may be higher than the desired number of replicas while rolling updates are in progress. replicas integer Replicas is the number of Control Plane Machines created by the ControlPlaneMachineSet controller. Note that during update operations this value may differ from the desired replica count. unavailableReplicas integer UnavailableReplicas is the number of Control Plane Machines that are still required before the ControlPlaneMachineSet reaches the desired available capacity. When this value is non-zero, the number of ReadyReplicas is less than the desired Replicas. updatedReplicas integer UpdatedReplicas is the number of non-terminated Control Plane Machines created by the ControlPlaneMachineSet controller that have the desired provider spec and are ready. This value is set to 0 when a change is detected to the desired spec. When the update strategy is RollingUpdate, this will also coincide with starting the process of updating the Machines. When the update strategy is OnDelete, this value will remain at 0 until a user deletes an existing replica and its replacement has become ready.

4.1.38. .status.conditions Description Conditions represents the observations of the ControlPlaneMachineSet's current state. Known .status.conditions.type are: Available, Degraded and Progressing. Type array

4.1.39. .status.conditions[] Description Condition contains details for one aspect of the current state of this API Resource. --- This struct is intended for direct use as an array at the field path .status.conditions. For example, type FooStatus struct{ // Represents the observations of a foo's current state. // Known .status.conditions.type are: "Available", "Progressing", and "Degraded" // +patchMergeKey=type // +patchStrategy=merge // +listType=map // +listMapKey=type Conditions []metav1.Condition json:"conditions,omitempty" patchStrategy:"merge" patchMergeKey:"type" protobuf:"bytes,1,rep,name=conditions" // other fields } Type object Required lastTransitionTime message reason status type Property Type Description lastTransitionTime string lastTransitionTime is the last time the condition transitioned from one status to another. This should be when the underlying condition changed. If that is not known, then using the time when the API field changed is acceptable. message string message is a human readable message indicating details about the transition. This may be an empty string. observedGeneration integer observedGeneration represents the .metadata.generation that the condition was set based upon. For instance, if .metadata.generation is currently 12, but the .status.conditions[x].observedGeneration is 9, the condition is out of date with respect to the current state of the instance.
reason string reason contains a programmatic identifier indicating the reason for the condition's last transition. Producers of specific condition types may define expected values and meanings for this field, and whether the values are considered a guaranteed API. The value should be a CamelCase string. This field may not be empty. status string status of the condition, one of True, False, Unknown. type string type of condition in CamelCase or in foo.example.com/CamelCase. --- Many .condition.type values are consistent across resources like Available, but because arbitrary conditions can be useful (see .node.status.conditions), the ability to deconflict is important. The regex it matches is (dns1123SubdomainFmt/)?(qualifiedNameFmt)

4.2. API endpoints The following API endpoints are available: /apis/machine.openshift.io/v1/controlplanemachinesets GET : list objects of kind ControlPlaneMachineSet /apis/machine.openshift.io/v1/namespaces/{namespace}/controlplanemachinesets DELETE : delete collection of ControlPlaneMachineSet GET : list objects of kind ControlPlaneMachineSet POST : create a ControlPlaneMachineSet /apis/machine.openshift.io/v1/namespaces/{namespace}/controlplanemachinesets/{name} DELETE : delete a ControlPlaneMachineSet GET : read the specified ControlPlaneMachineSet PATCH : partially update the specified ControlPlaneMachineSet PUT : replace the specified ControlPlaneMachineSet /apis/machine.openshift.io/v1/namespaces/{namespace}/controlplanemachinesets/{name}/scale GET : read scale of the specified ControlPlaneMachineSet PATCH : partially update scale of the specified ControlPlaneMachineSet PUT : replace scale of the specified ControlPlaneMachineSet /apis/machine.openshift.io/v1/namespaces/{namespace}/controlplanemachinesets/{name}/status GET : read status of the specified ControlPlaneMachineSet PATCH : partially update status of the specified ControlPlaneMachineSet PUT : replace status of the specified ControlPlaneMachineSet

4.2.1. /apis/machine.openshift.io/v1/controlplanemachinesets HTTP method GET Description list objects of kind ControlPlaneMachineSet Table 4.1. HTTP responses HTTP code Response body 200 - OK ControlPlaneMachineSetList schema 401 - Unauthorized Empty

4.2.2. /apis/machine.openshift.io/v1/namespaces/{namespace}/controlplanemachinesets HTTP method DELETE Description delete collection of ControlPlaneMachineSet Table 4.2. HTTP responses HTTP code Response body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list objects of kind ControlPlaneMachineSet Table 4.3. HTTP responses HTTP code Response body 200 - OK ControlPlaneMachineSetList schema 401 - Unauthorized Empty HTTP method POST Description create a ControlPlaneMachineSet Table 4.4. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 4.5. Body parameters Parameter Type Description body ControlPlaneMachineSet schema Table 4.6. HTTP responses HTTP code Response body 200 - OK ControlPlaneMachineSet schema 201 - Created ControlPlaneMachineSet schema 202 - Accepted ControlPlaneMachineSet schema 401 - Unauthorized Empty

4.2.3. /apis/machine.openshift.io/v1/namespaces/{namespace}/controlplanemachinesets/{name} Table 4.7. Global path parameters Parameter Type Description name string name of the ControlPlaneMachineSet HTTP method DELETE Description delete a ControlPlaneMachineSet Table 4.8. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed Table 4.9. HTTP responses HTTP code Response body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified ControlPlaneMachineSet Table 4.10. HTTP responses HTTP code Response body 200 - OK ControlPlaneMachineSet schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified ControlPlaneMachineSet Table 4.11. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 4.12. HTTP responses HTTP code Response body 200 - OK ControlPlaneMachineSet schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified ControlPlaneMachineSet Table 4.13. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 4.14. Body parameters Parameter Type Description body ControlPlaneMachineSet schema Table 4.15. HTTP responses HTTP code Response body 200 - OK ControlPlaneMachineSet schema 201 - Created ControlPlaneMachineSet schema 401 - Unauthorized Empty

4.2.4. /apis/machine.openshift.io/v1/namespaces/{namespace}/controlplanemachinesets/{name}/scale Table 4.16. Global path parameters Parameter Type Description name string name of the ControlPlaneMachineSet HTTP method GET Description read scale of the specified ControlPlaneMachineSet Table 4.17. HTTP responses HTTP code Response body 200 - OK Scale schema 401 - Unauthorized Empty HTTP method PATCH Description partially update scale of the specified ControlPlaneMachineSet Table 4.18. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 4.19. HTTP responses HTTP code Response body 200 - OK Scale schema 401 - Unauthorized Empty HTTP method PUT Description replace scale of the specified ControlPlaneMachineSet Table 4.20. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 4.21. Body parameters Parameter Type Description body Scale schema Table 4.22. HTTP responses HTTP code Response body 200 - OK Scale schema 201 - Created Scale schema 401 - Unauthorized Empty

4.2.5. /apis/machine.openshift.io/v1/namespaces/{namespace}/controlplanemachinesets/{name}/status Table 4.23. Global path parameters Parameter Type Description name string name of the ControlPlaneMachineSet HTTP method GET Description read status of the specified ControlPlaneMachineSet Table 4.24. HTTP responses HTTP code Response body 200 - OK ControlPlaneMachineSet schema 401 - Unauthorized Empty HTTP method PATCH Description partially update status of the specified ControlPlaneMachineSet Table 4.25. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 4.26. HTTP responses HTTP code Response body 200 - OK ControlPlaneMachineSet schema 401 - Unauthorized Empty HTTP method PUT Description replace status of the specified ControlPlaneMachineSet Table 4.27. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 4.28. Body parameters Parameter Type Description body ControlPlaneMachineSet schema Table 4.29. HTTP responses HTTP code Response body 200 - OK ControlPlaneMachineSet schema 201 - Created ControlPlaneMachineSet schema 401 - Unauthorized Empty
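The reference above maps onto a manifest as follows. This is a minimal sketch, not a production example: the infrastructure ID mycluster-abc12, the AWS zone names, and the ExampleHook lifecycle hook are hypothetical, and the platform-specific providerSpec value is elided, so the admission webhook may reject this skeleton on a real cluster. The final command exercises the create endpoint from section 4.2.2 with server-side dry-run validation (the dryRun query parameter described above).

```bash
# Write a skeleton ControlPlaneMachineSet and validate it server-side.
# Assumes the oc client is logged in to a cluster; all concrete values
# below are placeholders to be replaced with real ones.
cat <<'EOF' > cpms.yaml
apiVersion: machine.openshift.io/v1
kind: ControlPlaneMachineSet
metadata:
  name: cluster
  namespace: openshift-machine-api
spec:
  replicas: 3                   # only 3 or 5 are valid
  state: Inactive               # Inactive: take no action until activated
  strategy:
    type: RollingUpdate         # or OnDelete
  selector:
    matchLabels:
      machine.openshift.io/cluster-api-machine-role: master
      machine.openshift.io/cluster-api-machine-type: master
  template:
    machineType: machines_v1beta1_machine_openshift_io
    machines_v1beta1_machine_openshift_io:
      metadata:
        labels:
          # Required labels per section 4.1.24; must match the selector.
          machine.openshift.io/cluster-api-machine-role: master
          machine.openshift.io/cluster-api-machine-type: master
          machine.openshift.io/cluster-api-cluster: mycluster-abc12
      failureDomains:
        platform: AWS
        aws:
        - placement:
            availabilityZone: us-east-1a   # hypothetical zones
        - placement:
            availabilityZone: us-east-1b
        - placement:
            availabilityZone: us-east-1c
      spec:
        lifecycleHooks:
          preDrain:
          - name: ExampleHook              # hypothetical hook
            owner: clusteroperator/example
        providerSpec:
          value: {}                        # platform providerSpec elided
EOF

# Server-side dry run: exercises the POST endpoint with dryRun=All
# without persisting the object.
oc apply -f cpms.yaml --dry-run=server
```

Because spec.selector must match the template labels and is considered immutable after creation, a server-side dry run of this kind is a cheap way to catch mismatches before the resource is actually created.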
https://docs.redhat.com/en/documentation/openshift_container_platform/4.15/html/machine_apis/controlplanemachineset-machine-openshift-io-v1
Making open source more inclusive
Making open source more inclusive Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright's message.
https://docs.redhat.com/en/documentation/red_hat_build_of_quarkus/3.8/html/developing_and_compiling_your_red_hat_build_of_quarkus_applications_with_apache_maven/making-open-source-more-inclusive
Chapter 4. Timestamp Functions
Chapter 4. Timestamp Functions Each timestamp function returns a value to indicate when a function is executed. These returned values can then be used to indicate when an event occurred, provide an ordering for events, or compute the amount of time elapsed between two time stamps.
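As a concrete illustration, the gettimeofday_s() and gettimeofday_us() tapset functions covered in this chapter can be used both to record when an event occurred and to compute elapsed time between two events. The following is a small sketch, assuming the systemtap package is installed and sufficient privileges (typically root) to run stap; the 500 ms timer is an arbitrary choice.

```bash
# Print the current wall-clock time in seconds since the epoch.
stap -e 'probe begin { printf("epoch: %d s\n", gettimeofday_s()); exit() }'

# Record a timestamp at startup, then compute the elapsed time in
# microseconds when a timer fires 500 ms later.
stap -e 'global t0
probe begin { t0 = gettimeofday_us() }
probe timer.ms(500) {
    printf("elapsed: %d us\n", gettimeofday_us() - t0)
    exit()
}'
```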
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/systemtap_tapset_reference/timestamp_stp
Chapter 8. Disaster Recovery
Chapter 8. Disaster Recovery Disaster recovery (DR) helps an organization to recover and resume business critical functions or normal operations when there are disruptions or disasters. Red Hat OpenShift Data Foundation provides two options: Regional-DR with Red Hat Advanced Cluster Management (RHACM), and Metro-DR (Stretched Cluster - Arbiter).

Regional-DR with RHACM This Regional-DR solution provides an automated "one-click" recovery in the event of a regional disaster. The protected applications are automatically redeployed to a designated OpenShift Container Platform with OpenShift Data Foundation cluster that is available in another region. This release of Regional-DR supports 2-way replication across two managed clusters located in two different regions or data centers. Disaster Recovery features supported by Red Hat OpenShift Data Foundation require all of the following prerequisites in order to successfully implement a Disaster Recovery solution: A valid Red Hat OpenShift Data Foundation Advanced entitlement A valid Red Hat Advanced Cluster Management for Kubernetes subscription To know how subscriptions for OpenShift Data Foundation work, see the knowledgebase article on OpenShift Data Foundation subscriptions. For detailed requirements, see Regional-DR requirements and RHACM requirements. Important This is a developer preview feature and is subject to developer preview support limitations. Developer preview releases are not intended to be run in production environments and are not supported through the Red Hat Customer Portal case management system. If you need assistance with developer preview features, reach out to the [email protected] mailing list and a member of the Red Hat Development Team will assist you as quickly as possible based on their availability and work schedules.

Metro-DR (Stretched Cluster - Arbiter) In this case, a single cluster is stretched across two zones with a third zone as the location for the arbiter. This is a technology preview feature that is currently intended for deployment in the OpenShift Container Platform on-premises. Note This solution is designed to be deployed where latencies do not exceed 4 milliseconds round-trip time (RTT) between the locations of the two zones residing in the main on-premise data centres. Contact Red Hat Customer Support if you are planning to deploy with higher latencies. Prerequisites Disaster Recovery features supported by Red Hat OpenShift Data Foundation require all of the following prerequisites in order to successfully implement a Disaster Recovery solution: A valid Red Hat OpenShift Data Foundation Advanced entitlement A valid Red Hat Advanced Cluster Management for Kubernetes subscription To know how subscriptions for OpenShift Data Foundation work, see the knowledgebase article on OpenShift Data Foundation subscriptions. To use the Arbiter stretch cluster, you must have a minimum of five nodes across three zones, where: Two nodes are used for each data-center zone, and one additional zone with one node is used for the arbiter zone (the arbiter can be on a master node). All the nodes must be manually labeled with the zone labels prior to cluster creation, as shown in the sketch after this section. For example, the zones can be labeled as: topology.kubernetes.io/zone=arbiter (master or worker node) topology.kubernetes.io/zone=datacenter1 (minimum two worker nodes) topology.kubernetes.io/zone=datacenter2 (minimum two worker nodes) For more information, see: Configuring OpenShift Data Foundation for Metro-DR stretch cluster. Recovering a Metro-DR stretch cluster.
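As a sketch of the labeling prerequisite above, the zone labels could be applied as follows before cluster creation. The node names (master-0, worker-0, and so on) are hypothetical; substitute the names reported by oc get nodes in your environment.

```bash
# One arbiter node (may be a master) plus two worker nodes per data center.
oc label node master-0 topology.kubernetes.io/zone=arbiter
oc label node worker-0 worker-1 topology.kubernetes.io/zone=datacenter1
oc label node worker-2 worker-3 topology.kubernetes.io/zone=datacenter2

# Confirm the zone layout across the five nodes.
oc get nodes -L topology.kubernetes.io/zone
```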
Important Metro-DR stretch cluster is a technology preview feature and is subject to technology preview support limitations. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information, see Technology Preview Features Support Scope.
https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.9/html/planning_your_deployment/disaster-recovery
C.19. StateTransferManager
C.19. StateTransferManager org.infinispan.statetransfer.StateTransferManager The StateTransferManager component handles state transfer in Red Hat JBoss Data Grid. Note The StateTransferManager component is only available in clustered mode. Table C.30. Attributes Name Description Type Writable joinComplete If true, the node has successfully joined the grid and is considered to hold state. If false, the join process is still in progress. boolean No stateTransferInProgress Checks whether there is a pending inbound state transfer on this cluster member. boolean No
https://docs.redhat.com/en/documentation/red_hat_data_grid/6.6/html/administration_and_configuration_guide/statetransfermanager
Chapter 54. Next steps
Chapter 54. Next steps Testing a decision service using test scenarios Packaging and deploying a Red Hat Decision Manager project
null
https://docs.redhat.com/en/documentation/red_hat_decision_manager/7.13/html/developing_decision_services_in_red_hat_decision_manager/next_steps_4
Chapter 2. Migration Toolkit for Containers release notes
Chapter 2. Migration Toolkit for Containers release notes The release notes for Migration Toolkit for Containers (MTC) describe new features and enhancements, deprecated features, and known issues. The MTC enables you to migrate application workloads between OpenShift Container Platform clusters at the granularity of a namespace. You can migrate from OpenShift Container Platform 3 to 4.11 and between OpenShift Container Platform 4 clusters. MTC provides a web console and an API, based on Kubernetes custom resources, to help you control the migration and minimize application downtime. For information on the support policy for MTC, see OpenShift Application and Cluster Migration Solutions , part of the Red Hat OpenShift Container Platform Life Cycle Policy . 2.1. Migration Toolkit for Containers 1.8.2 release notes 2.1.1. Resolved issues This release has the following major resolved issues: Backup phase fails after setting custom CA replication repository In previous releases of Migration Toolkit for Containers (MTC), after editing the replication repository, adding a custom CA certificate, successfully connecting the repository, and triggering a migration, a failure occurred during the backup phase. CVE-2023-26136: tough-cookie package before 4.1.3 are vulnerable to Prototype Pollution In previous releases of MTC, versions before 4.1.3 of the tough-cookie package used in MTC were vulnerable to prototype pollution. This vulnerability occurred because CookieJar did not handle cookies properly when the value of rejectPublicSuffixes was set to false . For more details, see (CVE-2023-26136) CVE-2022-25883 openshift-migration-ui-container: nodejs-semver: Regular expression denial of service In previous releases of MTC, versions of the semver package before 7.5.2, used in MTC, were vulnerable to Regular Expression Denial of Service (ReDoS) from the function newRange , when untrusted user data was provided as a range. For more details, see (CVE-2022-25883) 2.1.2. Known issues There are no major known issues in this release. 2.2. Migration Toolkit for Containers 1.8.1 release notes 2.2.1. Resolved issues This release has the following major resolved issues: CVE-2023-39325: golang: net/http, x/net/http2: rapid stream resets can cause excessive work A flaw was found in handling multiplexed streams in the HTTP/2 protocol, which is used by Migration Toolkit for Containers (MTC). A client could repeatedly make a request for a new multiplex stream and immediately send an RST_STREAM frame to cancel it. This created additional workload for the server in terms of setting up and dismantling streams, while avoiding any server-side limitations on the maximum number of active streams per connection, resulting in a denial of service due to server resource consumption. (BZ#2245079) It is advised to update to MTC 1.8.1 or later, which resolves this issue. For more details, see (CVE-2023-39325) and (CVE-2023-44487) 2.2.2. Known issues There are no major known issues in this release. 2.3. Migration Toolkit for Containers 1.8 release notes 2.3.1. Resolved issues This release has the following resolved issues: Indirect migration is stuck on backup stage In previous releases, an indirect migration became stuck at the backup stage due to an InvalidImageName error. ( BZ#2233097 ) PodVolumeRestore remains In Progress , keeping the migration stuck at Stage Restore In previous releases, when performing an indirect migration, the migration became stuck at the Stage Restore step, waiting for the podvolumerestore to be completed.
( BZ#2233868 ) Migrated application unable to pull image from internal registry on target cluster In previous releases, when migrating an application to the target cluster, the migrated application failed to pull the image from the internal image registry, resulting in an application failure. ( BZ#2233103 ) Migration failing on Azure due to authorization issue In previous releases, on an Azure cluster, when backing up to Azure storage, the migration failed at the Backup stage. ( BZ#2238974 ) 2.3.2. Known issues This release has the following known issues: Old Restic pods are not removed when upgrading MTC 1.7.x to 1.8.x In this release, when upgrading the MTC Operator from 1.7.x to 1.8.x, the old Restic pods are not removed. Therefore, after the upgrade, both Restic and node-agent pods are visible in the namespace. ( BZ#2236829 ) Migrated builder pod fails to push to image registry In this release, when migrating an application that includes a BuildConfig from a source to a target cluster, the builder pod results in an error, failing to push the image to the image registry. ( BZ#2234781 ) [UI] CA bundle file field is not properly cleared In this release, after enabling Require SSL verification and adding content to the CA bundle file for an MCG NooBaa bucket in MigStorage, the connection fails as expected. However, when reverting these changes by removing the CA bundle content and clearing Require SSL verification , the connection still fails. The issue is only resolved by deleting and re-adding the repository. ( BZ#2240052 ) Backup phase fails after setting custom CA replication repository In MTC, after editing the replication repository, adding a custom CA certificate, successfully connecting the repository, and triggering a migration, a failure occurs during the backup phase. This issue is resolved in MTC 1.8.2. CVE-2023-26136: tough-cookie package before 4.1.3 are vulnerable to Prototype Pollution Versions before 4.1.3 of the tough-cookie package, used in MTC, are vulnerable to prototype pollution. This vulnerability occurs because CookieJar does not handle cookies properly when the value of rejectPublicSuffixes is set to false . This issue is resolved in MTC 1.8.2. For more details, see (CVE-2023-26136) CVE-2022-25883 openshift-migration-ui-container: nodejs-semver: Regular expression denial of service In previous releases of MTC, versions of the semver package before 7.5.2, used in MTC, are vulnerable to Regular Expression Denial of Service (ReDoS) from the function newRange , when untrusted user data is provided as a range. This issue is resolved in MTC 1.8.2. For more details, see (CVE-2022-25883) 2.3.3. Technical changes This release has the following technical changes: Migration from OpenShift Container Platform 3 to OpenShift Container Platform 4 requires a legacy Migration Toolkit for Containers (MTC) Operator and MTC 1.7.x. Migration from MTC 1.7.x to MTC 1.8.x is not supported. You must use MTC 1.7.x to migrate anything with a source of OpenShift Container Platform 4.9 or earlier. MTC 1.7.x must be used on both source and destination. MTC 1.8.x only supports migrations from OpenShift Container Platform 4.10 or later to OpenShift Container Platform 4.10 or later. For migrations only involving cluster versions 4.10 and later, either 1.7.x or 1.8.x might be used. However, it must be the same MTC 1.Y.z on both source and destination. Migration from source MTC 1.7.x to destination MTC 1.8.x is unsupported. Migration from source MTC 1.8.x to destination MTC 1.7.x is unsupported.
Migration from source MTC 1.7.x to destination MTC 1.7.x is supported. Migration from source MTC 1.8.x to destination MTC 1.8.x is supported. MTC 1.8.x by default installs OADP 1.2.x. Upgrading from MTC 1.7.x to MTC 1.8.0 requires manually changing the OADP channel to 1.2. If this is not done, the upgrade of the Operator fails. 2.4. Migration Toolkit for Containers 1.7.14 release notes 2.4.1. Resolved issues This release has the following resolved issues: CVE-2023-39325 CVE-2023-44487: various flaws A flaw was found in the handling of multiplexed streams in the HTTP/2 protocol, which is utilized by Migration Toolkit for Containers (MTC). A client could repeatedly make a request for a new multiplex stream and then immediately send an RST_STREAM frame to cancel those requests. This activity created additional workloads for the server in terms of setting up and dismantling streams, but avoided any server-side limitations on the maximum number of active streams per connection. As a result, a denial of service occurred due to server resource consumption. (BZ#2243564) (BZ#2244013) (BZ#2244014) (BZ#2244015) (BZ#2244016) (BZ#2244017) To resolve this issue, upgrade to MTC 1.7.14. For more details, see (CVE-2023-44487) and (CVE-2023-39325) . CVE-2023-39318 CVE-2023-39319 CVE-2023-39321: various flaws (CVE-2023-39318) : A flaw was discovered in Golang, utilized by MTC. The html/template package did not properly handle HTML-like "<!--" and "-->" comment tokens, or the hashbang "#!" comment tokens, in <script> contexts. This flaw could cause the template parser to improperly interpret the contents of <script> contexts, causing actions to be improperly escaped. (BZ#2238062) (BZ#2238088) (CVE-2023-39319) : A flaw was discovered in Golang, utilized by MTC. The html/template package did not apply the proper rules for handling occurrences of "<script" , "<!--" , and "</script" within JavaScript literals in <script> contexts. This could cause the template parser to improperly consider script contexts to be terminated early, causing actions to be improperly escaped. (BZ#2238062) (BZ#2238088) (CVE-2023-39321) : A flaw was discovered in Golang, utilized by MTC. Processing an incomplete post-handshake message for a QUIC connection could cause a panic. (BZ#2238062) (BZ#2238088) (CVE-2023-39322) : A flaw was discovered in Golang, utilized by MTC. Connections using the QUIC transport protocol did not set an upper bound on the amount of data buffered when reading post-handshake messages, allowing a malicious QUIC connection to cause unbounded memory growth. (BZ#2238088) To resolve these issues, upgrade to MTC 1.7.14. For more details, see (CVE-2023-39318) , (CVE-2023-39319) , and (CVE-2023-39321) . 2.4.2. Known issues There are no major known issues in this release. 2.5. Migration Toolkit for Containers 1.7.13 release notes 2.5.1. Resolved issues There are no major resolved issues in this release. 2.5.2. Known issues There are no major known issues in this release. 2.6. Migration Toolkit for Containers 1.7.12 release notes 2.6.1. Resolved issues There are no major resolved issues in this release. 2.6.2. Known issues This release has the following known issues: Error code 504 is displayed on the Migration details page On the Migration details page, at first, the migration details are displayed without any issues. However, after some time, the details disappear, and a 504 error is returned.
( BZ#2231106 ) Old restic pods are not removed when upgrading MTC 1.7.x to MTC 1.8 On upgrading the MTC operator from 1.7.x to 1.8.x, the old restic pods are not removed. After the upgrade, both restic and node-agent pods are visible in the namespace. ( BZ#2236829 ) 2.7. Migration Toolkit for Containers 1.7.11 release notes 2.7.1. Resolved issues There are no major resolved issues in this release. 2.7.2. Known issues There are no known issues in this release. 2.8. Migration Toolkit for Containers 1.7.10 release notes 2.8.1. Resolved issues This release has the following major resolved issue: Adjust rsync options in DVM In this release, you can prevent absolute symlinks from being manipulated by Rsync in the course of direct volume migration (DVM). Running DVM in privileged mode preserves absolute symlinks inside the persistent volume claims (PVCs). To switch to privileged mode, in the MigrationController CR, set the migration_rsync_privileged spec to true (a command sketch follows these release notes). ( BZ#2204461 ) 2.8.2. Known issues There are no known issues in this release. 2.9. Migration Toolkit for Containers 1.7.9 release notes 2.9.1. Resolved issues There are no major resolved issues in this release. 2.9.2. Known issues This release has the following known issue: Adjust rsync options in DVM In this release, users are unable to prevent absolute symlinks from being manipulated by rsync during direct volume migration (DVM). ( BZ#2204461 ) 2.10. Migration Toolkit for Containers 1.7.8 release notes 2.10.1. Resolved issues This release has the following major resolved issues: Velero image cannot be overridden in the MTC operator In previous releases, it was not possible to override the velero image using the velero_image_fqin parameter in the MigrationController Custom Resource (CR). ( BZ#2143389 ) Adding a MigCluster from the UI fails when the domain name has more than six characters In previous releases, adding a MigCluster from the UI failed when the domain name had more than six characters. The UI code expected a domain name of between two and six characters. ( BZ#2152149 ) UI fails to render the Migrations' page: Cannot read properties of undefined (reading 'name') In previous releases, the UI failed to render the Migrations' page, returning Cannot read properties of undefined (reading 'name') . ( BZ#2163485 ) Creating DPA resource fails on Red Hat OpenShift Container Platform 4.6 clusters In previous releases, when deploying MTC on an OpenShift Container Platform 4.6 cluster, the DPA failed to be created, which resulted in some pods missing. The logs from the migration-controller in the OpenShift Container Platform 4.6 cluster indicated that an unexpected null value was passed, which caused the error. ( BZ#2173742 ) 2.10.2. Known issues There are no known issues in this release. 2.11. Migration Toolkit for Containers 1.7.7 release notes 2.11.1. Resolved issues There are no major resolved issues in this release. 2.11.2. Known issues There are no known issues in this release. 2.12. Migration Toolkit for Containers 1.7.6 release notes 2.12.1. New features Implement proposed changes for DVM support with PSA in Red Hat OpenShift Container Platform 4.12 With the enforcement of Pod Security Admission (PSA) in OpenShift Container Platform 4.12, the default pod runs with a restricted profile. This restricted profile means that workloads to be migrated would violate this policy and no longer work. The following enhancement outlines the changes that are required to remain compatible with OCP 4.12. ( MIG-1240 ) 2.12.2.
Resolved issues This release has the following major resolved issue: Unable to create Storage Class Conversion plan due to missing cronjob error in Red Hat OpenShift Container Platform 4.12 In previous releases, on the persistent volumes page, an error was thrown that a CronJob is not available in version batch/v1beta1 , and when clicking Cancel , the migplan was created with the status Not ready . ( BZ#2143628 ) 2.12.3. Known issues This release has the following known issue: Conflict conditions are cleared briefly after they are created When creating a new state migration plan that will result in a conflict error, that error is cleared shortly after it is displayed. ( BZ#2144299 ) 2.13. Migration Toolkit for Containers 1.7.5 release notes 2.13.1. Resolved issues This release has the following major resolved issue: Direct Volume Migration fails as the rsync pod on the source cluster moves into an Error state In previous releases, the migration succeeded with warnings, but Direct Volume Migration failed with the rsync pod in the source namespace going into an Error state. ( BZ#2132978 ) 2.13.2. Known issues This release has the following known issues: Velero image cannot be overridden in the MTC operator In previous releases, it was not possible to override the velero image using the velero_image_fqin parameter in the MigrationController Custom Resource (CR). ( BZ#2143389 ) When editing a MigHook in the UI, the page might fail to reload The UI might fail to reload when editing a hook if there is a network connection issue. After the network connection is restored, the page will fail to reload until the cache is cleared. ( BZ#2140208 ) 2.14. Migration Toolkit for Containers 1.7.4 release notes 2.14.1. Resolved issues There are no major resolved issues in this release. 2.14.2. Known issues Rollback misses deleting some resources from the target cluster When rolling back an application from the MTC UI, some resources are not deleted from the target cluster, and the rollback shows a status of successfully completed. ( BZ#2126880 ) 2.15. Migration Toolkit for Containers 1.7.3 release notes 2.15.1. Resolved issues This release has the following major resolved issues: Incorrect DNS validation for destination namespace In previous releases, the MigPlan could not be validated if the destination namespace started with a non-alphabetic character. ( BZ#2102231 ) Deselecting all PVCs from UI still results in an attempted PVC transfer In previous releases, during a full migration, deselecting the persistent volume claims (PVCs) did not skip them, and MTC still tried to migrate them. ( BZ#2106073 ) 2.15.2. Known issues There are no known issues in this release. 2.16. Migration Toolkit for Containers 1.7.2 release notes 2.16.1. Resolved issues This release has the following major resolved issues: MTC UI does not display logs correctly In previous releases, the MTC UI did not display logs correctly. ( BZ#2062266 ) StorageClass conversion plan adding migstorage reference in migplan In previous releases, StorageClass conversion plans had a migstorage reference even though it was not being used. ( BZ#2078459 ) Velero pod log missing from downloaded logs In previous releases, when downloading a compressed (.zip) folder of all logs, the velero pod log was missing.
( BZ#2076599 ) Velero pod log missing from UI drop down In previous releases, after a migration was performed, the velero pod log was not included in the logs provided in the dropdown list. ( BZ#2076593 ) Rsync options logs not visible in log-reader pod In previous releases, when trying to set any valid or invalid rsync options in the migrationcontroller , the log-reader did not show any logs regarding the invalid options or about the rsync command being used. ( BZ#2079252 ) Default CPU requests on Velero/Restic are too demanding and fail in certain environments In previous releases, the default CPU requests on Velero/Restic were too demanding and failed in certain environments. Default CPU requests for Velero and Restic Pods were set to 500m. These values were high. ( BZ#2088022 ) 2.16.2. Known issues This release has the following known issues: Updating the replication repository to a different storage provider type is not respected by the UI After updating the replication repository to a different type and clicking Update Repository , it shows connection successful, but the UI is not updated with the correct details. When clicking on the Edit button again, it still shows the old replication repository information. Furthermore, when trying to update the replication repository again, it still shows the old replication details. When selecting the new repository, it also shows all the information you entered previously and the Update Repository button is not enabled, as if there are no changes to be submitted. ( BZ#2102020 ) Migration fails because the backup is not found Migration fails at the restore stage because the initial backup is not found. ( BZ#2104874 ) Update Cluster button is not enabled when updating Azure resource group When updating the remote cluster, selecting the Azure resource group checkbox, and adding a resource group does not enable the Update cluster option. ( BZ#2098594 ) Error pop-up in UI on deleting migstorage resource When creating a backupStorage credential secret in OpenShift Container Platform, if the migstorage is removed from the UI, a 404 error is returned and the underlying secret is not removed. ( BZ#2100828 ) Miganalytic resource displaying resource count as 0 in UI After creating a migplan from the backend, the Miganalytic resource displays the resource count as 0 in the UI. ( BZ#2102139 ) Registry validation fails when two trailing slashes are added to the Exposed route host to image registry After adding two trailing slashes, meaning // , to the exposed registry route, the MigCluster resource shows the status as connected . When creating a migplan from the backend with DIM, the plans move to the unready status. ( BZ#2104864 ) Service Account Token not visible while editing source cluster When editing a source cluster that has been added and is in the Connected state, the service account token is not visible in the field in the UI. To save the wizard, you have to fetch the token again and provide details inside the field. ( BZ#2097668 ) 2.17. Migration Toolkit for Containers 1.7.1 release notes 2.17.1. Resolved issues There are no major resolved issues in this release. 2.17.2. Known issues This release has the following known issues: Incorrect DNS validation for destination namespace MigPlan cannot be validated because the destination namespace starts with a non-alphabetic character.
( BZ#2102231 ) Cloud propagation phase in migration controller is not functioning due to missing labels on Velero pods The Cloud propagation phase in the migration controller is not functioning due to missing labels on Velero pods. The EnsureCloudSecretPropagated phase in the migration controller waits until replication repository secrets are propagated on both sides. Because this label is missing on the Velero pods, the phase is not functioning as expected. ( BZ#2088026 ) Default CPU requests on Velero/Restic are too demanding, making scheduling fail in certain environments Default CPU requests on Velero/Restic are too demanding, making scheduling fail in certain environments. Default CPU requests for Velero and Restic Pods are set to 500m. These values are high. The resources can be configured in DPA using the podConfig field for Velero and Restic. The Migration Operator should set CPU requests to a lower value, such as 100m, so that Velero and Restic pods can be scheduled in the resource-constrained environments in which MTC often operates. ( BZ#2088022 ) Warning is displayed on persistentVolumes page after editing storage class conversion plan A warning is displayed on the persistentVolumes page after editing the storage class conversion plan. When editing the existing migration plan, a warning is displayed in the UI: At least one PVC must be selected for Storage Class Conversion . ( BZ#2079549 ) Velero pod log missing from downloaded logs When downloading a compressed (.zip) folder of all logs, the velero pod log is missing. ( BZ#2076599 ) Velero pod log missing from UI drop down After a migration is performed, the velero pod log is not included in the logs provided in the dropdown list. ( BZ#2076593 ) 2.18. Migration Toolkit for Containers 1.7 release notes 2.18.1. New features and enhancements This release has the following new features and enhancements: The Migration Toolkit for Containers (MTC) Operator now depends upon the OpenShift API for Data Protection (OADP) Operator. When you install the MTC Operator, the Operator Lifecycle Manager (OLM) automatically installs the OADP Operator in the same namespace. You can migrate from a source cluster that is behind a firewall to a cloud-based destination cluster by establishing a network tunnel between the two clusters by using the crane tunnel-api command. Converting storage classes in the MTC web console: You can convert the storage class of a persistent volume (PV) by migrating it within the same cluster. 2.18.2. Known issues This release has the following known issues: MigPlan custom resource does not display a warning when an AWS gp2 PVC has no available space. ( BZ#1963927 ) Direct and indirect data transfers do not work if the destination storage is a PV that is dynamically provisioned by the AWS Elastic File System (EFS). This is due to limitations of the AWS EFS Container Storage Interface (CSI) driver. ( BZ#2085097 ) Block storage for IBM Cloud must be in the same availability zone. See the IBM FAQ for block storage for virtual private cloud . 2.19. Migration Toolkit for Containers 1.6 release notes 2.19.1. New features and enhancements This release has the following new features and enhancements: State migration: You can perform repeatable, state-only migrations by selecting specific persistent volume claims (PVCs). "New operator version available" notification: The Clusters page of the MTC web console displays a notification when a new Migration Toolkit for Containers Operator is available. 2.19.2.
Deprecated features The following features are deprecated: MTC version 1.4 is no longer supported. 2.19.3. Known issues This release has the following known issues: On OpenShift Container Platform 3.10, the MigrationController pod takes too long to restart. The Bugzilla report contains a workaround. ( BZ#1986796 ) Stage pods fail during direct volume migration from a classic OpenShift Container Platform source cluster on IBM Cloud. The IBM block storage plugin does not allow the same volume to be mounted on multiple pods of the same node. As a result, the PVCs cannot be mounted on the Rsync pods and on the application pods simultaneously. To resolve this issue, stop the application pods before migration. ( BZ#1887526 ) MigPlan custom resource does not display a warning when an AWS gp2 PVC has no available space. ( BZ#1963927 ) Block storage for IBM Cloud must be in the same availability zone. See the IBM FAQ for block storage for virtual private cloud . 2.20. Migration Toolkit for Containers 1.5 release notes 2.20.1. New features and enhancements This release has the following new features and enhancements: The Migration resource tree on the Migration details page of the web console has been enhanced with additional resources, Kubernetes events, and live status information for monitoring and debugging migrations. The web console can support hundreds of migration plans. A source namespace can be mapped to a different target namespace in a migration plan. Previously, the source namespace was mapped to a target namespace with the same name. Hook phases with status information are displayed in the web console during a migration. The number of Rsync retry attempts is displayed in the web console during direct volume migration. Persistent volume (PV) resizing can be enabled for direct volume migration to ensure that the target cluster does not run out of disk space. The threshold that triggers PV resizing is configurable. Previously, PV resizing occurred when the disk usage exceeded 97%. Velero has been updated to version 1.6, which provides numerous fixes and enhancements. Cached Kubernetes clients can be enabled to provide improved performance. 2.20.2. Deprecated features The following features are deprecated: MTC versions 1.2 and 1.3 are no longer supported. The procedure for updating deprecated APIs has been removed from the troubleshooting section of the documentation because the oc convert command is deprecated. 2.20.3. Known issues This release has the following known issues: Microsoft Azure storage is unavailable if you create more than 400 migration plans. The MigStorage custom resource displays the following message: The request is being throttled as the limit has been reached for operation type . ( BZ#1977226 ) If a migration fails, the migration plan does not retain custom persistent volume (PV) settings for quiesced pods. You must manually roll back the migration, delete the migration plan, and create a new migration plan with your PV settings. ( BZ#1784899 ) PV resizing does not work as expected for AWS gp2 storage unless the pv_resizing_threshold is 42% or greater. ( BZ#1973148 ) PV resizing does not work with OpenShift Container Platform 3.7 and 3.9 source clusters in the following scenarios: The application was installed after MTC was installed. An application pod was rescheduled on a different node after MTC was installed. OpenShift Container Platform 3.7 and 3.9 do not support the Mount Propagation feature that enables Velero to mount PVs automatically in the Restic pod. 
The MigAnalytic custom resource (CR) fails to collect PV data from the Restic pod and reports the resources as 0 . The MigPlan CR displays a status similar to the following: Example output status: conditions: - category: Warn lastTransitionTime: 2021-07-15T04:11:44Z message: Failed gathering extended PV usage information for PVs [nginx-logs nginx-html], please see MigAnalytic openshift-migration/ocp-24706-basicvolmig-migplan-1626319591-szwd6 for details reason: FailedRunningDf status: "True" type: ExtendedPVAnalysisFailed To enable PV resizing, you can manually restart the Restic daemonset on the source cluster or restart the Restic pods on the same nodes as the application. If you do not restart Restic, you can run the direct volume migration without PV resizing. ( BZ#1982729 ) 2.20.4. Technical changes This release has the following technical changes: The legacy Migration Toolkit for Containers Operator version 1.5.1 is installed manually on OpenShift Container Platform versions 3.7 to 4.5. The Migration Toolkit for Containers Operator version 1.5.1 is installed on OpenShift Container Platform versions 4.6 and later by using the Operator Lifecycle Manager.
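As a worked example of the MigrationController CR settings referenced in these notes (for example, migration_rsync_privileged from the 1.7.10 notes), the spec can be patched from the command line. A minimal sketch, assuming the default CR name migration-controller in the openshift-migration namespace:

# Run DVM rsync pods in privileged mode to preserve absolute symlinks
oc patch migrationcontroller migration-controller \
  -n openshift-migration \
  --type=merge \
  -p '{"spec": {"migration_rsync_privileged": true}}'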
[ "status: conditions: - category: Warn lastTransitionTime: 2021-07-15T04:11:44Z message: Failed gathering extended PV usage information for PVs [nginx-logs nginx-html], please see MigAnalytic openshift-migration/ocp-24706-basicvolmig-migplan-1626319591-szwd6 for details reason: FailedRunningDf status: \"True\" type: ExtendedPVAnalysisFailed" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.11/html/migration_toolkit_for_containers/mtc-release-notes
Chapter 1. Node APIs
Chapter 1. Node APIs 1.1. Node [v1] Description Node is a worker node in Kubernetes. Each node will have a unique identifier in the cache (i.e. in etcd). Type object 1.2. PerformanceProfile [performance.openshift.io/v2] Description PerformanceProfile is the Schema for the performanceprofiles API Type object 1.3. Profile [tuned.openshift.io/v1] Description Profile is a specification for a Profile resource. Type object 1.4. RuntimeClass [node.k8s.io/v1] Description RuntimeClass defines a class of container runtime supported in the cluster. The RuntimeClass is used to determine which container runtime is used to run all containers in a pod. RuntimeClasses are manually defined by a user or cluster provisioner, and referenced in the PodSpec. The Kubelet is responsible for resolving the RuntimeClassName reference before running the pod. For more details, see https://kubernetes.io/docs/concepts/containers/runtime-class/ Type object 1.5. Tuned [tuned.openshift.io/v1] Description Tuned is a collection of rules that allows cluster-wide deployment of node-level sysctls and more flexibility to add custom tuning specified by user needs. These rules are translated and passed to all containerized Tuned daemons running in the cluster in the format that the daemons understand. The responsibility for applying the node-level tuning then lies with the containerized Tuned daemons. More info: https://github.com/openshift/cluster-node-tuning-operator Type object
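These resources can be listed with the oc client. A minimal sketch; the namespace used for the Tuned and Profile resources is an assumption based on the default Node Tuning Operator deployment:

# Cluster-scoped resources
oc get nodes
oc get runtimeclasses
oc get performanceprofiles
# Tuned and Profile objects managed by the Node Tuning Operator
oc get tuned -n openshift-cluster-node-tuning-operator
oc get profiles.tuned.openshift.io -n openshift-cluster-node-tuning-operator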
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.13/html/node_apis/node-apis
Chapter 7. Configuring your JBoss EAP server and application
Chapter 7. Configuring your JBoss EAP server and application The JBoss EAP for OpenShift image is preconfigured for basic use with your Java applications. However, you can configure the JBoss EAP instance inside the image. The recommended method is to use the OpenShift S2I process and set environment variables in Helm charts to tune the JVM. Important Any configuration changes made on a running container will be lost when the container is restarted or terminated. This includes any configuration changes made using scripts that are included with a traditional JBoss EAP installation, for example add-user.sh or the management CLI. It is strongly recommended that you use the OpenShift S2I process, together with environment variables, to make any configuration changes to the JBoss EAP instance inside the JBoss EAP for OpenShift image. 7.1. JVM default memory settings You can use the following environment variables to modify the JVM settings calculated automatically. Note that these variables are only used when the default memory size is calculated automatically, that is, when a valid container memory limit is defined. Environment variables Description JAVA_INITIAL_MEM_RATIO This environment variable is now deprecated. Corresponds to the JVM argument -XX:InitialRAMPercentage . This is not specified by default and will be removed in a future release. You need to specify -XX:InitialRAMPercentage directly in JAVA_OPTS instead. Note You no longer need to set JAVA_INITIAL_MEM_RATIO=0 to disable automatic computation, because no default value is provided for this environment variable. JAVA_MAX_MEM_RATIO Environment variable to configure the -XX:MaxRAMPercentage JVM option. Set the maximum heap size as a percentage of the total memory available for the Java VM. The default value is 80%. Setting JAVA_MAX_MEM_RATIO=0 disables this default value. JAVA_OPTS Environment variable to provide additional options to the JVM, for example, JAVA_OPTS=-Xms512m -Xmx1024m Note If you set a value for -Xms , the -XX:InitialRAMPercentage option is ignored. If you set a value for -Xmx , the -XX:MaxRAMPercentage option is ignored. JAVA_MAX_INITIAL_MEM This environment variable is now deprecated. Use JAVA_OPTS to provide the -Xms option, for example, JAVA_OPTS=-Xms256m 7.2. JVM garbage collection settings The EAP image for OpenShift includes settings for both garbage collection and garbage collection logging. Garbage Collection Settings -XX:+UseParallelGC -XX:MinHeapFreeRatio=10 -XX:MaxHeapFreeRatio=20 -XX:GCTimeRatio=4 -XX:AdaptiveSizePolicyWeight=90 -XX:+ExitOnOutOfMemoryError Garbage Collection Logging Settings -Xlog:gc*:file=/opt/server/standalone/log/gc.log:time,uptimemillis:filecount=5,filesize=3M 7.3. JVM environment variables Use these environment variables to configure the JVM in the EAP for OpenShift image. Table 7.1. JVM Environment Variables Variable Name Example Default Value JVM Settings Description JAVA_OPTS -verbose:class No default Multiple JVM options to pass to the java command. Use JAVA_OPTS_APPEND to configure additional JVM settings. If you use JAVA_OPTS , some unconfigurable defaults are not added to the server JVM settings. You must explicitly add these settings. Using JAVA_OPTS disables certain settings added by default by the container scripts. Disabled settings include: -XX:MetaspaceSize=96M -Djava.net.preferIPv4Stack=true -Djboss.modules.system.pkgs=jdk.nashorn.api,com.sun.crypto.provider -Djava.awt.headless=true Add these defaults if you use JAVA_OPTS to configure additional settings.
JAVA_OPTS_APPEND -Dsome.property=value No default Multiple User-specified Java options to append to generated options in JAVA_OPTS . JAVA_MAX_MEM_RATIO 50 80 -Xmx Use this variable when the -Xmx option is not specified in JAVA_OPTS . The value of this variable is used to calculate the default maximum heap memory size based on the restrictions of the container. If this variable is used in a container without a memory constraint, the variable has no effect. If this variable is used in a container that does have a memory constraint, the value of -Xmx is set to the specified ratio of the container's available memory. The default value, 50 means that 50% of the available memory is used as an upper boundary. To skip calculation of maximum memory, set the value of this variable to 0 . No -Xmx option will be added to JAVA_OPTS . JAVA_INITIAL_MEM_RATIO 25 25 -Xms Use this variable when the -Xms option is not specified in JAVA_OPTS . The value of this variable is used to calculate the default initial heap memory size based on the maximum heap memory. If this variable is used in a container without a memory constraint, the variable has no effect. If this variable is used in a container that does have a memory constraint, the value of -Xms is set to the specified ratio of the -Xmx memory. The default value, 25 means that 25% of the maximum memory is used as the initial heap size. To skip calculation of initial memory, set the value of this variable to 0 . No -Xms option will be added to JAVA_OPTS . JAVA_MAX_INITIAL_MEM 4096 4096 -Xms The JAVA_MAX_INITIAL_MEM environment variable is now deprecated; use JAVA_OPTS to provide the -Xms option. For example, JAVA_OPTS=-Xms256m JAVA_DIAGNOSTICS true false (disabled) -Xlog:gc:utctime -XX:NativeMemoryTracking=summary Set the value of this variable to true to include diagnostic information in standard output when events occur. If this variable is defined as true in an environment where JAVA_DIAGNOSTICS has already been defined as true , diagnostics are still included. DEBUG true false -agentlib:jdwp=transport=dt_socket,address=USDDEBUG_PORT,server=y,suspend=n Enables remote debugging. DEBUG_PORT 8787 8787 -agentlib:jdwp=transport=dt_socket,address=USDDEBUG_PORT,server=y,suspend=n Specifies the port used for debugging. GC_MIN_HEAP_FREE_RATIO 20 10 -XX:MinHeapFreeRatio Minimum percentage of heap free after garbage collection to avoid expansion. GC_MAX_HEAP_FREE_RATIO 40 20 -XX:MaxHeapFreeRatio Maximum percentage of heap free after garbage collection to avoid shrinking. GC_TIME_RATIO 4 4 -XX:GCTimeRatio Specifies the ratio of the time spent outside of garbage collection (for example, time spent in application execution) to the time spent in garbage collection. GC_ADAPTIVE_SIZE_POLICY_WEIGHT 90 90 -XX:AdaptiveSizePolicyWeight The weighting given to the current garbage collection time versus previous garbage collection times. GC_METASPACE_SIZE 20 96 -XX:MetaspaceSize The initial metaspace size. GC_MAX_METASPACE_SIZE 100 No default -XX:MaxMetaspaceSize The maximum metaspace size. GC_CONTAINER_OPTIONS -XX:+UseG1GC -XX:-UseParallelGC -XX:-UseParallelGC Specifies the Java garbage collection to use. The value of the variable is specified by using the Java Runtime Environment (JRE) command-line options. The specified JRE command overrides the default. The following environment variables are deprecated: JAVA_OPTIONS : Use JAVA_OPTS . INITIAL_HEAP_PERCENT : Use JAVA_INITIAL_MEM_RATIO . CONTAINER_HEAP_PERCENT : Use JAVA_MAX_MEM_RATIO . 7.4.
Default datasource The datasource ExampleDS is not available in JBoss EAP 8.0. Some quickstarts require this datasource: cmt thread-racing Applications developed by customers might also require the ExampleDS datasource. If you need the default datasource, use the ENABLE_GENERATE_DEFAULT_DATASOURCE environment variable to include it when provisioning a JBoss EAP server. Note This environment variable works only when the cloud-default-config galleon layer is used.
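Because configuration changes should be made through the S2I process and environment variables, one way to set this variable is on the build configuration that produces the application image. A minimal sketch, assuming a hypothetical BuildConfig named my-eap-app:

# Add the variable to the S2I build, then rebuild the application image
oc set env bc/my-eap-app ENABLE_GENERATE_DEFAULT_DATASOURCE=true
oc start-build my-eap-app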
[ "ENABLE_GENERATE_DEFAULT_DATASOURCE=true" ]
https://docs.redhat.com/en/documentation/red_hat_jboss_enterprise_application_platform/8.0/html/using_jboss_eap_on_openshift_container_platform/assembly_configuring-the-jvm-to-run-your-eap-application_default
5.5. Displaying and Modifying the Attribute List
5.5. Displaying and Modifying the Attribute List By default, the Referential Integrity plug-in is set up to check for and update the member , uniquemember , owner , and seeAlso attributes. You can add or delete attributes to be updated using the command line or the web console. Note Attributes set in the Referential Integrity plug-in's parameter list must have equality indexing on all databases. Otherwise, the plug-in scans every entry of the database for a match to the deleted or modified DN. This can have a significant performance impact. For details about checking and creating indexes, see Section 13.2, "Creating Standard Indexes" . 5.5.1. Displaying the Attribute List Using the Command Line To display the attribute list using the command line: 5.5.2. Displaying the Attribute List Using the Web Console To display the attribute list using the web console: Open the Directory Server user interface in the web console. See Section 1.4, "Logging Into Directory Server Using the Web Console" . Select the instance. Open the Plugins menu. Select the Referential Integrity plug-in. See the Membership Attribute field for the list of attributes. 5.5.3. Configuring the Attribute List Using the Command Line To update the attribute list using the command line: Optionally, display the current list of attributes. See Section 5.5.1, "Displaying the Attribute List Using the Command Line" . Update the attribute list: To set an attribute list that should be checked and updated by the plug-in: To delete all attributes that should no longer be checked and updated by the plug-in: Restart the instance: 5.5.4. Configuring the Attribute List Using the Web Console To update the attribute list using the web console: Open the Directory Server user interface in the web console. See Section 1.4, "Logging Into Directory Server Using the Web Console" . Select the instance. Open the Plugins menu. Select the Referential Integrity plug-in. Update the Membership Attribute field to set the attributes. To add an attribute, enter the name into the Membership Attribute field. To remove an attribute, press the X button next to the attribute's name in the Membership Attribute field. Press Save Config .
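For example, to extend the default list with an additional membership attribute, pass the full attribute list to the set command. This sketch assumes a hypothetical custom attribute named manager ; remember that every attribute in the list must have an equality index on all databases:

# Replace the attribute list with the defaults plus the custom attribute
dsconf -D "cn=Directory Manager" ldap://server.example.com plugin referential-integrity set --membership-attr member uniquemember owner seeAlso manager
# Restart the instance for the change to take effect
dsctl instance_name restart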
[ "dsconf -D \"cn=Directory Manager\" ldap://server.example.com plugin referential-integrity show", "dsconf -D \"cn=Directory Manager\" ldap://server.example.com plugin referential-integrity set --membership-attr attribute_name_1 attribute_name_2", "dsconf -D \"cn=Directory Manager\" ldap://server.example.com plugin referential-integrity set --membership-attr delete", "dsctl instance_name restart" ]
https://docs.redhat.com/en/documentation/red_hat_directory_server/11/html/administration_guide/displaying_and_modifying_the_attribute_list
21.11. virt-sysprep: Resetting Virtual Machine Settings
21.11. virt-sysprep: Resetting Virtual Machine Settings The virt-sysprep command-line tool can be used to reset or unconfigure a guest virtual machine so that clones can be made from it. This process involves removing SSH host keys, removing persistent network MAC configuration, and removing user accounts. Virt-sysprep can also customize a virtual machine, for instance by adding SSH keys, users, or logos. Each step can be enabled or disabled as required. To use virt-sysprep , the guest virtual machine must be offline, so you must shut it down before running the commands. Note that virt-sysprep modifies the guest or disk image in place without making a copy of it. If you want to preserve the existing contents of the guest virtual machine, you must snapshot, copy or clone the disk first. For more information on copying and cloning disks, see libguestfs.org . It is recommended not to use virt-sysprep as root, unless you need root to access the disk image. In such a case, however, it is better to change the permissions on the disk image to be writable by the non-root user running virt-sysprep . To install virt-sysprep , enter the following command: # yum install /usr/bin/virt-sysprep The following command options are available to use with virt-sysprep : Table 21.1. virt-sysprep commands Command Description Example --help Displays a brief help entry about a particular command or about the virt-sysprep command. For additional help, see the virt-sysprep man page. virt-sysprep --help -a [ file ] or --add [ file ] Adds the specified file , which should be a disk image from a guest virtual machine. The format of the disk image is auto-detected. To override this and force a particular format, use the --format option. virt-sysprep --add /dev/vms/disk.img -a [ URI ] or --add [ URI ] Adds a remote disk. The URI format is compatible with guestfish. For more information, see Section 21.4.2, "Adding Files with guestfish" . virt-sysprep -a rbd://example.com[:port]/pool/disk -c [ URI ] or --connect [ URI ] Connects to the given URI, if using libvirt . If omitted, then it connects via the KVM hypervisor. If you specify guest block devices directly ( virt-sysprep -a ), then libvirt is not used at all. virt-sysprep -c qemu:///system -d [ guest ] or --domain [ guest ] Adds all the disks from the specified guest virtual machine. Domain UUIDs can be used instead of domain names. virt-sysprep --domain 90df2f3f-8857-5ba9-2714-7d95907b1c9e -n or --dry-run Performs a read-only "dry run" sysprep operation on the guest virtual machine. This runs the sysprep operation, but throws away any changes to the disk at the end. virt-sysprep -n --enable [ operations ] Enables the specified operations . To list the possible operations, use the --list command. virt-sysprep --enable ssh-hostkeys,udev-persistent-net --operation or --operations Chooses which sysprep operations to perform. To disable an operation, use the - before the operation name. virt-sysprep --operations ssh-hostkeys,udev-persistent-net would enable both operations, while virt-sysprep --operations firewall-rules,-tmp-files would enable the firewall-rules operation and disable the tmp-files operation. For a list of valid operations, see libguestfs.org . --format [ raw | qcow2 | auto ] The default for the -a option is to auto-detect the format of the disk image. Using this forces the disk format for -a options that follow on the command line. Using --format auto switches back to auto-detection for subsequent -a options (see the -a command above).
virt-sysprep --format raw -a disk.img forces raw format (no auto-detection) for disk.img, but virt-sysprep --format raw -a disk.img --format auto -a another.img forces raw format (no auto-detection) for disk.img and reverts to auto-detection for another.img . If you have untrusted raw-format guest disk images, you should use this option to specify the disk format. This avoids a possible security problem with malicious guests. --list-operations Lists the operations supported by the virt-sysprep program. These are listed one per line, with one or more single-space-separated fields. The first field in the output is the operation name, which can be supplied to the --enable flag. The second field is a * character if the operation is enabled by default, or is blank if not. Additional fields on the same line include a description of the operation. virt-sysprep --list-operations --mount-options Sets the mount options for each mount point in the guest virtual machine. Use a semicolon-separated list of mountpoint:options pairs. You may need to place quotes around this list to protect it from the shell. virt-sysprep --mount-options "/:noatime" will mount the root directory with the noatime option. -q or --quiet Prevents the printing of log messages. virt-sysprep -q -v or --verbose Enables verbose messages for debugging purposes. virt-sysprep -v -V or --version Displays the virt-sysprep version number and exits. virt-sysprep -V --root-password Sets the root password. Can either be used to specify the new password explicitly, or to use the string from the first line of a selected file, which is more secure. virt-sysprep --root-password password: 123456 -a guest.img or virt-sysprep --root-password file: SOURCE_FILE_PATH -a guest.img For more information, see the libguestfs documentation .
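Putting the options together, a typical invocation shuts down the guest and then runs selected operations against its libvirt domain. A minimal sketch, assuming a hypothetical guest named guest1:

# The guest must be offline before running virt-sysprep
virsh shutdown guest1
# Preview the operations without changing the disk (dry run)
virt-sysprep -d guest1 --enable ssh-hostkeys,udev-persistent-net -n
# Apply the operations
virt-sysprep -d guest1 --enable ssh-hostkeys,udev-persistent-net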
null
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/virtualization_deployment_and_administration_guide/sect-guest_virtual_machine_disk_access_with_offline_tools-using_virt_sysprep
Part VII. Troubleshoot
Part VII. Troubleshoot
null
https://docs.redhat.com/en/documentation/red_hat_gluster_storage/3.5/html/administration_guide/part-troubleshoot-reorg
8.34. efibootmgr
8.34. efibootmgr 8.34.1. RHBA-2013:1687 - efibootmgr bug fix update Updated efibootmgr packages that fix one bug are now available for Red Hat Enterprise Linux 6. The efibootmgr utility is responsible for the boot loader installation on Unified Extensible Firmware Interface (UEFI) systems. Bug Fix BZ# 924892 Previously, when an invalid value was passed to the "efibootmgr -o" command, the command did not recognize the problem and passed the incorrect value to other functions. This could have led to several complications such as commands becoming unresponsive. With this update, efibootmgr has been modified to test for invalid input. As a result, an error message is displayed in the aforementioned scenario. Users of efibootmgr are advised to upgrade to these updated packages, which fix this bug.
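For reference, the -o option sets the UEFI BootOrder variable to a comma-separated list of boot entry numbers. A minimal sketch; the entry numbers are examples, so run efibootmgr with no arguments first to list the entries on your system:

# List the current boot entries and boot order
efibootmgr
# Set the boot order (entry numbers are examples)
efibootmgr -o 0000,0001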
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.5_technical_notes/efibootmgr
Chapter 2. Installing RHEL 9 for SAP Solutions
Chapter 2. Installing RHEL 9 for SAP Solutions Before installing RHEL 9 for SAP Solutions, verify that the system fulfills the requirements of the SAP software, for example, regarding the RAM size, the swap space, and the storage. For RHEL 9 systems running the SAP HANA database, you must use a RHEL 9 minor release for which the E4S repos are available and which is supported by SAP. For RHEL 9 systems running the SAP ABAP Platform, you can use any RHEL 9 minor release. You can install RHEL 9 in interactive mode or you can perform an unattended installation using Kickstart. This document explains how to perform an interactive installation. For further guidance on how to install RHEL 9, see the product documentation for Red Hat Enterprise Linux 9. Prerequisites You have downloaded the installation image for the desired and supported RHEL 9 minor release from the Red Hat Customer Portal ( Red Hat Enterprise Linux for x86_64 and Red Hat Enterprise Linux for Power ). You have verified that the desired hostname meets the requirements for SAP HANA database system or for SAP ABAP Platform systems . Your server meets the hardware requirements or Infrastructure as a Service (IaaS) configurations. For bare metal deployment, verify that your server type is mentioned in the SAP Certified and Supported SAP HANA Hardware Directory and that it matches the minimum hardware requirements in the SAP HANA Server Installation and Update Guide . For certified IaaS Platforms, see the Certified IaaS Platforms on the SAP Certified and Supported SAP HANA Hardware Directory . Procedure Boot your server from the RHEL 9 installation source. For more information on how to boot your server, see Performing a standard RHEL 9 installation . The following screen appears: Select the language to be used during the installation process and click Continue . The following screen appears: Under LOCALIZATION , select the desired keyboard layout, language(s) of the installed system, and time and date. Under SOFTWARE , click Software Selection . In the Software Selection window, select Server as your Base Environment and click Done . Note Do not select any additional software. Under SYSTEM , click Installation Destination . In the Installation Destination window, select the storage configuration according to the requirements of the SAP software and according to your needs and click Done . Note For a test system, you can remove the default /home file system allocation and use a large root ( / ) file system. Under SYSTEM , click Network & Host Name , and configure your network connection. Under USER SETTINGS , click Root Password and/or User Creation to configure the initial user(s) for your system. In the screens which show up, click Done once you have entered the necessary user information to return to the main installation screen again. Click Begin Installation . The following screen confirms that the installation is ongoing: Once RHEL 9 is successfully installed, the screen looks like this: Click Reboot System . Additional resources SAP note 3108316 - Red Hat Enterprise Linux 9.x: Installation and Configuration SAP note 3108302 - SAP HANA DB: Recommended OS Settings for RHEL 9 SAP HANA Server Installation and Update Guide
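Before you begin the interactive installation, you can check and set the hostname from the installation environment or a running system. A minimal sketch; the hostname saphana1 is an example, and the exact naming rules are defined in the SAP documentation linked above:

# Check the current hostname
hostnamectl status
# Set a hostname that meets the SAP requirements
hostnamectl set-hostname saphana1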
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux_for_sap_solutions/9/html/installing_rhel_9_for_sap_solutions/proc_installing-rhel-9_configuring-rhel-9-for-sap-hana2-installation
function::symdata
function::symdata Name function::symdata - Return the kernel symbol and module offset for the address Synopsis Arguments addr The address to translate Description Returns the (function) symbol name associated with the given address if known, the offset from the start and the size of the symbol, plus the module name (between brackets). If the symbol is unknown but the module is known, the offset inside the module plus the size of the module is added. If any element is not known, it is omitted, and if the symbol name is unknown, the hex string for the given address is returned.
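As a usage sketch, symdata can be called from any DWARF-based kernel probe, where addr() supplies the address of the current probe point; the probed function name below is only an example:

# Print symbol, offset/size, and module for the current probe address
stap -e 'probe kernel.function("vfs_read") { printf("%s\n", symdata(addr())); exit() }'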
[ "symdata:string(addr:long)" ]
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/systemtap_tapset_reference/api-symdata
Chapter 9. Advanced Configuration
Chapter 9. Advanced Configuration This chapter describes advanced resource types and advanced configuration features that Pacemaker supports. 9.1. Resource Clones You can clone a resource so that the resource can be active on multiple nodes. For example, you can use cloned resources to configure multiple instances of an IP resource to distribute throughout a cluster for node balancing. You can clone any resource provided the resource agent supports it. A clone consists of one resource or one resource group. Note Only resources that can be active on multiple nodes at the same time are suitable for cloning. For example, a Filesystem resource mounting a non-clustered file system such as ext4 from a shared storage device should not be cloned. Since the ext4 partition is not cluster aware, this file system is not suitable for read/write operations occurring from multiple nodes at the same time. 9.1.1. Creating and Removing a Cloned Resource You can create a resource and a clone of that resource at the same time with the following command. The name of the clone will be resource_id -clone . You cannot create a resource group and a clone of that resource group in a single command. Alternatively, you can create a clone of a previously-created resource or resource group with the following command. The name of the clone will be resource_id -clone or group_name -clone . Note You need to make resource configuration changes on one node only. Note When configuring constraints, always use the name of the group or clone. When you create a clone of a resource, the clone takes on the name of the resource with -clone appended to the name. The following command creates a resource of type apache named webfarm and a clone of that resource named webfarm-clone . Note When you create a resource or resource group clone that will be ordered after another clone, you should almost always set the interleave=true option. This ensures that copies of the dependent clone can stop or start when the clone it depends on has stopped or started on the same node. If you do not set this option, if a cloned resource B depends on a cloned resource A and a node leaves the cluster, when the node returns to the cluster and resource A starts on that node, then all of the copies of resource B on all of the nodes will restart. This is because when a dependent cloned resource does not have the interleave option set, all instances of that resource depend on any running instance of the resource it depends on. Use the following command to remove a clone of a resource or a resource group. This does not remove the resource or resource group itself. For information on resource options, see Section 6.1, "Resource Creation" . Table 9.1, "Resource Clone Options" describes the options you can specify for a cloned resource. Table 9.1. Resource Clone Options Field Description priority, target-role, is-managed Options inherited from the resource that is being cloned, as described in Table 6.3, "Resource Meta Options" . clone-max How many copies of the resource to start. Defaults to the number of nodes in the cluster. clone-node-max How many copies of the resource can be started on a single node; the default value is 1 . notify When stopping or starting a copy of the clone, tell all the other copies beforehand and when the action was successful. Allowed values: false , true . The default value is false . globally-unique Does each copy of the clone perform a different function?
Allowed values: false , true If the value of this option is false , these resources behave identically everywhere they are running and thus there can be only one copy of the clone active per machine. If the value of this option is true , a copy of the clone running on one machine is not equivalent to another instance, whether that instance is running on another node or on the same node. The default value is true if the value of clone-node-max is greater than one; otherwise the default value is false . ordered Should the copies be started in series (instead of in parallel). Allowed values: false , true . The default value is false . interleave Changes the behavior of ordering constraints (between clones/masters) so that copies of the first clone can start or stop as soon as the copy on the same node of the second clone has started or stopped (rather than waiting until every instance of the second clone has started or stopped). Allowed values: false , true . The default value is false . clone-min If a value is specified, any clones which are ordered after this clone will not be able to start until the specified number of instances of the original clone are running, even if the interleave option is set to true . 9.1.2. Clone Constraints In most cases, a clone will have a single copy on each active cluster node. You can, however, set clone-max for the resource clone to a value that is less than the total number of nodes in the cluster. If this is the case, you can indicate which nodes the cluster should preferentially assign copies to with resource location constraints. These constraints are written no differently from those for regular resources except that the clone's id must be used. The following command creates a location constraint for the cluster to preferentially assign resource clone webfarm-clone to node1 . Ordering constraints behave slightly differently for clones. In the example below, because the interleave clone option is left at its default of false , no instance of webfarm-stats will start until all instances of webfarm-clone that need to be started have done so. If no copies of webfarm-clone can be started, then webfarm-stats will be prevented from being active. Additionally, webfarm-clone will wait for webfarm-stats to be stopped before stopping itself. Colocation of a regular (or group) resource with a clone means that the resource can run on any machine with an active copy of the clone. The cluster will choose a copy based on where the clone is running and the resource's own location preferences. Colocation between clones is also possible. In such cases, the set of allowed locations for the clone is limited to nodes on which the clone is (or will be) active. Allocation is then performed as normal. The following command creates a colocation constraint to ensure that the resource webfarm-stats runs on the same node as an active copy of webfarm-clone . 9.1.3. Clone Stickiness To achieve a stable allocation pattern, clones are slightly sticky by default. If no value for resource-stickiness is provided, the clone will use a value of 1. Being a small value, it causes minimal disturbance to the score calculations of other resources but is enough to prevent Pacemaker from needlessly moving copies around the cluster.
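Combining the commands in this section, the clone options from Table 9.1 can be supplied as meta options when the clone is created, and clone stickiness can be adjusted afterward. A minimal sketch using the webfarm example; the option values are illustrative:

# Create the resource and its clone with explicit clone options
pcs resource create webfarm apache clone meta clone-max=3 clone-node-max=1 interleave=true
# Raise the clone's stickiness above the default of 1
pcs resource meta webfarm-clone resource-stickiness=5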
[ "pcs resource create resource_id standard:provider:type | type [ resource options ] clone [meta clone_options ]", "pcs resource clone resource_id | group_name [ clone_options ]", "pcs resource create webfarm apache clone", "pcs resource unclone resource_id | group_name", "pcs constraint location webfarm-clone prefers node1", "pcs constraint order start webfarm-clone then webfarm-stats", "pcs constraint colocation add webfarm-stats with webfarm-clone" ]
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/high_availability_add-on_reference/ch-advancedresource-haar
Chapter 3. Deploy standalone Multicloud Object Gateway
Chapter 3. Deploy standalone Multicloud Object Gateway Deploying only the Multicloud Object Gateway component with OpenShift Data Foundation provides flexibility in deployment and helps to reduce resource consumption. Use this section to deploy only the standalone Multicloud Object Gateway component, which involves the following steps: Installing the Local Storage Operator. Installing the Red Hat OpenShift Data Foundation Operator. Creating the standalone Multicloud Object Gateway. 3.1. Installing Local Storage Operator Install the Local Storage Operator from the Operator Hub before creating Red Hat OpenShift Data Foundation clusters on local storage devices. Procedure Log in to the OpenShift Web Console. Click Operators OperatorHub . Type local storage in the Filter by keyword box to find the Local Storage Operator from the list of operators and click on it. Set the following options on the Install Operator page: Update channel as either 4.9 or stable . Installation mode as A specific namespace on the cluster . Installed Namespace as Operator recommended namespace openshift-local-storage . Update approval as Automatic . Click Install . Verification steps Verify that the Local Storage Operator shows a green tick indicating successful installation. 3.2. Installing Red Hat OpenShift Data Foundation Operator You can install Red Hat OpenShift Data Foundation Operator using the Red Hat OpenShift Container Platform Operator Hub. Prerequisites Access to an OpenShift Container Platform cluster using an account with cluster-admin and Operator installation permissions. You must have at least three worker nodes in the Red Hat OpenShift Container Platform cluster. For additional resource requirements, see the Planning your deployment guide. Important When you need to override the cluster-wide default node selector for OpenShift Data Foundation, you can use the following command in the command line interface to specify a blank node selector for the openshift-storage namespace (create the openshift-storage namespace in this case): Taint a node as infra to ensure only Red Hat OpenShift Data Foundation resources are scheduled on that node. This helps you save on subscription costs. For more information, see the How to use dedicated worker nodes for Red Hat OpenShift Data Foundation chapter in the Managing and Allocating Storage Resources guide. Procedure Log in to the OpenShift Web Console. Click Operators OperatorHub . Scroll or type OpenShift Data Foundation into the Filter by keyword box to find the OpenShift Data Foundation Operator. Click Install . Set the following options on the Install Operator page: Update Channel as stable-4.9 . Installation Mode as A specific namespace on the cluster . Installed Namespace as Operator recommended namespace openshift-storage . If Namespace openshift-storage does not exist, it is created during the operator installation. Select Approval Strategy as Automatic or Manual . If you select Automatic updates, then the Operator Lifecycle Manager (OLM) automatically upgrades the running instance of your Operator without any intervention. If you select Manual updates, then the OLM creates an update request. As a cluster administrator, you must then manually approve that update request to update the Operator to a newer version. Ensure that the Enable option is selected for the Console plugin . Click Install . Note We recommend using all default settings. Changing them may result in unexpected behavior. Alter them only if you are aware of the consequences. 
Verification steps Verify that the OpenShift Data Foundation Operator shows a green tick indicating successful installation. After the operator is successfully installed, a pop-up with a message, Web console update is available appears on the user interface. Click Refresh web console from this pop-up for the console changes to be reflected. In the Web Console, navigate to Operators and verify that OpenShift Data Foundation is available. Important In case the console plugin option was not automatically enabled after you installed the OpenShift Data Foundation Operator, you need to enable it. For more information on how to enable the console plugin, see Enabling the Red Hat OpenShift Data Foundation console plugin . 3.3. Creating standalone Multicloud Object Gateway Use this section to create only the Multicloud Object Gateway component with OpenShift Data Foundation. Prerequisites Ensure that the OpenShift Data Foundation Operator is installed. (For deploying using local storage devices only) Ensure that the Local Storage Operator is installed. Ensure that you have a storage class and that it is set as the default. Procedure In the OpenShift Web Console, click Operators Installed Operators to view all the installed operators. Ensure that the Project selected is openshift-storage . Click OpenShift Data Foundation operator and then click Create StorageSystem . In the Backing storage page, expand Advanced . Select Multicloud Object Gateway for Deployment type . Click Next . Optional: In the Security page, select Connect to an external key management service . Key Management Service Provider is set to Vault by default. Enter Vault Service Name , host Address of Vault server ('https:// <hostname or ip> '), Port number , and Token . Expand Advanced Settings to enter additional settings and certificate details based on your Vault configuration: Enter the Key Value secret path in the Backend Path that is dedicated and unique to OpenShift Data Foundation. Optional: Enter TLS Server Name and Vault Enterprise Namespace . Upload the respective PEM encoded certificate file to provide the CA Certificate , Client Certificate , and Client Private Key . Click Save . Click Next . In the Review and create page, review the configuration details: To modify any configuration settings, click Back . Click Create StorageSystem . Verification steps Verifying that the OpenShift Data Foundation cluster is healthy In the OpenShift Web Console, click Storage OpenShift Data Foundation . In the Status card of the Overview tab, click Storage System and then click the storage system link from the pop up that appears. In the Status card of the Object tab, verify that both Object Service and Data Resiliency have a green tick. In the Details card, verify that the MCG information is displayed. Verify the state of the pods Click Workloads Pods from the OpenShift Web Console. Select openshift-storage from the Project drop-down list and verify that the following pods are in Running state. Note If the Show default projects option is disabled, use the toggle button to list all the default projects. 
Component Corresponding pods OpenShift Data Foundation Operator ocs-operator-* (1 pod on any worker node) ocs-metrics-exporter-* (1 pod on any worker node) odf-operator-controller-manager-* (1 pod on any worker node) odf-console-* (1 pod on any worker node) Rook-ceph Operator rook-ceph-operator-* (1 pod on any worker node) Multicloud Object Gateway noobaa-operator-* (1 pod on any worker node) noobaa-core-* (1 pod on any worker node) noobaa-db-pg-* (1 pod on any worker node) noobaa-endpoint-* (1 pod on any worker node)
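As a command-line alternative to the console check above (a brief sketch assuming you are logged in with the oc client and that the operators run in the default openshift-storage namespace), you can list the pods directly:

oc get pods -n openshift-storage

Each of the pods named in the table should be listed with a STATUS of Running.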
[ "oc annotate namespace openshift-storage openshift.io/node-selector=" ]
https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.9/html/deploying_openshift_data_foundation_using_bare_metal_infrastructure/deploy-standalone-multicloud-object-gateway
Deploying OpenShift Data Foundation on VMware vSphere
Deploying OpenShift Data Foundation on VMware vSphere Red Hat OpenShift Data Foundation 4.9 Instructions on deploying OpenShift Data Foundation using VMware vSphere infrastructure Red Hat Storage Documentation Team Abstract Read this document for instructions about how to install Red Hat OpenShift Data Foundation using Red Hat OpenShift Container Platform on VMware vSphere clusters.
null
https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.9/html/deploying_openshift_data_foundation_on_vmware_vsphere/index
Chapter 5. CIDR range definitions
Chapter 5. CIDR range definitions If your cluster uses OVN-Kubernetes, you must specify non-overlapping ranges for Classless Inter-Domain Routing (CIDR) subnet ranges. Important For Red Hat OpenShift Service on AWS 4.17 and later versions, clusters use 169.254.0.0/17 for IPv4 and fd69::/112 for IPv6 as the default masquerade subnet. These ranges should also be avoided by users. For upgraded clusters, there is no change to the default masquerade subnet. The following subnet types are mandatory for a cluster that uses OVN-Kubernetes: Join: Uses a join switch to connect gateway routers to distributed routers. A join switch reduces the number of IP addresses for a distributed router. For a cluster that uses the OVN-Kubernetes plugin, an IP address from a dedicated subnet is assigned to any logical port that attaches to the join switch. Masquerade: Prevents collisions for identical source and destination IP addresses that are sent from a node as hairpin traffic to the same node after a load balancer makes a routing decision. Transit: A transit switch is a type of distributed switch that spans across all nodes in the cluster. A transit switch routes traffic between different zones. For a cluster that uses the OVN-Kubernetes plugin, an IP address from a dedicated subnet is assigned to any logical port that attaches to the transit switch. Note You can change the join, masquerade, and transit CIDR ranges for your cluster as a post-installation task. When specifying subnet CIDR ranges, ensure that the subnet CIDR range is within the defined Machine CIDR. You must verify that the subnet CIDR ranges allow for enough IP addresses for all intended workloads, depending on the platform that hosts the cluster. OVN-Kubernetes, the default network provider in Red Hat OpenShift Service on AWS 4.14 and later versions, internally uses the following IP address subnet ranges: V4JoinSubnet : 100.64.0.0/16 V6JoinSubnet : fd98::/64 V4TransitSwitchSubnet : 100.88.0.0/16 V6TransitSwitchSubnet : fd97::/64 defaultV4MasqueradeSubnet : 169.254.0.0/17 defaultV6MasqueradeSubnet : fd69::/112 Important The list includes join, transit, and masquerade IPv4 and IPv6 address subnets. If your cluster uses OVN-Kubernetes, do not include any of these IP address subnet ranges in any other CIDR definitions in your cluster or infrastructure. 5.1. Machine CIDR In the Machine classless inter-domain routing (CIDR) field, you must specify the IP address range for machines or cluster nodes. Note Machine CIDR ranges cannot be changed after creating your cluster. This range must encompass all CIDR address ranges for your virtual private cloud (VPC) subnets. Subnets must be contiguous. A minimum IP address range of 128 addresses, using the subnet prefix /25 , is supported for single availability zone deployments. A minimum address range of 256 addresses, using the subnet prefix /24 , is supported for deployments that use multiple availability zones. The default is 10.0.0.0/16 . This range must not conflict with any connected networks. Note When using ROSA with HCP, the static IP address 172.20.0.1 is reserved for the internal Kubernetes API address. The machine, pod, and service CIDR ranges must not conflict with this IP address. 5.2. Service CIDR In the Service CIDR field, you must specify the IP address range for services. It is recommended, but not required, that the address block is the same between clusters. Using the same block will not create IP address conflicts. The range must be large enough to accommodate your workload. 
The address block must not overlap with any external service accessed from within the cluster. The default is 172.30.0.0/16 . 5.3. Pod CIDR In the Pod CIDR field, you must specify the IP address range for pods. It is recommended, but not required, that the address block is the same between clusters. Using the same block will not create IP address conflicts. The range must be large enough to accommodate your workload. The address block must not overlap with any external service accessed from within the cluster. The default is 10.128.0.0/14 . 5.4. Host Prefix In the Host Prefix field, you must specify the subnet prefix length assigned to pods scheduled to individual machines. The host prefix determines the pod IP address pool for each machine. For example, if the host prefix is set to /23 , each machine is assigned a /23 subnet from the pod CIDR address range. The default is /23 , allowing 512 cluster nodes, and 512 pods per node (both of which exceed the supported maximum).
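A quick arithmetic check of those defaults (a plain bash sketch; the figures follow directly from the prefix lengths, not from any OpenShift tooling):

# Number of /23 node subnets that fit in the default 10.128.0.0/14 pod CIDR
echo $(( 2 ** (23 - 14) ))   # 512 node subnets

# Raw IP addresses available in each /23 node subnet
echo $(( 2 ** (32 - 23) ))   # 512 addresses per node

Shrinking the host prefix (for example, to /22) doubles the addresses available per node while halving the number of node subnets the pod CIDR can hold.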
null
https://docs.redhat.com/en/documentation/red_hat_openshift_service_on_aws/4/html/networking/cidr-range-definitions
8.180. policycoreutils
8.180. policycoreutils 8.180.1. RHBA-2014:1625 - policycoreutils bug fix update Updated policycoreutils packages that fix one bug are now available for Red Hat Enterprise Linux 6. The policycoreutils packages contain the core utilities that are required for the basic operation of a Security-Enhanced Linux (SELinux) system and its policies. Bug Fix BZ# 1148800 A new "noreload" option has been implemented for semanage commands in Red Hat Enterprise Linux 6.6. However, due to a missing reload initialization in the semanageRecords() function, users could not enable a Boolean directly using the seobject python module coming from the policycoreutils-python utility. This bug has been fixed, and users can now set the Boolean correctly also using the seobject python module. Users of policycoreutils are advised to upgrade to these updated packages, which fix this bug. 8.180.2. RHBA-2014:1569 - policycoreutils bug fix update Updated policycoreutils packages that fix several bugs are now available for Red Hat Enterprise Linux 6. The policycoreutils packages contain the core utilities that are required for the basic operation of a Security-Enhanced Linux (SELinux) system and its policies. Bug Fixes BZ# 885526 An attempt to use the SELinux graphical utility to create a new SELinux policy with a name that contained the underscore character ("_") failed with an error. The underlying source code has been modified to fix this bug and the error is no longer returned in the described scenario. As a result, it is possible to create SELinux policies with names containing "_". BZ# 913175 The "sandbox -M" command failed to start when the home directory was linked with a symbolic link. This bug has been fixed and sandbox now works properly with home directories linked with symbolic links. BZ# 961805 Certain option descriptions were missing from the sandbox(8) and restorecon(8) manual pages. The descriptions have been added to those manual pages. BZ# 1002209 The "semanage fcontext -a -e [source_directory] [target_directory]" command sets the same SELinux file context for the target directory as the source directory has. When the user specified the name of the source directory with the trailing slash character ("/") at the end, the command failed to change the context. This update applies a patch to fix this bug and the command now works as expected. BZ# 1028202 When running the "semanage permissive -a [type]" command with an incorrect domain type, an invalid .te file was generated and stored. Consequently, an attempt to execute the command again with the valid domain type failed because semanage tried to compile the previously generated invalid .te file. This bug has been fixed and semanage now works as expected. BZ# 1032828 The semanage "-N" option was not supported and an error was returned when trying to use the option. This update adds support for the "-N" option. BZ# 1043969 The "fixfiles restore", "fixfiles check", and "fixfiles validate" commands can be executed with or without specifying a directory. Previously, when the aforementioned commands were run with no directory specified, they returned a non-zero value. This behavior is incorrect because no error was encountered. The underlying source code has been modified to fix this bug and the commands no longer return a non-zero value in the described scenario. BZ# 1086456 Due to incorrect handling of parameters in the setfiles code, the setfiles command did not check the legality of all given parameters. 
With this update, the code has been modified and setfiles now correctly checks the legality of the given parameters. BZ# 1086572 When the setfiles utility was executed with a non-existent directory specified, the command was supposed to return an error message but it did not. The underlying source code has been modified to fix this bug and the command now properly returns the error message in the described scenario. BZ# 1091139 This update removes the incorrectly working sandbox "-c" option. BZ# 1098062 The setfiles "-d" option shows what specification matches each file. The setfiles "-q" option suppresses a non-error output. Previously, it was possible to specify both options in one setfiles command, even though the options were contrary to each other. With this update, the options have been marked as mutually exclusive. As a result, an attempt to execute them at once fails and an error message is returned. BZ# 1119726 An attempt to run the semanage command with the "-i" argument specified failed with a traceback. The underlying source code has been modified to fix this bug and "semanage -i" now works as expected. Users of policycoreutils are advised to upgrade to these updated packages, which fix these bugs.
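A brief usage sketch of the equivalence rule fixed in BZ# 1002209 (the paths are illustrative, not taken from the erratum); note that the source directory is given without a trailing slash:

semanage fcontext -a -e /var/www /srv/www
restorecon -R -v /srv/www

The restorecon command then applies the inherited file contexts to anything already present under the target directory.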
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.6_technical_notes/policycoreutils
probe::nfs.fop.read
probe::nfs.fop.read Name probe::nfs.fop.read - NFS client read operation Synopsis nfs.fop.read Values devname block device name Description SystemTap uses the vfs.do_sync_read probe to implement this probe; as a result, it will also see operations other than NFS client read operations.
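A minimal usage sketch (assuming a host with SystemTap and this tapset installed; the output format is an illustrative choice):

stap -e 'probe nfs.fop.read { printf("nfs read on %s\n", devname) }'

Because of the vfs.do_sync_read implementation noted above, expect some non-NFS reads to appear in the output as well.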
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/systemtap_tapset_reference/api-nfs-fop-read
Chapter 9. Policy enforcers
Chapter 9. Policy enforcers Policy Enforcement Point (PEP) is a design pattern and as such you can implement it in different ways. Red Hat build of Keycloak provides all the necessary means to implement PEPs for different platforms, environments, and programming languages. Red Hat build of Keycloak Authorization Services presents a RESTful API, and leverages OAuth2 authorization capabilities for fine-grained authorization using a centralized authorization server. A PEP is responsible for enforcing access decisions from the Red Hat build of Keycloak server, where these decisions are taken by evaluating the policies associated with a protected resource. It acts as a filter or interceptor in your application in order to check whether or not a particular request to a protected resource can be fulfilled based on the permissions granted by these decisions. Red Hat build of Keycloak provides built-in support for enabling the Red Hat build of Keycloak Policy Enforcer in Java applications, including support for securing JakartaEE-compliant frameworks and web containers. If you are using Maven, add the following dependency to your project: <dependency> <groupId>org.keycloak</groupId> <artifactId>keycloak-policy-enforcer</artifactId> <version>USD{keycloak.version}</version> </dependency> When you enable the policy enforcer, all requests sent to your application are intercepted and access to protected resources will be granted depending on the permissions granted by Red Hat build of Keycloak to the identity making the request. Policy enforcement is strongly linked to your application's paths and the resources you created for a resource server using the Red Hat build of Keycloak Administration Console. By default, when you create a resource server, Red Hat build of Keycloak creates a default configuration for your resource server so you can enable policy enforcement quickly. 9.1. Configuration The policy enforcer configuration uses a JSON format and most of the time you don't need to set anything if you want to automatically resolve the protected paths based on the resources available from your resource server. If you want to manually define the resources being protected, you can use a slightly more verbose format: { "enforcement-mode" : "ENFORCING", "paths": [ { "path" : "/users/*", "methods" : [ { "method": "GET", "scopes" : ["urn:app.com:scopes:view"] }, { "method": "POST", "scopes" : ["urn:app.com:scopes:create"] } ] } ] } The following is a description of each configuration option: enforcement-mode Specifies how policies are enforced. ENFORCING (default mode) Requests are denied by default even when no policy is associated with a given resource. PERMISSIVE Requests are allowed even when no policy is associated with a given resource. DISABLED Completely disables the evaluation of policies and allows access to any resource. When enforcement-mode is DISABLED , applications are still able to obtain all permissions granted by Red Hat build of Keycloak through the Authorization Context on-deny-redirect-to Defines a URL where a client request is redirected when an "access denied" message is obtained from the server. By default, the adapter responds with a 403 HTTP status code. path-cache Defines how the policy enforcer should track associations between paths in your application and resources defined in Red Hat build of Keycloak. The cache is needed to avoid unnecessary requests to a Red Hat build of Keycloak server by caching associations between paths and protected resources. 
lifespan Defines the time in milliseconds when the entry should expire. If not provided, the default value is 30000 . A value equal to 0 can be set to completely disable the cache. A value equal to -1 can be set to disable the expiry of the cache. max-entries Defines the limit of entries that should be kept in the cache. If not provided, the default value is 1000 . paths Specifies the paths to protect. This configuration is optional. If not defined, the policy enforcer discovers all paths by fetching the resources you defined for your application in Red Hat build of Keycloak, where these resources are defined with URIs representing some paths in your application. name The name of a resource on the server that is to be associated with a given path. When used in conjunction with a path , the policy enforcer ignores the resource's URIs property and uses the path you provided instead. path (required) A URI relative to the application's context path. If this option is specified, the policy enforcer queries the server for a resource with a URI with the same value. Currently a very basic logic for path matching is supported. Examples of valid paths are: Wildcards: /* Suffix: /*.html Sub-paths: /path/* Path parameters: /resource/{id} Exact match: /resource Patterns: /{version}/resource, /api/{version}/resource, /api/{version}/resource/* methods The HTTP methods (for example, GET, POST, PATCH) to protect and how they are associated with the scopes for a given resource in the server. method The name of the HTTP method. scopes An array of strings with the scopes associated with the method. When you associate scopes with a specific method, the client trying to access a protected resource (or path) must provide an RPT that grants permission to all scopes specified in the list. For example, if you define a method POST with a scope create , the RPT must contain a permission granting access to the create scope when performing a POST to the path. scopes-enforcement-mode A string referencing the enforcement mode for the scopes associated with a method. Values can be ALL or ANY . If ALL , all defined scopes must be granted in order to access the resource using that method. If ANY , at least one scope should be granted in order to gain access to the resource using that method. By default, enforcement mode is set to ALL . enforcement-mode Specifies how policies are enforced. ENFORCING (default mode) Requests are denied by default even when there is no policy associated with a given resource. DISABLED claim-information-point Defines a set of one or more claims that must be resolved and pushed to the Red Hat build of Keycloak server in order to make these claims available to policies. See Claim Information Point for more details. lazy-load-paths Specifies how the adapter should fetch resources associated with paths in your application from the server. If true , the policy enforcer is going to fetch resources on-demand according to the path being requested. This configuration is especially useful when you do not want to fetch all resources from the server during deployment (in case you have provided no paths ) or in case you have defined only a subset of paths and want to fetch others on-demand. http-method-as-scope Specifies how scopes should be mapped to HTTP methods. If set to true , the policy enforcer will use the HTTP method from the current request to check whether or not access should be granted. 
When enabled, make sure your resources in Red Hat build of Keycloak are associated with scopes representing each HTTP method you are protecting. claim-information-point Defines a set of one or more global claims that must be resolved and pushed to the Red Hat build of Keycloak server in order to make these claims available to policies. See Claim Information Point for more details. 9.2. Claim Information Point A Claim Information Point (CIP) is responsible for resolving claims and pushing these claims to the Red Hat build of Keycloak server in order to provide more information about the access context to policies. They can be defined as a configuration option to the policy-enforcer in order to resolve claims from different sources, such as: HTTP Request (parameters, headers, body, etc) External HTTP Service Static values defined in configuration Any other source by implementing the Claim Information Provider SPI When pushing claims to the Red Hat build of Keycloak server, policies can base decisions not only on who a user is but also by taking context and contents into account, based on who, what, why, when, where, and which for a given transaction. It is all about Contextual-based Authorization and how to use runtime information in order to support fine-grained authorization decisions. 9.2.1. Obtaining information from the HTTP request Here are several examples showing how you can extract claims from an HTTP request: keycloak.json { "paths": [ { "path": "/protected/resource", "claim-information-point": { "claims": { "claim-from-request-parameter": "{request.parameter['a']}", "claim-from-header": "{request.header['b']}", "claim-from-cookie": "{request.cookie['c']}", "claim-from-remoteAddr": "{request.remoteAddr}", "claim-from-method": "{request.method}", "claim-from-uri": "{request.uri}", "claim-from-relativePath": "{request.relativePath}", "claim-from-secure": "{request.secure}", "claim-from-json-body-object": "{request.body['/a/b/c']}", "claim-from-json-body-array": "{request.body['/d/1']}", "claim-from-body": "{request.body}", "claim-from-static-value": "static value", "claim-from-multiple-static-value": ["static", "value"], "param-replace-multiple-placeholder": "Test {keycloak.access_token['/custom_claim/0']} and {request.parameter['a']}" } } } ] } 9.2.2. Obtaining information from an external HTTP service Here are several examples showing how you can extract claims from an external HTTP Service: keycloak.json { "paths": [ { "path": "/protected/resource", "claim-information-point": { "http": { "claims": { "claim-a": "/a", "claim-d": "/d", "claim-d0": "/d/0", "claim-d-all": [ "/d/0", "/d/1" ] }, "url": "http://mycompany/claim-provider", "method": "POST", "headers": { "Content-Type": "application/x-www-form-urlencoded", "header-b": [ "header-b-value1", "header-b-value2" ], "Authorization": "Bearer {keycloak.access_token}" }, "parameters": { "param-a": [ "param-a-value1", "param-a-value2" ], "param-subject": "{keycloak.access_token['/sub']}", "param-user-name": "{keycloak.access_token['/preferred_username']}", "param-other-claims": "{keycloak.access_token['/custom_claim']}" } } } } ] } 9.2.3. Static claims keycloak.json { "paths": [ { "path": "/protected/resource", "claim-information-point": { "claims": { "claim-from-static-value": "static value", "claim-from-multiple-static-value": ["static", "value"] } } } ] } 9.2.4. 
Claim information provider SPI The Claim Information Provider SPI can be used by developers to support different claim information points in case none of the built-in providers are enough to address their requirements. For example, to implement a new CIP provider you need to implement org.keycloak.adapters.authorization.ClaimInformationPointProviderFactory and ClaimInformationPointProvider and also provide the file META-INF/services/org.keycloak.adapters.authorization.ClaimInformationPointProviderFactory in your application's classpath. Example of org.keycloak.adapters.authorization.ClaimInformationPointProviderFactory : public class MyClaimInformationPointProviderFactory implements ClaimInformationPointProviderFactory<MyClaimInformationPointProvider> { @Override public String getName() { return "my-claims"; } @Override public void init(PolicyEnforcer policyEnforcer) { } @Override public MyClaimInformationPointProvider create(Map<String, Object> config) { return new MyClaimInformationPointProvider(config); } } Every CIP provider must be associated with a name, as defined above in the MyClaimInformationPointProviderFactory.getName method. The name will be used to map the configuration from the claim-information-point section in the policy-enforcer configuration to the implementation. When processing requests, the policy enforcer will call the MyClaimInformationPointProviderFactory.create method in order to obtain an instance of MyClaimInformationPointProvider. When called, any configuration defined for this particular CIP provider (via claim-information-point) is passed as a map. Example of ClaimInformationPointProvider : public class MyClaimInformationPointProvider implements ClaimInformationPointProvider { private final Map<String, Object> config; public MyClaimInformationPointProvider(Map<String, Object> config) { this.config = config; } @Override public Map<String, List<String>> resolve(HttpFacade httpFacade) { Map<String, List<String>> claims = new HashMap<>(); // put whatever claim you want into the map return claims; } } 9.3. Obtaining the authorization context When policy enforcement is enabled, the permissions obtained from the server are available through org.keycloak.AuthorizationContext . This class provides several methods you can use to obtain permissions and ascertain whether a permission was granted for a particular resource or scope. Obtaining the Authorization Context in a Servlet Container HttpServletRequest request = // obtain javax.servlet.http.HttpServletRequest AuthorizationContext authzContext = (AuthorizationContext) request.getAttribute(AuthorizationContext.class.getName()); Note The authorization context helps give you more control over the decisions made and returned by the server. For example, you can use it to build a dynamic menu where items are hidden or shown depending on the permissions associated with a resource or scope. if (authzContext.hasResourcePermission("Project Resource")) { // user can access the Project Resource } if (authzContext.hasResourcePermission("Admin Resource")) { // user can access administration resources } if (authzContext.hasScopePermission("urn:project.com:project:create")) { // user can create new projects } The AuthorizationContext represents one of the main capabilities of Red Hat build of Keycloak Authorization Services. From the examples above, you can see that the protected resource is not directly associated with the policies that govern it. 
Consider some similar code using role-based access control (RBAC): if (User.hasRole('user')) { // user can access the Project Resource } if (User.hasRole('admin')) { // user can access administration resources } if (User.hasRole('project-manager')) { // user can create new projects } Although both examples address the same requirements, they do so in different ways. In RBAC, roles only implicitly define access for their resources. With Red Hat build of Keycloak, you gain the capability to create more manageable code that focuses directly on your resources whether you are using RBAC, attribute-based access control (ABAC), or any other BAC variant. Either you have the permission for a given resource or scope, or you do not have that permission. Now, suppose your security requirements have changed and in addition to project managers, PMOs can also create new projects. Security requirements change, but with Red Hat build of Keycloak there is no need to change your application code to address the new requirements. Once your application is based on the resource and scope identifier, you need only change the configuration of the permissions or policies associated with a particular resource in the authorization server. In this case, the permissions and policies associated with the Project Resource and/or the scope urn:project.com:project:create would be changed. 9.4. Using the AuthorizationContext to obtain an Authorization Client Instance The AuthorizationContext can also be used to obtain a reference to the Authorization Client API configured for your application: ClientAuthorizationContext clientContext = ClientAuthorizationContext.class.cast(authzContext); AuthzClient authzClient = clientContext.getClient(); In some cases, resource servers protected by the policy enforcer need to access the APIs provided by the authorization server. With an AuthzClient instance in hand, resource servers can interact with the server in order to create resources or check for specific permissions programmatically. 9.5. JavaScript integration The Red Hat build of Keycloak Server comes with a JavaScript library you can use to interact with a resource server protected by a policy enforcer. This library is based on the Red Hat build of Keycloak JavaScript adapter, which can be integrated to allow your client to obtain permissions from a Red Hat build of Keycloak Server. You can obtain this library from a running Red Hat build of Keycloak Server instance by including the following script tag in your web page: <script src="http://.../js/keycloak-authz.js"></script> Next, you can create a KeycloakAuthorization instance as follows: const keycloak = ... // obtain a Keycloak instance from keycloak.js library const authorization = new KeycloakAuthorization(keycloak); The keycloak-authz.js library provides two main features: Obtain permissions from the server using a permission ticket, if you are accessing a UMA protected resource server. Obtain permissions from the server by sending the resources and scopes the application wants to access. In both cases, the library allows you to easily interact with both the resource server and Red Hat build of Keycloak Authorization Services to obtain tokens with permissions your client can use as bearer tokens to access the protected resources on a resource server. 9.5.1. Handling authorization responses from a UMA-Protected resource server If a resource server is protected by a policy enforcer, it responds to client requests based on the permissions carried along with a bearer token. 
Typically, when you try to access a resource server with a bearer token that is lacking permissions to access a protected resource, the resource server responds with a 401 status code and a WWW-Authenticate header. HTTP/1.1 401 Unauthorized WWW-Authenticate: UMA realm="USD{realm}", as_uri="https://USD{host}:USD{port}/realms/USD{realm}", ticket="016f84e8-f9b9-11e0-bd6f-0021cc6004de" See UMA Authorization Process for more information. What your client needs to do is extract the permission ticket from the WWW-Authenticate header returned by the resource server and use the library to send an authorization request as follows: // prepare an authorization request with the permission ticket const authorizationRequest = {}; authorizationRequest.ticket = ticket; // send the authorization request, if successful retry the request Identity.authorization.authorize(authorizationRequest).then(function (rpt) { // onGrant }, function () { // onDeny }, function () { // onError }); The authorize function is completely asynchronous and supports a few callback functions to receive notifications from the server: onGrant : The first argument of the function. If authorization was successful and the server returned an RPT with the requested permissions, the callback receives the RPT. onDeny : The second argument of the function. Only called if the server has denied the authorization request. onError : The third argument of the function. Only called if the server responds unexpectedly. Most applications should use the onGrant callback to retry a request after a 401 response. Subsequent requests should include the RPT as a bearer token for retries. 9.5.2. Obtaining entitlements The keycloak-authz.js library provides an entitlement function that you can use to obtain an RPT from the server by providing the resources and scopes your client wants to access. Example of how to obtain an RPT with permissions for all resources and scopes the user can access authorization.entitlement('my-resource-server-id').then(function (rpt) { // onGrant callback function. // If authorization was successful you'll receive an RPT // with the necessary permissions to access the resource server }); Example of how to obtain an RPT with permissions for specific resources and scopes authorization.entitlement('my-resource-server', { "permissions": [ { "id" : "Some Resource" } ] }).then(function (rpt) { // onGrant }); When using the entitlement function, you must provide the client_id of the resource server you want to access. The entitlement function is completely asynchronous and supports a few callback functions to receive notifications from the server: onGrant : The first argument of the function. If authorization was successful and the server returned an RPT with the requested permissions, the callback receives the RPT. onDeny : The second argument of the function. Only called if the server has denied the authorization request. onError : The third argument of the function. Only called if the server responds unexpectedly. 9.5.3. Authorization request Both the authorize and entitlement functions accept an authorization request object. This object can be set with the following properties: permissions An array of objects representing the resource and scopes. For instance: const authorizationRequest = { "permissions": [ { "id" : "Some Resource", "scopes" : ["view", "edit"] } ] } metadata An object whose properties define how the authorization request should be processed by the server. 
response_include_resource_name A boolean value indicating to the server if resource names should be included in the RPT's permissions. If false, only the resource identifier is included. response_permissions_limit An integer N that defines a limit for the number of permissions an RPT can have. When used together with the rpt parameter, only the last N requested permissions will be kept in the RPT. submit_request A boolean value indicating whether the server should create permission requests to the resources and scopes referenced by a permission ticket. This parameter will only take effect when used together with the ticket parameter as part of a UMA authorization process. 9.5.4. Obtaining the RPT If you have already obtained an RPT using any of the authorization functions provided by the library, you can always obtain the RPT as follows from the authorization object (assuming that it has been initialized by one of the techniques shown earlier): const rpt = authorization.rpt; 9.6. Configuring TLS/HTTPS When the server is using HTTPS, ensure your policy enforcer is configured as follows: { "truststore": "path_to_your_trust_store", "truststore-password": "trust_store_password" } The configuration above enables TLS/HTTPS to the Authorization Client, making it possible to access a Red Hat build of Keycloak Server remotely using the HTTPS scheme. Note It is strongly recommended that you enable TLS/HTTPS when accessing the Red Hat build of Keycloak Server endpoints.
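Pulling the options described in this chapter together, the following keycloak.json sketch combines a manual path definition with lazy loading and an explicit path cache (the path pattern, scope, and cache values are illustrative assumptions, not taken from any shipped example): { "enforcement-mode": "ENFORCING", "lazy-load-paths": true, "path-cache": { "lifespan": 30000, "max-entries": 1000 }, "paths": [ { "path": "/api/{version}/resource/*", "methods": [ { "method": "GET", "scopes": ["urn:app.com:scopes:view"], "scopes-enforcement-mode": "ANY" } ] } ] } Every key used here is one of the configuration options described earlier in this chapter; anything omitted falls back to the documented defaults.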
[ "<dependency> <groupId>org.keycloak</groupId> <artifactId>keycloak-policy-enforcer</artifactId> <version>USD{keycloak.version}</version> </dependency>", "{ \"enforcement-mode\" : \"ENFORCING\", \"paths\": [ { \"path\" : \"/users/*\", \"methods\" : [ { \"method\": \"GET\", \"scopes\" : [\"urn:app.com:scopes:view\"] }, { \"method\": \"POST\", \"scopes\" : [\"urn:app.com:scopes:create\"] } ] } ] }", "{ \"paths\": [ { \"path\": \"/protected/resource\", \"claim-information-point\": { \"claims\": { \"claim-from-request-parameter\": \"{request.parameter['a']}\", \"claim-from-header\": \"{request.header['b']}\", \"claim-from-cookie\": \"{request.cookie['c']}\", \"claim-from-remoteAddr\": \"{request.remoteAddr}\", \"claim-from-method\": \"{request.method}\", \"claim-from-uri\": \"{request.uri}\", \"claim-from-relativePath\": \"{request.relativePath}\", \"claim-from-secure\": \"{request.secure}\", \"claim-from-json-body-object\": \"{request.body['/a/b/c']}\", \"claim-from-json-body-array\": \"{request.body['/d/1']}\", \"claim-from-body\": \"{request.body}\", \"claim-from-static-value\": \"static value\", \"claim-from-multiple-static-value\": [\"static\", \"value\"], \"param-replace-multiple-placeholder\": \"Test {keycloak.access_token['/custom_claim/0']} and {request.parameter['a']}\" } } } ] }", "{ \"paths\": [ { \"path\": \"/protected/resource\", \"claim-information-point\": { \"http\": { \"claims\": { \"claim-a\": \"/a\", \"claim-d\": \"/d\", \"claim-d0\": \"/d/0\", \"claim-d-all\": [ \"/d/0\", \"/d/1\" ] }, \"url\": \"http://mycompany/claim-provider\", \"method\": \"POST\", \"headers\": { \"Content-Type\": \"application/x-www-form-urlencoded\", \"header-b\": [ \"header-b-value1\", \"header-b-value2\" ], \"Authorization\": \"Bearer {keycloak.access_token}\" }, \"parameters\": { \"param-a\": [ \"param-a-value1\", \"param-a-value2\" ], \"param-subject\": \"{keycloak.access_token['/sub']}\", \"param-user-name\": \"{keycloak.access_token['/preferred_username']}\", \"param-other-claims\": \"{keycloak.access_token['/custom_claim']}\" } } } } ] }", "{ \"paths\": [ { \"path\": \"/protected/resource\", \"claim-information-point\": { \"claims\": { \"claim-from-static-value\": \"static value\", \"claim-from-multiple-static-value\": [\"static\", \"value\"] } } } ] }", "public class MyClaimInformationPointProviderFactory implements ClaimInformationPointProviderFactory<MyClaimInformationPointProvider> { @Override public String getName() { return \"my-claims\"; } @Override public void init(PolicyEnforcer policyEnforcer) { } @Override public MyClaimInformationPointProvider create(Map<String, Object> config) { return new MyClaimInformationPointProvider(config); } }", "public class MyClaimInformationPointProvider implements ClaimInformationPointProvider { private final Map<String, Object> config; public MyClaimInformationPointProvider(Map<String, Object> config) { this.config = config; } @Override public Map<String, List<String>> resolve(HttpFacade httpFacade) { Map<String, List<String>> claims = new HashMap<>(); // put whatever claim you want into the map return claims; } }", "HttpServletRequest request = // obtain javax.servlet.http.HttpServletRequest AuthorizationContext authzContext = (AuthorizationContext) request.getAttribute(AuthorizationContext.class.getName());", "if (authzContext.hasResourcePermission(\"Project Resource\")) { // user can access the Project Resource } if (authzContext.hasResourcePermission(\"Admin Resource\")) { // user can access administration resources } if 
(authzContext.hasScopePermission(\"urn:project.com:project:create\")) { // user can create new projects }", "if (User.hasRole('user')) { // user can access the Project Resource } if (User.hasRole('admin')) { // user can access administration resources } if (User.hasRole('project-manager')) { // user can create new projects }", "ClientAuthorizationContext clientContext = ClientAuthorizationContext.class.cast(authzContext); AuthzClient authzClient = clientContext.getClient();", "<script src=\"http://.../js/keycloak-authz.js\"></script>", "const keycloak = ... // obtain a Keycloak instance from keycloak.js library const authorization = new KeycloakAuthorization(keycloak);", "HTTP/1.1 401 Unauthorized WWW-Authenticate: UMA realm=\"USD{realm}\", as_uri=\"https://USD{host}:USD{port}/realms/USD{realm}\", ticket=\"016f84e8-f9b9-11e0-bd6f-0021cc6004de\"", "// prepare a authorization request with the permission ticket const authorizationRequest = {}; authorizationRequest.ticket = ticket; // send the authorization request, if successful retry the request Identity.authorization.authorize(authorizationRequest).then(function (rpt) { // onGrant }, function () { // onDeny }, function () { // onError });", "authorization.entitlement('my-resource-server-id').then(function (rpt) { // onGrant callback function. // If authorization was successful you'll receive an RPT // with the necessary permissions to access the resource server });", "authorization.entitlement('my-resource-server', { \"permissions\": [ { \"id\" : \"Some Resource\" } ] }).then(function (rpt) { // onGrant });", "const authorizationRequest = { \"permissions\": [ { \"id\" : \"Some Resource\", \"scopes\" : [\"view\", \"edit\"] } ] }", "const rpt = authorization.rpt;", "{ \"truststore\": \"path_to_your_trust_store\", \"truststore-password\": \"trust_store_password\" }" ]
https://docs.redhat.com/en/documentation/red_hat_build_of_keycloak/24.0/html/authorization_services_guide/enforcer_overview
Installing an on-premise cluster with the Agent-based Installer
Installing an on-premise cluster with the Agent-based Installer OpenShift Container Platform 4.17 Installing an on-premise OpenShift Container Platform cluster with the Agent-based Installer Red Hat OpenShift Documentation Team
[ "apiVersion: v1 baseDomain: test.example.com metadata: name: sno-cluster fips: True", "apiVersion: extensions.hive.openshift.io/v1beta1 kind: AgentClusterInstall metadata: annotations: agent-install.openshift.io/install-config-overrides: '{\"fips\": True}' name: sno-cluster namespace: sno-cluster-test", "apiVersion: v1beta1 kind: AgentConfig metadata: name: example-cluster rendezvousIP: 192.168.111.80 hosts: - hostname: master-1 role: master interfaces: - name: eno1 macAddress: 00:ef:44:21:e6:a5 - hostname: master-2 role: master interfaces: - name: eno1 macAddress: 00:ef:44:21:e6:a6 - hostname: master-3 role: master interfaces: - name: eno1 macAddress: 00:ef:44:21:e6:a7 - hostname: worker-1 role: worker interfaces: - name: eno1 macAddress: 00:ef:44:21:e6:a8", "- name: master-0 role: master rootDeviceHints: deviceName: \"/dev/sda\"", "apiVersion: v1alpha1 kind: AgentConfig metadata: name: sno-cluster rendezvousIP: 192.168.111.80 1", "cat > agent-config.yaml << EOF apiVersion: v1alpha1 kind: AgentConfig metadata: name: sno-cluster rendezvousIP: 192.168.111.80 1 hosts: - hostname: master-0 interfaces: - name: eno1 macAddress: 00:ef:44:21:e6:a5 2 networkConfig: interfaces: - name: eno1 type: ethernet state: up mac-address: 00:ef:44:21:e6:a5 ipv4: enabled: true address: - ip: 192.168.111.80 3 prefix-length: 23 4 dhcp: false dns-resolver: config: server: - 192.168.111.1 5 routes: config: - destination: 0.0.0.0/0 next-hop-address: 192.168.111.1 6 next-hop-interface: eno1 table-id: 254 EOF", "apiVersion: agent-install.openshift.io/v1beta1 kind: NMStateConfig metadata: name: master-0 namespace: openshift-machine-api labels: cluster0-nmstate-label-name: cluster0-nmstate-label-value spec: config: interfaces: - name: eth0 type: ethernet state: up mac-address: 52:54:01:aa:aa:a1 ipv4: enabled: true address: - ip: 192.168.122.2 1 prefix-length: 23 2 dhcp: false dns-resolver: config: server: - 192.168.122.1 3 routes: config: - destination: 0.0.0.0/0 next-hop-address: 192.168.122.1 4 next-hop-interface: eth0 table-id: 254 interfaces: - name: eth0 macAddress: 52:54:01:aa:aa:a1 5", "USDTTL 1W @ IN SOA ns1.example.com. root ( 2019070700 ; serial 3H ; refresh (3 hours) 30M ; retry (30 minutes) 2W ; expiry (2 weeks) 1W ) ; minimum (1 week) IN NS ns1.example.com. IN MX 10 smtp.example.com. ; ; ns1.example.com. IN A 192.168.1.5 smtp.example.com. IN A 192.168.1.5 ; helper.example.com. IN A 192.168.1.5 helper.ocp4.example.com. IN A 192.168.1.5 ; api.ocp4.example.com. IN A 192.168.1.5 1 api-int.ocp4.example.com. IN A 192.168.1.5 2 ; *.apps.ocp4.example.com. IN A 192.168.1.5 3 ; master0.ocp4.example.com. IN A 192.168.1.97 4 master1.ocp4.example.com. IN A 192.168.1.98 5 master2.ocp4.example.com. IN A 192.168.1.99 6 ; worker0.ocp4.example.com. IN A 192.168.1.11 7 worker1.ocp4.example.com. IN A 192.168.1.7 8 ; ;EOF", "USDTTL 1W @ IN SOA ns1.example.com. root ( 2019070700 ; serial 3H ; refresh (3 hours) 30M ; retry (30 minutes) 2W ; expiry (2 weeks) 1W ) ; minimum (1 week) IN NS ns1.example.com. ; 5.1.168.192.in-addr.arpa. IN PTR api.ocp4.example.com. 1 5.1.168.192.in-addr.arpa. IN PTR api-int.ocp4.example.com. 2 ; 97.1.168.192.in-addr.arpa. IN PTR master0.ocp4.example.com. 3 98.1.168.192.in-addr.arpa. IN PTR master1.ocp4.example.com. 4 99.1.168.192.in-addr.arpa. IN PTR master2.ocp4.example.com. 5 ; 11.1.168.192.in-addr.arpa. IN PTR worker0.ocp4.example.com. 6 7.1.168.192.in-addr.arpa. IN PTR worker1.ocp4.example.com. 
7 ; ;EOF", "global log 127.0.0.1 local2 pidfile /var/run/haproxy.pid maxconn 4000 daemon defaults mode http log global option dontlognull option http-server-close option redispatch retries 3 timeout http-request 10s timeout queue 1m timeout connect 10s timeout client 1m timeout server 1m timeout http-keep-alive 10s timeout check 10s maxconn 3000 listen api-server-6443 1 bind *:6443 mode tcp server master0 master0.ocp4.example.com:6443 check inter 1s server master1 master1.ocp4.example.com:6443 check inter 1s server master2 master2.ocp4.example.com:6443 check inter 1s listen machine-config-server-22623 2 bind *:22623 mode tcp server master0 master0.ocp4.example.com:22623 check inter 1s server master1 master1.ocp4.example.com:22623 check inter 1s server master2 master2.ocp4.example.com:22623 check inter 1s listen ingress-router-443 3 bind *:443 mode tcp balance source server worker0 worker0.ocp4.example.com:443 check inter 1s server worker1 worker1.ocp4.example.com:443 check inter 1s listen ingress-router-80 4 bind *:80 mode tcp balance source server worker0 worker0.ocp4.example.com:80 check inter 1s server worker1 worker1.ocp4.example.com:80 check inter 1s", "apiVersion: v1alpha1 kind: AgentConfig rendezvousIP: 10.10.10.14 hosts: - hostname: master0 role: master interfaces: - name: enp0s4 macAddress: 00:21:50:90:c0:10 - name: enp0s5 macAddress: 00:21:50:90:c0:20 networkConfig: interfaces: - name: bond0.300 1 type: vlan 2 state: up vlan: base-iface: bond0 id: 300 ipv4: enabled: true address: - ip: 10.10.10.14 prefix-length: 24 dhcp: false - name: bond0 3 type: bond 4 state: up mac-address: 00:21:50:90:c0:10 5 ipv4: enabled: false ipv6: enabled: false link-aggregation: mode: active-backup 6 options: miimon: \"150\" 7 port: - enp0s4 - enp0s5 dns-resolver: 8 config: server: - 10.10.10.11 - 10.10.10.12 routes: config: - destination: 0.0.0.0/0 next-hop-address: 10.10.10.10 9 next-hop-interface: bond0.300 10 table-id: 254", "apiVersion: v1alpha1 kind: AgentConfig rendezvousIP: 10.10.10.14 hosts: - hostname: worker-1 interfaces: - name: eno1 macAddress: 0c:42:a1:55:f3:06 - name: eno2 macAddress: 0c:42:a1:55:f3:07 networkConfig: 1 interfaces: 2 - name: eno1 3 type: ethernet 4 state: up mac-address: 0c:42:a1:55:f3:06 ipv4: enabled: true dhcp: false 5 ethernet: sr-iov: total-vfs: 2 6 ipv6: enabled: false - name: sriov:eno1:0 type: ethernet state: up 7 ipv4: enabled: false 8 ipv6: enabled: false dhcp: false - name: sriov:eno1:1 type: ethernet state: down - name: eno2 type: ethernet state: up mac-address: 0c:42:a1:55:f3:07 ipv4: enabled: true ethernet: sr-iov: total-vfs: 2 ipv6: enabled: false - name: sriov:eno2:0 type: ethernet state: up ipv4: enabled: false ipv6: enabled: false - name: sriov:eno2:1 type: ethernet state: down - name: bond0 type: bond state: up min-tx-rate: 100 9 max-tx-rate: 200 10 link-aggregation: mode: active-backup 11 options: primary: sriov:eno1:0 12 port: - sriov:eno1:0 - sriov:eno2:0 ipv4: address: - ip: 10.19.16.57 13 prefix-length: 23 dhcp: false enabled: true ipv6: enabled: false dns-resolver: config: server: - 10.11.5.160 - 10.2.70.215 routes: config: - destination: 0.0.0.0/0 next-hop-address: 10.19.17.254 next-hop-interface: bond0 14 table-id: 254", "apiVersion: v1 baseDomain: example.com 1 compute: 2 - name: worker replicas: 0 3 controlPlane: 4 name: master replicas: 1 5 metadata: name: sno-cluster 6 networking: clusterNetwork: - cidr: 10.128.0.0/14 7 hostPrefix: 23 8 networkType: OVNKubernetes 9 serviceNetwork: 10 - 172.30.0.0/16 platform: none: {} 11 fips: false 12 
pullSecret: '{\"auths\": ...}' 13 sshKey: 'ssh-ed25519 AAAA...' 14", "networking: clusterNetwork: - cidr: 172.21.0.0/16 hostPrefix: 23 - cidr: fd02::/48 hostPrefix: 64 machineNetwork: - cidr: 192.168.11.0/16 - cidr: 2001:DB8::/32 serviceNetwork: - 172.22.0.0/16 - fd03::/112 networkType: OVNKubernetes platform: baremetal: apiVIPs: - 192.168.11.3 - 2001:DB8::4 ingressVIPs: - 192.168.11.4 - 2001:DB8::5", "oc adm release mirror", "To use the new mirrored repository to install, add the following section to the install-config.yaml: imageContentSources: mirrors: virthost.ostest.test.metalkube.org:5000/localimages/local-release-image source: quay.io/openshift-release-dev/ocp-v4.0-art-dev mirrors: virthost.ostest.test.metalkube.org:5000/localimages/local-release-image source: registry.ci.openshift.org/ocp/release", "spec: repositoryDigestMirrors: - mirrors: - virthost.ostest.test.metalkube.org:5000/openshift/release source: quay.io/openshift-release-dev/ocp-v4.0-art-dev - mirrors: - virthost.ostest.test.metalkube.org:5000/openshift/release-images source: quay.io/openshift-release-dev/ocp-release", "additionalTrustBundle: | -----BEGIN CERTIFICATE----- ZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZ -----END CERTIFICATE-----", "[[registry]] location = \"registry.ci.openshift.org/ocp/release\" mirror-by-digest-only = true [[registry.mirror]] location = \"virthost.ostest.test.metalkube.org:5000/localimages/local-release-image\" [[registry]] location = \"quay.io/openshift-release-dev/ocp-v4.0-art-dev\" mirror-by-digest-only = true [[registry.mirror]] location = \"virthost.ostest.test.metalkube.org:5000/localimages/local-release-image\"", "mkdir ~/<directory_name>", "cat << EOF > ./my-cluster/install-config.yaml apiVersion: v1 baseDomain: test.example.com compute: - architecture: amd64 1 hyperthreading: Enabled name: worker replicas: 0 controlPlane: architecture: amd64 hyperthreading: Enabled name: master replicas: 1 metadata: name: sno-cluster 2 networking: clusterNetwork: - cidr: fd01::/48 hostPrefix: 64 machineNetwork: - cidr: fd2e:6f44:5dd8:c956::/120 networkType: OVNKubernetes 3 serviceNetwork: - fd02::/112 platform: 4 none: {} pullSecret: '<pull_secret>' 5 sshKey: '<ssh_pub_key>' 6 additionalTrustBundle: | 7 -----BEGIN CERTIFICATE----- ZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZ -----END CERTIFICATE----- imageContentSources: 8 - mirrors: - <local_registry>/<local_repository_name>/release source: quay.io/openshift-release-dev/ocp-release - mirrors: - <local_registry>/<local_repository_name>/release source: quay.io/openshift-release-dev/ocp-v4.0-art-dev EOF", "cat > agent-config.yaml << EOF apiVersion: v1beta1 kind: AgentConfig metadata: name: sno-cluster rendezvousIP: fd2e:6f44:5dd8:c956::50 1 EOF", "openshift-install --dir <install_directory> agent create image", "./openshift-install --dir <install_directory> agent wait-for bootstrap-complete \\ 1 --log-level=info 2", "................................................................ ................................................................ INFO Bootstrap configMap status is complete INFO cluster bootstrap is complete", "openshift-install --dir <install_directory> agent wait-for install-complete 1", "................................................................ ................................................................ INFO Cluster is installed INFO Install complete! 
INFO To access the cluster as the system:admin user when using 'oc', run INFO export KUBECONFIG=/home/core/installer/auth/kubeconfig INFO Access the OpenShift web-console here: https://console-openshift-console.apps.sno-cluster.test.example.com", "./openshift-install --dir <installation_directory> agent wait-for bootstrap-complete --log-level=debug", "ERROR Bootstrap failed to complete: : bootstrap process timed out: context deadline exceeded", "ssh core@<node-ip> agent-gather -O >agent-gather.tar.xz", "./openshift-install --dir <install_directory> agent wait-for install-complete --log-level=debug", "export KUBECONFIG=<install_directory>/auth/kubeconfig", "oc adm must-gather", "tar cvaf must-gather.tar.gz <must_gather_directory>", "./openshift-install version", "./openshift-install 4.17.0 built from commit abc123def456 release image quay.io/openshift-release-dev/ocp-release@sha256:123abc456def789ghi012jkl345mno678pqr901stu234vwx567yz0 release architecture amd64", "oc adm release info <release_image> -o jsonpath=\"{ .metadata.metadata}\" 1", "{\"release.openshift.io architecture\":\"multi\"}", "sudo dnf install /usr/bin/nmstatectl -y", "mkdir ~/<directory_name>", "cat << EOF > ./<directory_name>/install-config.yaml apiVersion: v1 baseDomain: test.example.com compute: - architecture: amd64 1 hyperthreading: Enabled name: worker replicas: 0 controlPlane: architecture: amd64 hyperthreading: Enabled name: master replicas: 1 metadata: name: sno-cluster 2 networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 192.168.0.0/16 networkType: OVNKubernetes 3 serviceNetwork: - 172.30.0.0/16 platform: 4 none: {} pullSecret: '<pull_secret>' 5 sshKey: '<ssh_pub_key>' 6 EOF", "networking: clusterNetwork: - cidr: 172.21.0.0/16 hostPrefix: 23 - cidr: fd02::/48 hostPrefix: 64 machineNetwork: - cidr: 192.168.11.0/16 - cidr: 2001:DB8::/32 serviceNetwork: - 172.22.0.0/16 - fd03::/112 networkType: OVNKubernetes platform: baremetal: apiVIPs: - 192.168.11.3 - 2001:DB8::4 ingressVIPs: - 192.168.11.4 - 2001:DB8::5", "cat > agent-config.yaml << EOF apiVersion: v1beta1 kind: AgentConfig metadata: name: sno-cluster rendezvousIP: 192.168.111.80 1 hosts: 2 - hostname: master-0 3 interfaces: - name: eno1 macAddress: 00:ef:44:21:e6:a5 rootDeviceHints: 4 deviceName: /dev/sdb networkConfig: 5 interfaces: - name: eno1 type: ethernet state: up mac-address: 00:ef:44:21:e6:a5 ipv4: enabled: true address: - ip: 192.168.111.80 prefix-length: 23 dhcp: false dns-resolver: config: server: - 192.168.111.1 routes: config: - destination: 0.0.0.0/0 next-hop-address: 192.168.111.2 next-hop-interface: eno1 table-id: 254 EOF", "mkdir <installation_directory>/openshift", "variant: openshift version: 4.17.0 metadata: labels: machineconfiguration.openshift.io/role: worker name: 98-var-partition storage: disks: - device: /dev/disk/by-id/<device_name> 1 partitions: - label: var start_mib: <partition_start_offset> 2 size_mib: <partition_size> 3 number: 5 filesystems: - device: /dev/disk/by-partlabel/var path: /var format: xfs mount_options: [defaults, prjquota] 4 with_mount_unit: true", "butane USDHOME/clusterconfig/98-var-partition.bu -o USDHOME/clusterconfig/openshift/98-var-partition.yaml", "openshift-install agent create cluster-manifests --dir <installation_directory>", "cd <installation_directory>/cluster-manifests", "cd ../mirror", "openshift-install agent create cluster-manifests --dir <installation_directory>", "cd <installation_directory>/cluster-manifests", "diskEncryption: enableOn: all 1 mode: tang 2 
tangServers: \"server1\": \"http://tang-server-1.example.com:7500\" 3", "openshift-install --dir <install_directory> agent create image", "virt-install --name <vm_name> --autostart --memory=<memory> --cpu host --vcpus=<vcpus> --cdrom <agent_iso_image> \\ 1 --disk pool=default,size=<disk_pool_size> --network network:default,mac=<mac_address> --graphics none --noautoconsole --os-variant rhel9.0 --wait=-1", "./openshift-install --dir <install_directory> agent wait-for bootstrap-complete \\ 1 --log-level=info 2", "................................................................ ................................................................ INFO Bootstrap configMap status is complete INFO cluster bootstrap is complete", "openshift-install --dir <install_directory> agent wait-for install-complete 1", "................................................................ ................................................................ INFO Cluster is installed INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run INFO export KUBECONFIG=/home/core/installer/auth/kubeconfig INFO Access the OpenShift web-console here: https://console-openshift-console.apps.sno-cluster.test.example.com", "apiVIP: 192.168.11.3 ingressVIP: 192.168.11.4 clusterDeploymentRef: name: mycluster imageSetRef: name: openshift-4.17 networking: clusterNetwork: - cidr: 172.21.0.0/16 hostPrefix: 23 - cidr: fd02::/48 hostPrefix: 64 machineNetwork: - cidr: 192.168.11.0/16 - cidr: 2001:DB8::/32 serviceNetwork: - 172.22.0.0/16 - fd03::/112 networkType: OVNKubernetes", "apiVersion: extensions.hive.openshift.io/v1beta1 kind: AgentClusterInstall metadata: name: test-agent-cluster-install namespace: cluster0 spec: clusterDeploymentRef: name: ostest imageSetRef: name: openshift-4.17 networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 serviceNetwork: - 172.30.0.0/16 provisionRequirements: controlPlaneAgents: 1 workerAgents: 0 sshPublicKey: <ssh_public_key>", "apiVersion: hive.openshift.io/v1 kind: ClusterDeployment metadata: name: ostest namespace: cluster0 spec: baseDomain: test.metalkube.org clusterInstallRef: group: extensions.hive.openshift.io kind: AgentClusterInstall name: test-agent-cluster-install version: v1beta1 clusterName: ostest controlPlaneConfig: servingCertificates: {} platform: agentBareMetal: agentSelector: matchLabels: bla: aaa pullSecretRef: name: pull-secret", "apiVersion: hive.openshift.io/v1 kind: ClusterImageSet metadata: name: openshift-4.17 spec: releaseImage: registry.ci.openshift.org/ocp/release:4.17.0-0.nightly-2022-06-06-025509", "apiVersion: agent-install.openshift.io/v1beta1 kind: InfraEnv metadata: name: myinfraenv namespace: cluster0 spec: clusterRef: name: ostest namespace: cluster0 cpuArchitecture: aarch64 pullSecretRef: name: pull-secret sshAuthorizedKey: <ssh_public_key> nmStateConfigLabelSelector: matchLabels: cluster0-nmstate-label-name: cluster0-nmstate-label-value", "apiVersion: agent-install.openshift.io/v1beta1 kind: NMStateConfig metadata: name: master-0 namespace: openshift-machine-api labels: cluster0-nmstate-label-name: cluster0-nmstate-label-value spec: config: interfaces: - name: eth0 type: ethernet state: up mac-address: 52:54:01:aa:aa:a1 ipv4: enabled: true address: - ip: 192.168.122.2 prefix-length: 23 dhcp: false dns-resolver: config: server: - 192.168.122.1 routes: config: - destination: 0.0.0.0/0 next-hop-address: 192.168.122.1 next-hop-interface: eth0 table-id: 254 interfaces: - name: \"eth0\" macAddress: 52:54:01:aa:aa:a1", 
"apiVersion: v1 kind: Secret type: kubernetes.io/dockerconfigjson metadata: name: pull-secret namespace: cluster0 stringData: .dockerconfigjson: <pull_secret>", "./openshift-install --dir <installation_directory> agent wait-for bootstrap-complete --log-level=debug", "ERROR Bootstrap failed to complete: : bootstrap process timed out: context deadline exceeded", "ssh core@<node-ip> agent-gather -O >agent-gather.tar.xz", "./openshift-install --dir <install_directory> agent wait-for install-complete --log-level=debug", "export KUBECONFIG=<install_directory>/auth/kubeconfig", "oc adm must-gather", "tar cvaf must-gather.tar.gz <must_gather_directory>", "sudo dnf install /usr/bin/nmstatectl -y", "mkdir ~/<directory_name>", "cat << EOF > ./<directory_name>/install-config.yaml apiVersion: v1 baseDomain: test.example.com compute: - architecture: amd64 1 hyperthreading: Enabled name: worker replicas: 0 controlPlane: architecture: amd64 hyperthreading: Enabled name: master replicas: 1 metadata: name: sno-cluster 2 networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 192.168.0.0/16 networkType: OVNKubernetes 3 serviceNetwork: - 172.30.0.0/16 platform: 4 none: {} pullSecret: '<pull_secret>' 5 sshKey: '<ssh_pub_key>' 6 EOF", "networking: clusterNetwork: - cidr: 172.21.0.0/16 hostPrefix: 23 - cidr: fd02::/48 hostPrefix: 64 machineNetwork: - cidr: 192.168.11.0/16 - cidr: 2001:DB8::/32 serviceNetwork: - 172.22.0.0/16 - fd03::/112 networkType: OVNKubernetes platform: baremetal: apiVIPs: - 192.168.11.3 - 2001:DB8::4 ingressVIPs: - 192.168.11.4 - 2001:DB8::5", "cat > agent-config.yaml << EOF apiVersion: v1beta1 kind: AgentConfig metadata: name: sno-cluster rendezvousIP: 192.168.111.80 1 hosts: 2 - hostname: master-0 3 interfaces: - name: eno1 macAddress: 00:ef:44:21:e6:a5 rootDeviceHints: 4 deviceName: /dev/sdb networkConfig: 5 interfaces: - name: eno1 type: ethernet state: up mac-address: 00:ef:44:21:e6:a5 ipv4: enabled: true address: - ip: 192.168.111.80 prefix-length: 23 dhcp: false dns-resolver: config: server: - 192.168.111.1 routes: config: - destination: 0.0.0.0/0 next-hop-address: 192.168.111.2 next-hop-interface: eno1 table-id: 254 EOF", "apiVersion: v1beta1 kind: AgentConfig metadata: name: sno-cluster rendezvousIP: 192.168.111.80 bootArtifactsBaseURL: <asset_server_URL>", "openshift-install agent create pxe-files", "boot-artifacts ├─ agent.x86_64-initrd.img ├─ agent.x86_64.ipxe ├─ agent.x86_64-rootfs.img └─ agent.x86_64-vmlinuz", "ai.ip_cfg_override=1", "rd.neednet=1 cio_ignore=all,!condev console=ttysclp0 coreos.live.rootfs_url=<coreos_url> 1 ip=<ip>::<gateway>:<netmask>:<hostname>::none nameserver=<dns> rd.znet=qeth,<network_adaptor_range>,layer2=1 rd.<disk_type>=<adapter> 2 rd.zfcp=<adapter>,<wwpn>,<lun> random.trust_cpu=on 3 zfcp.allow_lun_scan=0 ai.ip_cfg_override=1 ignition.firstboot ignition.platform.id=metal random.trust_cpu=on", "rd.neednet=1 console=ttysclp0 coreos.live.rootfs_url=<rootfs_url> \\ 1 ip=172.18.78.2::172.18.78.1:255.255.255.0:::none nameserver=172.18.78.1 \\ 2 zfcp.allow_lun_scan=0 \\ 3 ai.ip_cfg_override=1 rd.znet=qeth,0.0.bdd0,0.0.bdd1,0.0.bdd2,layer2=1 rd.dasd=0.0.4411 \\ 4 rd.zfcp=0.0.8001,0x50050763040051e3,0x4000406300000000 \\ 5 random.trust_cpu=on rd.luks.options=discard ignition.firstboot ignition.platform.id=metal console=tty1 console=ttyS1,115200n8 coreos.inst.persistent-kargs=\"console=tty1 console=ttyS1,115200n8\"", "ipl c", "virt-install --name <vm_name> --autostart --ram=16384 --cpu host --vcpus=8 --location 
<path_to_kernel_initrd_image>,kernel=kernel.img,initrd=initrd.img \\ 1 --disk <qcow_image_path> --network network:macvtap,mac=<mac_address> --graphics none --noautoconsole --wait=-1 --extra-args \"rd.neednet=1 nameserver=<nameserver>\" --extra-args \"ip=<IP>::<nameserver>::<hostname>:enc1:none\" --extra-args \"coreos.live.rootfs_url=http://<http_server>:8080/agent.s390x-rootfs.img\" --extra-args \"random.trust_cpu=on rd.luks.options=discard\" --extra-args \"ignition.firstboot ignition.platform.id=metal\" --extra-args \"console=tty1 console=ttyS1,115200n8\" --extra-args \"coreos.inst.persistent-kargs=console=tty1 console=ttyS1,115200n8\" --osinfo detect=on,require=off", "rd.neednet=1 cio_ignore=all,!condev console=ttysclp0 ignition.firstboot ignition.platform.id=metal coreos.live.rootfs_url=http://<http_server>/rhcos-<version>-live-rootfs.<architecture>.img \\ 1 coreos.inst.persistent-kargs=console=ttysclp0 ip=<ip>::<gateway>:<netmask>:<hostname>::none nameserver=<dns> \\ 2 rd.znet=qeth,<network_adaptor_range>,layer2=1 rd.<disk_type>=<adapter> \\ 3 zfcp.allow_lun_scan=0 ai.ip_cfg_override=1 \\ random.trust_cpu=on rd.luks.options=discard", "boot-artifacts ├─ agent.s390x-generic.ins ├─ agent.s390x-initrd.addrsize ├─ agent.s390x-rootfs.img └─ agent.s390x-kernel.img", "kind: ImageSetConfiguration apiVersion: mirror.openshift.io/v1alpha2 archiveSize: 4 1 storageConfig: 2 imageURL: <your-local-registry-dns-name>:<your-local-registry-port>/mirror/oc-mirror-metadata 3 skipTLS: true mirror: platform: architectures: - \"amd64\" channels: - name: stable-4.17 4 type: ocp additionalImages: - name: registry.redhat.io/ubi9/ubi:latest operators: - catalog: registry.redhat.io/redhat/redhat-operator-index:v4.17 5 packages: 6 - name: multicluster-engine 7 - name: local-storage-operator 8", "oc mirror --dest-skip-tls --config ocp-mce-imageset.yaml docker://<your-local-registry-dns-name>:<your-local-registry-port>", "imageContentSources: - source: \"quay.io/openshift-release-dev/ocp-release\" mirrors: - \"<your-local-registry-dns-name>:<your-local-registry-port>/openshift/release-images\" - source: \"quay.io/openshift-release-dev/ocp-v4.0-art-dev\" mirrors: - \"<your-local-registry-dns-name>:<your-local-registry-port>/openshift/release\" - source: \"registry.redhat.io/ubi9\" mirrors: - \"<your-local-registry-dns-name>:<your-local-registry-port>/ubi9\" - source: \"registry.redhat.io/multicluster-engine\" mirrors: - \"<your-local-registry-dns-name>:<your-local-registry-port>/multicluster-engine\" - source: \"registry.redhat.io/rhel8\" mirrors: - \"<your-local-registry-dns-name>:<your-local-registry-port>/rhel8\" - source: \"registry.redhat.io/redhat\" mirrors: - \"<your-local-registry-dns-name>:<your-local-registry-port>/redhat\"", "additionalTrustBundle: | -----BEGIN CERTIFICATE----- zzzzzzzzzzz -----END CERTIFICATE-----", "openshift-install agent create cluster-manifests", "apiVersion: v1 kind: Namespace metadata: labels: openshift.io/cluster-monitoring: \"true\" name: multicluster-engine", "apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: multicluster-engine-operatorgroup namespace: multicluster-engine spec: targetNamespaces: - multicluster-engine", "apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: multicluster-engine namespace: multicluster-engine spec: channel: \"stable-2.3\" name: multicluster-engine source: redhat-operators sourceNamespace: openshift-marketplace", "apiVersion: v1 kind: Namespace metadata: annotations:
openshift.io/cluster-monitoring: \"true\" name: openshift-local-storage", "apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: local-operator-group namespace: openshift-local-storage spec: targetNamespaces: - openshift-local-storage", "apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: local-storage-operator namespace: openshift-local-storage spec: installPlanApproval: Automatic name: local-storage-operator source: redhat-operators sourceNamespace: openshift-marketplace", "<assets_directory> ├─ install-config.yaml ├─ agent-config.yaml └─ /openshift ├─ mce_namespace.yaml ├─ mce_operatorgroup.yaml ├─ mce_subscription.yaml ├─ lso_namespace.yaml ├─ lso_operatorgroup.yaml └─ lso_subscription.yaml", "openshift-install agent create image --dir <assets_directory>", "openshift-install agent wait-for install-complete --dir <assets_directory>", "apiVersion: local.storage.openshift.io/v1 kind: LocalVolume metadata: name: assisted-service namespace: openshift-local-storage spec: logLevel: Normal managementState: Managed storageClassDevices: - devicePaths: - /dev/vda - /dev/vdb storageClassName: assisted-service volumeMode: Filesystem", "oc wait localvolume -n openshift-local-storage assisted-service --for condition=Available --timeout 10m", "The `devicePath` is an example and may vary depending on the actual hardware configuration used.", "apiVersion: multicluster.openshift.io/v1 kind: MultiClusterEngine metadata: name: multiclusterengine spec: {}", "apiVersion: agent-install.openshift.io/v1beta1 kind: AgentServiceConfig metadata: name: agent namespace: assisted-installer spec: databaseStorage: storageClassName: assisted-service accessModes: - ReadWriteOnce resources: requests: storage: 10Gi filesystemStorage: storageClassName: assisted-service accessModes: - ReadWriteOnce resources: requests: storage: 10Gi", "apiVersion: hive.openshift.io/v1 kind: ClusterImageSet metadata: name: \"4.17\" spec: releaseImage: quay.io/openshift-release-dev/ocp-release:4.17.0-x86_64", "apiVersion: cluster.open-cluster-management.io/v1 kind: ManagedCluster metadata: labels: local-cluster: \"true\" cloud: auto-detect vendor: auto-detect name: local-cluster spec: hubAcceptsClient: true", "oc wait -n multicluster-engine managedclusters local-cluster --for condition=ManagedClusterJoined=True --timeout 10m", "oc get managedcluster NAME HUB ACCEPTED MANAGED CLUSTER URLS JOINED AVAILABLE AGE local-cluster true https://<your cluster url>:6443 True True 77m", "apiVersion:", "baseDomain:", "metadata:", "metadata: name:", "platform:", "pullSecret:", "{ \"auths\":{ \"cloud.openshift.com\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" }, \"quay.io\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" } } }", "networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 - cidr: fd00:10:128::/56 hostPrefix: 64 serviceNetwork: - 172.30.0.0/16 - fd00:172:16::/112", "networking:", "networking: networkType:", "networking: clusterNetwork:", "networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 - cidr: fd01::/48 hostPrefix: 64", "networking: clusterNetwork: cidr:", "networking: clusterNetwork: hostPrefix:", "networking: serviceNetwork:", "networking: serviceNetwork: - 172.30.0.0/16 - fd02::/112", "networking: machineNetwork:", "networking: machineNetwork: - cidr: 10.0.0.0/16", "networking: machineNetwork: cidr:", "additionalTrustBundle:", "capabilities:", "capabilities: baselineCapabilitySet:", "capabilities: additionalEnabledCapabilities:", "cpuPartitioningMode:", 
"compute:", "compute: architecture:", "compute: hyperthreading:", "compute: name:", "compute: platform:", "compute: replicas:", "featureSet:", "controlPlane:", "controlPlane: architecture:", "controlPlane: hyperthreading:", "controlPlane: name:", "controlPlane: platform:", "controlPlane: replicas:", "credentialsMode:", "fips:", "imageContentSources:", "imageContentSources: source:", "imageContentSources: mirrors:", "publish:", "sshKey:", "platform: baremetal: clusterProvisioningIP:", "platform: baremetal: provisioningNetwork:", "platform: baremetal: provisioningMACAddress:", "platform: baremetal: provisioningNetworkCIDR:", "platform: baremetal: provisioningNetworkInterface:", "platform: baremetal: provisioningDHCPRange:", "platform: baremetal: hosts:", "platform: baremetal: hosts: name:", "platform: baremetal: hosts: bootMACAddress:", "platform: baremetal: hosts: bmc:", "platform: baremetal: hosts: bmc: username:", "platform: baremetal: hosts: bmc: password:", "platform: baremetal: hosts: bmc: address:", "platform: baremetal: hosts: bmc: disableCertificateVerification:", "platform: vsphere:", "platform: vsphere: failureDomains:", "platform: vsphere: failureDomains: name:", "platform: vsphere: failureDomains: region:", "platform: vsphere: failureDomains: server:", "platform: vsphere: failureDomains: zone:", "platform: vsphere: failureDomains: topology: computeCluster:", "platform: vsphere: failureDomains: topology: datacenter:", "platform: vsphere: failureDomains: topology: datastore:", "platform: vsphere: failureDomains: topology: folder:", "platform: vsphere: failureDomains: topology: networks:", "platform: vsphere: failureDomains: topology: resourcePool:", "platform: vsphere: failureDomains: topology template:", "platform: vsphere: vcenters:", "platform: vsphere: vcenters: datacenters:", "platform: vsphere: vcenters: password:", "platform: vsphere: vcenters: port:", "platform: vsphere: vcenters: server:", "platform: vsphere: vcenters: user:", "platform: vsphere: cluster:", "platform: vsphere: datacenter:", "platform: vsphere: defaultDatastore:", "platform: vsphere: folder:", "platform: vsphere: password:", "platform: vsphere: resourcePool:", "platform: vsphere: username:", "platform: vsphere: vCenter:", "apiVersion:", "metadata:", "metadata: name:", "rendezvousIP:", "bootArtifactsBaseURL:", "additionalNTPSources:", "hosts:", "hosts: hostname:", "hosts: interfaces:", "hosts: interfaces: name:", "hosts: interfaces: macAddress:", "hosts: role:", "hosts: rootDeviceHints:", "hosts: rootDeviceHints: deviceName:", "hosts: networkConfig:" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.17/html-single/installing_an_on-premise_cluster_with_the_agent-based_installer/index
Chapter 22. Post-installation security hardening
Chapter 22. Post-installation security hardening RHEL is designed with robust security features enabled by default. However, you can enhance its security further through additional hardening measures. For more information about: Installing security updates and displaying additional details about the updates to keep your RHEL systems secured against newly discovered threats and vulnerabilities, see Managing and monitoring security updates . Processes and practices for securing RHEL servers and workstations against local and remote intrusion, exploitation, and malicious activity, see Security hardening . Controlling how users and processes interact with the files on the system, or which users can perform which actions by mapping them to specific SELinux confined users, see Using SELinux . Tools and techniques to improve the security of your networks and lower the risks of data breaches and intrusions, see Securing networks . Packet filters, such as firewalls, that use rules to control incoming, outgoing, and forwarded network traffic, see Using and configuring firewalld and Getting started with nftables .
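As a quick illustration of two of these topics (applying security errata and opening a service in the packet filter), the commands below are a minimal sketch; the https service is only an example, so adapt packages, services, and zones to your environment:
# Apply only the updates that carry security errata
dnf upgrade --security
# Permanently allow the HTTPS service in the default firewalld zone, then reload
firewall-cmd --permanent --add-service=https
firewall-cmd --reload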
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/interactively_installing_rhel_over_the_network/post-installation-security-hardening_rhel-installer
function::ctime
function::ctime Name function::ctime - Convert seconds since epoch into human-readable date/time string. Synopsis Arguments epochsecs Number of seconds since epoch (as returned by gettimeofday_s ). General Syntax ctime:string(epochsecs:long) Description Takes an argument of seconds since the epoch as returned by gettimeofday_s . Returns a string of the form " Wed Jun 30 21:49:08 1993 " . The string will always be exactly 24 characters. If the time would be unreasonably far in the past (before what can be represented with a 32 bit offset in seconds from the epoch), the returned string will be " a long, long time ago... " . If the time would be unreasonably far in the future, the returned string will be " far far in the future... " (both these strings are also 24 characters wide). Note that the epoch (zero) corresponds to " Thu Jan 1 00:00:00 1970 " . The earliest full date given by ctime, corresponding to epochsecs -2147483648 is " Fri Dec 13 20:45:52 1901 " . The latest full date given by ctime, corresponding to epochsecs 2147483647 is " Tue Jan 19 03:14:07 2038 " . The abbreviations for the days of the week are 'Sun', 'Mon', 'Tue', 'Wed', 'Thu', 'Fri', and 'Sat'. The abbreviations for the months are 'Jan', 'Feb', 'Mar', 'Apr', 'May', 'Jun', 'Jul', 'Aug', 'Sep', 'Oct', 'Nov', and 'Dec'. Note that the real C library ctime function appends a newline ('\n') character to the end of the string, which this function does not. Also note that since the kernel has no concept of timezones, the returned time is always in GMT.
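As a quick way to see the output format, the following one-liner prints the current time using this function and exits (a minimal sketch; it assumes the systemtap package is installed and that you are permitted to run stap):
stap -e 'probe begin { printf("%s\n", ctime(gettimeofday_s())); exit() }'
The output is a 24-character string such as Wed Jun 30 21:49:08 1993 (the trailing newline comes from the printf format, not from ctime).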
[ "function ctime:string(epochsecs:long)" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/systemtap_tapset_reference/api-ctime
Chapter 24. Setting up a Remote Diskless System
Chapter 24. Setting up a Remote Diskless System To set up a basic remote diskless system booted over PXE, you need the following packages: tftp-server xinetd dhcp syslinux dracut-network Note After installing the dracut-network package, add the following line to /etc/dracut.conf : Remote diskless system booting requires both a tftp service (provided by tftp-server ) and a DHCP service (provided by dhcp ). The tftp service is used to retrieve the kernel image and initrd over the network via the PXE loader. Note SELinux is only supported over NFSv4.2. To use SELinux, NFS must be explicitly enabled in /etc/sysconfig/nfs by adding the line: RPCNFSDARGS="-V 4.2" Then, in /var/lib/tftpboot/pxelinux.cfg/default , change root=nfs:server-ip:/exported/root/directory to root=nfs:server-ip:/exported/root/directory,vers=4.2 . Finally, reboot the NFS server. The following sections outline the necessary procedures for deploying remote diskless systems in a network environment. Important Some RPM packages have started using file capabilities (such as setcap and getcap ). However, NFS does not currently support these, so attempting to install or update any packages that use file capabilities will fail. 24.1. Configuring a tftp Service for Diskless Clients Prerequisites Install the necessary packages. See Chapter 24, Setting up a Remote Diskless System . Procedure To configure tftp , perform the following steps: Procedure 24.1. To Configure tftp Enable PXE booting over the network: The tftp root directory ( chroot ) is located in /var/lib/tftpboot . Copy /usr/share/syslinux/pxelinux.0 to /var/lib/tftpboot/ : Create a pxelinux.cfg directory inside the tftp root directory: Configure firewall rules to allow tftp traffic. As tftp supports TCP wrappers, you can configure host access to tftp in the /etc/hosts.allow configuration file. For more information on configuring TCP wrappers and the /etc/hosts.allow configuration file, see the Red Hat Enterprise Linux 7 Security Guide . The hosts_access (5) man page also provides information about /etc/hosts.allow . Next steps After configuring tftp for diskless clients, configure DHCP, NFS, and the exported file system accordingly. For instructions on configuring the DHCP, NFS, and the exported file system, see Section 24.2, "Configuring DHCP for Diskless Clients" and Section 24.3, "Configuring an Exported File System for Diskless Clients" .
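For example, on a Red Hat Enterprise Linux 7 system that uses firewalld, the tftp firewall rule can be added as follows (a minimal sketch; adjust the zone if you do not use the default one):
firewall-cmd --permanent --add-service=tftp
firewall-cmd --reload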
[ "add_dracutmodules+=\" nfs\"", "systemctl enable --now tftp", "cp /usr/share/syslinux/pxelinux.0 /var/lib/tftpboot/", "mkdir -p /var/lib/tftpboot/pxelinux.cfg/" ]
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/storage_administration_guide/ch-disklesssystems
Chapter 11. Virtual machine templates
Chapter 11. Virtual machine templates 11.1. Creating virtual machine templates 11.1.1. About virtual machine templates Preconfigured Red Hat virtual machine templates are listed on the Virtualization Templates page. These templates are available for different versions of Red Hat Enterprise Linux, Fedora, Microsoft Windows 10, and Microsoft Windows Servers. Each Red Hat virtual machine template is preconfigured with the operating system image, default settings for the operating system, flavor (CPU and memory), and workload type (server). The Templates page displays four types of virtual machine templates: Red Hat Supported templates are fully supported by Red Hat. User Supported templates are Red Hat Supported templates that were cloned and created by users. Red Hat Provided templates have limited support from Red Hat. User Provided templates are Red Hat Provided templates that were cloned and created by users. You can use the filters in the template Catalog to sort the templates by attributes such as boot source availability, operating system, and workload. You cannot edit or delete a Red Hat Supported or Red Hat Provided template. You can clone the template, save it as a custom virtual machine template, and then edit it. You can also create a custom virtual machine template by editing a YAML file example. 11.1.2. About virtual machines and boot sources Virtual machines consist of a virtual machine definition and one or more disks that are backed by data volumes. Virtual machine templates enable you to create virtual machines using predefined virtual machine specifications. Every virtual machine template requires a boot source, which is a fully configured virtual machine disk image including configured drivers. Each virtual machine template contains a virtual machine definition with a pointer to the boot source. Each boot source has a predefined name and namespace. For some operating systems, a boot source is automatically provided. If it is not provided, then an administrator must prepare a custom boot source. Provided boot sources are updated automatically to the latest version of the operating system. For auto-updated boot sources, persistent volume claims (PVCs) are created with the cluster's default storage class. If you select a different default storage class after configuration, you must delete the existing data volumes in the cluster namespace that are configured with the default storage class. To use the boot sources feature, install the latest release of OpenShift Virtualization. The namespace openshift-virtualization-os-images enables the feature and is installed with the OpenShift Virtualization Operator. Once the boot source feature is installed, you can create boot sources, attach them to templates, and create virtual machines from the templates. Define a boot source by using a persistent volume claim (PVC) that is populated by uploading a local file, cloning an existing PVC, importing from a registry, or by URL. Attach a boot source to a virtual machine template by using the web console. After the boot source is attached to a virtual machine template, you can create any number of fully configured, ready-to-use virtual machines from the template. 11.1.3. Creating a virtual machine template in the web console You create a virtual machine template by editing a YAML file example in the OpenShift Container Platform web console. Procedure In the web console, click Virtualization Templates in the side menu.
Optional: Use the Project drop-down menu to change the project associated with the new template. All templates are saved to the openshift project by default. Click Create Template . Specify the template parameters by editing the YAML file. Click Create . The template is displayed on the Templates page. Optional: Click Download to download and save the YAML file. 11.1.4. Adding a boot source for a virtual machine template A boot source can be configured for any virtual machine template that you want to use for creating virtual machines or custom templates. When virtual machine templates are configured with a boot source, they are labeled Source available on the Templates page. After you add a boot source to a template, you can create a new virtual machine from the template. There are four methods for selecting and adding a boot source in the web console: Upload local file (creates PVC) URL (creates PVC) Clone (creates PVC) Registry (creates PVC) Prerequisites To add a boot source, you must be logged in as a user with the os-images.kubevirt.io:edit RBAC role or as an administrator. You do not need special privileges to create a virtual machine from a template with a boot source added. To upload a local file, the operating system image file must exist on your local machine. To import via URL, access to the web server with the operating system image is required. For example: the Red Hat Enterprise Linux web page with images. To clone an existing PVC, access to the project with a PVC is required. To import via registry, access to the container registry is required. Procedure In the OpenShift Container Platform console, click Virtualization Templates from the side menu. Click the options menu beside a template and select Edit boot source . Click Add disk . In the Add disk window, select Use this disk as a boot source . Enter the disk name and select a Source , for example, Blank (creates PVC) or Use an existing PVC . Enter a value for Persistent Volume Claim size to specify the PVC size that is adequate for the uncompressed image and any additional space that is required. Select a Type , for example, Disk or CD-ROM . Optional: Click Storage class and select the storage class that is used to create the disk. Typically, this storage class is the default storage class that is created for use by all PVCs. Note Provided boot sources are updated automatically to the latest version of the operating system. For auto-updated boot sources, persistent volume claims (PVCs) are created with the cluster's default storage class. If you select a different default storage class after configuration, you must delete the existing data volumes in the cluster namespace that are configured with the default storage class. Optional: Clear Apply optimized StorageProfile settings to edit the access mode or volume mode. Select the appropriate method to save your boot source: Click Save and upload if you uploaded a local file. Click Save and import if you imported content from a URL or the registry. Click Save and clone if you cloned an existing PVC. Your custom virtual machine template with a boot source is listed on the Catalog page. You can use this template to create a virtual machine. 11.1.4.1. Virtual machine template fields for adding a boot source The following table describes the fields for Add boot source to template window. This window displays when you click Add source for a virtual machine template on the Virtualization Templates page. 
Name Parameter Description Boot source type Upload local file (creates PVC) Upload a file from your local device. Supported file types include gz, xz, tar, and qcow2. URL (creates PVC) Import content from an image available from an HTTP or HTTPS endpoint. Obtain the download link URL from the web page where the image download is available and enter that URL link in the Import URL field. Example: For a Red Hat Enterprise Linux image, log on to the Red Hat Customer Portal, access the image download page, and copy the download link URL for the KVM guest image. PVC (creates PVC) Use a PVC that is already available in the cluster and clone it. Registry (creates PVC) Specify the bootable operating system container that is located in a registry and accessible from the cluster. Example: kubevirt/cirros-registry-disk-demo. Source provider Optional field. Add descriptive text about the source for the template or the name of the user who created the template. Example: Red Hat. Advanced Storage settings StorageClass The storage class that is used to create the disk. Access mode Access mode of the persistent volume. Supported access modes are Single User (RWO) , Shared Access (RWX) , Read Only (ROX) . If Single User (RWO) is selected, the disk can be mounted as read-write by a single node. If Shared Access (RWX) is selected, the disk can be mounted as read-write by many nodes. The kubevirt-storage-class-defaults config map provides access mode defaults for data volumes. The default value is set according to the best option for each storage class in the cluster. Note Shared Access (RWX) is required for some features, such as live migration of virtual machines between nodes. Volume mode Defines whether the persistent volume uses a formatted file system or raw block state. Supported modes are Block and Filesystem . The kubevirt-storage-class-defaults config map provides volume mode defaults for data volumes. The default value is set according to the best option for each storage class in the cluster. 11.1.5. Additional resources Creating and using boot sources Customizing the storage profile 11.2. Editing virtual machine templates You can edit a virtual machine template in the web console. Note You cannot edit a template provided by the OpenShift Virtualization Operator. If you clone the template, you can edit it. 11.2.1. Editing a virtual machine template in the web console You can edit a virtual machine template by using the OpenShift Container Platform web console or the command line interface. Editing a virtual machine template does not affect virtual machines already created from that template. Procedure Navigate to Virtualization Templates in the web console. Click the Options menu beside a virtual machine template and select the object to edit. To edit a Red Hat template, click the Options menu, select Clone to create a custom template, and then edit the custom template. Note Edit boot source reference is disabled if the template's data source is managed by the DataImportCron custom resource or if the template does not have a data volume reference. Click Save . 11.2.1.1. Adding a network interface to a virtual machine template Use this procedure to add a network interface to a virtual machine template. Procedure Click Virtualization Templates from the side menu. Select a virtual machine template to open the Template details screen. Click the Network Interfaces tab. Click Add Network Interface .
In the Add Network Interface window, specify the Name , Model , Network , Type , and MAC Address of the network interface. Click Add . 11.2.1.2. Adding a virtual disk to a virtual machine template Use this procedure to add a virtual disk to a virtual machine template. Procedure Click Virtualization Templates from the side menu. Select a virtual machine template to open the Template details screen. Click the Disks tab and then click Add disk . In the Add disk window, specify the Source , Name , Size , Type , Interface , and Storage Class . Optional: You can enable preallocation if you use a blank disk source and require maximum write performance when creating data volumes. To do so, select the Enable preallocation checkbox. Optional: You can clear Apply optimized StorageProfile settings to change the Volume Mode and Access Mode for the virtual disk. If you do not specify these parameters, the system uses the default values from the kubevirt-storage-class-defaults config map. Click Add . 11.2.1.3. Editing CD-ROMs for Templates Use the following procedure to edit CD-ROMs for virtual machine templates. Procedure Click Virtualization Templates from the side menu. Select a virtual machine template to open the Template details screen. Click the Disks tab. Click the Options menu for the CD-ROM that you want to edit and select Edit . In the Edit CD-ROM window, edit the fields: Source , Persistent Volume Claim , Name , Type , and Interface . Click Save . 11.3. Enabling dedicated resources for virtual machine templates Virtual machines can have resources of a node, such as CPU, dedicated to them to improve performance. 11.3.1. About dedicated resources When you enable dedicated resources for your virtual machine, your virtual machine's workload is scheduled on CPUs that will not be used by other processes. By using dedicated resources, you can improve the performance of the virtual machine and the accuracy of latency predictions. 11.3.2. Prerequisites The CPU Manager must be configured on the node. Verify that the node has the cpumanager = true label before scheduling virtual machine workloads. 11.3.3. Enabling dedicated resources for a virtual machine template You enable dedicated resources for a virtual machine template in the Details tab. Virtual machines that were created from a Red Hat template can be configured with dedicated resources. Procedure In the OpenShift Container Platform console, click Virtualization Templates from the side menu. Select a virtual machine template to open the Template details page. On the Scheduling tab, click the pencil icon beside Dedicated Resources . Select Schedule this workload with dedicated resources (guaranteed policy) . Click Save . 11.4. Deploying a virtual machine template to a custom namespace Red Hat provides preconfigured virtual machine templates that are installed in the openshift namespace. The ssp-operator deploys virtual machine templates to the openshift namespace by default. Templates in the openshift namespace are publicly available to all users. These templates are listed on the Virtualization Templates page for different operating systems. 11.4.1. Creating a custom namespace for templates You can create a custom namespace that is used to deploy virtual machine templates for use by anyone who has permissions to access those templates. To add templates to a custom namespace, edit the HyperConverged custom resource (CR), add commonTemplatesNamespace to the spec, and specify the custom namespace for the virtual machine templates. 
After the HyperConverged CR is modified, the ssp-operator populates the templates in the custom namespace. Prerequisites Install the OpenShift Container Platform CLI oc . Log in as a user with cluster-admin privileges. Procedure Use the following command to create your custom namespace: 11.4.2. Adding templates to a custom namespace The ssp-operator deploys virtual machine templates to the openshift namespace by default. Templates in the openshift namespace are publicly available to all users. When a custom namespace is created and templates are added to that namespace, you can modify or delete virtual machine templates in the openshift namespace. To add templates to a custom namespace, edit the HyperConverged custom resource (CR), which contains the ssp-operator . Procedure View the list of virtual machine templates that are available in the openshift namespace. $ oc get templates -n openshift Edit the HyperConverged CR in your default editor by running the following command: $ oc edit hco -n openshift-cnv kubevirt-hyperconverged View the list of virtual machine templates that are available in the custom namespace. $ oc get templates -n customnamespace Add the commonTemplatesNamespace attribute and specify the custom namespace. Example: apiVersion: hco.kubevirt.io/v1beta1 kind: HyperConverged metadata: name: kubevirt-hyperconverged spec: commonTemplatesNamespace: customnamespace 1 1 The custom namespace for deploying templates. Save your changes and exit the editor. The ssp-operator adds virtual machine templates that exist in the default openshift namespace to the custom namespace. 11.4.2.1. Deleting templates from a custom namespace To delete virtual machine templates from a custom namespace, remove the commonTemplatesNamespace attribute from the HyperConverged custom resource (CR) and delete each template from that custom namespace. Procedure Edit the HyperConverged CR in your default editor by running the following command: $ oc edit hco -n openshift-cnv kubevirt-hyperconverged Remove the commonTemplatesNamespace attribute. apiVersion: hco.kubevirt.io/v1beta1 kind: HyperConverged metadata: name: kubevirt-hyperconverged spec: commonTemplatesNamespace: customnamespace 1 1 The commonTemplatesNamespace attribute to be deleted. Delete a specific template from the custom namespace that was removed. $ oc delete templates -n customnamespace <template_name> Verification Verify that the template was deleted from the custom namespace. $ oc get templates -n customnamespace 11.4.2.2. Additional resources Creating virtual machine templates 11.5. Deleting virtual machine templates You can delete customized virtual machine templates based on Red Hat templates by using the web console. You cannot delete Red Hat templates. 11.5.1. Deleting a virtual machine template in the web console Deleting a virtual machine template permanently removes it from the cluster. Note You can delete customized virtual machine templates. You cannot delete Red Hat-supplied templates. Procedure In the OpenShift Container Platform console, click Virtualization Templates from the side menu. Click the Options menu of a template and select Delete template . Click Delete .
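Although this chapter focuses on the web console, a template can also be instantiated from the command line with the standard OpenShift template workflow. A minimal sketch follows; the template name rhel8-server-small and the NAME parameter are illustrative, so list the real parameters first with oc process --parameters:
# Inspect the parameters that the template accepts
oc process --parameters -n openshift rhel8-server-small
# Render the template and create the virtual machine object
oc process -n openshift rhel8-server-small -p NAME=example-vm | oc apply -f -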
[ "oc create namespace <mycustomnamespace>", "oc get templates -n openshift", "oc edit hco -n openshift-cnv kubevirt-hyperconverged", "oc get templates -n customnamespace", "apiVersion: hco.kubevirt.io/v1beta1 kind: HyperConverged metadata: name: kubevirt-hyperconverged spec: commonTemplatesNamespace: customnamespace 1", "oc edit hco -n openshift-cnv kubevirt-hyperconverged", "apiVersion: hco.kubevirt.io/v1beta1 kind: HyperConverged metadata: name: kubevirt-hyperconverged spec: commonTemplatesNamespace: customnamespace 1", "oc delete templates -n customnamespace <template_name>", "oc get templates -n customnamespace" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.12/html/virtualization/virtual-machine-templates
Chapter 15. Managing security context constraints
Chapter 15. Managing security context constraints 15.1. About security context constraints Similar to the way that RBAC resources control user access, administrators can use security context constraints (SCCs) to control permissions for pods. These permissions include actions that a pod can perform and what resources it can access. You can use SCCs to define a set of conditions that a pod must run with to be accepted into the system. Security context constraints allow an administrator to control: Whether a pod can run privileged containers with the allowPrivilegedContainer flag. Whether a pod is constrained with the allowPrivilegeEscalation flag. The capabilities that a container can request The use of host directories as volumes The SELinux context of the container The container user ID The use of host namespaces and networking The allocation of an FSGroup that owns the pod volumes The configuration of allowable supplemental groups Whether a container requires write access to its root file system The usage of volume types The configuration of allowable seccomp profiles Important Do not set the openshift.io/run-level label on any namespaces in OpenShift Container Platform. This label is for use by internal OpenShift Container Platform components to manage the startup of major API groups, such as the Kubernetes API server and OpenShift API server. If the openshift.io/run-level label is set, no SCCs are applied to pods in that namespace, causing any workloads running in that namespace to be highly privileged. 15.1.1. Default security context constraints The cluster contains several default security context constraints (SCCs) as described in the table below. Additional SCCs might be installed when you install Operators or other components to OpenShift Container Platform. Important Do not modify the default SCCs. Customizing the default SCCs can lead to issues when some of the platform pods deploy or OpenShift Container Platform is upgraded. During upgrades between some versions of OpenShift Container Platform, the values of the default SCCs are reset to the default values, which discards all customizations to those SCCs. Instead, create new SCCs as needed. Table 15.1. Default security context constraints Security context constraint Description anyuid Provides all features of the restricted SCC, but allows users to run with any UID and any GID. hostaccess Allows access to all host namespaces but still requires pods to be run with a UID and SELinux context that are allocated to the namespace. Warning This SCC allows host access to namespaces, file systems, and PIDs. It should only be used by trusted pods. Grant with caution. hostmount-anyuid Provides all the features of the restricted SCC, but allows host mounts and running as any UID and any GID on the system. Warning This SCC allows host file system access as any UID, including UID 0. Grant with caution. hostnetwork Allows using host networking and host ports but still requires pods to be run with a UID and SELinux context that are allocated to the namespace. Warning If additional workloads are run on control plane hosts, use caution when providing access to hostnetwork . A workload that runs hostnetwork on a control plane host is effectively root on the cluster and must be trusted accordingly. node-exporter Used for the Prometheus node exporter. Warning This SCC allows host file system access as any UID, including UID 0. Grant with caution. nonroot Provides all features of the restricted SCC, but allows users to run with any non-root UID. 
The user must specify the UID or it must be specified in the manifest of the container runtime. privileged Allows access to all privileged and host features and the ability to run as any user, any group, any FSGroup, and with any SELinux context. Warning This is the most relaxed SCC and should be used only for cluster administration. Grant with caution. The privileged SCC allows: Users to run privileged pods Pods to mount host directories as volumes Pods to run as any user Pods to run with any MCS label Pods to use the host's IPC namespace Pods to use the host's PID namespace Pods to use any FSGroup Pods to use any supplemental group Pods to use any seccomp profiles Pods to request any capabilities Note Setting privileged: true in the pod specification does not necessarily select the privileged SCC. The SCC that has allowPrivilegedContainer: true and has the highest prioritization will be chosen if the user has the permissions to use it. restricted Denies access to all host features and requires pods to be run with a UID and SELinux context that are allocated to the namespace. This is the most restrictive SCC provided by a new installation and will be used by default for authenticated users. The restricted SCC: Ensures that pods cannot run as privileged Ensures that pods cannot mount host directory volumes Requires that a pod is run as a user in a pre-allocated range of UIDs Requires that a pod is run with a pre-allocated MCS label Allows pods to use any FSGroup Allows pods to use any supplemental group Note The restricted SCC is the most restrictive of the SCCs that ship by default with the system. However, you can create a custom SCC that is even more restrictive. For example, you can create an SCC that restricts readOnlyRootFS to true and allowPrivilegeEscalation to false . 15.1.2. Security context constraints settings Security context constraints (SCCs) are composed of settings and strategies that control the security features a pod has access to. These settings fall into three categories: Category Description Controlled by a boolean Fields of this type default to the most restrictive value. For example, AllowPrivilegedContainer is always set to false if unspecified. Controlled by an allowable set Fields of this type are checked against the set to ensure their value is allowed. Controlled by a strategy Items that have a strategy to generate a value provide: A mechanism to generate the value, and A mechanism to ensure that a specified value falls into the set of allowable values. CRI-O has the following default list of capabilities that are allowed for each container of a pod: CHOWN DAC_OVERRIDE FSETID FOWNER SETGID SETUID SETPCAP NET_BIND_SERVICE KILL The containers use the capabilities from this default list, but pod manifest authors can alter the list by requesting additional capabilities or removing some of the default behaviors. Use the allowedCapabilities , defaultAddCapabilities , and requiredDropCapabilities parameters to control such requests from the pods. With these parameters you can specify which capabilities can be requested, which ones must be added to each container, and which ones must be forbidden, or dropped, from each container. Note You can drop all capabilities from containers by setting the requiredDropCapabilities parameter to ALL . 15.1.3. Security context constraints strategies RunAsUser MustRunAs - Requires a runAsUser to be configured. Uses the configured runAsUser as the default. Validates against the configured runAsUser . Example MustRunAs snippet ...
runAsUser: type: MustRunAs uid: <id> ... MustRunAsRange - Requires minimum and maximum values to be defined if not using pre-allocated values. Uses the minimum as the default. Validates against the entire allowable range. Example MustRunAsRange snippet ... runAsUser: type: MustRunAsRange uidRangeMax: <maxvalue> uidRangeMin: <minvalue> ... MustRunAsNonRoot - Requires that the pod be submitted with a non-zero runAsUser or have the USER directive defined in the image. No default provided. Example MustRunAsNonRoot snippet ... runAsUser: type: MustRunAsNonRoot ... RunAsAny - No default provided. Allows any runAsUser to be specified. Example RunAsAny snippet ... runAsUser: type: RunAsAny ... SELinuxContext MustRunAs - Requires seLinuxOptions to be configured if not using pre-allocated values. Uses seLinuxOptions as the default. Validates against seLinuxOptions . RunAsAny - No default provided. Allows any seLinuxOptions to be specified. SupplementalGroups MustRunAs - Requires at least one range to be specified if not using pre-allocated values. Uses the minimum value of the first range as the default. Validates against all ranges. RunAsAny - No default provided. Allows any supplementalGroups to be specified. FSGroup MustRunAs - Requires at least one range to be specified if not using pre-allocated values. Uses the minimum value of the first range as the default. Validates against the first ID in the first range. RunAsAny - No default provided. Allows any fsGroup ID to be specified. 15.1.4. Controlling volumes The usage of specific volume types can be controlled by setting the volumes field of the SCC. The allowable values of this field correspond to the volume sources that are defined when creating a volume: awsElasticBlockStore azureDisk azureFile cephFS cinder configMap downwardAPI emptyDir fc flexVolume flocker gcePersistentDisk gitRepo glusterfs hostPath iscsi nfs persistentVolumeClaim photonPersistentDisk portworxVolume projected quobyte rbd scaleIO secret storageos vsphereVolume * (A special value to allow the use of all volume types.) none (A special value to disallow the use of all volume types. Exists only for backwards compatibility.) The recommended minimum set of allowed volumes for new SCCs is configMap , downwardAPI , emptyDir , persistentVolumeClaim , secret , and projected . Note This list of allowable volume types is not exhaustive because new types are added with each release of OpenShift Container Platform. Note For backwards compatibility, the usage of allowHostDirVolumePlugin overrides settings in the volumes field. For example, if allowHostDirVolumePlugin is set to false but hostPath is allowed in the volumes field, then the hostPath value will be removed from volumes . 15.1.5. Admission control Admission control with SCCs allows for control over the creation of resources based on the capabilities granted to a user. In terms of the SCCs, this means that an admission controller can inspect the user information made available in the context to retrieve an appropriate set of SCCs. Doing so ensures the pod is authorized to make requests about its operating environment or to generate a set of constraints to apply to the pod. The set of SCCs that admission uses to authorize a pod is determined by the user identity and groups that the user belongs to. Additionally, if the pod specifies a service account, the set of allowable SCCs includes any constraints accessible to the service account.
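For example, to make an SCC usable by a particular workload, you typically grant it to that workload's service account, and you can then confirm which SCC actually admitted a pod (a short sketch; the SCC, service account, pod, and namespace names are placeholders):
# Grant the nonroot SCC to the service account that runs the pods
oc adm policy add-scc-to-user nonroot -z my-service-account -n my-namespace
# Check which SCC admitted a running pod (recorded in the openshift.io/scc annotation)
oc get pod my-pod -n my-namespace -o yaml | grep 'openshift.io/scc'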
Admission uses the following approach to create the final security context for the pod: Retrieve all SCCs available for use. Generate field values for security context settings that were not specified on the request. Validate the final settings against the available constraints. If a matching set of constraints is found, then the pod is accepted. If the request cannot be matched to an SCC, the pod is rejected. A pod must validate every field against the SCC. The following are examples for just two of the fields that must be validated: Note These examples are in the context of a strategy using the pre-allocated values. An FSGroup SCC strategy of MustRunAs If the pod defines an fsGroup ID, then that ID must equal the default fsGroup ID. Otherwise, the pod is not validated by that SCC and the next SCC is evaluated. If the SecurityContextConstraints.fsGroup field has value RunAsAny and the pod specification omits the Pod.spec.securityContext.fsGroup , then this field is considered valid. Note that it is possible that during validation, other SCC settings will reject other pod fields and thus cause the pod to fail. A SupplementalGroups SCC strategy of MustRunAs If the pod specification defines one or more supplementalGroups IDs, then the pod's IDs must equal one of the IDs in the namespace's openshift.io/sa.scc.supplemental-groups annotation. Otherwise, the pod is not validated by that SCC and the next SCC is evaluated. If the SecurityContextConstraints.supplementalGroups field has value RunAsAny and the pod specification omits the Pod.spec.securityContext.supplementalGroups , then this field is considered valid. Note that it is possible that during validation, other SCC settings will reject other pod fields and thus cause the pod to fail. 15.1.6. Security context constraints prioritization Security context constraints (SCCs) have a priority field that affects the ordering when attempting to validate a request by the admission controller. A priority value of 0 is the lowest possible priority. A nil priority is considered a 0 , or lowest, priority. Higher priority SCCs are moved to the front of the set when sorting. When the complete set of available SCCs is determined, the SCCs are ordered in the following manner: The highest priority SCCs are ordered first. If the priorities are equal, the SCCs are sorted from most restrictive to least restrictive. If both the priorities and restrictions are equal, the SCCs are sorted by name. By default, the anyuid SCC granted to cluster administrators is given priority in their SCC set. This allows cluster administrators to run pods as any user by specifying RunAsUser in the pod's SecurityContext . 15.2. About pre-allocated security context constraints values The admission controller is aware of certain conditions in the security context constraints (SCCs) that trigger it to look up pre-allocated values from a namespace and populate the SCC before processing the pod. Each SCC strategy is evaluated independently of other strategies, with the pre-allocated values, where allowed, for each policy aggregated with pod specification values to make the final values for the various IDs defined in the running pod. The following SCCs cause the admission controller to look for pre-allocated values when no ranges are defined in the pod specification: A RunAsUser strategy of MustRunAsRange with no minimum or maximum set. Admission looks for the openshift.io/sa.scc.uid-range annotation to populate range fields. An SELinuxContext strategy of MustRunAs with no level set.
Admission looks for the openshift.io/sa.scc.mcs annotation to populate the level. An FSGroup strategy of MustRunAs . Admission looks for the openshift.io/sa.scc.supplemental-groups annotation. A SupplementalGroups strategy of MustRunAs . Admission looks for the openshift.io/sa.scc.supplemental-groups annotation. During the generation phase, the security context provider uses default values for any parameter values that are not specifically set in the pod. Default values are based on the selected strategy: RunAsAny and MustRunAsNonRoot strategies do not provide default values. If the pod needs a parameter value, such as a group ID, you must define the value in the pod specification. MustRunAs (single value) strategies provide a default value that is always used. For example, for group IDs, even if the pod specification defines its own ID value, the namespace's default parameter value also appears in the pod's groups. MustRunAsRange and MustRunAs (range-based) strategies provide the minimum value of the range. As with a single value MustRunAs strategy, the namespace's default parameter value appears in the running pod. If a range-based strategy is configurable with multiple ranges, it provides the minimum value of the first configured range. Note FSGroup and SupplementalGroups strategies fall back to the openshift.io/sa.scc.uid-range annotation if the openshift.io/sa.scc.supplemental-groups annotation does not exist on the namespace. If neither exists, the SCC is not created. Note By default, the annotation-based FSGroup strategy configures itself with a single range based on the minimum value for the annotation. For example, if your annotation reads 1/3 , the FSGroup strategy configures itself with a minimum and maximum value of 1 . If you want to allow more groups to be accepted for the FSGroup field, you can configure a custom SCC that does not use the annotation. Note The openshift.io/sa.scc.supplemental-groups annotation accepts a comma-delimited list of blocks in the format of <start>/<length> or <start>-<end> . The openshift.io/sa.scc.uid-range annotation accepts only a single block. 15.3. Example security context constraints The following examples show the security context constraints (SCC) format and annotations: Annotated privileged SCC allowHostDirVolumePlugin: true allowHostIPC: true allowHostNetwork: true allowHostPID: true allowHostPorts: true allowPrivilegedContainer: true allowedCapabilities: 1 - '*' apiVersion: security.openshift.io/v1 defaultAddCapabilities: [] 2 fsGroup: 3 type: RunAsAny groups: 4 - system:cluster-admins - system:nodes kind: SecurityContextConstraints metadata: annotations: kubernetes.io/description: 'privileged allows access to all privileged and host features and the ability to run as any user, any group, any fsGroup, and with any SELinux context. WARNING: this is the most relaxed SCC and should be used only for cluster administration. Grant with caution.' creationTimestamp: null name: privileged priority: null readOnlyRootFilesystem: false requiredDropCapabilities: 5 - KILL - MKNOD - SETUID - SETGID runAsUser: 6 type: RunAsAny seLinuxContext: 7 type: RunAsAny seccompProfiles: - '*' supplementalGroups: 8 type: RunAsAny users: 9 - system:serviceaccount:default:registry - system:serviceaccount:default:router - system:serviceaccount:openshift-infra:build-controller volumes: - '*' 1 A list of capabilities that a pod can request. An empty list means that none of the capabilities can be requested, while the special symbol * allows any capabilities.
2 A list of additional capabilities that are added to any pod. 3 The FSGroup strategy, which dictates the allowable values for the security context. 4 The groups that can access this SCC. 5 A list of capabilities to drop from a pod. Or, specify ALL to drop all capabilities. 6 The runAsUser strategy type, which dictates the allowable values for the Security Context. 7 The seLinuxContext strategy type, which dictates the allowable values for the Security Context. 8 The supplementalGroups strategy, which dictates the allowable supplemental groups for the Security Context. 9 The users who can access this SCC. The users and groups fields on the SCC control which users can access the SCC. By default, cluster administrators, nodes, and the build controller are granted access to the privileged SCC. All authenticated users are granted access to the restricted SCC. Without explicit runAsUser setting apiVersion: v1 kind: Pod metadata: name: security-context-demo spec: securityContext: 1 containers: - name: sec-ctx-demo image: gcr.io/google-samples/node-hello:1.0 1 When a container or pod does not request a user ID under which it should be run, the effective UID depends on the SCC that emits this pod. Because the restricted SCC is granted to all authenticated users by default, it will be available to all users and service accounts and used in most cases. The restricted SCC uses the MustRunAsRange strategy for constraining and defaulting the possible values of the securityContext.runAsUser field. The admission plugin will look for the openshift.io/sa.scc.uid-range annotation on the current project to populate range fields, because the SCC itself does not define this range. In the end, a container will have runAsUser equal to the first value of the range, which is hard to predict because every project has different ranges. With explicit runAsUser setting apiVersion: v1 kind: Pod metadata: name: security-context-demo spec: securityContext: runAsUser: 1000 1 containers: - name: sec-ctx-demo image: gcr.io/google-samples/node-hello:1.0 1 A container or pod that requests a specific user ID will be accepted by OpenShift Container Platform only when a service account or a user is granted access to an SCC that allows such a user ID. The SCC can allow arbitrary IDs, an ID that falls into a range, or the exact user ID specific to the request. This configuration is valid for SELinux, fsGroup, and Supplemental Groups. 15.4. Creating security context constraints You can create security context constraints (SCCs) by using the OpenShift CLI ( oc ). Prerequisites Install the OpenShift CLI ( oc ). Log in to the cluster as a user with the cluster-admin role. Procedure Define the SCC in a YAML file named scc_admin.yaml : SecurityContextConstraints object definition kind: SecurityContextConstraints apiVersion: security.openshift.io/v1 metadata: name: scc-admin allowPrivilegedContainer: true runAsUser: type: RunAsAny seLinuxContext: type: RunAsAny fsGroup: type: RunAsAny supplementalGroups: type: RunAsAny users: - my-admin-user groups: - my-admin-group Optionally, you can drop specific capabilities for an SCC by setting the requiredDropCapabilities field with the desired values. Any specified capabilities are dropped from the container. To drop all capabilities, specify ALL . For example, to create an SCC that drops the KILL , MKNOD , and SYS_CHROOT capabilities, add the following to the SCC object: requiredDropCapabilities: - KILL - MKNOD - SYS_CHROOT Note You cannot list a capability in both allowedCapabilities and requiredDropCapabilities .
CRI-O supports the same list of capability values that are found in the Docker documentation . Create the SCC by passing in the file: $ oc create -f scc_admin.yaml Example output securitycontextconstraints "scc-admin" created Verification Verify that the SCC was created: $ oc get scc scc-admin Example output NAME PRIV CAPS SELINUX RUNASUSER FSGROUP SUPGROUP PRIORITY READONLYROOTFS VOLUMES scc-admin true [] RunAsAny RunAsAny RunAsAny RunAsAny <none> false [awsElasticBlockStore azureDisk azureFile cephFS cinder configMap downwardAPI emptyDir fc flexVolume flocker gcePersistentDisk gitRepo glusterfs iscsi nfs persistentVolumeClaim photonPersistentDisk quobyte rbd secret vsphere] 15.5. Role-based access to security context constraints You can specify SCCs as resources that are handled by RBAC. This allows you to scope access to your SCCs to a certain project or to the entire cluster. Assigning users, groups, or service accounts directly to an SCC retains cluster-wide scope. Note You cannot assign an SCC to pods created in one of the default namespaces: default , kube-system , kube-public , openshift-node , openshift-infra , openshift . These namespaces should not be used for running pods or services. To include access to SCCs for your role, specify the scc resource when creating a role. $ oc create role <role-name> --verb=use --resource=scc --resource-name=<scc-name> -n <namespace> This results in the following role definition: apiVersion: rbac.authorization.k8s.io/v1 kind: Role metadata: ... name: role-name 1 namespace: namespace 2 ... rules: - apiGroups: - security.openshift.io 3 resourceNames: - scc-name 4 resources: - securitycontextconstraints 5 verbs: 6 - use 1 The role's name. 2 Namespace of the defined role. Defaults to default if not specified. 3 The API group that includes the SecurityContextConstraints resource. Automatically defined when scc is specified as a resource. 4 An example name for an SCC to which you want to have access. 5 Name of the resource group that allows users to specify SCC names in the resourceNames field. 6 A list of verbs to apply to the role. A local or cluster role with such a rule allows the subjects that are bound to it with a role binding or a cluster role binding to use the user-defined SCC called scc-name . Note Because RBAC is designed to prevent escalation, even project administrators are unable to grant access to an SCC. By default, they are not allowed to use the verb use on SCC resources, including the restricted SCC. 15.6. Reference of security context constraints commands You can manage security context constraints (SCCs) in your instance as normal API objects using the OpenShift CLI ( oc ). Note You must have cluster-admin privileges to manage SCCs. 15.6.1.
Listing security context constraints To get a current list of SCCs: $ oc get scc Example output NAME PRIV CAPS SELINUX RUNASUSER FSGROUP SUPGROUP PRIORITY READONLYROOTFS VOLUMES anyuid false [] MustRunAs RunAsAny RunAsAny RunAsAny 10 false [configMap downwardAPI emptyDir persistentVolumeClaim projected secret] hostaccess false [] MustRunAs MustRunAsRange MustRunAs RunAsAny <none> false [configMap downwardAPI emptyDir hostPath persistentVolumeClaim projected secret] hostmount-anyuid false [] MustRunAs RunAsAny RunAsAny RunAsAny <none> false [configMap downwardAPI emptyDir hostPath nfs persistentVolumeClaim projected secret] hostnetwork false [] MustRunAs MustRunAsRange MustRunAs MustRunAs <none> false [configMap downwardAPI emptyDir persistentVolumeClaim projected secret] node-exporter false [] RunAsAny RunAsAny RunAsAny RunAsAny <none> false [*] nonroot false [] MustRunAs MustRunAsNonRoot RunAsAny RunAsAny <none> false [configMap downwardAPI emptyDir persistentVolumeClaim projected secret] privileged true [*] RunAsAny RunAsAny RunAsAny RunAsAny <none> false [*] restricted false [] MustRunAs MustRunAsRange MustRunAs RunAsAny <none> false [configMap downwardAPI emptyDir persistentVolumeClaim projected secret] 15.6.2. Examining security context constraints You can view information about a particular SCC, including which users, service accounts, and groups the SCC is applied to. For example, to examine the restricted SCC: $ oc describe scc restricted Example output Name: restricted Priority: <none> Access: Users: <none> 1 Groups: system:authenticated 2 Settings: Allow Privileged: false Default Add Capabilities: <none> Required Drop Capabilities: KILL,MKNOD,SYS_CHROOT,SETUID,SETGID Allowed Capabilities: <none> Allowed Seccomp Profiles: <none> Allowed Volume Types: configMap,downwardAPI,emptyDir,persistentVolumeClaim,projected,secret Allow Host Network: false Allow Host Ports: false Allow Host PID: false Allow Host IPC: false Read Only Root Filesystem: false Run As User Strategy: MustRunAsRange UID: <none> UID Range Min: <none> UID Range Max: <none> SELinux Context Strategy: MustRunAs User: <none> Role: <none> Type: <none> Level: <none> FSGroup Strategy: MustRunAs Ranges: <none> Supplemental Groups Strategy: RunAsAny Ranges: <none> 1 Lists which users and service accounts the SCC is applied to. 2 Lists which groups the SCC is applied to. Note To preserve customized SCCs during upgrades, do not edit settings on the default SCCs. 15.6.3. Deleting security context constraints To delete an SCC: $ oc delete scc <scc_name> Note If you delete a default SCC, it will regenerate when you restart the cluster. 15.6.4. Updating security context constraints To update an existing SCC: $ oc edit scc <scc_name> Note To preserve customized SCCs during upgrades, do not edit settings on the default SCCs.
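The role shown in the role-based access section above grants nothing on its own; it takes effect only when bound to a subject. The following is a minimal RoleBinding sketch under the assumption that you want to grant the role to a service account; the role, namespace, and service account names are hypothetical placeholders that match the earlier role definition:
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: use-scc-name
  namespace: namespace
subjects:
- kind: ServiceAccount
  name: my-service-account
  namespace: namespace
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: role-name
An equivalent binding can typically be created with oc create rolebinding use-scc-name --role=role-name --serviceaccount=namespace:my-service-account -n namespace, following the same pattern as the oc create role command shown earlier.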
[ "runAsUser: type: MustRunAs uid: <id>", "runAsUser: type: MustRunAsRange uidRangeMax: <maxvalue> uidRangeMin: <minvalue>", "runAsUser: type: MustRunAsNonRoot", "runAsUser: type: RunAsAny", "allowHostDirVolumePlugin: true allowHostIPC: true allowHostNetwork: true allowHostPID: true allowHostPorts: true allowPrivilegedContainer: true allowedCapabilities: 1 - '*' apiVersion: security.openshift.io/v1 defaultAddCapabilities: [] 2 fsGroup: 3 type: RunAsAny groups: 4 - system:cluster-admins - system:nodes kind: SecurityContextConstraints metadata: annotations: kubernetes.io/description: 'privileged allows access to all privileged and host features and the ability to run as any user, any group, any fsGroup, and with any SELinux context. WARNING: this is the most relaxed SCC and should be used only for cluster administration. Grant with caution.' creationTimestamp: null name: privileged priority: null readOnlyRootFilesystem: false requiredDropCapabilities: 5 - KILL - MKNOD - SETUID - SETGID runAsUser: 6 type: RunAsAny seLinuxContext: 7 type: RunAsAny seccompProfiles: - '*' supplementalGroups: 8 type: RunAsAny users: 9 - system:serviceaccount:default:registry - system:serviceaccount:default:router - system:serviceaccount:openshift-infra:build-controller volumes: - '*'", "apiVersion: v1 kind: Pod metadata: name: security-context-demo spec: securityContext: 1 containers: - name: sec-ctx-demo image: gcr.io/google-samples/node-hello:1.0", "apiVersion: v1 kind: Pod metadata: name: security-context-demo spec: securityContext: runAsUser: 1000 1 containers: - name: sec-ctx-demo image: gcr.io/google-samples/node-hello:1.0", "kind: SecurityContextConstraints apiVersion: security.openshift.io/v1 metadata: name: scc-admin allowPrivilegedContainer: true runAsUser: type: RunAsAny seLinuxContext: type: RunAsAny fsGroup: type: RunAsAny supplementalGroups: type: RunAsAny users: - my-admin-user groups: - my-admin-group", "requiredDropCapabilities: - KILL - MKNOD - SYS_CHROOT", "oc create -f scc_admin.yaml", "securitycontextconstraints \"scc-admin\" created", "oc get scc scc-admin", "NAME PRIV CAPS SELINUX RUNASUSER FSGROUP SUPGROUP PRIORITY READONLYROOTFS VOLUMES scc-admin true [] RunAsAny RunAsAny RunAsAny RunAsAny <none> false [awsElasticBlockStore azureDisk azureFile cephFS cinder configMap downwardAPI emptyDir fc flexVolume flocker gcePersistentDisk gitRepo glusterfs iscsi nfs persistentVolumeClaim photonPersistentDisk quobyte rbd secret vsphere]", "oc create role <role-name> --verb=use --resource=scc --resource-name=<scc-name> -n <namespace>", "apiVersion: rbac.authorization.k8s.io/v1 kind: Role metadata: name: role-name 1 namespace: namespace 2 rules: - apiGroups: - security.openshift.io 3 resourceNames: - scc-name 4 resources: - securitycontextconstraints 5 verbs: 6 - use", "oc get scc", "NAME PRIV CAPS SELINUX RUNASUSER FSGROUP SUPGROUP PRIORITY READONLYROOTFS VOLUMES anyuid false [] MustRunAs RunAsAny RunAsAny RunAsAny 10 false [configMap downwardAPI emptyDir persistentVolumeClaim projected secret] hostaccess false [] MustRunAs MustRunAsRange MustRunAs RunAsAny <none> false [configMap downwardAPI emptyDir hostPath persistentVolumeClaim projected secret] hostmount-anyuid false [] MustRunAs RunAsAny RunAsAny RunAsAny <none> false [configMap downwardAPI emptyDir hostPath nfs persistentVolumeClaim projected secret] hostnetwork false [] MustRunAs MustRunAsRange MustRunAs MustRunAs <none> false [configMap downwardAPI emptyDir persistentVolumeClaim projected secret] node-exporter false [] RunAsAny RunAsAny 
RunAsAny RunAsAny <none> false [*] nonroot false [] MustRunAs MustRunAsNonRoot RunAsAny RunAsAny <none> false [configMap downwardAPI emptyDir persistentVolumeClaim projected secret] privileged true [*] RunAsAny RunAsAny RunAsAny RunAsAny <none> false [*] restricted false [] MustRunAs MustRunAsRange MustRunAs RunAsAny <none> false [configMap downwardAPI emptyDir persistentVolumeClaim projected secret]", "oc describe scc restricted", "Name: restricted Priority: <none> Access: Users: <none> 1 Groups: system:authenticated 2 Settings: Allow Privileged: false Default Add Capabilities: <none> Required Drop Capabilities: KILL,MKNOD,SYS_CHROOT,SETUID,SETGID Allowed Capabilities: <none> Allowed Seccomp Profiles: <none> Allowed Volume Types: configMap,downwardAPI,emptyDir,persistentVolumeClaim,projected,secret Allow Host Network: false Allow Host Ports: false Allow Host PID: false Allow Host IPC: false Read Only Root Filesystem: false Run As User Strategy: MustRunAsRange UID: <none> UID Range Min: <none> UID Range Max: <none> SELinux Context Strategy: MustRunAs User: <none> Role: <none> Type: <none> Level: <none> FSGroup Strategy: MustRunAs Ranges: <none> Supplemental Groups Strategy: RunAsAny Ranges: <none>", "oc delete scc <scc_name>", "oc edit scc <scc_name>" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.9/html/authentication_and_authorization/managing-pod-security-policies
Data Grid downloads
Data Grid downloads Access the Data Grid Software Downloads on the Red Hat customer portal. Note You must have a Red Hat account to access and download Data Grid software.
null
https://docs.redhat.com/en/documentation/red_hat_data_grid/8.5/html/data_grid_rest_api/rhdg-downloads_datagrid
3.5.2. Modifying or Deleting a Fence Device
3.5.2. Modifying or Deleting a Fence Device To modify or delete a fence device, follow these steps: At the detailed menu for the cluster (below the clusters menu), click Shared Fence Devices . Clicking Shared Fence Devices displays the fence devices for a cluster, along with the menu items for fence device configuration: Add a Fence Device and Configure a Fence Device . Click Configure a Fence Device . Clicking Configure a Fence Device displays a list of fence devices under Configure a Fence Device . Click a fence device in the list. Clicking a fence device in the list displays a Fence Device Form page for the fence device selected from the list. Either modify or delete the fence device as follows: To modify the fence device, enter changes to the parameters displayed. Refer to Appendix B, Fence Device Parameters for more information about fence device parameters. Click Update this fence device and wait for the configuration to be updated. To delete the fence device, click Delete this fence device and wait for the configuration to be updated. Note You can also create shared fence devices on the node configuration page. However, you can only modify or delete a shared fence device via Shared Fence Devices at the detailed menu for the cluster (below the clusters menu).
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/cluster_administration/s2-modify-delete-fence-devices-conga-ca
10.3.2. Establishing a Wireless Connection
10.3.2. Establishing a Wireless Connection This section explains how to use NetworkManager to configure a wireless (also known as Wi-Fi or 802.11 a/b/g/n ) connection to an Access Point. To configure a mobile broadband (such as 3G) connection, see Section 10.3.3, "Establishing a Mobile Broadband Connection" . Quickly Connecting to an Available Access Point The easiest way to connect to an available access point is to left-click on the NetworkManager applet, locate the Service Set Identifier (SSID) of the access point in the list of Available networks, and click on it. If the access point is secured, a dialog prompts you for authentication. Figure 10.9. Authenticating to a wireless access point NetworkManager tries to auto-detect the type of security used by the access point. If there are multiple possibilities, NetworkManager guesses the security type and presents it in the Wireless security dropdown menu. To see if there are multiple choices, click the Wireless security dropdown menu and select the type of security the access point is using. If you are unsure, try connecting to each type in turn. Finally, enter the key or passphrase in the Password field. Certain password types, such as a 40-bit WEP or 128-bit WPA key, are invalid unless they are of a requisite length. The Connect button will remain inactive until you enter a key of the length required for the selected security type. To learn more about wireless security, see Section 10.3.9.2, "Configuring Wireless Security" . Note In the case of WPA and WPA2 (Personal and Enterprise), an option to select between Auto, WPA and WPA2 has been added. This option is intended for use with an access point that is offering both WPA and WPA2. Select one of the protocols if you would like to prevent roaming between the two protocols. Roaming between WPA and WPA2 on the same access point can cause loss of service. If NetworkManager connects to the access point successfully, its applet icon will change into a graphical indicator of the wireless connection's signal strength. Figure 10.10. Applet icon indicating a wireless connection signal strength of 75% You can also edit the settings for one of these auto-created access point connections just as if you had added it yourself. The Wireless tab of the Network Connections window lists all of the connections you have ever tried to connect to: NetworkManager names each of them Auto <SSID> , where SSID is the Service Set identifier of the access point. Figure 10.11. An example of access points that have previously been connected to Connecting to a Hidden Wireless Network All access points have a Service Set Identifier (SSID) to identify them. However, an access point may be configured not to broadcast its SSID, in which case it is hidden , and will not show up in NetworkManager 's list of Available networks. You can still connect to a wireless access point that is hiding its SSID as long as you know its SSID, authentication method, and secrets. To connect to a hidden wireless network, left-click NetworkManager 's applet icon and select Connect to Hidden Wireless Network to cause a dialog to appear. If you have connected to the hidden network before, use the Connection dropdown to select it, and click Connect . If you have not, leave the Connection dropdown as New , enter the SSID of the hidden network, select its Wireless security method, enter the correct authentication secrets, and click Connect . For more information on wireless security settings, see Section 10.3.9.2, "Configuring Wireless Security" . 
Editing a Connection, or Creating a Completely New One You can edit an existing connection that you have tried or succeeded in connecting to in the past by opening the Wireless tab of the Network Connections , selecting the connection by name (words which follow Auto refer to the SSID of an access point), and clicking Edit . You can create a new connection by opening the Network Connections window, clicking the Add button, selecting Wireless , and clicking the Create button. Right-click on the NetworkManager applet icon in the Notification Area and click Edit Connections . The Network Connections window appears. Click the Add button. Select the Wireless entry from the list. Click the Create button. Figure 10.12. Editing the newly created Wireless connection 1 Configuring the Connection Name, Auto-Connect Behavior, and Availability Settings Three settings in the Editing dialog are common to all connection types: Connection name - Enter a descriptive name for your network connection. This name will be used to list this connection in the Wireless section of the Network Connections window. By default, wireless connections are named the same as the SSID of the wireless access point. You can rename the wireless connection without affecting its ability to connect, but it is recommended to retain the SSID name. Connect automatically - Check this box if you want NetworkManager to auto-connect to this connection when it is available. See Section 10.2.3, "Connecting to a Network Automatically" for more information. Available to all users - Check this box to create a connection available to all users on the system. Changing this setting may require root privileges. See Section 10.2.4, "User and System Connections" for details. Configuring the Wireless Tab SSID All access points have a Service Set identifier to identify them. However, an access point may be configured not to broadcast its SSID, in which case it is hidden , and will not show up in NetworkManager 's list of Available networks. You can still connect to a wireless access point that is hiding its SSID as long as you know its SSID (and authentication secrets). For information on connecting to a hidden wireless network, see the section called "Connecting to a Hidden Wireless Network" . Mode Infrastructure - Set Mode to Infrastructure if you are connecting to a dedicated wireless access point or one built into a network device such as a router or a switch. Ad-hoc - Set Mode to Ad-hoc if you are creating a peer-to-peer network for two or more mobile devices to communicate directly with each other. If you use Ad-hoc mode, referred to as Independent Basic Service Set ( IBSS ) in the 802.11 standard, you must ensure that the same SSID is set for all participating wireless devices, and that they are all communicating over the same channel. BSSID The Basic Service Set Identifier (BSSID) is the MAC address of the specific wireless access point you are connecting to when in Infrastructure mode. This field is blank by default, and you are able to connect to a wireless access point by SSID without having to specify its BSSID . If the BSSID is specified, it will force the system to associate to a specific access point only. For ad-hoc networks, the BSSID is generated randomly by the mac80211 subsystem when the ad-hoc network is created. 
It is not displayed by NetworkManager . MAC address Like an Ethernet Network Interface Card (NIC), a wireless adapter has a unique MAC address (Media Access Control; also known as a hardware address ) that identifies it to the system. Running the ip addr command will show the MAC address associated with each interface. For example, in the following ip addr output, the MAC address for the wlan0 interface (which is 00:1c:bf:02:f8:70 ) immediately follows the link/ether keyword: A single system could have one or more wireless network adapters connected to it. The MAC address field therefore allows you to associate a specific wireless adapter with a specific connection (or connections). As mentioned, you can determine the MAC address using the ip addr command, and then copy and paste that value into the MAC address text-entry field. MTU The MTU (Maximum Transmission Unit) value represents the size in bytes of the largest packet that the connection will use to transmit. If set to a non-zero number, only packets of the specified size or smaller will be transmitted. Larger packets are broken up into multiple Ethernet frames. It is recommended to leave this setting on automatic . Saving Your New (or Modified) Connection and Making Further Configurations Once you have finished editing the wireless connection, click the Apply button and NetworkManager will immediately save your customized configuration. Given a correct configuration, you can successfully connect to the modified connection by selecting it from the NetworkManager Notification Area applet. See Section 10.2.1, "Connecting to a Network" for details on selecting and connecting to a network. You can further configure an existing connection by selecting it in the Network Connections window and clicking Edit to return to the Editing dialog. Then, to configure: security authentication for the wireless connection, click the Wireless Security tab and proceed to Section 10.3.9.2, "Configuring Wireless Security" ; IPv4 settings for the connection, click the IPv4 Settings tab and proceed to Section 10.3.9.4, "Configuring IPv4 Settings" ; or, IPv6 settings for the connection, click the IPv6 Settings tab and proceed to Section 10.3.9.5, "Configuring IPv6 Settings" .
[ "~]# ip addr 1: lo: <LOOPBACK,UP,LOWER_UP> mtu 16436 qdisc noqueue state UNKNOWN link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 inet 127.0.0.1/8 scope host lo inet6 ::1/128 scope host valid_lft forever preferred_lft forever 2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UNKNOWN qlen 1000 link/ether 52:54:00:26:9e:f1 brd ff:ff:ff:ff:ff:ff inet 192.168.122.251/24 brd 192.168.122.255 scope global eth0 inet6 fe80::5054:ff:fe26:9ef1/64 scope link valid_lft forever preferred_lft forever 3: wlan0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP qlen 1000 link/ether 00:1c:bf:02:f8:70 brd ff:ff:ff:ff:ff:ff inet 10.200.130.67/24 brd 10.200.130.255 scope global wlan0 inet6 fe80::21c:bfff:fe02:f870/64 scope link valid_lft forever preferred_lft forever" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/deployment_guide/sec-establishing_a_wireless_connection
Chapter 2. Installing the AMQ Streams operator from the OperatorHub
Chapter 2. Installing the AMQ Streams operator from the OperatorHub You can install and subscribe to the AMQ Streams operator using the OperatorHub in the OpenShift Container Platform web console. This procedure describes how to create a project and install the AMQ Streams operator to that project. A project is a representation of a namespace. For manageability, it is a good practice to use namespaces to separate functions. Warning Make sure you use the appropriate update channel. If you are on a supported version of OpenShift, installing AMQ Streams from the default stable channel is generally safe. However, we do not recommend enabling automatic updates on the stable channel. An automatic upgrade will skip any necessary steps prior to upgrade. Use automatic upgrades only on version-specific channels. Prerequisites Access to an OpenShift Container Platform web console using an account with cluster-admin or strimzi-admin permissions. Procedure Navigate in the OpenShift web console to the Home > Projects page and create a project (namespace) for the installation. We use a project named amq-streams-kafka in this example. Navigate to the Operators > OperatorHub page. Scroll or type a keyword into the Filter by keyword box to find the Red Hat Integration - AMQ Streams operator. The operator is located in the Streaming & Messaging category. Click Red Hat Integration - AMQ Streams to display the operator information. Read the information about the operator and click Install . On the Install Operator page, choose from the following installation and update options: Update Channel : Choose the update channel for the operator. The (default) stable channel contains all the latest updates and releases, including major, minor, and micro releases, which are assumed to be well tested and stable. An amq-streams- X .x channel contains the minor and micro release updates for a major release, where X is the major release version number. An amq-streams- X.Y .x channel contains the micro release updates for a minor release, where X is the major release version number and Y is the minor release version number. Installation Mode : Choose the project you created to install the operator on a specific namespace. You can install the AMQ Streams operator to all namespaces in the cluster (the default option) or a specific namespace. We recommend that you dedicate a specific namespace to the Kafka cluster and other AMQ Streams components. Update approval : By default, the AMQ Streams operator is automatically upgraded to the latest AMQ Streams version by the Operator Lifecycle Manager (OLM). Optionally, select Manual if you want to manually approve future upgrades. For more information, see the Operators guide in the OpenShift documentation. Click Install to install the operator to your selected namespace. The AMQ Streams operator deploys the Cluster Operator, CRDs, and role-based access control (RBAC) resources to the selected namespace. After the operator is ready for use, navigate to Operators > Installed Operators to verify that the operator has installed to the selected namespace. The status will show as Succeeded . You can now use the AMQ Streams operator to deploy Kafka components, starting with a Kafka cluster.
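If you prefer to manage the operator declaratively rather than through the web console, the choices in this procedure map onto an Operator Lifecycle Manager Subscription resource. The following is a minimal sketch; the package name amq-streams and the redhat-operators catalog source are assumptions to verify against your cluster's OperatorHub, and installing into a single namespace also requires an OperatorGroup in that namespace:
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: amq-streams
  namespace: amq-streams-kafka
spec:
  channel: stable
  name: amq-streams
  source: redhat-operators
  sourceNamespace: openshift-marketplace
  installPlanApproval: Manual
The channel, namespace, and installPlanApproval fields correspond to the Update Channel, Installation Mode, and Update approval options described above.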
null
https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.1/html/getting_started_with_amq_streams_on_openshift/proc-deploying-cluster-operator-hub-str
Chapter 133. Spring RabbitMQ
Chapter 133. Spring RabbitMQ Since Camel 3.8 Both producer and consumer are supported The Spring RabbitMQ component allows you to produce and consume messages from RabbitMQ instances using the Spring RabbitMQ client. 133.1. Dependencies When using spring-rabbitmq with Red Hat build of Camel Spring Boot, use the following Maven dependency to enable support for auto configuration: <dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-spring-rabbitmq-starter</artifactId> </dependency> The version is specified using the BOM in the following way. <dependencyManagement> <dependencies> <dependency> <groupId>com.redhat.camel.springboot.platform</groupId> <artifactId>camel-spring-boot-bom</artifactId> <version>${camel-spring-boot-version}</version> <type>pom</type> <scope>import</scope> </dependency> </dependencies> </dependencyManagement> 133.2. URI format spring-rabbitmq:exchangeName?options The exchangeName determines the exchange to which the produced messages are sent. In the case of consumers, the exchangeName determines the exchange the queue is bound to. 133.3. Configuring Options Camel components are configured on two separate levels: component level endpoint level 133.3.1. Configuring Component Options At the component level, you set general and shared configurations that are then inherited by the endpoints. It is the highest configuration level. For example, a component may have security settings, credentials for authentication, URLs for network connection, and so forth. Some components only have a few options, and others may have many. Because components typically have pre-configured defaults that are commonly used, you may often only need to configure a few options on a component, or none at all. You can configure components using: the Component DSL . in a configuration file (application.properties, *.yaml files, etc). directly in the Java code. 133.3.2. Configuring Endpoint Options You usually spend more time setting up endpoints because they have many options. These options help you customize what you want the endpoint to do. The options are also categorized into whether the endpoint is used as a consumer (from), as a producer (to), or both. Configuring endpoints is most often done directly in the endpoint URI as path and query parameters. You can also use the Endpoint DSL and DataFormat DSL as a type-safe way of configuring endpoints and data formats in Java. A good practice when configuring options is to use Property Placeholders . Property placeholders provide a few benefits: They help prevent using hardcoded URLs, port numbers, sensitive information, and other settings. They allow externalizing the configuration from the code. They help the code to become more flexible and reusable. The following two sections list all the options, firstly for the component followed by the endpoint. 133.4. Component Options The Spring RabbitMQ component supports 29 options that are listed below. Name Description Default Type amqpAdmin (common) Autowired Optional AMQP Admin service to use for auto declaring elements (queues, exchanges, bindings). AmqpAdmin connectionFactory (common) Autowired The connection factory to be used. A connection factory must be configured either on the component or endpoint. ConnectionFactory testConnectionOnStartup (common) Specifies whether to test the connection on startup. This ensures that when Camel starts, all the consumers have a valid connection to the broker. If a connection cannot be granted then Camel throws an exception on startup.
This ensures that Camel is not started with failed connections. The producers are tested as well. false boolean autoDeclare (consumer) Specifies whether the consumer should auto declare binding between exchange, queue and routing key when starting. Enabling this can be good for development to make it easy to stand up exchanges, queues and bindings on the broker. false boolean autoStartup (consumer) Specifies whether the consumer container should auto-startup. true boolean bridgeErrorHandler (consumer) Allows for bridging the consumer to the Camel routing Error Handler, which means any exceptions that occur while the consumer is trying to pick up incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions that will be logged at WARN or ERROR level and ignored. false boolean deadLetterExchange (consumer) The name of the dead letter exchange. String deadLetterExchangeType (consumer) The type of the dead letter exchange. Enum values: direct fanout headers topic direct String deadLetterQueue (consumer) The name of the dead letter queue. String deadLetterRoutingKey (consumer) The routing key for the dead letter exchange. String maximumRetryAttempts (consumer) How many times a Rabbitmq consumer will retry the same message if Camel failed to process the message. 5 int rejectAndDontRequeue (consumer) Whether a Rabbitmq consumer should reject the message without requeuing. This enables failed messages to be sent to a Dead Letter Exchange/Queue, if the broker is so configured. true boolean retryDelay (consumer) Delay in msec a Rabbitmq consumer will wait before redelivering a message that Camel failed to process. 1000 int concurrentConsumers (consumer (advanced)) The number of consumers. 1 int errorHandler (consumer (advanced)) To use a custom ErrorHandler for handling exceptions from the message listener (consumer). ErrorHandler listenerContainerFactory (consumer (advanced)) To use a custom factory for creating and configuring ListenerContainer to be used by the consumer for receiving messages. ListenerContainerFactory maxConcurrentConsumers (consumer (advanced)) The maximum number of consumers (available only with SMLC). Integer messageListenerContainerType (consumer (advanced)) The type of the MessageListenerContainer. Enum values: DMLC SMLC DMLC String prefetchCount (consumer (advanced)) Tell the broker how many messages to send to each consumer in a single request. Often this can be set quite high to improve throughput. 250 int retry (consumer (advanced)) Custom retry configuration to use. If this is configured then the other settings such as maximumRetryAttempts for retry are not in use. RetryOperationsInterceptor shutdownTimeout (consumer (advanced)) The time to wait for workers in milliseconds after the container is stopped. If any workers are active when the shutdown signal comes they will be allowed to finish processing as long as they can finish within this timeout. 5000 long allowNullBody (producer) Whether to allow sending messages with no body. If this option is false and the message body is null, then a MessageConversionException is thrown. false boolean lazyStartProducer (producer) Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started.
By deferring this startup to be lazy, the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time. false boolean replyTimeout (producer) Specify the timeout in milliseconds to be used when waiting for a reply message when doing request/reply messaging. The default value is 5 seconds. A negative value indicates an indefinite timeout. 5000 long autowiredEnabled (advanced) Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatically configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true boolean ignoreDeclarationExceptions (advanced) Switch on ignoring exceptions, such as mismatched properties, when declaring. false boolean messageConverter (advanced) To use a custom MessageConverter so you can be in control how to map to/from a org.springframework.amqp.core.Message. MessageConverter messagePropertiesConverter (advanced) To use a custom MessagePropertiesConverter so you can be in control how to map to/from a org.springframework.amqp.core.MessageProperties. MessagePropertiesConverter headerFilterStrategy (filter) To use a custom org.apache.camel.spi.HeaderFilterStrategy to filter header to and from Camel message. HeaderFilterStrategy 133.5. Endpoint Options The Spring RabbitMQ endpoint is configured using URI syntax: spring-rabbitmq:exchangeName The following are the path and query parameters: 133.5.1. Path Parameters (1 parameter) Name Description Default Type exchangeName (common) Required The exchange name determines the exchange to which the produced messages will be sent. In the case of consumers, the exchange name determines the exchange the queue will be bound to. Note: to use the default exchange, do not use an empty name; use default instead. String 133.5.2. Query Parameters (34 parameters) Name Description Default Type connectionFactory (common) The connection factory to be used. A connection factory must be configured either on the component or endpoint. ConnectionFactory disableReplyTo (common) Specifies whether Camel ignores the ReplyTo header in messages. If true, Camel does not send a reply back to the destination specified in the ReplyTo header. You can use this option if you want Camel to consume from a route and you do not want Camel to automatically send back a reply message because another component in your code handles the reply message. You can also use this option if you want to use Camel as a proxy between different message brokers and you want to route messages from one system to another. false boolean routingKey (common) The value of a routing key to use. Default is empty which is not helpful when using the default (or any direct) exchange, but fine if the exchange is a headers exchange for instance. String testConnectionOnStartup (common) Specifies whether to test the connection on startup. This ensures that when Camel starts, all the consumers have a valid connection to the broker. If a connection cannot be granted then Camel throws an exception on startup. This ensures that Camel is not started with failed connections. The producers are tested as well.
false boolean acknowledgeMode (consumer) Flag controlling the behaviour of the container with respect to message acknowledgement. The most common usage is to let the container handle the acknowledgements (so the listener doesn't need to know about the channel or the message). Set to AcknowledgeMode.MANUAL if the listener will send the acknowledgements itself using Channel.basicAck(long, boolean). Manual acks are consistent with either a transactional or non-transactional channel, but if you are doing no other work on the channel at the same time other than receiving a single message then the transaction is probably unnecessary. Set to AcknowledgeMode.NONE to tell the broker not to expect any acknowledgements, and it will assume all messages are acknowledged as soon as they are sent (this is autoack in native Rabbit broker terms). If AcknowledgeMode.NONE then the channel cannot be transactional (so the container will fail on start up if that flag is accidentally set). Enum values: NONE MANUAL AUTO AcknowledgeMode asyncConsumer (consumer) Whether the consumer processes the Exchange asynchronously. If enabled then the consumer may pick up the message from the queue, while the message is being processed asynchronously (by the Asynchronous Routing Engine). This means that messages may be processed not 100% strictly in order. If disabled (as default) then the Exchange is fully processed before the consumer will pick up the next message from the queue. false boolean autoDeclare (consumer) Specifies whether the consumer should auto declare binding between exchange, queue and routing key when starting. true boolean autoStartup (consumer) Specifies whether the consumer container should auto-startup. true boolean deadLetterExchange (consumer) The name of the dead letter exchange. String deadLetterExchangeType (consumer) The type of the dead letter exchange. Enum values: direct fanout headers topic direct String deadLetterQueue (consumer) The name of the dead letter queue. String deadLetterRoutingKey (consumer) The routing key for the dead letter exchange. String exchangeType (consumer) The type of the exchange. Enum values: direct fanout headers topic direct String exclusive (consumer) Set to true for an exclusive consumer. false boolean maximumRetryAttempts (consumer) How many times a Rabbitmq consumer will retry the same message if Camel failed to process the message. 5 int noLocal (consumer) Set to true for a no-local consumer. false boolean queues (consumer) The queue(s) to use for consuming messages. Multiple queue names can be separated by comma. If none has been configured then Camel will generate a unique id as the queue name for the consumer. String rejectAndDontRequeue (consumer) Whether a Rabbitmq consumer should reject the message without requeuing. This enables failed messages to be sent to a Dead Letter Exchange/Queue, if the broker is so configured. true boolean retryDelay (consumer) Delay in msec a Rabbitmq consumer will wait before redelivering a message that Camel failed to process. 1000 int bridgeErrorHandler (consumer (advanced)) Allows for bridging the consumer to the Camel routing Error Handler, which means any exceptions that occur while the consumer is trying to pick up incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions that will be logged at WARN or ERROR level and ignored.
false boolean concurrentConsumers (consumer (advanced)) The number of consumers. Integer exceptionHandler (consumer (advanced)) To let the consumer use a custom ExceptionHandler. Notice if the option bridgeErrorHandler is enabled then this option is not in use. By default the consumer will deal with exceptions that will be logged at WARN or ERROR level and ignored. ExceptionHandler exchangePattern (consumer (advanced)) Sets the exchange pattern when the consumer creates an exchange. Enum values: InOnly InOut InOptionalOut ExchangePattern maxConcurrentConsumers (consumer (advanced)) The maximum number of consumers (available only with SMLC). Integer messageListenerContainerType (consumer (advanced)) The type of the MessageListenerContainer. Enum values: DMLC SMLC DMLC String prefetchCount (consumer (advanced)) Tell the broker how many messages to send in a single request. Often this can be set quite high to improve throughput. Integer retry (consumer (advanced)) Custom retry configuration to use. If this is configured then the other settings such as maximumRetryAttempts for retry are not in use. RetryOperationsInterceptor replyTimeout (producer) Specify the timeout in milliseconds to be used when waiting for a reply message when doing request/reply messaging. The default value is 5 seconds. A negative value indicates an indefinite timeout. 5000 long usePublisherConnection (producer) Use a separate connection for publishers and consumers. false boolean lazyStartProducer (producer (advanced)) Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy, the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time. false boolean args (advanced) Specify arguments for configuring the different RabbitMQ concepts; a different prefix is required for each element: arg.consumer. arg.exchange. arg.queue. arg.binding. arg.dlq.exchange. arg.dlq.queue. arg.dlq.binding. For example to declare a queue with message ttl argument: args=arg.queue.x-message-ttl=60000. Map messageConverter (advanced) To use a custom MessageConverter so you can be in control how to map to/from a org.springframework.amqp.core.Message. MessageConverter messagePropertiesConverter (advanced) To use a custom MessagePropertiesConverter so you can be in control how to map to/from a org.springframework.amqp.core.MessageProperties. MessagePropertiesConverter synchronous (advanced) Sets whether synchronous processing should be strictly used. false boolean 133.6. Message Headers The Spring RabbitMQ component supports 2 message headers that are listed below: Name Description Default Type CamelSpringRabbitmqRoutingOverrideKey (common) Constant: ROUTING_OVERRIDE_KEY The routing key. String CamelSpringRabbitmqExchangeOverrideName (common) Constant: EXCHANGE_OVERRIDE_NAME The exchange name. String 133.7. Using a connection factory To connect to RabbitMQ you must set up a ConnectionFactory (same as JMS) with the login details as described below. It is recommended to use CachingConnectionFactory from spring-rabbit as it comes with connection pooling out of the box.
<bean id="rabbitConnectionFactory" class="org.springframework.amqp.rabbit.connection.CachingConnectionFactory"> <property name="uri" value="amqp://localhost:5672"/> </bean> The ConnectionFactory is auto-detected by default, so you do not need to configure it explicitly on the endpoint. <camelContext> <route> <from uri="direct:cheese"/> <to uri="spring-rabbitmq:foo?routingKey=cheese"/> </route> </camelContext> 133.8. Default Exchange Name To use the default exchange name (which is an empty exchange name in RabbitMQ), you must use default as the name in the endpoint URI, like: to("spring-rabbitmq:default?routingKey=foo") 133.9. Auto declare exchanges, queues and bindings Before you can send or receive messages from RabbitMQ, you must first set up the exchanges, queues and bindings. In development mode, Camel can automatically do this. You can enable this by setting autoDeclare=true on the SpringRabbitMQComponent . Then Spring RabbitMQ automatically declares the elements and sets up the binding between the exchange, queue and routing keys. The elements can be configured using the multi-valued args option. For example, to specify the queue as durable and exclusive, you can configure the endpoint URI with arg.queue.durable=true&arg.queue.exclusive=true . Exchanges Option Type Description Default autoDelete boolean True if the server should delete the exchange when it is no longer in use (if all bindings are deleted). false durable boolean A durable exchange will survive a server restart. true You can also configure any additional x- arguments. See details in the RabbitMQ documentation. Queues Option Type Description Default autoDelete boolean True if the server should delete the queue when it is no longer in use (if all bindings are deleted). false durable boolean A durable queue will survive a server restart. false exclusive boolean Whether the queue is exclusive false x-dead-letter-exchange String The name of the dead letter exchange. If none configured then the component configured value is used. x-dead-letter-routing-key String The routing key for the dead letter exchange. If none configured then the component configured value is used. You can also configure any additional x- arguments, such as the message time to live using x-message-ttl , and many others. See details in the RabbitMQ documentation. 133.10. Mapping from Camel to RabbitMQ The message body is mapped from the Camel Message body to a byte[] , which is the type that RabbitMQ uses for the message body. Camel uses its type converter to convert the message body to a byte array. Spring Rabbit comes out of the box with support for mapping Java serialized objects, but Camel Spring RabbitMQ does not support this due to security vulnerabilities, and because using Java objects is a bad design as it enforces strong coupling. Custom message headers are mapped from Camel Message headers to RabbitMQ headers. This behaviour can be customized by configuring a new implementation of HeaderFilterStrategy on the Camel component. 133.11. Request / Reply Request and reply messaging is supported using RabbitMQ direct reply-to . The example below does request/reply, where the message is sent using the cheese exchange name and routing key foo.bar , and is consumed by the second Camel route, which prepends `Hello ` to the message and then sends it back.
So if we send World as the message body to direct:start , then we can see the message being logged: log:request ⇒ World log:input ⇒ World log:response ⇒ Hello World from("direct:start") .to("log:request") .to(ExchangePattern.InOut, "spring-rabbitmq:cheese?routingKey=foo.bar") .to("log:response"); from("spring-rabbitmq:cheese?queues=myqueue&routingKey=foo.bar") .to("log:input") .transform(body().prepend("Hello ")); 133.12. Reuse endpoint and send to different destinations computed at runtime If you need to send messages to a lot of different RabbitMQ exchanges, you can reuse an endpoint and specify the real destination in a message header. This allows Camel to reuse the same endpoint, but send to different exchanges. This greatly reduces the number of endpoints created and economizes on memory and thread resources. Note Using toD is easier than specifying the dynamic destination with headers. You can specify the destination using the following headers: Header Type Description CamelSpringRabbitmqExchangeOverrideName String The exchange name. CamelSpringRabbitmqRoutingOverrideKey String The routing key. For example, the following route shows how you can compute a destination at run time and use it to override the exchange appearing in the endpoint URL: from("file://inbox") .to("bean:computeDestination") .to("spring-rabbitmq:dummy"); The exchange name, dummy , is just a placeholder. It must be provided as part of the RabbitMQ endpoint URL, but it is ignored in this example. In the computeDestination bean, specify the real destination by setting the CamelSpringRabbitmqExchangeOverrideName header as follows: public void setExchangeHeader(Exchange exchange) { String region = .... exchange.getIn().setHeader("CamelSpringRabbitmqExchangeOverrideName", "order-" + region); } Camel reads this header and uses it as the exchange name instead of the one configured on the endpoint. So, in this example Camel sends the message to spring-rabbitmq:order-emea , assuming the region value was emea . The producer removes both the CamelSpringRabbitmqExchangeOverrideName and CamelSpringRabbitmqRoutingOverrideKey headers from the exchange and does not propagate them to the created RabbitMQ message, to avoid accidental loops in the routes (in scenarios where the message is forwarded to another RabbitMQ endpoint). 133.13. Using toD If you need to send messages to a lot of different exchanges, you can reuse an endpoint and specify the dynamic destinations with the Simple language using toD . For example, if you need to send messages to an exchange based on the order type, you can use toD as follows: from("direct:order") .toD("spring-rabbitmq:order-${header.orderType}"); 133.14. Spring Boot Auto-Configuration The component supports 30 options that are listed below. Name Description Default Type camel.component.spring-rabbitmq.allow-null-body Whether to allow sending messages with no body. If this option is false and the message body is null, then a MessageConversionException is thrown. false Boolean camel.component.spring-rabbitmq.amqp-admin Optional AMQP Admin service to use for auto declaring elements (queues, exchanges, bindings). The option is a org.springframework.amqp.core.AmqpAdmin type. AmqpAdmin camel.component.spring-rabbitmq.auto-declare Specifies whether the consumer should auto declare binding between exchange, queue and routing key when starting. Enabling this can be good for development to make it easy to stand up exchanges, queues and bindings on the broker.
false Boolean camel.component.spring-rabbitmq.auto-startup Specifies whether the consumer container should start automatically. true Boolean camel.component.spring-rabbitmq.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring of options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of a matching type, which then gets configured on the component. This can be used for automatically configuring JDBC data sources, JMS connection factories, AWS clients, and so on. true Boolean camel.component.spring-rabbitmq.bridge-error-handler Allows for bridging the consumer to the Camel routing Error Handler, which means any exceptions that occur while the consumer is trying to pick up incoming messages, or the like, are now processed as a message and handled by the routing Error Handler. By default, the consumer uses the org.apache.camel.spi.ExceptionHandler to deal with exceptions, which are logged at WARN or ERROR level and ignored. false Boolean camel.component.spring-rabbitmq.concurrent-consumers The number of consumers. 1 Integer camel.component.spring-rabbitmq.connection-factory The connection factory to use. A connection factory must be configured either on the component or on the endpoint. The option is an org.springframework.amqp.rabbit.connection.ConnectionFactory type. ConnectionFactory camel.component.spring-rabbitmq.dead-letter-exchange The name of the dead letter exchange. String camel.component.spring-rabbitmq.dead-letter-exchange-type The type of the dead letter exchange. direct String camel.component.spring-rabbitmq.dead-letter-queue The name of the dead letter queue. String camel.component.spring-rabbitmq.dead-letter-routing-key The routing key for the dead letter exchange. String camel.component.spring-rabbitmq.enabled Whether to enable auto configuration of the spring-rabbitmq component. This is enabled by default. Boolean camel.component.spring-rabbitmq.error-handler To use a custom ErrorHandler for handling exceptions from the message listener (consumer). The option is an org.springframework.util.ErrorHandler type. ErrorHandler camel.component.spring-rabbitmq.header-filter-strategy To use a custom org.apache.camel.spi.HeaderFilterStrategy to filter headers to and from the Camel message. The option is an org.apache.camel.spi.HeaderFilterStrategy type. HeaderFilterStrategy camel.component.spring-rabbitmq.ignore-declaration-exceptions Whether to ignore exceptions, such as mismatched properties, when declaring. false Boolean camel.component.spring-rabbitmq.lazy-start-producer Whether the producer should be started lazily (on the first message). Lazy starting allows the CamelContext and routes to start up in situations where a producer may otherwise fail during startup and cause the route to fail to start. By deferring the startup, the failure can be handled during message routing via Camel's routing error handlers. Beware that when the first message is processed, creating and starting the producer may take a little time and prolong the total processing time. false Boolean camel.component.spring-rabbitmq.listener-container-factory To use a custom factory for creating and configuring the ListenerContainer to be used by the consumer for receiving messages. The option is an org.apache.camel.component.springrabbit.ListenerContainerFactory type. 
ListenerContainerFactory camel.component.spring-rabbitmq.max-concurrent-consumers The maximum number of consumers (available only with SMLC). Integer camel.component.spring-rabbitmq.maximum-retry-attempts How many times a RabbitMQ consumer retries the same message if Camel fails to process it. 5 Integer camel.component.spring-rabbitmq.message-converter To use a custom MessageConverter so you can control how to map to/from an org.springframework.amqp.core.Message. The option is an org.springframework.amqp.support.converter.MessageConverter type. MessageConverter camel.component.spring-rabbitmq.message-listener-container-type The type of the MessageListenerContainer. DMLC String camel.component.spring-rabbitmq.message-properties-converter To use a custom MessagePropertiesConverter so you can control how to map to/from an org.springframework.amqp.core.MessageProperties. The option is an org.apache.camel.component.springrabbit.MessagePropertiesConverter type. MessagePropertiesConverter camel.component.spring-rabbitmq.prefetch-count Tells the broker how many messages to send to each consumer in a single request. Often this can be set quite high to improve throughput. 250 Integer camel.component.spring-rabbitmq.reject-and-dont-requeue Whether a RabbitMQ consumer should reject the message without requeuing. This enables failed messages to be sent to a Dead Letter Exchange/Queue, if the broker is so configured. true Boolean camel.component.spring-rabbitmq.reply-timeout Specify the timeout in milliseconds to be used when waiting for a reply message when doing request/reply messaging. The default value is 5 seconds. A negative value indicates an indefinite timeout. The option is a long type. 5000 Long camel.component.spring-rabbitmq.retry Custom retry configuration to use. If this is configured, the other retry settings, such as maximumRetryAttempts, are not in use. The option is an org.springframework.retry.interceptor.RetryOperationsInterceptor type. RetryOperationsInterceptor camel.component.spring-rabbitmq.retry-delay The delay in milliseconds a RabbitMQ consumer waits before redelivering a message that Camel failed to process. 1000 Integer camel.component.spring-rabbitmq.shutdown-timeout The time to wait for workers in milliseconds after the container is stopped. If any workers are active when the shutdown signal comes, they are allowed to finish processing as long as they can finish within this timeout. The option is a long type. 5000 Long camel.component.spring-rabbitmq.test-connection-on-startup Specifies whether to test the connection on startup. This ensures that when Camel starts, all the JMS consumers have a valid connection to the JMS broker. If a connection cannot be granted, Camel throws an exception on startup, so that it is not started with failed connections. The JMS producers are tested as well. false Boolean
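These options can be set in standard Spring Boot configuration. The following is a minimal sketch of an application.properties file that tunes a few of the options from the table above; the values shown are illustrative assumptions rather than recommended settings:
# Auto declare exchanges, queues and bindings (useful in development)
camel.component.spring-rabbitmq.auto-declare=true
# Ask the broker to send more messages per request to improve throughput
camel.component.spring-rabbitmq.prefetch-count=500
# Wait up to 10 seconds for a reply in request/reply messaging
camel.component.spring-rabbitmq.reply-timeout=10000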
[ "<dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-spring-rabbitmq-starter</artifactId> </dependency>", "<dependencyManagement> <dependencies> <dependency> <groupId>com.redhat.camel.springboot.platform</groupId> <artifactId>camel-spring-boot-bom</artifactId> <version>USD{camel-spring-boot-version}</version> <type>pom</type> <scope>import</scope> </dependency> </dependencies> </dependencyManagement>", "spring-rabbitmq:exchangeName?[options]", "spring-rabbitmq:exchangeName", "<bean id=\"rabbitConnectionFactory\" class=\"org.springframework.amqp.rabbit.connection.CachingConnectionFactory\"> <property name=\"uri\" value=\"amqp://lolcalhost:5672\"/> </bean>", "<camelContext> <route> <from uri=\"direct:cheese\"/> <to uri=\"spring-rabbitmq:foo?routingKey=cheese\"/> </route> </camelContext>", "to(\"spring-rabbitmq:default?routingKey=foo\")", "from(\"direct:start\") .to(\"log:request\") .to(ExchangePattern.InOut, \"spring-rabbitmq:cheese?routingKey=foo.bar\") .to(\"log:response\"); from(\"spring-rabbitmq:cheese?queues=myqueue&routingKey=foo.bar\") .to(\"log:input\") .transform(body().prepend(\"Hello \"));", "from(\"file://inbox\") .to(\"bean:computeDestination\") .to(\"spring-rabbitmq:dummy\");", "public void setExchangeHeader(Exchange exchange) { String region = . exchange.getIn().setHeader(\"CamelSpringRabbitmqExchangeOverrideName\", \"order-\" + region); }", "from(\"direct:order\") .toD(\"spring-rabbit:order-USD{header.orderType}\");" ]
https://docs.redhat.com/en/documentation/red_hat_build_of_apache_camel/4.8/html/red_hat_build_of_apache_camel_for_spring_boot_reference/csb-camel-spring-rabbitmq-component-starter
Deploying and managing Red Hat Process Automation Manager services
Deploying and managing Red Hat Process Automation Manager services Red Hat Process Automation Manager 7.13
null
https://docs.redhat.com/en/documentation/red_hat_process_automation_manager/7.13/html/deploying_and_managing_red_hat_process_automation_manager_services/index
Chapter 8. Migrating from Older Releases of JBoss EAP
Chapter 8. Migrating from Older Releases of JBoss EAP 8.1. Migrating from JBoss EAP 5 to JBoss EAP 7 This guide focuses on the changes that are required to successfully run and deploy JBoss EAP 6 applications on JBoss EAP 7. If you plan to migrate your applications directly from JBoss EAP 5 to JBoss EAP 7, there are a number of resources available to help you plan and execute your migration. We suggest you take the following approach. See Summary of Changes Made to Each Release in this guide for a quick, high-level overview of the changes made to each release of JBoss EAP. Read through the JBoss EAP 6 Migration Guide and this guide to become familiar with the contents of each one. Use the JBoss EAP 5 Component Upgrade Reference as a quick reference to migration information about specific components and features. The rule-based Migration Toolkit for Applications continues to add rules to help you migrate directly from JBoss EAP 5 to JBoss EAP 7. You can use these tools to analyze your application and to generate detailed reports about the changes needed to migrate to JBoss EAP 7. For more information, see Use Migration Toolkit for Applications to Analyze Applications for Migration . The Customer Portal Knowledgebase currently contains articles and solutions to help with migration from JBoss EAP 5 to JBoss EAP 6. There are plans in place to add additional content for migration from JBoss EAP 5 to JBoss EAP 7 over time. 8.2. Summary of Changes Made to Each Release Before you plan your migration, you should be aware of the changes that were made to JBoss EAP 6 and JBoss EAP 7. The JBoss EAP 6 Migration Guide covers changes that were made between JBoss EAP 5 and JBoss EAP 6. The following is a condensed list of the most significant changes made in JBoss EAP 6. Implemented a new architecture built on the Modular Service Container Was a certified implementation of the Java Enterprise Edition 6 specification Introduced domain management, new deployment configuration, and a new file directory structure and scripts Standardized on new portable Java Naming and Directory Interface namespaces See Review What's New and Different in JBoss EAP 6 in the JBoss EAP 6 Migration Guide for a detailed list of changes made in that release. JBoss EAP 7 is built on the same modular structure as JBoss EAP 6 and includes the same domain management, deployment configuration, file directory structure, and scripts. It also still uses the same standardized Java Naming and Directory Interface namespaces. However, JBoss EAP 7 introduces the following changes. Adds support for the Jakarta Enterprise Edition (Jakarta EE) 8 specification Replaces the web server with Undertow Replaces the JacORB IIOP implementation with a downstream branch of the OpenJDK ORB Includes Apache ActiveMQ Artemis as the new messaging provider Removes the cmp , jaxr , and threads subsystems Removes support for enterprise entity beans For a more complete list of changes, see Review What's New in JBoss EAP 7 . 8.3. Review the Content in the Migration Guides Review the entire contents of the Migration Guide for each release to become aware of the features that were added or deprecated, and to understand the server configuration and the application changes required to run existing applications for that release. Because the underlying architecture was not changed between JBoss EAP 6 and JBoss EAP 7, many of the changes documented in the JBoss EAP 6 Migration Guide still apply. 
For example, changes documented under Changes Required by Most Applications are related to the underlying architectural changes made in JBoss EAP 6, which still apply to this release. The change to the new modular class loading system is significant and impacts the packaging and dependencies of almost every JBoss EAP 5 application. Many of the changes listed under Changes Dependent on Your Application Architecture and Components are also still valid. However, because JBoss EAP 7 replaced the web server, ORB, and messaging provider, removed the cmp , threads , and jaxr subsystems, and removed support for entity beans, you must consult this guide for any changes related to those component areas. Pay particular attention to the Server Configuration Changes and Application Migration Changes detailed in this guide before you begin. 8.4. JBoss EAP 5 Component Upgrade Reference Use the following table to find information about how to migrate a particular feature or component from JBoss EAP 5 to JBoss EAP 7.4. JBoss EAP 5 Feature or Component Summary of Changes and Where to Find Migration Information Application Packaging and Class Loading In JBoss EAP 6, the hierarchical class loading structure was replaced with a modular architecture based on JBoss Modules. Application packaging also changed due to the new modular class loading structure. This architecture is still used in JBoss EAP 7. For information about the new modular architecture, see the following chapter in the JBoss EAP 7.4 Development Guide . Class Loading and Modules For information about how to update and repackage applications for the new modular architecture, see the following section in the JBoss EAP 6 Migration Guide . Class Loading Changes Application Configuration Files Due to the changes in JBoss EAP 6 to use modular class loading, you might need to create or modify one or more application configuration files to add dependencies or to prevent automatic dependencies from loading. This has not changed in JBoss EAP 7. For details, see the following section in the JBoss EAP 6 Migration Guide . Configuration File Changes Caching and Infinispan In JBoss EAP 6, JBoss Cache was replaced by Infinispan, which is for internal use by the server only. See the following section in the JBoss EAP 6 Migration Guide for information about how to replace JBoss Cache in application code. Cache Changes Infinispan caching strategy and configuration changes for JBoss EAP 7 are documented in the following section of this guide. Infinispan Server Configuration Changes Data Sources and Resource Adapters JBoss EAP 6 consolidated configuration of data sources and resource adapters into mainly one file, and this is still true in JBoss EAP 7. See the following section in the JBoss EAP 6 Migration Guide for more information. Datasource and Resource Adapter Configuration Changes Directory Structure, Scripts, and Deployment Configuration In JBoss EAP 6, the directory structure, scripts, and deployment configuration changed. These changes are still valid in JBoss EAP 7. See the following section of the JBoss EAP 6 Migration Guide for more information. Review What's New and Different in JBoss EAP 6 Enterprise beans Your application code must use the enterprise beans 3.x API and Jakarta Persistence. 
For information about deprecated features and changes required to run Enterprise Beans 2.1, see the following section in the JBoss EAP 6 Migration Guide : EJB 2.x and Earlier Changes In JBoss EAP 6, the stateful session bean cache and stateless session bean pool size are configured in the ejb3 subsystem of the server configuration file. The jboss-ejb3.xml deployment descriptor replaces the jboss.xml deployment descriptor file. For more information about these changes, see the following section in the JBoss EAP 6 Migration Guide . EJB Changes The default remote connector and port have changed in JBoss EAP 7. For more information about this and server configuration changes, see the following sections in this guide. Jakarta Enterprise Beans Server Configuration Changes Migrate Jakarta Enterprise Beans Client Code Enterprise bean entity beans are not supported in JBoss EAP 7. For information about how to migrate entity beans to Jakarta Persistence, see the following section in this guide. Migrate Entity Beans to Jakarta Persistence Hibernate and Jakarta Persistence JBoss EAP 7.4 implements Jakarta Persistence 2.2 and includes Hibernate 5.3. It also includes Hibernate Search version 5.10. Other changes include removal of support for Jakarta Enterprise Beans entity beans and additional updates to Jakarta Persistence properties. For information about how these changes impact your applications, see the following sections in this guide. Hibernate and Jakarta Persistence Migration Changes Hibernate Search Changes Migrate Entity Beans to Jakarta Persistence Jakarta Persistence Persistence Property Changes Note Use of a different version of Hibernate than the one shipped with JBoss EAP is unsupported. The version shipped with JBoss EAP is the only version of Hibernate that is tested, and is the only version for which patches will be provided for defects. Jakarta RESTful Web Services and RESTEasy JBoss EAP 7 includes RESTEasy 3, and many classes have been deprecated. The version of Jackson changed from version 1.9.9 to version 2.6.3 or greater. For details about these changes, see the following section in this guide. Jakarta RESTful Web Services and RESTEasy Application Changes JBoss AOP JBoss AOP (Aspect Oriented Programming) was removed in JBoss EAP 6. For information about how to refactor applications that use JBoss AOP, see the following section in the JBoss EAP 6 Migration Guide . JBoss AOP Changes JGroups and Clustering The way you enable clustering and specify bind addresses changed in JBoss EAP 6. See the following section in the JBoss EAP 6 Migration Guide for more information. Clustering Changes In JBoss EAP 7, JGroups now defaults to using a private network interface instead of a public network interface and also introduces <channel> elements to the jgroups subsystem. JBoss EAP 7 also includes the Undertow mod_cluster implementation, introduces a new API for building singleton services, and adds other new clustering features. These changes are documented in the following sections of this guide. JGroups Server Configuration Changes Application Clustering Changes Java Naming and Directory Interface JBoss EAP 6 implemented a new standardized global Java Naming and Directory Interface namespace and a series of related namespaces that map to the various scopes of an application. See the following section of the JBoss EAP 6 Migration Guide for information about application changes needed to use the new Java Naming and Directory Interface namespace rules. 
JNDI Changes Jakarta Server Faces On JBoss EAP 6.4, you could configure your application to use an older Jakarta Server Faces version. This is no longer possible in JBoss EAP 7.4, which now includes Jakarta Server Faces 2.3. See the following section in this guide for more information. Jakarta Server Faces Code Changes Logging JBoss EAP 6 introduced a new JBoss Logging framework that is still used in JBoss EAP 7. Applications that use third-party logging frameworks might be impacted by the modular class loading changes. Review the following section in the JBoss EAP 6 Migration Guide for information about these changes. Logging Changes In JBoss EAP 7, annotations in the org.jboss.logging package are now deprecated, which impacts source code and Maven GAVs (groupId:artifactId:version). The prefixes for all log messages were also changed. For more information about these changes, see the following sections in this guide. JBoss Logging Changes Logging Message Prefix Changes Messaging and Jakarta Messaging In JBoss EAP 7, ActiveMQ Artemis replaced HornetQ as the built-in messaging provider. The best approach to migrating your messaging configuration is to start with the JBoss EAP 7 default server configuration and use the following guide to apply your current messaging configuration changes. Configuring Messaging for JBoss EAP 7.4 If you want to understand the changes required to move from JBoss Messaging to HornetQ, review the following section of the JBoss EAP 6 Migration Guide . HornetQ Changes Then review the following information about how to migrate the HornetQ configuration and related messaging data in this guide. Messaging Server Configuration Changes Messaging Application Changes ORB In JBoss EAP 6, JacORB configuration was moved from the EAP_HOME /server/production/conf/jacorb.properties file to the server configuration file. JBoss EAP 7 then replaced the JacORB IIOP implementation with a downstream branch of the OpenJDK ORB. The best approach to migrating your ORB configuration is to start with the JBoss EAP 7 default server configuration and use the following section in the JBoss EAP 7.4 Configuration Guide to apply your current ORB configuration changes. ORB Configuration Remote Invocation A new enterprise bean client API was introduced in JBoss EAP 6 for remote invocations; however, if you preferred not to rewrite your application code to use the new API, you could modify your existing code to use the ejb:BEAN_REFERENCE for remote access to enterprise beans (see the sketch after this table). See the following section in the JBoss EAP 6 Migration Guide for more information. Remote Invocation Changes In JBoss EAP 7, the default connector and default remote connection port changed. For more information, see the following sections in this guide. Update the Remote URL Connector and Port Update External Clients Migrate Jakarta Enterprise Beans Client Code Seam 2.x While official support for Seam 2.2 applications was dropped in JBoss EAP 6, it was still possible to configure dependencies for Seam 2.2 applications to run on that release. JBoss EAP 7.4, which now includes Jakarta Server Faces 2.3 and Hibernate 5.3, does not support Seam 2.2 or Seam 2.3 due to the end of life of Red Hat JBoss Web Framework Kit. It is recommended that you rewrite your Seam components using Weld CDI beans. Security Security updates in JBoss EAP 6 included changes to security domain names and changes to how to configure security for basic authentication. The LDAP security realm configuration was moved to the server configuration file. 
See the following sections in the JBoss EAP 6 Migration Guide for more information. Security Changes LDAP Security Realm Changes Updates that impact security in JBoss EAP 7 include server configuration changes and application changes. Information can be found in the following sections of this guide. Security Server Configuration Changes Security Application Changes Spring Applications Spring 4.2.x is the earliest stable Spring version supported by JBoss EAP 7. For information about Apache CXF Spring web services and Spring RESTEasy integration changes, see the following sections in this guide. Apache CXF Spring Web Services Changes Spring RESTEasy Integration Changes Transactions In JBoss EAP 6, the transaction configuration was consolidated and moved to the server configuration file. Other updates included changes to JTA node identifier settings and how to enable JTS. For details, see the following section in the JBoss EAP 6 Migration Guide . JTS and JTA Changes Some Transaction Manager configuration attributes that were available in the transactions subsystem in JBoss EAP 6 have changed in JBoss EAP 7. For more information, see the following section in this guide. Transactions Subsystem Changes Valves Undertow replaced JBoss Web in JBoss EAP 7 and valves are no longer supported. See the following sections in this guide. Migrate Global Valves Migrate Custom Application Valves Migrate Authenticator Valves Web Services JBoss EAP 6 included JBossWS 4. For information about the changes required by that version update, see the following section in the JBoss EAP 6 Migration Guide . Web Services Changes JBoss EAP 7 introduced JBossWS 5. See the following section in this guide for required updates. Web Services Applications Changes
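To make the ejb:BEAN_REFERENCE remote invocation pattern mentioned in the Remote Invocation row concrete, the following is a minimal sketch of a remote enterprise bean lookup using the ejb: namespace. The application, module, bean, and interface names are hypothetical, and the sketch assumes the EJB client connection (host and port) is configured separately, for example in a jboss-ejb-client.properties file:
import java.util.Properties;
import javax.naming.Context;
import javax.naming.InitialContext;

public class RemoteEjbLookup {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        // Register the JBoss EJB client naming provider for the ejb: namespace
        props.put(Context.URL_PKG_PREFIXES, "org.jboss.ejb.client.naming");
        Context ctx = new InitialContext(props);
        // Lookup pattern: ejb:<app>/<module>/<distinct-name>/<bean>!<remote-interface>;
        // the distinct-name is empty here, hence the double slash
        Object bean = ctx.lookup("ejb:myapp/mymodule//MyBean!com.example.MyBeanRemote");
        System.out.println("Obtained proxy: " + bean);
    }
}
The proxy returned by the lookup can then be cast to the remote business interface before invoking methods on it.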
null
https://docs.redhat.com/en/documentation/red_hat_jboss_enterprise_application_platform/7.4/html/migration_guide/migrating_from_older_releases
Chapter 8. Deleting functions
Chapter 8. Deleting functions You can delete a function that you no longer need by using the kn func tool. 8.1. Deleting a function You can delete a function by using the kn func delete command. This is useful when a function is no longer required, and can help to save resources on your cluster. Procedure Delete a function: USD kn func delete [<function_name> -n <namespace> -p <path>] If the name or path of the function to delete is not specified, the current directory is searched for a func.yaml file, which is used to determine the function to delete. If the namespace is not specified, it defaults to the namespace value in the func.yaml file.
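For example, to delete a function named my-function from the apps namespace, where both the function name and the namespace are hypothetical: USD kn func delete my-function -n apps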
[ "kn func delete [<function_name> -n <namespace> -p <path>]" ]
https://docs.redhat.com/en/documentation/red_hat_openshift_serverless/1.33/html/functions/serverless-functions-deleting
4.4.6. Removing Logical Volumes
4.4.6. Removing Logical Volumes To remove an inactive logical volume, use the lvremove command. If the logical volume is currently mounted, you must close the volume with the umount command before removing it. In addition, in a clustered environment, you must deactivate a logical volume before it can be removed. The following command removes the logical volume /dev/testvg/testlv from the volume group testvg . Note that in this case the logical volume has not been deactivated. You could explicitly deactivate the logical volume before removing it with the lvchange -an command, in which case you would not see the prompt verifying whether you want to remove an active logical volume.
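Putting the two steps together with the volume names from the example above, you would first deactivate the volume and then remove it; because the volume is then inactive, lvremove does not prompt for confirmation:
lvchange -an /dev/testvg/testlv
lvremove /dev/testvg/testlv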
[ "lvremove /dev/testvg/testlv Do you really want to remove active logical volume \"testlv\"? [y/n]: y Logical volume \"testlv\" successfully removed" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/cluster_logical_volume_manager/LV_remove
Chapter 2. Configuring and deploying the overcloud for autoscaling
Chapter 2. Configuring and deploying the overcloud for autoscaling You must configure the templates for the services on your overcloud that enable autoscaling. Procedure Create environment templates and a resource registry for autoscaling services before you deploy the overcloud for autoscaling. For more information, see Section 2.1, "Configuring the overcloud for autoscaling" Deploy the overcloud. For more information, see Section 2.2, "Deploying the overcloud for autoscaling" 2.1. Configuring the overcloud for autoscaling Create the environment templates and resource registry that you need to deploy the services that provide autoscaling. Procedure Log in to the undercloud host as the stack user. Create a directory for the autoscaling configuration files: USD mkdir -p USDHOME/templates/autoscaling/ Create the resource registry file for the definitions that the services require for autoscaling: USD cat <<EOF > USDHOME/templates/autoscaling/resources-autoscaling.yaml resource_registry: OS::TripleO::Services::AodhApi: /usr/share/openstack-tripleo-heat-templates/deployment/aodh/aodh-api-container-puppet.yaml OS::TripleO::Services::AodhEvaluator: /usr/share/openstack-tripleo-heat-templates/deployment/aodh/aodh-evaluator-container-puppet.yaml OS::TripleO::Services::AodhListener: /usr/share/openstack-tripleo-heat-templates/deployment/aodh/aodh-listener-container-puppet.yaml OS::TripleO::Services::AodhNotifier: /usr/share/openstack-tripleo-heat-templates/deployment/aodh/aodh-notifier-container-puppet.yaml OS::TripleO::Services::CeilometerAgentCentral: /usr/share/openstack-tripleo-heat-templates/deployment/ceilometer/ceilometer-agent-central-container-puppet.yaml OS::TripleO::Services::CeilometerAgentNotification: /usr/share/openstack-tripleo-heat-templates/deployment/ceilometer/ceilometer-agent-notification-container-puppet.yaml OS::TripleO::Services::ComputeCeilometerAgent: /usr/share/openstack-tripleo-heat-templates/deployment/ceilometer/ceilometer-agent-compute-container-puppet.yaml OS::TripleO::Services::GnocchiApi: /usr/share/openstack-tripleo-heat-templates/deployment/gnocchi/gnocchi-api-container-puppet.yaml OS::TripleO::Services::GnocchiMetricd: /usr/share/openstack-tripleo-heat-templates/deployment/gnocchi/gnocchi-metricd-container-puppet.yaml OS::TripleO::Services::GnocchiStatsd: /usr/share/openstack-tripleo-heat-templates/deployment/gnocchi/gnocchi-statsd-container-puppet.yaml OS::TripleO::Services::HeatApi: /usr/share/openstack-tripleo-heat-templates/deployment/heat/heat-api-container-puppet.yaml OS::TripleO::Services::HeatApiCfn: /usr/share/openstack-tripleo-heat-templates/deployment/heat/heat-api-cfn-container-puppet.yaml OS::TripleO::Services::HeatApiCloudwatch: /usr/share/openstack-tripleo-heat-templates/deployment/heat/heat-api-cloudwatch-disabled-puppet.yaml OS::TripleO::Services::HeatEngine: /usr/share/openstack-tripleo-heat-templates/deployment/heat/heat-engine-container-puppet.yaml OS::TripleO::Services::Redis: /usr/share/openstack-tripleo-heat-templates/deployment/database/redis-pacemaker-puppet.yaml EOF Create an environment template to configure the services required for autoscaling: cat <<EOF > USDHOME/templates/autoscaling/parameters-autoscaling.yaml parameter_defaults: NotificationDriver: 'messagingv2' GnocchiDebug: false CeilometerEnableGnocchi: true ManagePipeline: true ManageEventPipeline: true EventPipelinePublishers: - gnocchi://?archive_policy=generic PipelinePublishers: - gnocchi://?archive_policy=generic ManagePolling: true ExtraConfig: 
ceilometer::agent::polling::polling_interval: 60 EOF If you use Red Hat Ceph Storage as the data storage back end for the time-series database service, add the following parameters to your parameters-autoscaling.yaml file: parameter_defaults: GnocchiRbdPoolName: 'metrics' GnocchiBackend: 'rbd' You must create the defined archive policy generic before you can store metrics. You define this archive policy after the deployment. For more information, see Section 3.1, "Creating the generic archive policy for autoscaling" . Set the polling_interval parameter, for example, 60 seconds. The value of the polling_interval parameter must match the gnocchi granularity value that you defined when you created the archive policy. For more information, see Section 3.1, "Creating the generic archive policy for autoscaling" . Deploy the overcloud. For more information, see Section 2.2, "Deploying the overcloud for autoscaling" 2.2. Deploying the overcloud for autoscaling You can deploy the overcloud for autoscaling by using director or by using a standalone environment. Prerequisites You have created the environment templates for deploying the services that provide autoscaling capabilities. For more information, see Section 2.1, "Configuring the overcloud for autoscaling" . Procedure Section 2.2.1, "Deploying the overcloud for autoscaling by using director" Section 2.2.2, "Deploying the overcloud for autoscaling in a standalone environment" 2.2.1. Deploying the overcloud for autoscaling by using director Use director to deploy the overcloud. If you are using a standalone environment, see Section 2.2.2, "Deploying the overcloud for autoscaling in a standalone environment" . Prerequisites A deployed undercloud. For more information, see Installing director on the undercloud . Procedure Log in to the undercloud as the stack user. Source the stackrc undercloud credentials file: [stack@director ~]USD source ~/stackrc Add the autoscaling environment files to the stack with your other environment files and deploy the overcloud: (undercloud)USD openstack overcloud deploy --templates \ -e [your environment files] \ -e USDHOME/templates/autoscaling/parameters-autoscaling.yaml \ -e USDHOME/templates/autoscaling/resources-autoscaling.yaml 2.2.2. Deploying the overcloud for autoscaling in a standalone environment To test the environment files in a pre-production environment, you can deploy the overcloud with the services required for autoscaling by using a standalone deployment. Note This procedure uses example values and commands that you must change to suit a production environment. If you want to use director to deploy the overcloud for autoscaling, see Section 2.2.1, "Deploying the overcloud for autoscaling by using director" . Prerequisites An all-in-one RHOSP environment has been staged with the python3-tripleoclient. For more information, see Installing the all-in-one Red Hat OpenStack Platform environment . An all-in-one RHOSP environment has been staged with the base configuration. For more information, see Configuring the all-in-one Red Hat OpenStack Platform environment . 
Procedure Change to the user that manages your overcloud deployments, for example, the stack user: USD su - stack Replace or set the environment variables USDIP , USDNETMASK and USDVIP for the overcloud deployment: USD export IP=192.168.25.2 USD export VIP=192.168.25.3 USD export NETMASK=24 Deploy the overcloud to test and verify the resource and parameter files: USD sudo openstack tripleo deploy \ --templates \ --local-ip=USDIP/USDNETMASK \ --control-virtual-ip=USDVIP \ -e /usr/share/openstack-tripleo-heat-templates/environments/standalone/standalone-tripleo.yaml \ -r /usr/share/openstack-tripleo-heat-templates/roles/Standalone.yaml \ -e USDHOME/containers-prepare-parameters.yaml \ -e USDHOME/standalone_parameters.yaml \ -e USDHOME/templates/autoscaling/resources-autoscaling.yaml \ -e USDHOME/templates/autoscaling/parameters-autoscaling.yaml \ --output-dir USDHOME \ --standalone Export the OS_CLOUD environment variable: USD export OS_CLOUD=standalone Additional resources Director Installation and Usage guide. Standalone Deployment Guide . 2.3. Verifying the overcloud deployment for autoscaling Verify that the autoscaling services are deployed and enabled. Verification output is from a standalone environment, but director-based environments provide similar output. Prerequisites You have deployed the autoscaling services in an existing overcloud using standalone or director. For more information, see Section 2.2, "Deploying the overcloud for autoscaling" . Procedure Log in to your environment as the stack user. For standalone environments, set the OS_CLOUD environment variable: [stack@standalone ~]USD export OS_CLOUD=standalone For director environments, source the overcloudrc overcloud credentials file: [stack@undercloud ~]USD source ~/overcloudrc Verification Verify that the deployment was successful and ensure that the service API endpoints for autoscaling are available: USD openstack endpoint list --service metric +----------------------------------+-----------+--------------+--------------+---------+-----------+--------------------------+ | ID | Region | Service Name | Service Type | Enabled | Interface | URL | +----------------------------------+-----------+--------------+--------------+---------+-----------+--------------------------+ | 2956a12327b744b29abd4577837b2e6f | regionOne | gnocchi | metric | True | internal | http://192.168.25.3:8041 | | 583453c58b064f69af3de3479675051a | regionOne | gnocchi | metric | True | admin | http://192.168.25.3:8041 | | fa029da0e2c047fc9d9c50eb6b4876c6 | regionOne | gnocchi | metric | True | public | http://192.168.25.3:8041 | +----------------------------------+-----------+--------------+--------------+---------+-----------+--------------------------+ USD openstack endpoint list --service alarming +----------------------------------+-----------+--------------+--------------+---------+-----------+--------------------------+ | ID | Region | Service Name | Service Type | Enabled | Interface | URL | +----------------------------------+-----------+--------------+--------------+---------+-----------+--------------------------+ | 08c70ec137b44ed68590f4d5c31162bb | regionOne | aodh | alarming | True | internal | http://192.168.25.3:8042 | | 194042887f3d4eb4b638192a0fe60996 | regionOne | aodh | alarming | True | admin | http://192.168.25.3:8042 | | 2604b693740245ed8960b31dfea1f963 | regionOne | aodh | alarming | True | public | http://192.168.25.3:8042 | 
+----------------------------------+-----------+--------------+--------------+---------+-----------+--------------------------+ USD openstack endpoint list --service orchestration +----------------------------------+-----------+--------------+---------------+---------+-----------+-------------------------------------------+ | ID | Region | Service Name | Service Type | Enabled | Interface | URL | +----------------------------------+-----------+--------------+---------------+---------+-----------+-------------------------------------------+ | 00966a24dd4141349e12680307c11848 | regionOne | heat | orchestration | True | admin | http://192.168.25.3:8004/v1/%(tenant_id)s | | 831e411bb6d44f6db9f5103d659f901e | regionOne | heat | orchestration | True | public | http://192.168.25.3:8004/v1/%(tenant_id)s | | d5be22349add43ae95be4284a42a4a60 | regionOne | heat | orchestration | True | internal | http://192.168.25.3:8004/v1/%(tenant_id)s | +----------------------------------+-----------+--------------+---------------+---------+-----------+-------------------------------------------+ Verify that the services are running on the overcloud: USD sudo podman ps --filter=name='heat|gnocchi|ceilometer|aodh' CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES 31e75d62367f registry.redhat.io/rhosp-rhel9/openstack-aodh-api:17.0 kolla_start 27 minutes ago Up 27 minutes ago (healthy) aodh_api 77acf3487736 registry.redhat.io/rhosp-rhel9/openstack-aodh-listener:17.0 kolla_start 27 minutes ago Up 27 minutes ago (healthy) aodh_listener 29ec47b69799 registry.redhat.io/rhosp-rhel9/openstack-aodh-evaluator:17.0 kolla_start 27 minutes ago Up 27 minutes ago (healthy) aodh_evaluator 43efaa86c769 registry.redhat.io/rhosp-rhel9/openstack-aodh-notifier:17.0 kolla_start 27 minutes ago Up 27 minutes ago (healthy) aodh_notifier 0ac8cb2c7470 registry.redhat.io/rhosp-rhel9/openstack-aodh-api:17.0 kolla_start 27 minutes ago Up 27 minutes ago (healthy) aodh_api_cron 31b55e091f57 registry.redhat.io/rhosp-rhel9/openstack-ceilometer-central:17.0 kolla_start 27 minutes ago Up 27 minutes ago (healthy) ceilometer_agent_central 5f61331a17d8 registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.0 kolla_start 27 minutes ago Up 27 minutes ago (healthy) ceilometer_agent_compute 7c5ef75d8f1b registry.redhat.io/rhosp-rhel9/openstack-ceilometer-notification:17.0 kolla_start 27 minutes ago Up 27 minutes ago (healthy) ceilometer_agent_notification 88fa57cc1235 registry.redhat.io/rhosp-rhel9/openstack-gnocchi-api:17.0 kolla_start 23 minutes ago Up 23 minutes ago (healthy) gnocchi_api 0f05a58197d5 registry.redhat.io/rhosp-rhel9/openstack-gnocchi-metricd:17.0 kolla_start 23 minutes ago Up 23 minutes ago (healthy) gnocchi_metricd 6d806c285500 registry.redhat.io/rhosp-rhel9/openstack-gnocchi-statsd:17.0 kolla_start 23 minutes ago Up 23 minutes ago (healthy) gnocchi_statsd 7c02cac34c69 registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.0 kolla_start 27 minutes ago Up 27 minutes ago (healthy) heat_api_cron d3903df545ce registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.0 kolla_start 27 minutes ago Up 27 minutes ago (healthy) heat_api db1d33506e3d registry.redhat.io/rhosp-rhel9/openstack-heat-api-cfn:17.0 kolla_start 27 minutes ago Up 27 minutes ago (healthy) heat_api_cfn 051446294c70 registry.redhat.io/rhosp-rhel9/openstack-heat-engine:17.0 kolla_start 27 minutes ago Up 27 minutes ago (healthy) heat_engine Verify that the time-series database service is available: USD openstack metric status --fit-width 
+-----------------------------------------------------+--------------------------------------------------------------------------------------------------------------------+ | Field | Value | +-----------------------------------------------------+--------------------------------------------------------------------------------------------------------------------+ | metricd/processors | ['standalone-80.general.local.0.a94fbf77-1ac0-49ed-bfe2-a89f014fde01', | | | 'standalone-80.general.local.3.28ca78d7-a80e-4515-8060-233360b410eb', | | | 'standalone-80.general.local.1.7e8b5a5b-2ca1-49be-bc22-25f51d67c00a', | | | 'standalone-80.general.local.2.3c4fe59e-23cd-4742-833d-42ff0a4cb692'] | | storage/number of metric having measures to process | 0 | | storage/total number of measures to process | 0 | +-----------------------------------------------------+--------------------------------------------------------------------------------------------------------------------+
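As an additional smoke test, you can confirm that the alarming service responds to API requests. This assumes the aodh client plugin is installed in your environment; a fresh deployment returns an empty list: USD openstack alarm list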
[ "mkdir -p USDHOME/templates/autoscaling/", "cat <<EOF > USDHOME/templates/autoscaling/resources-autoscaling.yaml resource_registry: OS::TripleO::Services::AodhApi: /usr/share/openstack-tripleo-heat-templates/deployment/aodh/aodh-api-container-puppet.yaml OS::TripleO::Services::AodhEvaluator: /usr/share/openstack-tripleo-heat-templates/deployment/aodh/aodh-evaluator-container-puppet.yaml OS::TripleO::Services::AodhListener: /usr/share/openstack-tripleo-heat-templates/deployment/aodh/aodh-listener-container-puppet.yaml OS::TripleO::Services::AodhNotifier: /usr/share/openstack-tripleo-heat-templates/deployment/aodh/aodh-notifier-container-puppet.yaml OS::TripleO::Services::CeilometerAgentCentral: /usr/share/openstack-tripleo-heat-templates/deployment/ceilometer/ceilometer-agent-central-container-puppet.yaml OS::TripleO::Services::CeilometerAgentNotification: /usr/share/openstack-tripleo-heat-templates/deployment/ceilometer/ceilometer-agent-notification-container-puppet.yaml OS::TripleO::Services::ComputeCeilometerAgent: /usr/share/openstack-tripleo-heat-templates/deployment/ceilometer/ceilometer-agent-compute-container-puppet.yaml OS::TripleO::Services::GnocchiApi: /usr/share/openstack-tripleo-heat-templates/deployment/gnocchi/gnocchi-api-container-puppet.yaml OS::TripleO::Services::GnocchiMetricd: /usr/share/openstack-tripleo-heat-templates/deployment/gnocchi/gnocchi-metricd-container-puppet.yaml OS::TripleO::Services::GnocchiStatsd: /usr/share/openstack-tripleo-heat-templates/deployment/gnocchi/gnocchi-statsd-container-puppet.yaml OS::TripleO::Services::HeatApi: /usr/share/openstack-tripleo-heat-templates/deployment/heat/heat-api-container-puppet.yaml OS::TripleO::Services::HeatApiCfn: /usr/share/openstack-tripleo-heat-templates/deployment/heat/heat-api-cfn-container-puppet.yaml OS::TripleO::Services::HeatApiCloudwatch: /usr/share/openstack-tripleo-heat-templates/deployment/heat/heat-api-cloudwatch-disabled-puppet.yaml OS::TripleO::Services::HeatEngine: /usr/share/openstack-tripleo-heat-templates/deployment/heat/heat-engine-container-puppet.yaml OS::TripleO::Services::Redis: /usr/share/openstack-tripleo-heat-templates/deployment/database/redis-pacemaker-puppet.yaml EOF", "cat <<EOF > USDHOME/templates/autoscaling/parameters-autoscaling.yaml parameter_defaults: NotificationDriver: 'messagingv2' GnocchiDebug: false CeilometerEnableGnocchi: true ManagePipeline: true ManageEventPipeline: true EventPipelinePublishers: - gnocchi://?archive_policy=generic PipelinePublishers: - gnocchi://?archive_policy=generic ManagePolling: true ExtraConfig: ceilometer::agent::polling::polling_interval: 60 EOF", "parameter_defaults: GnocchiRbdPoolName: 'metrics' GnocchiBackend: 'rbd'", "[stack@director ~]USD source ~/stackrc", "(undercloud)USD openstack overcloud deploy --templates -e [your environment files] -e USDHOME/templates/autoscaling/parameters-autoscaling.yaml -e USDHOME/templates/autoscaling/resources-autoscaling.yaml", "su - stack", "export IP=192.168.25.2 export VIP=192.168.25.3 export NETMASK=24", "sudo openstack tripleo deploy --templates --local-ip=USDIP/USDNETMASK --control-virtual-ip=USDVIP -e /usr/share/openstack-tripleo-heat-templates/environments/standalone/standalone-tripleo.yaml -r /usr/share/openstack-tripleo-heat-templates/roles/Standalone.yaml -e USDHOME/containers-prepare-parameters.yaml -e USDHOME/standalone_parameters.yaml -e USDHOME/templates/autoscaling/resources-autoscaling.yaml -e USDHOME/templates/autoscaling/parameters-autoscaling.yaml --output-dir USDHOME --standalone", 
"export OS_CLOUD=standalone", "[stack@standalone ~]USD export OS_CLOUD=standalone", "[stack@undercloud ~]USD source ~/overcloudrc", "openstack endpoint list --service metric +----------------------------------+-----------+--------------+--------------+---------+-----------+--------------------------+ | ID | Region | Service Name | Service Type | Enabled | Interface | URL | +----------------------------------+-----------+--------------+--------------+---------+-----------+--------------------------+ | 2956a12327b744b29abd4577837b2e6f | regionOne | gnocchi | metric | True | internal | http://192.168.25.3:8041 | | 583453c58b064f69af3de3479675051a | regionOne | gnocchi | metric | True | admin | http://192.168.25.3:8041 | | fa029da0e2c047fc9d9c50eb6b4876c6 | regionOne | gnocchi | metric | True | public | http://192.168.25.3:8041 | +----------------------------------+-----------+--------------+--------------+---------+-----------+--------------------------+", "openstack endpoint list --service alarming +----------------------------------+-----------+--------------+--------------+---------+-----------+--------------------------+ | ID | Region | Service Name | Service Type | Enabled | Interface | URL | +----------------------------------+-----------+--------------+--------------+---------+-----------+--------------------------+ | 08c70ec137b44ed68590f4d5c31162bb | regionOne | aodh | alarming | True | internal | http://192.168.25.3:8042 | | 194042887f3d4eb4b638192a0fe60996 | regionOne | aodh | alarming | True | admin | http://192.168.25.3:8042 | | 2604b693740245ed8960b31dfea1f963 | regionOne | aodh | alarming | True | public | http://192.168.25.3:8042 | +----------------------------------+-----------+--------------+--------------+---------+-----------+--------------------------+", "openstack endpoint list --service orchestration +----------------------------------+-----------+--------------+---------------+---------+-----------+-------------------------------------------+ | ID | Region | Service Name | Service Type | Enabled | Interface | URL | +----------------------------------+-----------+--------------+---------------+---------+-----------+-------------------------------------------+ | 00966a24dd4141349e12680307c11848 | regionOne | heat | orchestration | True | admin | http://192.168.25.3:8004/v1/%(tenant_id)s | | 831e411bb6d44f6db9f5103d659f901e | regionOne | heat | orchestration | True | public | http://192.168.25.3:8004/v1/%(tenant_id)s | | d5be22349add43ae95be4284a42a4a60 | regionOne | heat | orchestration | True | internal | http://192.168.25.3:8004/v1/%(tenant_id)s | +----------------------------------+-----------+--------------+---------------+---------+-----------+-------------------------------------------+", "sudo podman ps --filter=name='heat|gnocchi|ceilometer|aodh' CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES 31e75d62367f registry.redhat.io/rhosp-rhel9/openstack-aodh-api:17.0 kolla_start 27 minutes ago Up 27 minutes ago (healthy) aodh_api 77acf3487736 registry.redhat.io/rhosp-rhel9/openstack-aodh-listener:17.0 kolla_start 27 minutes ago Up 27 minutes ago (healthy) aodh_listener 29ec47b69799 registry.redhat.io/rhosp-rhel9/openstack-aodh-evaluator:17.0 kolla_start 27 minutes ago Up 27 minutes ago (healthy) aodh_evaluator 43efaa86c769 registry.redhat.io/rhosp-rhel9/openstack-aodh-notifier:17.0 kolla_start 27 minutes ago Up 27 minutes ago (healthy) aodh_notifier 0ac8cb2c7470 registry.redhat.io/rhosp-rhel9/openstack-aodh-api:17.0 kolla_start 27 minutes ago Up 27 minutes ago 
(healthy) aodh_api_cron 31b55e091f57 registry.redhat.io/rhosp-rhel9/openstack-ceilometer-central:17.0 kolla_start 27 minutes ago Up 27 minutes ago (healthy) ceilometer_agent_central 5f61331a17d8 registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.0 kolla_start 27 minutes ago Up 27 minutes ago (healthy) ceilometer_agent_compute 7c5ef75d8f1b registry.redhat.io/rhosp-rhel9/openstack-ceilometer-notification:17.0 kolla_start 27 minutes ago Up 27 minutes ago (healthy) ceilometer_agent_notification 88fa57cc1235 registry.redhat.io/rhosp-rhel9/openstack-gnocchi-api:17.0 kolla_start 23 minutes ago Up 23 minutes ago (healthy) gnocchi_api 0f05a58197d5 registry.redhat.io/rhosp-rhel9/openstack-gnocchi-metricd:17.0 kolla_start 23 minutes ago Up 23 minutes ago (healthy) gnocchi_metricd 6d806c285500 registry.redhat.io/rhosp-rhel9/openstack-gnocchi-statsd:17.0 kolla_start 23 minutes ago Up 23 minutes ago (healthy) gnocchi_statsd 7c02cac34c69 registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.0 kolla_start 27 minutes ago Up 27 minutes ago (healthy) heat_api_cron d3903df545ce registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.0 kolla_start 27 minutes ago Up 27 minutes ago (healthy) heat_api db1d33506e3d registry.redhat.io/rhosp-rhel9/openstack-heat-api-cfn:17.0 kolla_start 27 minutes ago Up 27 minutes ago (healthy) heat_api_cfn 051446294c70 registry.redhat.io/rhosp-rhel9/openstack-heat-engine:17.0 kolla_start 27 minutes ago Up 27 minutes ago (healthy) heat_engine", "openstack metric status --fit-width +-----------------------------------------------------+--------------------------------------------------------------------------------------------------------------------+ | Field | Value | +-----------------------------------------------------+--------------------------------------------------------------------------------------------------------------------+ | metricd/processors | ['standalone-80.general.local.0.a94fbf77-1ac0-49ed-bfe2-a89f014fde01', | | | 'standalone-80.general.local.3.28ca78d7-a80e-4515-8060-233360b410eb', | | | 'standalone-80.general.local.1.7e8b5a5b-2ca1-49be-bc22-25f51d67c00a', | | | 'standalone-80.general.local.2.3c4fe59e-23cd-4742-833d-42ff0a4cb692'] | | storage/number of metric having measures to process | 0 | | storage/total number of measures to process | 0 | +-----------------------------------------------------+--------------------------------------------------------------------------------------------------------------------+" ]
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/16.2/html/autoscaling_for_instances/assembly-configuring-and-deploying-the-overcloud-for-autoscaling_assembly-configuring-and-deploying-the-overcloud-for-autoscaling
Chapter 3. ClusterServiceVersion [operators.coreos.com/v1alpha1]
Chapter 3. ClusterServiceVersion [operators.coreos.com/v1alpha1] Description ClusterServiceVersion is a Custom Resource of type ClusterServiceVersionSpec . Type object Required metadata spec 3.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec object ClusterServiceVersionSpec declarations tell OLM how to install an operator that can manage apps for a given version. status object ClusterServiceVersionStatus represents information about the status of a CSV. Status may trail the actual state of a system. 3.1.1. .spec Description ClusterServiceVersionSpec declarations tell OLM how to install an operator that can manage apps for a given version. Type object Required displayName install Property Type Description annotations object (string) Annotations is an unstructured key value map stored with a resource that may be set by external tools to store and retrieve arbitrary metadata. apiservicedefinitions object APIServiceDefinitions declares all of the extension apis managed or required by an operator being run by a ClusterServiceVersion. cleanup object Cleanup specifies the cleanup behaviour when the CSV gets deleted. customresourcedefinitions object CustomResourceDefinitions declares all of the CRDs managed or required by an operator being run by a ClusterServiceVersion. If the CRD is present in the Owned list, it is implicitly required. description string displayName string icon array icon[] object install object NamedInstallStrategy represents the block of a ClusterServiceVersion resource where the install strategy is specified. installModes array InstallModes specify supported installation types installModes[] object InstallMode associates an InstallModeType with a flag representing if the CSV supports it keywords array (string) labels object (string) Map of string keys and values that can be used to organize and categorize (scope and select) objects. links array links[] object maintainers array maintainers[] object maturity string minKubeVersion string nativeAPIs array nativeAPIs[] object GroupVersionKind unambiguously identifies a kind. It doesn't anonymously include GroupVersion to avoid automatic coercion. It doesn't use a GroupVersion to avoid custom marshalling provider object relatedImages array List any related images, or other container images that your Operator might require to perform its functions. This list should include operand images as well. All image references should be specified by digest (SHA) and not by tag. This field is only used during catalog creation and plays no part in cluster runtime. relatedImages[] object replaces string The name of a CSV this one replaces. Should match the metadata.Name field of the old CSV. selector object Label selector for related resources. 
skips array (string) The name(s) of one or more CSV(s) that should be skipped in the upgrade graph. Should match the metadata.Name field of the CSV that should be skipped. This field is only used during catalog creation and plays no part in cluster runtime. version string webhookdefinitions array webhookdefinitions[] object WebhookDescription provides details to OLM about required webhooks 3.1.2. .spec.apiservicedefinitions Description APIServiceDefinitions declares all of the extension apis managed or required by an operator being run by a ClusterServiceVersion. Type object Property Type Description owned array owned[] object APIServiceDescription provides details to OLM about apis provided via aggregation required array required[] object APIServiceDescription provides details to OLM about apis provided via aggregation 3.1.3. .spec.apiservicedefinitions.owned Description Type array 3.1.4. .spec.apiservicedefinitions.owned[] Description APIServiceDescription provides details to OLM about apis provided via aggregation Type object Required group kind name version Property Type Description actionDescriptors array actionDescriptors[] object ActionDescriptor describes a declarative action that can be performed on a custom resource instance containerPort integer deploymentName string description string displayName string group string kind string name string resources array resources[] object APIResourceReference is a Kubernetes resource type used by a custom resource specDescriptors array specDescriptors[] object SpecDescriptor describes a field in a spec block of a CRD so that OLM can consume it statusDescriptors array statusDescriptors[] object StatusDescriptor describes a field in a status block of a CRD so that OLM can consume it version string 3.1.5. .spec.apiservicedefinitions.owned[].actionDescriptors Description Type array 3.1.6. .spec.apiservicedefinitions.owned[].actionDescriptors[] Description ActionDescriptor describes a declarative action that can be performed on a custom resource instance Type object Required path Property Type Description description string displayName string path string value string RawMessage is a raw encoded JSON value. It implements Marshaler and Unmarshaler and can be used to delay JSON decoding or precompute a JSON encoding. x-descriptors array (string) 3.1.7. .spec.apiservicedefinitions.owned[].resources Description Type array 3.1.8. .spec.apiservicedefinitions.owned[].resources[] Description APIResourceReference is a Kubernetes resource type used by a custom resource Type object Required kind name version Property Type Description kind string name string version string 3.1.9. .spec.apiservicedefinitions.owned[].specDescriptors Description Type array 3.1.10. .spec.apiservicedefinitions.owned[].specDescriptors[] Description SpecDescriptor describes a field in a spec block of a CRD so that OLM can consume it Type object Required path Property Type Description description string displayName string path string value string RawMessage is a raw encoded JSON value. It implements Marshaler and Unmarshaler and can be used to delay JSON decoding or precompute a JSON encoding. x-descriptors array (string) 3.1.11. .spec.apiservicedefinitions.owned[].statusDescriptors Description Type array 3.1.12. 
.spec.apiservicedefinitions.owned[].statusDescriptors[] Description StatusDescriptor describes a field in a status block of a CRD so that OLM can consume it Type object Required path Property Type Description description string displayName string path string value string RawMessage is a raw encoded JSON value. It implements Marshaler and Unmarshaler and can be used to delay JSON decoding or precompute a JSON encoding. x-descriptors array (string) 3.1.13. .spec.apiservicedefinitions.required Description Type array 3.1.14. .spec.apiservicedefinitions.required[] Description APIServiceDescription provides details to OLM about apis provided via aggregation Type object Required group kind name version Property Type Description actionDescriptors array actionDescriptors[] object ActionDescriptor describes a declarative action that can be performed on a custom resource instance containerPort integer deploymentName string description string displayName string group string kind string name string resources array resources[] object APIResourceReference is a Kubernetes resource type used by a custom resource specDescriptors array specDescriptors[] object SpecDescriptor describes a field in a spec block of a CRD so that OLM can consume it statusDescriptors array statusDescriptors[] object StatusDescriptor describes a field in a status block of a CRD so that OLM can consume it version string 3.1.15. .spec.apiservicedefinitions.required[].actionDescriptors Description Type array 3.1.16. .spec.apiservicedefinitions.required[].actionDescriptors[] Description ActionDescriptor describes a declarative action that can be performed on a custom resource instance Type object Required path Property Type Description description string displayName string path string value string RawMessage is a raw encoded JSON value. It implements Marshaler and Unmarshaler and can be used to delay JSON decoding or precompute a JSON encoding. x-descriptors array (string) 3.1.17. .spec.apiservicedefinitions.required[].resources Description Type array 3.1.18. .spec.apiservicedefinitions.required[].resources[] Description APIResourceReference is a Kubernetes resource type used by a custom resource Type object Required kind name version Property Type Description kind string name string version string 3.1.19. .spec.apiservicedefinitions.required[].specDescriptors Description Type array 3.1.20. .spec.apiservicedefinitions.required[].specDescriptors[] Description SpecDescriptor describes a field in a spec block of a CRD so that OLM can consume it Type object Required path Property Type Description description string displayName string path string value string RawMessage is a raw encoded JSON value. It implements Marshaler and Unmarshaler and can be used to delay JSON decoding or precompute a JSON encoding. x-descriptors array (string) 3.1.21. .spec.apiservicedefinitions.required[].statusDescriptors Description Type array 3.1.22. .spec.apiservicedefinitions.required[].statusDescriptors[] Description StatusDescriptor describes a field in a status block of a CRD so that OLM can consume it Type object Required path Property Type Description description string displayName string path string value string RawMessage is a raw encoded JSON value. It implements Marshaler and Unmarshaler and can be used to delay JSON decoding or precompute a JSON encoding. x-descriptors array (string) 3.1.23. 
.spec.cleanup Description Cleanup specifies the cleanup behavior when the CSV gets deleted Type object Required enabled Property Type Description enabled boolean 3.1.24. .spec.customresourcedefinitions Description CustomResourceDefinitions declares all of the CRDs managed or required by an operator being run by ClusterServiceVersion. If the CRD is present in the Owned list, it is implicitly required. Type object Property Type Description owned array owned[] object CRDDescription provides details to OLM about the CRDs required array required[] object CRDDescription provides details to OLM about the CRDs 3.1.25. .spec.customresourcedefinitions.owned Description Type array 3.1.26. .spec.customresourcedefinitions.owned[] Description CRDDescription provides details to OLM about the CRDs Type object Required kind name version Property Type Description actionDescriptors array actionDescriptors[] object ActionDescriptor describes a declarative action that can be performed on a custom resource instance description string displayName string kind string name string resources array resources[] object APIResourceReference is a Kubernetes resource type used by a custom resource specDescriptors array specDescriptors[] object SpecDescriptor describes a field in a spec block of a CRD so that OLM can consume it statusDescriptors array statusDescriptors[] object StatusDescriptor describes a field in a status block of a CRD so that OLM can consume it version string 3.1.27. .spec.customresourcedefinitions.owned[].actionDescriptors Description Type array 3.1.28. .spec.customresourcedefinitions.owned[].actionDescriptors[] Description ActionDescriptor describes a declarative action that can be performed on a custom resource instance Type object Required path Property Type Description description string displayName string path string value string RawMessage is a raw encoded JSON value. It implements Marshaler and Unmarshaler and can be used to delay JSON decoding or precompute a JSON encoding. x-descriptors array (string) 3.1.29. .spec.customresourcedefinitions.owned[].resources Description Type array 3.1.30. .spec.customresourcedefinitions.owned[].resources[] Description APIResourceReference is a Kubernetes resource type used by a custom resource Type object Required kind name version Property Type Description kind string name string version string 3.1.31. .spec.customresourcedefinitions.owned[].specDescriptors Description Type array 3.1.32. .spec.customresourcedefinitions.owned[].specDescriptors[] Description SpecDescriptor describes a field in a spec block of a CRD so that OLM can consume it Type object Required path Property Type Description description string displayName string path string value string RawMessage is a raw encoded JSON value. It implements Marshaler and Unmarshaler and can be used to delay JSON decoding or precompute a JSON encoding. x-descriptors array (string) 3.1.33. .spec.customresourcedefinitions.owned[].statusDescriptors Description Type array 3.1.34. .spec.customresourcedefinitions.owned[].statusDescriptors[] Description StatusDescriptor describes a field in a status block of a CRD so that OLM can consume it Type object Required path Property Type Description description string displayName string path string value string RawMessage is a raw encoded JSON value. It implements Marshaler and Unmarshaler and can be used to delay JSON decoding or precompute a JSON encoding. x-descriptors array (string) 3.1.35. .spec.customresourcedefinitions.required Description Type array
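For comparison with the aggregated API case, the following is a hedged sketch of an owned CRD declaration; the CRD name, group, and descriptor paths are illustrative assumptions:

spec:
  customresourcedefinitions:
    owned:
      - name: examples.example.com      # plural.group form of the CRD name (assumed)
        version: v1alpha1
        kind: Example
        displayName: Example
        description: An illustrative CRD owned by the operator.
        resources:
          - kind: Deployment            # resource types the operand creates
            name: ""
            version: v1
        specDescriptors:
          - path: replicas              # hypothetical field in the spec block
            displayName: Replicas
            x-descriptors:
              - urn:alm:descriptor:com.tectonic.ui:podCount
        statusDescriptors:
          - path: conditions
            displayName: Conditions
            x-descriptors:
              - urn:alm:descriptor:io.kubernetes.conditions

3.1.36.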
.spec.customresourcedefinitions.required[] Description CRDDescription provides details to OLM about the CRDs Type object Required kind name version Property Type Description actionDescriptors array actionDescriptors[] object ActionDescriptor describes a declarative action that can be performed on a custom resource instance description string displayName string kind string name string resources array resources[] object APIResourceReference is a Kubernetes resource type used by a custom resource specDescriptors array specDescriptors[] object SpecDescriptor describes a field in a spec block of a CRD so that OLM can consume it statusDescriptors array statusDescriptors[] object StatusDescriptor describes a field in a status block of a CRD so that OLM can consume it version string 3.1.37. .spec.customresourcedefinitions.required[].actionDescriptors Description Type array 3.1.38. .spec.customresourcedefinitions.required[].actionDescriptors[] Description ActionDescriptor describes a declarative action that can be performed on a custom resource instance Type object Required path Property Type Description description string displayName string path string value string RawMessage is a raw encoded JSON value. It implements Marshaler and Unmarshaler and can be used to delay JSON decoding or precompute a JSON encoding. x-descriptors array (string) 3.1.39. .spec.customresourcedefinitions.required[].resources Description Type array 3.1.40. .spec.customresourcedefinitions.required[].resources[] Description APIResourceReference is a Kubernetes resource type used by a custom resource Type object Required kind name version Property Type Description kind string name string version string 3.1.41. .spec.customresourcedefinitions.required[].specDescriptors Description Type array 3.1.42. .spec.customresourcedefinitions.required[].specDescriptors[] Description SpecDescriptor describes a field in a spec block of a CRD so that OLM can consume it Type object Required path Property Type Description description string displayName string path string value string RawMessage is a raw encoded JSON value. It implements Marshaler and Unmarshaler and can be used to delay JSON decoding or precompute a JSON encoding. x-descriptors array (string) 3.1.43. .spec.customresourcedefinitions.required[].statusDescriptors Description Type array 3.1.44. .spec.customresourcedefinitions.required[].statusDescriptors[] Description StatusDescriptor describes a field in a status block of a CRD so that OLM can consume it Type object Required path Property Type Description description string displayName string path string value string RawMessage is a raw encoded JSON value. It implements Marshaler and Unmarshaler and can be used to delay JSON decoding or precompute a JSON encoding. x-descriptors array (string) 3.1.45. .spec.icon Description Type array 3.1.46. .spec.icon[] Description Type object Required base64data mediatype Property Type Description base64data string mediatype string 3.1.47. .spec.install Description NamedInstallStrategy represents the block of a ClusterServiceVersion resource where the install strategy is specified. Type object Required strategy Property Type Description spec object StrategyDetailsDeployment represents the parsed details of a Deployment InstallStrategy. strategy string 3.1.48. .spec.install.spec Description StrategyDetailsDeployment represents the parsed details of a Deployment InstallStrategy.
Type object Required deployments Property Type Description clusterPermissions array clusterPermissions[] object StrategyDeploymentPermissions describe the RBAC rules and service account needed by the install strategy deployments array deployments[] object StrategyDeploymentSpec contains the name, spec and labels for the deployment ALM should create permissions array permissions[] object StrategyDeploymentPermissions describe the RBAC rules and service account needed by the install strategy 3.1.49. .spec.install.spec.clusterPermissions Description Type array 3.1.50. .spec.install.spec.clusterPermissions[] Description StrategyDeploymentPermissions describe the RBAC rules and service account needed by the install strategy Type object Required rules serviceAccountName Property Type Description rules array rules[] object PolicyRule holds information that describes a policy rule, but does not contain information about who the rule applies to or which namespace the rule applies to. serviceAccountName string 3.1.51. .spec.install.spec.clusterPermissions[].rules Description Type array 3.1.52. .spec.install.spec.clusterPermissions[].rules[] Description PolicyRule holds information that describes a policy rule, but does not contain information about who the rule applies to or which namespace the rule applies to. Type object Required verbs Property Type Description apiGroups array (string) APIGroups is the name of the APIGroup that contains the resources. If multiple API groups are specified, any action requested against one of the enumerated resources in any API group will be allowed. "" represents the core API group and "*" represents all API groups. nonResourceURLs array (string) NonResourceURLs is a set of partial URLs that a user should have access to. *s are allowed, but only as the full, final step in the path. Since non-resource URLs are not namespaced, this field is only applicable for ClusterRoles referenced from a ClusterRoleBinding. Rules can either apply to API resources (such as "pods" or "secrets") or non-resource URL paths (such as "/api"), but not both. resourceNames array (string) ResourceNames is an optional white list of names that the rule applies to. An empty set means that everything is allowed. resources array (string) Resources is a list of resources this rule applies to. '*' represents all resources. verbs array (string) Verbs is a list of Verbs that apply to ALL the ResourceKinds contained in this rule. '*' represents all verbs. 3.1.53. .spec.install.spec.deployments Description Type array 3.1.54. .spec.install.spec.deployments[] Description StrategyDeploymentSpec contains the name, spec and labels for the deployment ALM should create Type object Required name spec Property Type Description label object (string) Set is a map of label:value. It implements Labels. name string spec object DeploymentSpec is the specification of the desired behavior of the Deployment. 3.1.55. .spec.install.spec.deployments[].spec Description DeploymentSpec is the specification of the desired behavior of the Deployment. Type object Required selector template Property Type Description minReadySeconds integer Minimum number of seconds for which a newly created pod should be ready without any of its containers crashing, for it to be considered available. Defaults to 0 (pod will be considered available as soon as it is ready) paused boolean Indicates that the deployment is paused.
progressDeadlineSeconds integer The maximum time in seconds for a deployment to make progress before it is considered to be failed. The deployment controller will continue to process failed deployments and a condition with a ProgressDeadlineExceeded reason will be surfaced in the deployment status. Note that progress will not be estimated during the time a deployment is paused. Defaults to 600s. replicas integer Number of desired pods. This is a pointer to distinguish between explicit zero and not specified. Defaults to 1. revisionHistoryLimit integer The number of old ReplicaSets to retain to allow rollback. This is a pointer to distinguish between explicit zero and not specified. Defaults to 10. selector object Label selector for pods. Existing ReplicaSets whose pods are selected by this will be the ones affected by this deployment. It must match the pod template's labels. strategy object The deployment strategy to use to replace existing pods with new ones. template object Template describes the pods that will be created. 3.1.56. .spec.install.spec.deployments[].spec.selector Description Label selector for pods. Existing ReplicaSets whose pods are selected by this will be the ones affected by this deployment. It must match the pod template's labels. Type object Property Type Description matchExpressions array matchExpressions is a list of label selector requirements. The requirements are ANDed. matchExpressions[] object A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. matchLabels object (string) matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed. 3.1.57. .spec.install.spec.deployments[].spec.selector.matchExpressions Description matchExpressions is a list of label selector requirements. The requirements are ANDed. Type array 3.1.58. .spec.install.spec.deployments[].spec.selector.matchExpressions[] Description A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Type object Required key operator Property Type Description key string key is the label key that the selector applies to. operator string operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. values array (string) values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch. 3.1.59. .spec.install.spec.deployments[].spec.strategy Description The deployment strategy to use to replace existing pods with new ones. Type object Property Type Description rollingUpdate object Rolling update config params. Present only if DeploymentStrategyType = RollingUpdate. type string Type of deployment. Can be "Recreate" or "RollingUpdate". Default is RollingUpdate. 3.1.60. .spec.install.spec.deployments[].spec.strategy.rollingUpdate Description Rolling update config params. Present only if DeploymentStrategyType = RollingUpdate.
Type object Property Type Description maxSurge integer-or-string The maximum number of pods that can be scheduled above the desired number of pods. Value can be an absolute number (ex: 5) or a percentage of desired pods (ex: 10%). This cannot be 0 if MaxUnavailable is 0. Absolute number is calculated from percentage by rounding up. Defaults to 25%. Example: when this is set to 30%, the new ReplicaSet can be scaled up immediately when the rolling update starts, such that the total number of old and new pods does not exceed 130% of desired pods. Once old pods have been killed, new ReplicaSet can be scaled up further, ensuring that the total number of pods running at any time during the update is at most 130% of desired pods. maxUnavailable integer-or-string The maximum number of pods that can be unavailable during the update. Value can be an absolute number (ex: 5) or a percentage of desired pods (ex: 10%). Absolute number is calculated from percentage by rounding down. This cannot be 0 if MaxSurge is 0. Defaults to 25%. Example: when this is set to 30%, the old ReplicaSet can be scaled down to 70% of desired pods immediately when the rolling update starts. Once new pods are ready, old ReplicaSet can be scaled down further, followed by scaling up the new ReplicaSet, ensuring that the total number of pods available at all times during the update is at least 70% of desired pods. 3.1.61. .spec.install.spec.deployments[].spec.template Description Template describes the pods that will be created. Type object Property Type Description metadata `` Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec object Specification of the desired behavior of the pod. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status 3.1.62. .spec.install.spec.deployments[].spec.template.spec Description Specification of the desired behavior of the pod. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status Type object Required containers Property Type Description activeDeadlineSeconds integer Optional duration in seconds the pod may be active on the node relative to StartTime before the system will actively try to mark it failed and kill associated containers. Value must be a positive integer. affinity object If specified, the pod's scheduling constraints automountServiceAccountToken boolean AutomountServiceAccountToken indicates whether a service account token should be automatically mounted. containers array List of containers belonging to the pod. Containers cannot currently be added or removed. There must be at least one container in a Pod. Cannot be updated. containers[] object A single application container that you want to run within a pod. dnsConfig object Specifies the DNS parameters of a pod. Parameters specified here will be merged to the generated DNS configuration based on DNSPolicy. dnsPolicy string Set DNS policy for the pod. Defaults to "ClusterFirst". Valid values are 'ClusterFirstWithHostNet', 'ClusterFirst', 'Default' or 'None'. DNS parameters given in DNSConfig will be merged with the policy selected with DNSPolicy. To have DNS options set along with hostNetwork, you have to specify DNS policy explicitly to 'ClusterFirstWithHostNet'. enableServiceLinks boolean EnableServiceLinks indicates whether information about services should be injected into pod's environment variables, matching the syntax of Docker links.
Optional: Defaults to true. ephemeralContainers array List of ephemeral containers run in this pod. Ephemeral containers may be run in an existing pod to perform user-initiated actions such as debugging. This list cannot be specified when creating a pod, and it cannot be modified by updating the pod spec. In order to add an ephemeral container to an existing pod, use the pod's ephemeralcontainers subresource. ephemeralContainers[] object An EphemeralContainer is a temporary container that you may add to an existing Pod for user-initiated activities such as debugging. Ephemeral containers have no resource or scheduling guarantees, and they will not be restarted when they exit or when a Pod is removed or restarted. The kubelet may evict a Pod if an ephemeral container causes the Pod to exceed its resource allocation. To add an ephemeral container, use the ephemeralcontainers subresource of an existing Pod. Ephemeral containers may not be removed or restarted. hostAliases array HostAliases is an optional list of hosts and IPs that will be injected into the pod's hosts file if specified. This is only valid for non-hostNetwork pods. hostAliases[] object HostAlias holds the mapping between IP and hostnames that will be injected as an entry in the pod's hosts file. hostIPC boolean Use the host's ipc namespace. Optional: Defaults to false. hostNetwork boolean Host networking requested for this pod. Use the host's network namespace. If this option is set, the ports that will be used must be specified. Defaults to false. hostPID boolean Use the host's pid namespace. Optional: Defaults to false. hostUsers boolean Use the host's user namespace. Optional: Defaults to true. If set to true or not present, the pod will be run in the host user namespace, useful for when the pod needs a feature only available to the host user namespace, such as loading a kernel module with CAP_SYS_MODULE. When set to false, a new userns is created for the pod. Setting false is useful for mitigating container breakout vulnerabilities while still allowing users to run their containers as root without actually having root privileges on the host. This field is alpha-level and is only honored by servers that enable the UserNamespacesSupport feature. hostname string Specifies the hostname of the Pod. If not specified, the pod's hostname will be set to a system-defined value. imagePullSecrets array ImagePullSecrets is an optional list of references to secrets in the same namespace to use for pulling any of the images used by this PodSpec. If specified, these secrets will be passed to individual puller implementations for them to use. More info: https://kubernetes.io/docs/concepts/containers/images#specifying-imagepullsecrets-on-a-pod imagePullSecrets[] object LocalObjectReference contains enough information to let you locate the referenced object inside the same namespace. initContainers array List of initialization containers belonging to the pod. Init containers are executed in order prior to containers being started. If any init container fails, the pod is considered to have failed and is handled according to its restartPolicy. The name for an init container or normal container must be unique among all containers. Init containers may not have Lifecycle actions, Readiness probes, Liveness probes, or Startup probes. The resourceRequirements of an init container are taken into account during scheduling by finding the highest request/limit for each resource type, and then using the max of that value or the sum of the normal containers.
Limits are applied to init containers in a similar fashion. Init containers cannot currently be added or removed. Cannot be updated. More info: https://kubernetes.io/docs/concepts/workloads/pods/init-containers/ initContainers[] object A single application container that you want to run within a pod. nodeName string NodeName is a request to schedule this pod onto a specific node. If it is non-empty, the scheduler simply schedules this pod onto that node, assuming that it fits resource requirements. nodeSelector object (string) NodeSelector is a selector which must be true for the pod to fit on a node. Selector which must match a node's labels for the pod to be scheduled on that node. More info: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/ os object Specifies the OS of the containers in the pod. Some pod and container fields are restricted if this is set. If the OS field is set to linux, the following fields must be unset: - spec.securityContext.windowsOptions If the OS field is set to windows, the following fields must be unset: - spec.hostPID - spec.hostIPC - spec.hostUsers - spec.securityContext.seLinuxOptions - spec.securityContext.seccompProfile - spec.securityContext.fsGroup - spec.securityContext.fsGroupChangePolicy - spec.securityContext.sysctls - spec.shareProcessNamespace - spec.securityContext.runAsUser - spec.securityContext.runAsGroup - spec.securityContext.supplementalGroups - spec.containers[*].securityContext.seLinuxOptions - spec.containers[*].securityContext.seccompProfile - spec.containers[*].securityContext.capabilities - spec.containers[*].securityContext.readOnlyRootFilesystem - spec.containers[*].securityContext.privileged - spec.containers[*].securityContext.allowPrivilegeEscalation - spec.containers[*].securityContext.procMount - spec.containers[*].securityContext.runAsUser - spec.containers[*].securityContext.runAsGroup overhead integer-or-string Overhead represents the resource overhead associated with running a pod for a given RuntimeClass. This field will be autopopulated at admission time by the RuntimeClass admission controller. If the RuntimeClass admission controller is enabled, overhead must not be set in Pod create requests. The RuntimeClass admission controller will reject Pod create requests which have the overhead already set. If RuntimeClass is configured and selected in the PodSpec, Overhead will be set to the value defined in the corresponding RuntimeClass, otherwise it will remain unset and treated as zero. More info: https://git.k8s.io/enhancements/keps/sig-node/688-pod-overhead/README.md preemptionPolicy string PreemptionPolicy is the Policy for preempting pods with lower priority. One of Never, PreemptLowerPriority. Defaults to PreemptLowerPriority if unset. priority integer The priority value. Various system components use this field to find the priority of the pod. When Priority Admission Controller is enabled, it prevents users from setting this field. The admission controller populates this field from PriorityClassName. The higher the value, the higher the priority. priorityClassName string If specified, indicates the pod's priority. "system-node-critical" and "system-cluster-critical" are two special keywords which indicate the highest priorities with the former being the highest priority. Any other name must be defined by creating a PriorityClass object with that name. If not specified, the pod priority will be default or zero if there is no default.
readinessGates array If specified, all readiness gates will be evaluated for pod readiness. A pod is ready when all its containers are ready AND all conditions specified in the readiness gates have status equal to "True". More info: https://git.k8s.io/enhancements/keps/sig-network/580-pod-readiness-gates readinessGates[] object PodReadinessGate contains the reference to a pod condition restartPolicy string Restart policy for all containers within the pod. One of Always, OnFailure, Never. Defaults to Always. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle/#restart-policy runtimeClassName string RuntimeClassName refers to a RuntimeClass object in the node.k8s.io group, which should be used to run this pod. If no RuntimeClass resource matches the named class, the pod will not be run. If unset or empty, the "legacy" RuntimeClass will be used, which is an implicit class with an empty definition that uses the default runtime handler. More info: https://git.k8s.io/enhancements/keps/sig-node/585-runtime-class schedulerName string If specified, the pod will be dispatched by the specified scheduler. If not specified, the pod will be dispatched by the default scheduler. securityContext object SecurityContext holds pod-level security attributes and common container settings. Optional: Defaults to empty. See type description for default values of each field. serviceAccount string DeprecatedServiceAccount is a deprecated alias for ServiceAccountName. Deprecated: Use serviceAccountName instead. serviceAccountName string ServiceAccountName is the name of the ServiceAccount to use to run this pod. More info: https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/ setHostnameAsFQDN boolean If true, the pod's hostname will be configured as the pod's FQDN, rather than the leaf name (the default). In Linux containers, this means setting the FQDN in the hostname field of the kernel (the nodename field of struct utsname). In Windows containers, this means setting the registry value of hostname for the registry key HKEY_LOCAL_MACHINE\\SYSTEM\\CurrentControlSet\\Services\\Tcpip\\Parameters to FQDN. If a pod does not have FQDN, this has no effect. Defaults to false. shareProcessNamespace boolean Share a single process namespace between all of the containers in a pod. When this is set, containers will be able to view and signal processes from other containers in the same pod, and the first process in each container will not be assigned PID 1. HostPID and ShareProcessNamespace cannot both be set. Optional: Defaults to false. subdomain string If specified, the fully qualified Pod hostname will be "<hostname>.<subdomain>.<pod namespace>.svc.<cluster domain>". If not specified, the pod will not have a domainname at all. terminationGracePeriodSeconds integer Optional duration in seconds the pod needs to terminate gracefully. May be decreased in delete request. Value must be a non-negative integer. The value zero indicates stop immediately via the kill signal (no opportunity to shut down). If this value is nil, the default grace period will be used instead. The grace period is the duration in seconds after the processes running in the pod are sent a termination signal and the time when the processes are forcibly halted with a kill signal. Set this value longer than the expected cleanup time for your process. Defaults to 30 seconds. tolerations array If specified, the pod's tolerations.
tolerations[] object The pod this Toleration is attached to tolerates any taint that matches the triple <key,value,effect> using the matching operator <operator>. topologySpreadConstraints array TopologySpreadConstraints describes how a group of pods ought to spread across topology domains. Scheduler will schedule pods in a way which abides by the constraints. All topologySpreadConstraints are ANDed. topologySpreadConstraints[] object TopologySpreadConstraint specifies how to spread matching pods among the given topology. volumes array List of volumes that can be mounted by containers belonging to the pod. More info: https://kubernetes.io/docs/concepts/storage/volumes volumes[] object Volume represents a named volume in a pod that may be accessed by any container in the pod. 3.1.63. .spec.install.spec.deployments[].spec.template.spec.affinity Description If specified, the pod's scheduling constraints Type object Property Type Description nodeAffinity object Describes node affinity scheduling rules for the pod. podAffinity object Describes pod affinity scheduling rules (e.g. co-locate this pod in the same node, zone, etc. as some other pod(s)). podAntiAffinity object Describes pod anti-affinity scheduling rules (e.g. avoid putting this pod in the same node, zone, etc. as some other pod(s)). 3.1.64. .spec.install.spec.deployments[].spec.template.spec.affinity.nodeAffinity Description Describes node affinity scheduling rules for the pod. Type object Property Type Description preferredDuringSchedulingIgnoredDuringExecution array The scheduler will prefer to schedule pods to nodes that satisfy the affinity expressions specified by this field, but it may choose a node that violates one or more of the expressions. The node that is most preferred is the one with the greatest sum of weights, i.e. for each node that meets all of the scheduling requirements (resource request, requiredDuringScheduling affinity expressions, etc.), compute a sum by iterating through the elements of this field and adding "weight" to the sum if the node matches the corresponding matchExpressions; the node(s) with the highest sum are the most preferred. preferredDuringSchedulingIgnoredDuringExecution[] object An empty preferred scheduling term matches all objects with implicit weight 0 (i.e. it's a no-op). A null preferred scheduling term matches no objects (i.e. is also a no-op). requiredDuringSchedulingIgnoredDuringExecution object If the affinity requirements specified by this field are not met at scheduling time, the pod will not be scheduled onto the node. If the affinity requirements specified by this field cease to be met at some point during pod execution (e.g. due to an update), the system may or may not try to eventually evict the pod from its node. 3.1.65. .spec.install.spec.deployments[].spec.template.spec.affinity.nodeAffinity.preferredDuringSchedulingIgnoredDuringExecution Description The scheduler will prefer to schedule pods to nodes that satisfy the affinity expressions specified by this field, but it may choose a node that violates one or more of the expressions. The node that is most preferred is the one with the greatest sum of weights, i.e. for each node that meets all of the scheduling requirements (resource request, requiredDuringScheduling affinity expressions, etc.), compute a sum by iterating through the elements of this field and adding "weight" to the sum if the node matches the corresponding matchExpressions; the node(s) with the highest sum are the most preferred. Type array 3.1.66. 
.spec.install.spec.deployments[].spec.template.spec.affinity.nodeAffinity.preferredDuringSchedulingIgnoredDuringExecution[] Description An empty preferred scheduling term matches all objects with implicit weight 0 (i.e. it's a no-op). A null preferred scheduling term matches no objects (i.e. is also a no-op). Type object Required preference weight Property Type Description preference object A node selector term, associated with the corresponding weight. weight integer Weight associated with matching the corresponding nodeSelectorTerm, in the range 1-100. 3.1.67. .spec.install.spec.deployments[].spec.template.spec.affinity.nodeAffinity.preferredDuringSchedulingIgnoredDuringExecution[].preference Description A node selector term, associated with the corresponding weight. Type object Property Type Description matchExpressions array A list of node selector requirements by node's labels. matchExpressions[] object A node selector requirement is a selector that contains values, a key, and an operator that relates the key and values. matchFields array A list of node selector requirements by node's fields. matchFields[] object A node selector requirement is a selector that contains values, a key, and an operator that relates the key and values. 3.1.68. .spec.install.spec.deployments[].spec.template.spec.affinity.nodeAffinity.preferredDuringSchedulingIgnoredDuringExecution[].preference.matchExpressions Description A list of node selector requirements by node's labels. Type array 3.1.69. .spec.install.spec.deployments[].spec.template.spec.affinity.nodeAffinity.preferredDuringSchedulingIgnoredDuringExecution[].preference.matchExpressions[] Description A node selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Type object Required key operator Property Type Description key string The label key that the selector applies to. operator string Represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists, DoesNotExist, Gt, and Lt. values array (string) An array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. If the operator is Gt or Lt, the values array must have a single element, which will be interpreted as an integer. This array is replaced during a strategic merge patch. 3.1.70. .spec.install.spec.deployments[].spec.template.spec.affinity.nodeAffinity.preferredDuringSchedulingIgnoredDuringExecution[].preference.matchFields Description A list of node selector requirements by node's fields. Type array 3.1.71. .spec.install.spec.deployments[].spec.template.spec.affinity.nodeAffinity.preferredDuringSchedulingIgnoredDuringExecution[].preference.matchFields[] Description A node selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Type object Required key operator Property Type Description key string The label key that the selector applies to. operator string Represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists, DoesNotExist, Gt, and Lt. values array (string) An array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. If the operator is Gt or Lt, the values array must have a single element, which will be interpreted as an integer. This array is replaced during a strategic merge patch.
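To make the preference structure concrete, the following is a hedged sketch of a preferred node affinity term as it could appear under .spec.install.spec.deployments[].spec.template.spec; the label key and value are assumptions:

affinity:
  nodeAffinity:
    preferredDuringSchedulingIgnoredDuringExecution:
      - weight: 50                      # must be in the range 1-100
        preference:
          matchExpressions:
            - key: kubernetes.io/arch   # node label key (assumed)
              operator: In
              values:
                - amd64

3.1.72.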
.spec.install.spec.deployments[].spec.template.spec.affinity.nodeAffinity.requiredDuringSchedulingIgnoredDuringExecution Description If the affinity requirements specified by this field are not met at scheduling time, the pod will not be scheduled onto the node. If the affinity requirements specified by this field cease to be met at some point during pod execution (e.g. due to an update), the system may or may not try to eventually evict the pod from its node. Type object Required nodeSelectorTerms Property Type Description nodeSelectorTerms array Required. A list of node selector terms. The terms are ORed. nodeSelectorTerms[] object A null or empty node selector term matches no objects. Their requirements are ANDed. The TopologySelectorTerm type implements a subset of the NodeSelectorTerm. 3.1.73. .spec.install.spec.deployments[].spec.template.spec.affinity.nodeAffinity.requiredDuringSchedulingIgnoredDuringExecution.nodeSelectorTerms Description Required. A list of node selector terms. The terms are ORed. Type array 3.1.74. .spec.install.spec.deployments[].spec.template.spec.affinity.nodeAffinity.requiredDuringSchedulingIgnoredDuringExecution.nodeSelectorTerms[] Description A null or empty node selector term matches no objects. Their requirements are ANDed. The TopologySelectorTerm type implements a subset of the NodeSelectorTerm. Type object Property Type Description matchExpressions array A list of node selector requirements by node's labels. matchExpressions[] object A node selector requirement is a selector that contains values, a key, and an operator that relates the key and values. matchFields array A list of node selector requirements by node's fields. matchFields[] object A node selector requirement is a selector that contains values, a key, and an operator that relates the key and values. 3.1.75. .spec.install.spec.deployments[].spec.template.spec.affinity.nodeAffinity.requiredDuringSchedulingIgnoredDuringExecution.nodeSelectorTerms[].matchExpressions Description A list of node selector requirements by node's labels. Type array 3.1.76. .spec.install.spec.deployments[].spec.template.spec.affinity.nodeAffinity.requiredDuringSchedulingIgnoredDuringExecution.nodeSelectorTerms[].matchExpressions[] Description A node selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Type object Required key operator Property Type Description key string The label key that the selector applies to. operator string Represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists, DoesNotExist, Gt, and Lt. values array (string) An array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. If the operator is Gt or Lt, the values array must have a single element, which will be interpreted as an integer. This array is replaced during a strategic merge patch. 3.1.77. .spec.install.spec.deployments[].spec.template.spec.affinity.nodeAffinity.requiredDuringSchedulingIgnoredDuringExecution.nodeSelectorTerms[].matchFields Description A list of node selector requirements by node's fields. Type array 3.1.78. .spec.install.spec.deployments[].spec.template.spec.affinity.nodeAffinity.requiredDuringSchedulingIgnoredDuringExecution.nodeSelectorTerms[].matchFields[] Description A node selector requirement is a selector that contains values, a key, and an operator that relates the key and values.
Type object Required key operator Property Type Description key string The label key that the selector applies to. operator string Represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists, DoesNotExist, Gt, and Lt. values array (string) An array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. If the operator is Gt or Lt, the values array must have a single element, which will be interpreted as an integer. This array is replaced during a strategic merge patch. 3.1.79. .spec.install.spec.deployments[].spec.template.spec.affinity.podAffinity Description Describes pod affinity scheduling rules (e.g. co-locate this pod in the same node, zone, etc. as some other pod(s)). Type object Property Type Description preferredDuringSchedulingIgnoredDuringExecution array The scheduler will prefer to schedule pods to nodes that satisfy the affinity expressions specified by this field, but it may choose a node that violates one or more of the expressions. The node that is most preferred is the one with the greatest sum of weights, i.e. for each node that meets all of the scheduling requirements (resource request, requiredDuringScheduling affinity expressions, etc.), compute a sum by iterating through the elements of this field and adding "weight" to the sum if the node has pods which match the corresponding podAffinityTerm; the node(s) with the highest sum are the most preferred. preferredDuringSchedulingIgnoredDuringExecution[] object The weights of all of the matched WeightedPodAffinityTerm fields are added per-node to find the most preferred node(s) requiredDuringSchedulingIgnoredDuringExecution array If the affinity requirements specified by this field are not met at scheduling time, the pod will not be scheduled onto the node. If the affinity requirements specified by this field cease to be met at some point during pod execution (e.g. due to a pod label update), the system may or may not try to eventually evict the pod from its node. When there are multiple elements, the lists of nodes corresponding to each podAffinityTerm are intersected, i.e. all terms must be satisfied. requiredDuringSchedulingIgnoredDuringExecution[] object Defines a set of pods (namely those matching the labelSelector relative to the given namespace(s)) that this pod should be co-located (affinity) or not co-located (anti-affinity) with, where co-located is defined as running on a node whose value of the label with key <topologyKey> matches that of any node on which a pod of the set of pods is running 3.1.80. .spec.install.spec.deployments[].spec.template.spec.affinity.podAffinity.preferredDuringSchedulingIgnoredDuringExecution Description The scheduler will prefer to schedule pods to nodes that satisfy the affinity expressions specified by this field, but it may choose a node that violates one or more of the expressions. The node that is most preferred is the one with the greatest sum of weights, i.e. for each node that meets all of the scheduling requirements (resource request, requiredDuringScheduling affinity expressions, etc.), compute a sum by iterating through the elements of this field and adding "weight" to the sum if the node has pods which match the corresponding podAffinityTerm; the node(s) with the highest sum are the most preferred. Type array
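Analogously to the node affinity sketch above, the following is a hedged sketch of a weighted pod affinity term under the same pod template; the app label is a placeholder:

affinity:
  podAffinity:
    preferredDuringSchedulingIgnoredDuringExecution:
      - weight: 100                     # range 1-100
        podAffinityTerm:
          labelSelector:
            matchLabels:
              app: example              # hypothetical label
          topologyKey: kubernetes.io/hostname

3.1.81.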
.spec.install.spec.deployments[].spec.template.spec.affinity.podAffinity.preferredDuringSchedulingIgnoredDuringExecution[] Description The weights of all of the matched WeightedPodAffinityTerm fields are added per-node to find the most preferred node(s) Type object Required podAffinityTerm weight Property Type Description podAffinityTerm object Required. A pod affinity term, associated with the corresponding weight. weight integer weight associated with matching the corresponding podAffinityTerm, in the range 1-100. 3.1.82. .spec.install.spec.deployments[].spec.template.spec.affinity.podAffinity.preferredDuringSchedulingIgnoredDuringExecution[].podAffinityTerm Description Required. A pod affinity term, associated with the corresponding weight. Type object Required topologyKey Property Type Description labelSelector object A label query over a set of resources, in this case pods. namespaceSelector object A label query over the set of namespaces that the term applies to. The term is applied to the union of the namespaces selected by this field and the ones listed in the namespaces field. null selector and null or empty namespaces list means "this pod's namespace". An empty selector ({}) matches all namespaces. namespaces array (string) namespaces specifies a static list of namespace names that the term applies to. The term is applied to the union of the namespaces listed in this field and the ones selected by namespaceSelector. null or empty namespaces list and null namespaceSelector means "this pod's namespace". topologyKey string This pod should be co-located (affinity) or not co-located (anti-affinity) with the pods matching the labelSelector in the specified namespaces, where co-located is defined as running on a node whose value of the label with key topologyKey matches that of any node on which any of the selected pods is running. Empty topologyKey is not allowed. 3.1.83. .spec.install.spec.deployments[].spec.template.spec.affinity.podAffinity.preferredDuringSchedulingIgnoredDuringExecution[].podAffinityTerm.labelSelector Description A label query over a set of resources, in this case pods. Type object Property Type Description matchExpressions array matchExpressions is a list of label selector requirements. The requirements are ANDed. matchExpressions[] object A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. matchLabels object (string) matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed. 3.1.84. .spec.install.spec.deployments[].spec.template.spec.affinity.podAffinity.preferredDuringSchedulingIgnoredDuringExecution[].podAffinityTerm.labelSelector.matchExpressions Description matchExpressions is a list of label selector requirements. The requirements are ANDed. Type array 3.1.85. .spec.install.spec.deployments[].spec.template.spec.affinity.podAffinity.preferredDuringSchedulingIgnoredDuringExecution[].podAffinityTerm.labelSelector.matchExpressions[] Description A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Type object Required key operator Property Type Description key string key is the label key that the selector applies to. operator string operator represents a key's relationship to a set of values. 
Valid operators are In, NotIn, Exists and DoesNotExist. values array (string) values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch. 3.1.86. .spec.install.spec.deployments[].spec.template.spec.affinity.podAffinity.preferredDuringSchedulingIgnoredDuringExecution[].podAffinityTerm.namespaceSelector Description A label query over the set of namespaces that the term applies to. The term is applied to the union of the namespaces selected by this field and the ones listed in the namespaces field. null selector and null or empty namespaces list means "this pod's namespace". An empty selector ({}) matches all namespaces. Type object Property Type Description matchExpressions array matchExpressions is a list of label selector requirements. The requirements are ANDed. matchExpressions[] object A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. matchLabels object (string) matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed. 3.1.87. .spec.install.spec.deployments[].spec.template.spec.affinity.podAffinity.preferredDuringSchedulingIgnoredDuringExecution[].podAffinityTerm.namespaceSelector.matchExpressions Description matchExpressions is a list of label selector requirements. The requirements are ANDed. Type array 3.1.88. .spec.install.spec.deployments[].spec.template.spec.affinity.podAffinity.preferredDuringSchedulingIgnoredDuringExecution[].podAffinityTerm.namespaceSelector.matchExpressions[] Description A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Type object Required key operator Property Type Description key string key is the label key that the selector applies to. operator string operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. values array (string) values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch. 3.1.89. .spec.install.spec.deployments[].spec.template.spec.affinity.podAffinity.requiredDuringSchedulingIgnoredDuringExecution Description If the affinity requirements specified by this field are not met at scheduling time, the pod will not be scheduled onto the node. If the affinity requirements specified by this field cease to be met at some point during pod execution (e.g. due to a pod label update), the system may or may not try to eventually evict the pod from its node. When there are multiple elements, the lists of nodes corresponding to each podAffinityTerm are intersected, i.e. all terms must be satisfied. Type array 3.1.90. 
.spec.install.spec.deployments[].spec.template.spec.affinity.podAffinity.requiredDuringSchedulingIgnoredDuringExecution[] Description Defines a set of pods (namely those matching the labelSelector relative to the given namespace(s)) that this pod should be co-located (affinity) or not co-located (anti-affinity) with, where co-located is defined as running on a node whose value of the label with key <topologyKey> matches that of any node on which a pod of the set of pods is running Type object Required topologyKey Property Type Description labelSelector object A label query over a set of resources, in this case pods. namespaceSelector object A label query over the set of namespaces that the term applies to. The term is applied to the union of the namespaces selected by this field and the ones listed in the namespaces field. null selector and null or empty namespaces list means "this pod's namespace". An empty selector ({}) matches all namespaces. namespaces array (string) namespaces specifies a static list of namespace names that the term applies to. The term is applied to the union of the namespaces listed in this field and the ones selected by namespaceSelector. null or empty namespaces list and null namespaceSelector means "this pod's namespace". topologyKey string This pod should be co-located (affinity) or not co-located (anti-affinity) with the pods matching the labelSelector in the specified namespaces, where co-located is defined as running on a node whose value of the label with key topologyKey matches that of any node on which any of the selected pods is running. Empty topologyKey is not allowed. 3.1.91. .spec.install.spec.deployments[].spec.template.spec.affinity.podAffinity.requiredDuringSchedulingIgnoredDuringExecution[].labelSelector Description A label query over a set of resources, in this case pods. Type object Property Type Description matchExpressions array matchExpressions is a list of label selector requirements. The requirements are ANDed. matchExpressions[] object A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. matchLabels object (string) matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed. 3.1.92. .spec.install.spec.deployments[].spec.template.spec.affinity.podAffinity.requiredDuringSchedulingIgnoredDuringExecution[].labelSelector.matchExpressions Description matchExpressions is a list of label selector requirements. The requirements are ANDed. Type array 3.1.93. .spec.install.spec.deployments[].spec.template.spec.affinity.podAffinity.requiredDuringSchedulingIgnoredDuringExecution[].labelSelector.matchExpressions[] Description A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Type object Required key operator Property Type Description key string key is the label key that the selector applies to. operator string operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. values array (string) values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch. 3.1.94. 
.spec.install.spec.deployments[].spec.template.spec.affinity.podAffinity.requiredDuringSchedulingIgnoredDuringExecution[].namespaceSelector Description A label query over the set of namespaces that the term applies to. The term is applied to the union of the namespaces selected by this field and the ones listed in the namespaces field. null selector and null or empty namespaces list means "this pod's namespace". An empty selector ({}) matches all namespaces. Type object Property Type Description matchExpressions array matchExpressions is a list of label selector requirements. The requirements are ANDed. matchExpressions[] object A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. matchLabels object (string) matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed. 3.1.95. .spec.install.spec.deployments[].spec.template.spec.affinity.podAffinity.requiredDuringSchedulingIgnoredDuringExecution[].namespaceSelector.matchExpressions Description matchExpressions is a list of label selector requirements. The requirements are ANDed. Type array 3.1.96. .spec.install.spec.deployments[].spec.template.spec.affinity.podAffinity.requiredDuringSchedulingIgnoredDuringExecution[].namespaceSelector.matchExpressions[] Description A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Type object Required key operator Property Type Description key string key is the label key that the selector applies to. operator string operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. values array (string) values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch. 3.1.97. .spec.install.spec.deployments[].spec.template.spec.affinity.podAntiAffinity Description Describes pod anti-affinity scheduling rules (e.g. avoid putting this pod in the same node, zone, etc. as some other pod(s)). Type object Property Type Description preferredDuringSchedulingIgnoredDuringExecution array The scheduler will prefer to schedule pods to nodes that satisfy the anti-affinity expressions specified by this field, but it may choose a node that violates one or more of the expressions. The node that is most preferred is the one with the greatest sum of weights, i.e. for each node that meets all of the scheduling requirements (resource request, requiredDuringScheduling anti-affinity expressions, etc.), compute a sum by iterating through the elements of this field and adding "weight" to the sum if the node has pods which match the corresponding podAffinityTerm; the node(s) with the highest sum are the most preferred. preferredDuringSchedulingIgnoredDuringExecution[] object The weights of all of the matched WeightedPodAffinityTerm fields are added per-node to find the most preferred node(s) requiredDuringSchedulingIgnoredDuringExecution array If the anti-affinity requirements specified by this field are not met at scheduling time, the pod will not be scheduled onto the node. If the anti-affinity requirements specified by this field cease to be met at some point during pod execution (e.g.
due to a pod label update), the system may or may not try to eventually evict the pod from its node. When there are multiple elements, the lists of nodes corresponding to each podAffinityTerm are intersected, i.e. all terms must be satisfied. requiredDuringSchedulingIgnoredDuringExecution[] object Defines a set of pods (namely those matching the labelSelector relative to the given namespace(s)) that this pod should be co-located (affinity) or not co-located (anti-affinity) with, where co-located is defined as running on a node whose value of the label with key <topologyKey> matches that of any node on which a pod of the set of pods is running 3.1.98. .spec.install.spec.deployments[].spec.template.spec.affinity.podAntiAffinity.preferredDuringSchedulingIgnoredDuringExecution Description The scheduler will prefer to schedule pods to nodes that satisfy the anti-affinity expressions specified by this field, but it may choose a node that violates one or more of the expressions. The node that is most preferred is the one with the greatest sum of weights, i.e. for each node that meets all of the scheduling requirements (resource request, requiredDuringScheduling anti-affinity expressions, etc.), compute a sum by iterating through the elements of this field and adding "weight" to the sum if the node has pods which matches the corresponding podAffinityTerm; the node(s) with the highest sum are the most preferred. Type array 3.1.99. .spec.install.spec.deployments[].spec.template.spec.affinity.podAntiAffinity.preferredDuringSchedulingIgnoredDuringExecution[] Description The weights of all of the matched WeightedPodAffinityTerm fields are added per-node to find the most preferred node(s) Type object Required podAffinityTerm weight Property Type Description podAffinityTerm object Required. A pod affinity term, associated with the corresponding weight. weight integer weight associated with matching the corresponding podAffinityTerm, in the range 1-100. 3.1.100. .spec.install.spec.deployments[].spec.template.spec.affinity.podAntiAffinity.preferredDuringSchedulingIgnoredDuringExecution[].podAffinityTerm Description Required. A pod affinity term, associated with the corresponding weight. Type object Required topologyKey Property Type Description labelSelector object A label query over a set of resources, in this case pods. namespaceSelector object A label query over the set of namespaces that the term applies to. The term is applied to the union of the namespaces selected by this field and the ones listed in the namespaces field. null selector and null or empty namespaces list means "this pod's namespace". An empty selector ({}) matches all namespaces. namespaces array (string) namespaces specifies a static list of namespace names that the term applies to. The term is applied to the union of the namespaces listed in this field and the ones selected by namespaceSelector. null or empty namespaces list and null namespaceSelector means "this pod's namespace". topologyKey string This pod should be co-located (affinity) or not co-located (anti-affinity) with the pods matching the labelSelector in the specified namespaces, where co-located is defined as running on a node whose value of the label with key topologyKey matches that of any node on which any of the selected pods is running. Empty topologyKey is not allowed. 3.1.101. 
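For illustration, a pod template fragment in a ClusterServiceVersion deployment might use a required anti-affinity term to keep replicas on separate nodes. This is a minimal sketch; the app label, its value, and the topology key choice are hypothetical:

affinity:
  podAntiAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
    - labelSelector:
        matchExpressions:
        - key: app                          # hypothetical pod label
          operator: In
          values:
          - example-operator
      topologyKey: kubernetes.io/hostname   # at most one matching pod per node

Because the term is required rather than preferred, a replica that cannot be placed on a node free of matching pods stays unscheduled.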
3.1.101. .spec.install.spec.deployments[].spec.template.spec.affinity.podAntiAffinity.preferredDuringSchedulingIgnoredDuringExecution[].podAffinityTerm.labelSelector Description A label query over a set of resources, in this case pods. Type object Property Type Description matchExpressions array matchExpressions is a list of label selector requirements. The requirements are ANDed. matchExpressions[] object A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. matchLabels object (string) matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed.
3.1.102. .spec.install.spec.deployments[].spec.template.spec.affinity.podAntiAffinity.preferredDuringSchedulingIgnoredDuringExecution[].podAffinityTerm.labelSelector.matchExpressions Description matchExpressions is a list of label selector requirements. The requirements are ANDed. Type array
3.1.103. .spec.install.spec.deployments[].spec.template.spec.affinity.podAntiAffinity.preferredDuringSchedulingIgnoredDuringExecution[].podAffinityTerm.labelSelector.matchExpressions[] Description A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Type object Required key operator Property Type Description key string key is the label key that the selector applies to. operator string operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. values array (string) values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch.
3.1.104. .spec.install.spec.deployments[].spec.template.spec.affinity.podAntiAffinity.preferredDuringSchedulingIgnoredDuringExecution[].podAffinityTerm.namespaceSelector Description A label query over the set of namespaces that the term applies to. The term is applied to the union of the namespaces selected by this field and the ones listed in the namespaces field. null selector and null or empty namespaces list means "this pod's namespace". An empty selector ({}) matches all namespaces. Type object Property Type Description matchExpressions array matchExpressions is a list of label selector requirements. The requirements are ANDed. matchExpressions[] object A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. matchLabels object (string) matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed.
3.1.105. .spec.install.spec.deployments[].spec.template.spec.affinity.podAntiAffinity.preferredDuringSchedulingIgnoredDuringExecution[].podAffinityTerm.namespaceSelector.matchExpressions Description matchExpressions is a list of label selector requirements. The requirements are ANDed. Type array 3.1.106.
.spec.install.spec.deployments[].spec.template.spec.affinity.podAntiAffinity.preferredDuringSchedulingIgnoredDuringExecution[].podAffinityTerm.namespaceSelector.matchExpressions[] Description A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Type object Required key operator Property Type Description key string key is the label key that the selector applies to. operator string operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. values array (string) values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch.
3.1.107. .spec.install.spec.deployments[].spec.template.spec.affinity.podAntiAffinity.requiredDuringSchedulingIgnoredDuringExecution Description If the anti-affinity requirements specified by this field are not met at scheduling time, the pod will not be scheduled onto the node. If the anti-affinity requirements specified by this field cease to be met at some point during pod execution (e.g. due to a pod label update), the system may or may not try to eventually evict the pod from its node. When there are multiple elements, the lists of nodes corresponding to each podAffinityTerm are intersected, i.e. all terms must be satisfied. Type array
3.1.108. .spec.install.spec.deployments[].spec.template.spec.affinity.podAntiAffinity.requiredDuringSchedulingIgnoredDuringExecution[] Description Defines a set of pods (namely those matching the labelSelector relative to the given namespace(s)) that this pod should be co-located (affinity) or not co-located (anti-affinity) with, where co-located is defined as running on a node whose value of the label with key <topologyKey> matches that of any node on which a pod of the set of pods is running Type object Required topologyKey Property Type Description labelSelector object A label query over a set of resources, in this case pods. namespaceSelector object A label query over the set of namespaces that the term applies to. The term is applied to the union of the namespaces selected by this field and the ones listed in the namespaces field. null selector and null or empty namespaces list means "this pod's namespace". An empty selector ({}) matches all namespaces. namespaces array (string) namespaces specifies a static list of namespace names that the term applies to. The term is applied to the union of the namespaces listed in this field and the ones selected by namespaceSelector. null or empty namespaces list and null namespaceSelector means "this pod's namespace". topologyKey string This pod should be co-located (affinity) or not co-located (anti-affinity) with the pods matching the labelSelector in the specified namespaces, where co-located is defined as running on a node whose value of the label with key topologyKey matches that of any node on which any of the selected pods is running. Empty topologyKey is not allowed.
3.1.109. .spec.install.spec.deployments[].spec.template.spec.affinity.podAntiAffinity.requiredDuringSchedulingIgnoredDuringExecution[].labelSelector Description A label query over a set of resources, in this case pods. Type object Property Type Description matchExpressions array matchExpressions is a list of label selector requirements. The requirements are ANDed.
matchExpressions[] object A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. matchLabels object (string) matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed.
3.1.110. .spec.install.spec.deployments[].spec.template.spec.affinity.podAntiAffinity.requiredDuringSchedulingIgnoredDuringExecution[].labelSelector.matchExpressions Description matchExpressions is a list of label selector requirements. The requirements are ANDed. Type array
3.1.111. .spec.install.spec.deployments[].spec.template.spec.affinity.podAntiAffinity.requiredDuringSchedulingIgnoredDuringExecution[].labelSelector.matchExpressions[] Description A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Type object Required key operator Property Type Description key string key is the label key that the selector applies to. operator string operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. values array (string) values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch.
3.1.112. .spec.install.spec.deployments[].spec.template.spec.affinity.podAntiAffinity.requiredDuringSchedulingIgnoredDuringExecution[].namespaceSelector Description A label query over the set of namespaces that the term applies to. The term is applied to the union of the namespaces selected by this field and the ones listed in the namespaces field. null selector and null or empty namespaces list means "this pod's namespace". An empty selector ({}) matches all namespaces. Type object Property Type Description matchExpressions array matchExpressions is a list of label selector requirements. The requirements are ANDed. matchExpressions[] object A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. matchLabels object (string) matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed.
3.1.113. .spec.install.spec.deployments[].spec.template.spec.affinity.podAntiAffinity.requiredDuringSchedulingIgnoredDuringExecution[].namespaceSelector.matchExpressions Description matchExpressions is a list of label selector requirements. The requirements are ANDed. Type array
3.1.114. .spec.install.spec.deployments[].spec.template.spec.affinity.podAntiAffinity.requiredDuringSchedulingIgnoredDuringExecution[].namespaceSelector.matchExpressions[] Description A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Type object Required key operator Property Type Description key string key is the label key that the selector applies to. operator string operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. values array (string) values is an array of string values. If the operator is In or NotIn, the values array must be non-empty.
If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch.
3.1.115. .spec.install.spec.deployments[].spec.template.spec.containers Description List of containers belonging to the pod. Containers cannot currently be added or removed. There must be at least one container in a Pod. Cannot be updated. Type array
3.1.116. .spec.install.spec.deployments[].spec.template.spec.containers[] Description A single application container that you want to run within a pod. Type object Required name Property Type Description args array (string) Arguments to the entrypoint. The container image's CMD is used if this is not provided. Variable references $(VAR_NAME) are expanded using the container's environment. If a variable cannot be resolved, the reference in the input string will be unchanged. Double $$ are reduced to a single $, which allows for escaping the $(VAR_NAME) syntax: i.e. "$$(VAR_NAME)" will produce the string literal "$(VAR_NAME)". Escaped references will never be expanded, regardless of whether the variable exists or not. Cannot be updated. More info: https://kubernetes.io/docs/tasks/inject-data-application/define-command-argument-container/#running-a-command-in-a-shell command array (string) Entrypoint array. Not executed within a shell. The container image's ENTRYPOINT is used if this is not provided. Variable references $(VAR_NAME) are expanded using the container's environment. If a variable cannot be resolved, the reference in the input string will be unchanged. Double $$ are reduced to a single $, which allows for escaping the $(VAR_NAME) syntax: i.e. "$$(VAR_NAME)" will produce the string literal "$(VAR_NAME)". Escaped references will never be expanded, regardless of whether the variable exists or not. Cannot be updated. More info: https://kubernetes.io/docs/tasks/inject-data-application/define-command-argument-container/#running-a-command-in-a-shell env array List of environment variables to set in the container. Cannot be updated. env[] object EnvVar represents an environment variable present in a Container. envFrom array List of sources to populate environment variables in the container. The keys defined within a source must be a C_IDENTIFIER. All invalid keys will be reported as an event when the container is starting. When a key exists in multiple sources, the value associated with the last source will take precedence. Values defined by an Env with a duplicate key will take precedence. Cannot be updated. envFrom[] object EnvFromSource represents the source of a set of ConfigMaps image string Container image name. More info: https://kubernetes.io/docs/concepts/containers/images This field is optional to allow higher level config management to default or override container images in workload controllers like Deployments and StatefulSets. imagePullPolicy string Image pull policy. One of Always, Never, IfNotPresent. Defaults to Always if :latest tag is specified, or IfNotPresent otherwise. Cannot be updated. More info: https://kubernetes.io/docs/concepts/containers/images#updating-images lifecycle object Actions that the management system should take in response to container lifecycle events. Cannot be updated. livenessProbe object Periodic probe of container liveness. Container will be restarted if the probe fails. Cannot be updated. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes name string Name of the container specified as a DNS_LABEL.
Each container in a pod must have a unique name (DNS_LABEL). Cannot be updated. ports array List of ports to expose from the container. Not specifying a port here DOES NOT prevent that port from being exposed. Any port which is listening on the default "0.0.0.0" address inside a container will be accessible from the network. Modifying this array with strategic merge patch may corrupt the data. For more information, see https://github.com/kubernetes/kubernetes/issues/108255. Cannot be updated. ports[] object ContainerPort represents a network port in a single container. readinessProbe object Periodic probe of container service readiness. Container will be removed from service endpoints if the probe fails. Cannot be updated. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes resources object Compute Resources required by this container. Cannot be updated. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ securityContext object SecurityContext defines the security options the container should be run with. If set, the fields of SecurityContext override the equivalent fields of PodSecurityContext. More info: https://kubernetes.io/docs/tasks/configure-pod-container/security-context/ startupProbe object StartupProbe indicates that the Pod has successfully initialized. If specified, no other probes are executed until this completes successfully. If this probe fails, the Pod will be restarted, just as if the livenessProbe failed. This can be used to provide different probe parameters at the beginning of a Pod's lifecycle, when it might take a long time to load data or warm a cache, than during steady-state operation. This cannot be updated. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes stdin boolean Whether this container should allocate a buffer for stdin in the container runtime. If this is not set, reads from stdin in the container will always result in EOF. Default is false. stdinOnce boolean Whether the container runtime should close the stdin channel after it has been opened by a single attach. When stdin is true the stdin stream will remain open across multiple attach sessions. If stdinOnce is set to true, stdin is opened on container start, is empty until the first client attaches to stdin, and then remains open and accepts data until the client disconnects, at which time stdin is closed and remains closed until the container is restarted. If this flag is false, a container process that reads from stdin will never receive an EOF. Default is false. terminationMessagePath string Optional: Path at which the file to which the container's termination message will be written is mounted into the container's filesystem. Message written is intended to be brief final status, such as an assertion failure message. Will be truncated by the node if greater than 4096 bytes. The total message length across all containers will be limited to 12kb. Defaults to /dev/termination-log. Cannot be updated. terminationMessagePolicy string Indicates how the termination message should be populated. File will use the contents of terminationMessagePath to populate the container status message on both success and failure. FallbackToLogsOnError will use the last chunk of container log output if the termination message file is empty and the container exited with an error. The log output is limited to 2048 bytes or 80 lines, whichever is smaller. Defaults to File. Cannot be updated.
tty boolean Whether this container should allocate a TTY for itself, also requires 'stdin' to be true. Default is false. volumeDevices array volumeDevices is the list of block devices to be used by the container. volumeDevices[] object volumeDevice describes a mapping of a raw block device within a container. volumeMounts array Pod volumes to mount into the container's filesystem. Cannot be updated. volumeMounts[] object VolumeMount describes a mounting of a Volume within a container. workingDir string Container's working directory. If not specified, the container runtime's default will be used, which might be configured in the container image. Cannot be updated.
3.1.117. .spec.install.spec.deployments[].spec.template.spec.containers[].env Description List of environment variables to set in the container. Cannot be updated. Type array
3.1.118. .spec.install.spec.deployments[].spec.template.spec.containers[].env[] Description EnvVar represents an environment variable present in a Container. Type object Required name Property Type Description name string Name of the environment variable. Must be a C_IDENTIFIER. value string Variable references $(VAR_NAME) are expanded using the previously defined environment variables in the container and any service environment variables. If a variable cannot be resolved, the reference in the input string will be unchanged. Double $$ are reduced to a single $, which allows for escaping the $(VAR_NAME) syntax: i.e. "$$(VAR_NAME)" will produce the string literal "$(VAR_NAME)". Escaped references will never be expanded, regardless of whether the variable exists or not. Defaults to "". valueFrom object Source for the environment variable's value. Cannot be used if value is not empty.
3.1.119. .spec.install.spec.deployments[].spec.template.spec.containers[].env[].valueFrom Description Source for the environment variable's value. Cannot be used if value is not empty. Type object Property Type Description configMapKeyRef object Selects a key of a ConfigMap. fieldRef object Selects a field of the pod: supports metadata.name, metadata.namespace, metadata.labels['<KEY>'], metadata.annotations['<KEY>'], spec.nodeName, spec.serviceAccountName, status.hostIP, status.podIP, status.podIPs. resourceFieldRef object Selects a resource of the container: only resources limits and requests (limits.cpu, limits.memory, limits.ephemeral-storage, requests.cpu, requests.memory and requests.ephemeral-storage) are currently supported. secretKeyRef object Selects a key of a secret in the pod's namespace
3.1.120. .spec.install.spec.deployments[].spec.template.spec.containers[].env[].valueFrom.configMapKeyRef Description Selects a key of a ConfigMap. Type object Required key Property Type Description key string The key to select. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names optional boolean Specify whether the ConfigMap or its key must be defined
3.1.121. .spec.install.spec.deployments[].spec.template.spec.containers[].env[].valueFrom.fieldRef Description Selects a field of the pod: supports metadata.name, metadata.namespace, metadata.labels['<KEY>'], metadata.annotations['<KEY>'], spec.nodeName, spec.serviceAccountName, status.hostIP, status.podIP, status.podIPs. Type object Required fieldPath Property Type Description apiVersion string Version of the schema the FieldPath is written in terms of, defaults to "v1".
fieldPath string Path of the field to select in the specified API version.
3.1.122. .spec.install.spec.deployments[].spec.template.spec.containers[].env[].valueFrom.resourceFieldRef Description Selects a resource of the container: only resources limits and requests (limits.cpu, limits.memory, limits.ephemeral-storage, requests.cpu, requests.memory and requests.ephemeral-storage) are currently supported. Type object Required resource Property Type Description containerName string Container name: required for volumes, optional for env vars divisor integer-or-string Specifies the output format of the exposed resources, defaults to "1" resource string Required: resource to select
3.1.123. .spec.install.spec.deployments[].spec.template.spec.containers[].env[].valueFrom.secretKeyRef Description Selects a key of a secret in the pod's namespace Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names optional boolean Specify whether the Secret or its key must be defined
3.1.124. .spec.install.spec.deployments[].spec.template.spec.containers[].envFrom Description List of sources to populate environment variables in the container. The keys defined within a source must be a C_IDENTIFIER. All invalid keys will be reported as an event when the container is starting. When a key exists in multiple sources, the value associated with the last source will take precedence. Values defined by an Env with a duplicate key will take precedence. Cannot be updated. Type array
3.1.125. .spec.install.spec.deployments[].spec.template.spec.containers[].envFrom[] Description EnvFromSource represents the source of a set of ConfigMaps Type object Property Type Description configMapRef object The ConfigMap to select from prefix string An optional identifier to prepend to each key in the ConfigMap. Must be a C_IDENTIFIER. secretRef object The Secret to select from
3.1.126. .spec.install.spec.deployments[].spec.template.spec.containers[].envFrom[].configMapRef Description The ConfigMap to select from Type object Property Type Description name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names optional boolean Specify whether the ConfigMap must be defined
3.1.127. .spec.install.spec.deployments[].spec.template.spec.containers[].envFrom[].secretRef Description The Secret to select from Type object Property Type Description name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names optional boolean Specify whether the Secret must be defined
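As a sketch of how these sources combine, a container might take one literal variable, one downward API reference, one Secret key, and an entire ConfigMap; every referenced name below is hypothetical:

env:
- name: LOG_LEVEL                  # literal value
  value: debug
- name: POD_NAMESPACE              # downward API (fieldRef)
  valueFrom:
    fieldRef:
      fieldPath: metadata.namespace
- name: API_TOKEN                  # key from a Secret in the pod's namespace
  valueFrom:
    secretKeyRef:
      name: example-credentials
      key: token
envFrom:
- prefix: CFG_                     # prepended to every key in the ConfigMap
  configMapRef:
    name: example-config

If example-config defined a key named timeout, the container would see it as CFG_timeout; a later source or an env entry with a duplicate key would take precedence.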
3.1.128. .spec.install.spec.deployments[].spec.template.spec.containers[].lifecycle Description Actions that the management system should take in response to container lifecycle events. Cannot be updated. Type object Property Type Description postStart object PostStart is called immediately after a container is created. If the handler fails, the container is terminated and restarted according to its restart policy. Other management of the container blocks until the hook completes. More info: https://kubernetes.io/docs/concepts/containers/container-lifecycle-hooks/#container-hooks preStop object PreStop is called immediately before a container is terminated due to an API request or management event such as liveness/startup probe failure, preemption, resource contention, etc. The handler is not called if the container crashes or exits. The Pod's termination grace period countdown begins before the PreStop hook is executed. Regardless of the outcome of the handler, the container will eventually terminate within the Pod's termination grace period (unless delayed by finalizers). Other management of the container blocks until the hook completes or until the termination grace period is reached. More info: https://kubernetes.io/docs/concepts/containers/container-lifecycle-hooks/#container-hooks
3.1.129. .spec.install.spec.deployments[].spec.template.spec.containers[].lifecycle.postStart Description PostStart is called immediately after a container is created. If the handler fails, the container is terminated and restarted according to its restart policy. Other management of the container blocks until the hook completes. More info: https://kubernetes.io/docs/concepts/containers/container-lifecycle-hooks/#container-hooks Type object Property Type Description exec object Exec specifies the action to take. httpGet object HTTPGet specifies the http request to perform. tcpSocket object Deprecated. TCPSocket is NOT supported as a LifecycleHandler and is kept for backward compatibility. There is no validation of this field, and lifecycle hooks will fail at runtime when a tcp handler is specified.
3.1.130. .spec.install.spec.deployments[].spec.template.spec.containers[].lifecycle.postStart.exec Description Exec specifies the action to take. Type object Property Type Description command array (string) Command is the command line to execute inside the container, the working directory for the command is root ('/') in the container's filesystem. The command is simply exec'd, it is not run inside a shell, so traditional shell instructions ('|', etc) won't work. To use a shell, you need to explicitly call out to that shell. Exit status of 0 is treated as live/healthy and non-zero is unhealthy.
3.1.131. .spec.install.spec.deployments[].spec.template.spec.containers[].lifecycle.postStart.httpGet Description HTTPGet specifies the http request to perform. Type object Required port Property Type Description host string Host name to connect to, defaults to the pod IP. You probably want to set "Host" in httpHeaders instead. httpHeaders array Custom headers to set in the request. HTTP allows repeated headers. httpHeaders[] object HTTPHeader describes a custom header to be used in HTTP probes path string Path to access on the HTTP server. port integer-or-string Name or number of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. scheme string Scheme to use for connecting to the host. Defaults to HTTP.
3.1.132. .spec.install.spec.deployments[].spec.template.spec.containers[].lifecycle.postStart.httpGet.httpHeaders Description Custom headers to set in the request. HTTP allows repeated headers. Type array
3.1.133. .spec.install.spec.deployments[].spec.template.spec.containers[].lifecycle.postStart.httpGet.httpHeaders[] Description HTTPHeader describes a custom header to be used in HTTP probes Type object Required name value Property Type Description name string The header field name value string The header field value
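A hypothetical postStart hook illustrating the exec handler; the command is arbitrary, and because the hook is exec'd directly rather than run in a shell, a shell is invoked explicitly:

lifecycle:
  postStart:
    exec:
      command:
      - /bin/sh
      - -c
      - echo started > /tmp/started   # runs right after container creation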
3.1.134. .spec.install.spec.deployments[].spec.template.spec.containers[].lifecycle.postStart.tcpSocket Description Deprecated. TCPSocket is NOT supported as a LifecycleHandler and is kept for backward compatibility. There is no validation of this field, and lifecycle hooks will fail at runtime when a tcp handler is specified. Type object Required port Property Type Description host string Optional: Host name to connect to, defaults to the pod IP. port integer-or-string Number or name of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME.
3.1.135. .spec.install.spec.deployments[].spec.template.spec.containers[].lifecycle.preStop Description PreStop is called immediately before a container is terminated due to an API request or management event such as liveness/startup probe failure, preemption, resource contention, etc. The handler is not called if the container crashes or exits. The Pod's termination grace period countdown begins before the PreStop hook is executed. Regardless of the outcome of the handler, the container will eventually terminate within the Pod's termination grace period (unless delayed by finalizers). Other management of the container blocks until the hook completes or until the termination grace period is reached. More info: https://kubernetes.io/docs/concepts/containers/container-lifecycle-hooks/#container-hooks Type object Property Type Description exec object Exec specifies the action to take. httpGet object HTTPGet specifies the http request to perform. tcpSocket object Deprecated. TCPSocket is NOT supported as a LifecycleHandler and is kept for backward compatibility. There is no validation of this field, and lifecycle hooks will fail at runtime when a tcp handler is specified.
3.1.136. .spec.install.spec.deployments[].spec.template.spec.containers[].lifecycle.preStop.exec Description Exec specifies the action to take. Type object Property Type Description command array (string) Command is the command line to execute inside the container, the working directory for the command is root ('/') in the container's filesystem. The command is simply exec'd, it is not run inside a shell, so traditional shell instructions ('|', etc) won't work. To use a shell, you need to explicitly call out to that shell. Exit status of 0 is treated as live/healthy and non-zero is unhealthy.
3.1.137. .spec.install.spec.deployments[].spec.template.spec.containers[].lifecycle.preStop.httpGet Description HTTPGet specifies the http request to perform. Type object Required port Property Type Description host string Host name to connect to, defaults to the pod IP. You probably want to set "Host" in httpHeaders instead. httpHeaders array Custom headers to set in the request. HTTP allows repeated headers. httpHeaders[] object HTTPHeader describes a custom header to be used in HTTP probes path string Path to access on the HTTP server. port integer-or-string Name or number of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. scheme string Scheme to use for connecting to the host. Defaults to HTTP.
3.1.138. .spec.install.spec.deployments[].spec.template.spec.containers[].lifecycle.preStop.httpGet.httpHeaders Description Custom headers to set in the request. HTTP allows repeated headers. Type array
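A preStop counterpart could use the httpGet handler instead; the endpoint, port, and header below are hypothetical, and the header entries follow the HTTPHeader schema described next:

lifecycle:
  preStop:
    httpGet:
      path: /shutdown
      port: 8443
      scheme: HTTPS
      httpHeaders:
      - name: X-Shutdown-Reason       # custom header sent with the request
        value: pod-termination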
3.1.139. .spec.install.spec.deployments[].spec.template.spec.containers[].lifecycle.preStop.httpGet.httpHeaders[] Description HTTPHeader describes a custom header to be used in HTTP probes Type object Required name value Property Type Description name string The header field name value string The header field value
3.1.140. .spec.install.spec.deployments[].spec.template.spec.containers[].lifecycle.preStop.tcpSocket Description Deprecated. TCPSocket is NOT supported as a LifecycleHandler and is kept for backward compatibility. There is no validation of this field, and lifecycle hooks will fail at runtime when a tcp handler is specified. Type object Required port Property Type Description host string Optional: Host name to connect to, defaults to the pod IP. port integer-or-string Number or name of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME.
3.1.141. .spec.install.spec.deployments[].spec.template.spec.containers[].livenessProbe Description Periodic probe of container liveness. Container will be restarted if the probe fails. Cannot be updated. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes Type object Property Type Description exec object Exec specifies the action to take. failureThreshold integer Minimum consecutive failures for the probe to be considered failed after having succeeded. Defaults to 3. Minimum value is 1. grpc object GRPC specifies an action involving a GRPC port. This is a beta field and requires enabling GRPCContainerProbe feature gate. httpGet object HTTPGet specifies the http request to perform. initialDelaySeconds integer Number of seconds after the container has started before liveness probes are initiated. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes periodSeconds integer How often (in seconds) to perform the probe. Defaults to 10 seconds. Minimum value is 1. successThreshold integer Minimum consecutive successes for the probe to be considered successful after having failed. Defaults to 1. Must be 1 for liveness and startup. Minimum value is 1. tcpSocket object TCPSocket specifies an action involving a TCP port. terminationGracePeriodSeconds integer Optional duration in seconds the pod needs to terminate gracefully upon probe failure. The grace period is the duration in seconds after the processes running in the pod are sent a termination signal and the time when the processes are forcibly halted with a kill signal. Set this value longer than the expected cleanup time for your process. If this value is nil, the pod's terminationGracePeriodSeconds will be used. Otherwise, this value overrides the value provided by the pod spec. Value must be non-negative integer. The value zero indicates stop immediately via the kill signal (no opportunity to shut down). This is a beta field and requires enabling ProbeTerminationGracePeriod feature gate. Minimum value is 1. spec.terminationGracePeriodSeconds is used if unset. timeoutSeconds integer Number of seconds after which the probe times out. Defaults to 1 second. Minimum value is 1. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes
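A minimal liveness probe sketch using the httpGet handler; the path, port, and timings are hypothetical:

livenessProbe:
  httpGet:
    path: /healthz
    port: 8080
  initialDelaySeconds: 15   # wait before the first probe
  periodSeconds: 20         # probe every 20 seconds
  failureThreshold: 3       # restart after 3 consecutive failures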
3.1.142. .spec.install.spec.deployments[].spec.template.spec.containers[].livenessProbe.exec Description Exec specifies the action to take. Type object Property Type Description command array (string) Command is the command line to execute inside the container, the working directory for the command is root ('/') in the container's filesystem. The command is simply exec'd, it is not run inside a shell, so traditional shell instructions ('|', etc) won't work. To use a shell, you need to explicitly call out to that shell. Exit status of 0 is treated as live/healthy and non-zero is unhealthy.
3.1.143. .spec.install.spec.deployments[].spec.template.spec.containers[].livenessProbe.grpc Description GRPC specifies an action involving a GRPC port. This is a beta field and requires enabling GRPCContainerProbe feature gate. Type object Required port Property Type Description port integer Port number of the gRPC service. Number must be in the range 1 to 65535. service string Service is the name of the service to place in the gRPC HealthCheckRequest (see https://github.com/grpc/grpc/blob/master/doc/health-checking.md). If this is not specified, the default behavior is defined by gRPC.
3.1.144. .spec.install.spec.deployments[].spec.template.spec.containers[].livenessProbe.httpGet Description HTTPGet specifies the http request to perform. Type object Required port Property Type Description host string Host name to connect to, defaults to the pod IP. You probably want to set "Host" in httpHeaders instead. httpHeaders array Custom headers to set in the request. HTTP allows repeated headers. httpHeaders[] object HTTPHeader describes a custom header to be used in HTTP probes path string Path to access on the HTTP server. port integer-or-string Name or number of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. scheme string Scheme to use for connecting to the host. Defaults to HTTP.
3.1.145. .spec.install.spec.deployments[].spec.template.spec.containers[].livenessProbe.httpGet.httpHeaders Description Custom headers to set in the request. HTTP allows repeated headers. Type array
3.1.146. .spec.install.spec.deployments[].spec.template.spec.containers[].livenessProbe.httpGet.httpHeaders[] Description HTTPHeader describes a custom header to be used in HTTP probes Type object Required name value Property Type Description name string The header field name value string The header field value
3.1.147. .spec.install.spec.deployments[].spec.template.spec.containers[].livenessProbe.tcpSocket Description TCPSocket specifies an action involving a TCP port. Type object Required port Property Type Description host string Optional: Host name to connect to, defaults to the pod IP. port integer-or-string Number or name of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME.
3.1.148. .spec.install.spec.deployments[].spec.template.spec.containers[].ports Description List of ports to expose from the container. Not specifying a port here DOES NOT prevent that port from being exposed. Any port which is listening on the default "0.0.0.0" address inside a container will be accessible from the network. Modifying this array with strategic merge patch may corrupt the data. For more information, see https://github.com/kubernetes/kubernetes/issues/108255. Cannot be updated. Type array
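For example, a container might expose a single named port (the name and number are hypothetical); the name must be an IANA_SVC_NAME and unique within the pod so that Services can refer to it:

ports:
- name: metrics            # referenced by Services via this name
  containerPort: 8443
  protocol: TCP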
3.1.149. .spec.install.spec.deployments[].spec.template.spec.containers[].ports[] Description ContainerPort represents a network port in a single container. Type object Required containerPort Property Type Description containerPort integer Number of port to expose on the pod's IP address. This must be a valid port number, 0 < x < 65536. hostIP string What host IP to bind the external port to. hostPort integer Number of port to expose on the host. If specified, this must be a valid port number, 0 < x < 65536. If HostNetwork is specified, this must match ContainerPort. Most containers do not need this. name string If specified, this must be an IANA_SVC_NAME and unique within the pod. Each named port in a pod must have a unique name. Name for the port that can be referred to by services. protocol string Protocol for port. Must be UDP, TCP, or SCTP. Defaults to "TCP".
3.1.150. .spec.install.spec.deployments[].spec.template.spec.containers[].readinessProbe Description Periodic probe of container service readiness. Container will be removed from service endpoints if the probe fails. Cannot be updated. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes Type object Property Type Description exec object Exec specifies the action to take. failureThreshold integer Minimum consecutive failures for the probe to be considered failed after having succeeded. Defaults to 3. Minimum value is 1. grpc object GRPC specifies an action involving a GRPC port. This is a beta field and requires enabling GRPCContainerProbe feature gate. httpGet object HTTPGet specifies the http request to perform. initialDelaySeconds integer Number of seconds after the container has started before readiness probes are initiated. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes periodSeconds integer How often (in seconds) to perform the probe. Defaults to 10 seconds. Minimum value is 1. successThreshold integer Minimum consecutive successes for the probe to be considered successful after having failed. Defaults to 1. Must be 1 for liveness and startup. Minimum value is 1. tcpSocket object TCPSocket specifies an action involving a TCP port. terminationGracePeriodSeconds integer Optional duration in seconds the pod needs to terminate gracefully upon probe failure. The grace period is the duration in seconds after the processes running in the pod are sent a termination signal and the time when the processes are forcibly halted with a kill signal. Set this value longer than the expected cleanup time for your process. If this value is nil, the pod's terminationGracePeriodSeconds will be used. Otherwise, this value overrides the value provided by the pod spec. Value must be non-negative integer. The value zero indicates stop immediately via the kill signal (no opportunity to shut down). This is a beta field and requires enabling ProbeTerminationGracePeriod feature gate. Minimum value is 1. spec.terminationGracePeriodSeconds is used if unset. timeoutSeconds integer Number of seconds after which the probe times out. Defaults to 1 second. Minimum value is 1. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes
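A readiness probe sketch using the tcpSocket handler (values hypothetical); unlike a liveness failure, a readiness failure removes the pod from Service endpoints rather than restarting the container:

readinessProbe:
  tcpSocket:
    port: 8443              # checked by opening a TCP connection
  initialDelaySeconds: 5
  periodSeconds: 10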
3.1.151. .spec.install.spec.deployments[].spec.template.spec.containers[].readinessProbe.exec Description Exec specifies the action to take. Type object Property Type Description command array (string) Command is the command line to execute inside the container, the working directory for the command is root ('/') in the container's filesystem. The command is simply exec'd, it is not run inside a shell, so traditional shell instructions ('|', etc) won't work. To use a shell, you need to explicitly call out to that shell. Exit status of 0 is treated as live/healthy and non-zero is unhealthy.
3.1.152. .spec.install.spec.deployments[].spec.template.spec.containers[].readinessProbe.grpc Description GRPC specifies an action involving a GRPC port. This is a beta field and requires enabling GRPCContainerProbe feature gate. Type object Required port Property Type Description port integer Port number of the gRPC service. Number must be in the range 1 to 65535. service string Service is the name of the service to place in the gRPC HealthCheckRequest (see https://github.com/grpc/grpc/blob/master/doc/health-checking.md). If this is not specified, the default behavior is defined by gRPC.
3.1.153. .spec.install.spec.deployments[].spec.template.spec.containers[].readinessProbe.httpGet Description HTTPGet specifies the http request to perform. Type object Required port Property Type Description host string Host name to connect to, defaults to the pod IP. You probably want to set "Host" in httpHeaders instead. httpHeaders array Custom headers to set in the request. HTTP allows repeated headers. httpHeaders[] object HTTPHeader describes a custom header to be used in HTTP probes path string Path to access on the HTTP server. port integer-or-string Name or number of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. scheme string Scheme to use for connecting to the host. Defaults to HTTP.
3.1.154. .spec.install.spec.deployments[].spec.template.spec.containers[].readinessProbe.httpGet.httpHeaders Description Custom headers to set in the request. HTTP allows repeated headers. Type array
3.1.155. .spec.install.spec.deployments[].spec.template.spec.containers[].readinessProbe.httpGet.httpHeaders[] Description HTTPHeader describes a custom header to be used in HTTP probes Type object Required name value Property Type Description name string The header field name value string The header field value
3.1.156. .spec.install.spec.deployments[].spec.template.spec.containers[].readinessProbe.tcpSocket Description TCPSocket specifies an action involving a TCP port. Type object Required port Property Type Description host string Optional: Host name to connect to, defaults to the pod IP. port integer-or-string Number or name of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME.
3.1.157. .spec.install.spec.deployments[].spec.template.spec.containers[].resources Description Compute Resources required by this container. Cannot be updated. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ Type object Property Type Description limits integer-or-string Limits describes the maximum amount of compute resources allowed. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ requests integer-or-string Requests describes the minimum amount of compute resources required. If Requests is omitted for a container, it defaults to Limits if that is explicitly specified, otherwise to an implementation-defined value. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/
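For example, requests and limits might be set as follows (quantities hypothetical); stating both keeps scheduling and throttling behavior explicit, since requests default to limits when omitted:

resources:
  requests:
    cpu: 100m       # 0.1 CPU core
    memory: 128Mi
  limits:
    cpu: 500m
    memory: 256Mi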
3.1.158. .spec.install.spec.deployments[].spec.template.spec.containers[].securityContext Description SecurityContext defines the security options the container should be run with. If set, the fields of SecurityContext override the equivalent fields of PodSecurityContext. More info: https://kubernetes.io/docs/tasks/configure-pod-container/security-context/ Type object Property Type Description allowPrivilegeEscalation boolean AllowPrivilegeEscalation controls whether a process can gain more privileges than its parent process. This bool directly controls if the no_new_privs flag will be set on the container process. AllowPrivilegeEscalation is always true when the container is: 1) run as Privileged 2) has CAP_SYS_ADMIN Note that this field cannot be set when spec.os.name is windows. capabilities object The capabilities to add/drop when running containers. Defaults to the default set of capabilities granted by the container runtime. Note that this field cannot be set when spec.os.name is windows. privileged boolean Run container in privileged mode. Processes in privileged containers are essentially equivalent to root on the host. Defaults to false. Note that this field cannot be set when spec.os.name is windows. procMount string procMount denotes the type of proc mount to use for the containers. The default is DefaultProcMount which uses the container runtime defaults for readonly paths and masked paths. This requires the ProcMountType feature flag to be enabled. Note that this field cannot be set when spec.os.name is windows. readOnlyRootFilesystem boolean Whether this container has a read-only root filesystem. Default is false. Note that this field cannot be set when spec.os.name is windows. runAsGroup integer The GID to run the entrypoint of the container process. Uses runtime default if unset. May also be set in PodSecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. Note that this field cannot be set when spec.os.name is windows. runAsNonRoot boolean Indicates that the container must run as a non-root user. If true, the Kubelet will validate the image at runtime to ensure that it does not run as UID 0 (root) and fail to start the container if it does. If unset or false, no such validation will be performed. May also be set in PodSecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. runAsUser integer The UID to run the entrypoint of the container process. Defaults to user specified in image metadata if unspecified. May also be set in PodSecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. Note that this field cannot be set when spec.os.name is windows. seLinuxOptions object The SELinux context to be applied to the container. If unspecified, the container runtime will allocate a random SELinux context for each container. May also be set in PodSecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. Note that this field cannot be set when spec.os.name is windows. seccompProfile object The seccomp options to use by this container. If seccomp options are provided at both the pod & container level, the container options override the pod options. Note that this field cannot be set when spec.os.name is windows. windowsOptions object The Windows specific settings applied to all containers. If unspecified, the options from the PodSecurityContext will be used. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. Note that this field cannot be set when spec.os.name is linux.
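As an illustrative hardening baseline (not a required configuration), a container security context might combine several of these fields:

securityContext:
  allowPrivilegeEscalation: false
  readOnlyRootFilesystem: true
  runAsNonRoot: true
  capabilities:
    drop:
    - ALL                  # drop every capability granted by the runtime
  seccompProfile:
    type: RuntimeDefault   # use the container runtime's default profile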
3.1.159. .spec.install.spec.deployments[].spec.template.spec.containers[].securityContext.capabilities Description The capabilities to add/drop when running containers. Defaults to the default set of capabilities granted by the container runtime. Note that this field cannot be set when spec.os.name is windows. Type object Property Type Description add array (string) Added capabilities drop array (string) Removed capabilities
3.1.160. .spec.install.spec.deployments[].spec.template.spec.containers[].securityContext.seLinuxOptions Description The SELinux context to be applied to the container. If unspecified, the container runtime will allocate a random SELinux context for each container. May also be set in PodSecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. Note that this field cannot be set when spec.os.name is windows. Type object Property Type Description level string Level is the SELinux level label that applies to the container. role string Role is a SELinux role label that applies to the container. type string Type is a SELinux type label that applies to the container. user string User is a SELinux user label that applies to the container.
3.1.161. .spec.install.spec.deployments[].spec.template.spec.containers[].securityContext.seccompProfile Description The seccomp options to use by this container. If seccomp options are provided at both the pod & container level, the container options override the pod options. Note that this field cannot be set when spec.os.name is windows. Type object Required type Property Type Description localhostProfile string localhostProfile indicates a profile defined in a file on the node should be used. The profile must be preconfigured on the node to work. Must be a descending path, relative to the kubelet's configured seccomp profile location. Must only be set if type is "Localhost". type string type indicates which kind of seccomp profile will be applied. Valid options are: Localhost - a profile defined in a file on the node should be used. RuntimeDefault - the container runtime default profile should be used. Unconfined - no profile should be applied.
3.1.162. .spec.install.spec.deployments[].spec.template.spec.containers[].securityContext.windowsOptions Description The Windows specific settings applied to all containers. If unspecified, the options from the PodSecurityContext will be used. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. Note that this field cannot be set when spec.os.name is linux. Type object Property Type Description gmsaCredentialSpec string GMSACredentialSpec is where the GMSA admission webhook ( https://github.com/kubernetes-sigs/windows-gmsa ) inlines the contents of the GMSA credential spec named by the GMSACredentialSpecName field. gmsaCredentialSpecName string GMSACredentialSpecName is the name of the GMSA credential spec to use. hostProcess boolean HostProcess determines if a container should be run as a 'Host Process' container. This field is alpha-level and will only be honored by components that enable the WindowsHostProcessContainers feature flag. Setting this field without the feature flag will result in errors when validating the Pod. All of a Pod's containers must have the same effective HostProcess value (it is not allowed to have a mix of HostProcess containers and non-HostProcess containers). In addition, if HostProcess is true then HostNetwork must also be set to true.
runAsUserName string The UserName in Windows to run the entrypoint of the container process. Defaults to the user specified in image metadata if unspecified. May also be set in PodSecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence.
3.1.163. .spec.install.spec.deployments[].spec.template.spec.containers[].startupProbe Description StartupProbe indicates that the Pod has successfully initialized. If specified, no other probes are executed until this completes successfully. If this probe fails, the Pod will be restarted, just as if the livenessProbe failed. This can be used to provide different probe parameters at the beginning of a Pod's lifecycle, when it might take a long time to load data or warm a cache, than during steady-state operation. This cannot be updated. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes Type object Property Type Description exec object Exec specifies the action to take. failureThreshold integer Minimum consecutive failures for the probe to be considered failed after having succeeded. Defaults to 3. Minimum value is 1. grpc object GRPC specifies an action involving a GRPC port. This is a beta field and requires enabling GRPCContainerProbe feature gate. httpGet object HTTPGet specifies the http request to perform. initialDelaySeconds integer Number of seconds after the container has started before startup probes are initiated. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes periodSeconds integer How often (in seconds) to perform the probe. Defaults to 10 seconds. Minimum value is 1. successThreshold integer Minimum consecutive successes for the probe to be considered successful after having failed. Defaults to 1. Must be 1 for liveness and startup. Minimum value is 1. tcpSocket object TCPSocket specifies an action involving a TCP port. terminationGracePeriodSeconds integer Optional duration in seconds the pod needs to terminate gracefully upon probe failure. The grace period is the duration in seconds after the processes running in the pod are sent a termination signal and the time when the processes are forcibly halted with a kill signal. Set this value longer than the expected cleanup time for your process. If this value is nil, the pod's terminationGracePeriodSeconds will be used. Otherwise, this value overrides the value provided by the pod spec. Value must be non-negative integer. The value zero indicates stop immediately via the kill signal (no opportunity to shut down). This is a beta field and requires enabling ProbeTerminationGracePeriod feature gate. Minimum value is 1. spec.terminationGracePeriodSeconds is used if unset. timeoutSeconds integer Number of seconds after which the probe times out. Defaults to 1 second. Minimum value is 1. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes
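A startup probe sketch for a slow-starting container (values hypothetical); here the container gets up to 30 probes at 10-second intervals to initialize, and only after the startup probe succeeds do the liveness and readiness probes begin:

startupProbe:
  httpGet:
    path: /healthz
    port: 8080
  failureThreshold: 30
  periodSeconds: 10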
Exit status of 0 is treated as live/healthy and non-zero is unhealthy. 3.1.165. .spec.install.spec.deployments[].spec.template.spec.containers[].startupProbe.grpc Description GRPC specifies an action involving a GRPC port. This is a beta field and requires enabling GRPCContainerProbe feature gate. Type object Required port Property Type Description port integer Port number of the gRPC service. Number must be in the range 1 to 65535. service string Service is the name of the service to place in the gRPC HealthCheckRequest (see https://github.com/grpc/grpc/blob/master/doc/health-checking.md ). If this is not specified, the default behavior is defined by gRPC. 3.1.166. .spec.install.spec.deployments[].spec.template.spec.containers[].startupProbe.httpGet Description HTTPGet specifies the http request to perform. Type object Required port Property Type Description host string Host name to connect to, defaults to the pod IP. You probably want to set "Host" in httpHeaders instead. httpHeaders array Custom headers to set in the request. HTTP allows repeated headers. httpHeaders[] object HTTPHeader describes a custom header to be used in HTTP probes path string Path to access on the HTTP server. port integer-or-string Name or number of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. scheme string Scheme to use for connecting to the host. Defaults to HTTP. 3.1.167. .spec.install.spec.deployments[].spec.template.spec.containers[].startupProbe.httpGet.httpHeaders Description Custom headers to set in the request. HTTP allows repeated headers. Type array 3.1.168. .spec.install.spec.deployments[].spec.template.spec.containers[].startupProbe.httpGet.httpHeaders[] Description HTTPHeader describes a custom header to be used in HTTP probes Type object Required name value Property Type Description name string The header field name value string The header field value 3.1.169. .spec.install.spec.deployments[].spec.template.spec.containers[].startupProbe.tcpSocket Description TCPSocket specifies an action involving a TCP port. Type object Required port Property Type Description host string Optional: Host name to connect to, defaults to the pod IP. port integer-or-string Number or name of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. 3.1.170. .spec.install.spec.deployments[].spec.template.spec.containers[].volumeDevices Description volumeDevices is the list of block devices to be used by the container. Type array 3.1.171. .spec.install.spec.deployments[].spec.template.spec.containers[].volumeDevices[] Description volumeDevice describes a mapping of a raw block device within a container. Type object Required devicePath name Property Type Description devicePath string devicePath is the path inside of the container that the device will be mapped to. name string name must match the name of a persistentVolumeClaim in the pod 3.1.172. .spec.install.spec.deployments[].spec.template.spec.containers[].volumeMounts Description Pod volumes to mount into the container's filesystem. Cannot be updated. Type array 3.1.173. .spec.install.spec.deployments[].spec.template.spec.containers[].volumeMounts[] Description VolumeMount describes a mounting of a Volume within a container. Type object Required mountPath name Property Type Description mountPath string Path within the container at which the volume should be mounted. Must not contain ':'. 
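As a concrete illustration of the startup probe fields documented in sections 3.1.163 through 3.1.169 above, the following is a minimal sketch for a slow-starting container. The endpoint path and port are hypothetical; the arithmetic is the point: the kubelet allows up to failureThreshold × periodSeconds (here 30 × 10 = 300 seconds) for the container to initialize before restarting it.

```yaml
# Hypothetical startup probe; /healthz and port 8081 are placeholders.
startupProbe:
  httpGet:
    path: /healthz          # assumed health endpoint
    port: 8081
    scheme: HTTP
  periodSeconds: 10         # probe every 10 seconds
  failureThreshold: 30      # tolerate 30 consecutive failures: a 300-second startup budget
  timeoutSeconds: 1         # each attempt times out after 1 second (the default)
```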
mountPropagation string mountPropagation determines how mounts are propagated from the host to container and the other way around. When not set, MountPropagationNone is used. This field is beta in 1.10. name string This must match the Name of a Volume. readOnly boolean Mounted read-only if true, read-write otherwise (false or unspecified). Defaults to false. subPath string Path within the volume from which the container's volume should be mounted. Defaults to "" (volume's root). subPathExpr string Expanded path within the volume from which the container's volume should be mounted. Behaves similarly to SubPath but environment variable references USD(VAR_NAME) are expanded using the container's environment. Defaults to "" (volume's root). SubPathExpr and SubPath are mutually exclusive. 3.1.174. .spec.install.spec.deployments[].spec.template.spec.dnsConfig Description Specifies the DNS parameters of a pod. Parameters specified here will be merged to the generated DNS configuration based on DNSPolicy. Type object Property Type Description nameservers array (string) A list of DNS name server IP addresses. This will be appended to the base nameservers generated from DNSPolicy. Duplicated nameservers will be removed. options array A list of DNS resolver options. This will be merged with the base options generated from DNSPolicy. Duplicated entries will be removed. Resolution options given in Options will override those that appear in the base DNSPolicy. options[] object PodDNSConfigOption defines DNS resolver options of a pod. searches array (string) A list of DNS search domains for host-name lookup. This will be appended to the base search paths generated from DNSPolicy. Duplicated search paths will be removed. 3.1.175. .spec.install.spec.deployments[].spec.template.spec.dnsConfig.options Description A list of DNS resolver options. This will be merged with the base options generated from DNSPolicy. Duplicated entries will be removed. Resolution options given in Options will override those that appear in the base DNSPolicy. Type array 3.1.176. .spec.install.spec.deployments[].spec.template.spec.dnsConfig.options[] Description PodDNSConfigOption defines DNS resolver options of a pod. Type object Property Type Description name string Required. value string 3.1.177. .spec.install.spec.deployments[].spec.template.spec.ephemeralContainers Description List of ephemeral containers run in this pod. Ephemeral containers may be run in an existing pod to perform user-initiated actions such as debugging. This list cannot be specified when creating a pod, and it cannot be modified by updating the pod spec. In order to add an ephemeral container to an existing pod, use the pod's ephemeralcontainers subresource. Type array 3.1.178. .spec.install.spec.deployments[].spec.template.spec.ephemeralContainers[] Description An EphemeralContainer is a temporary container that you may add to an existing Pod for user-initiated activities such as debugging. Ephemeral containers have no resource or scheduling guarantees, and they will not be restarted when they exit or when a Pod is removed or restarted. The kubelet may evict a Pod if an ephemeral container causes the Pod to exceed its resource allocation. To add an ephemeral container, use the ephemeralcontainers subresource of an existing Pod. Ephemeral containers may not be removed or restarted. Type object Required name Property Type Description args array (string) Arguments to the entrypoint. The image's CMD is used if this is not provided. 
Variable references USD(VAR_NAME) are expanded using the container's environment. If a variable cannot be resolved, the reference in the input string will be unchanged. Double USDUSD are reduced to a single USD, which allows for escaping the USD(VAR_NAME) syntax: i.e. "USDUSD(VAR_NAME)" will produce the string literal "USD(VAR_NAME)". Escaped references will never be expanded, regardless of whether the variable exists or not. Cannot be updated. More info: https://kubernetes.io/docs/tasks/inject-data-application/define-command-argument-container/#running-a-command-in-a-shell command array (string) Entrypoint array. Not executed within a shell. The image's ENTRYPOINT is used if this is not provided. Variable references USD(VAR_NAME) are expanded using the container's environment. If a variable cannot be resolved, the reference in the input string will be unchanged. Double USDUSD are reduced to a single USD, which allows for escaping the USD(VAR_NAME) syntax: i.e. "USDUSD(VAR_NAME)" will produce the string literal "USD(VAR_NAME)". Escaped references will never be expanded, regardless of whether the variable exists or not. Cannot be updated. More info: https://kubernetes.io/docs/tasks/inject-data-application/define-command-argument-container/#running-a-command-in-a-shell env array List of environment variables to set in the container. Cannot be updated. env[] object EnvVar represents an environment variable present in a Container. envFrom array List of sources to populate environment variables in the container. The keys defined within a source must be a C_IDENTIFIER. All invalid keys will be reported as an event when the container is starting. When a key exists in multiple sources, the value associated with the last source will take precedence. Values defined by an Env with a duplicate key will take precedence. Cannot be updated. envFrom[] object EnvFromSource represents the source of a set of ConfigMaps image string Container image name. More info: https://kubernetes.io/docs/concepts/containers/images imagePullPolicy string Image pull policy. One of Always, Never, IfNotPresent. Defaults to Always if :latest tag is specified, or IfNotPresent otherwise. Cannot be updated. More info: https://kubernetes.io/docs/concepts/containers/images#updating-images lifecycle object Lifecycle is not allowed for ephemeral containers. livenessProbe object Probes are not allowed for ephemeral containers. name string Name of the ephemeral container specified as a DNS_LABEL. This name must be unique among all containers, init containers and ephemeral containers. ports array Ports are not allowed for ephemeral containers. ports[] object ContainerPort represents a network port in a single container. readinessProbe object Probes are not allowed for ephemeral containers. resources object Resources are not allowed for ephemeral containers. Ephemeral containers use spare resources already allocated to the pod. securityContext object Optional: SecurityContext defines the security options the ephemeral container should be run with. If set, the fields of SecurityContext override the equivalent fields of PodSecurityContext. startupProbe object Probes are not allowed for ephemeral containers. stdin boolean Whether this container should allocate a buffer for stdin in the container runtime. If this is not set, reads from stdin in the container will always result in EOF. Default is false. stdinOnce boolean Whether the container runtime should close the stdin channel after it has been opened by a single attach. 
When stdin is true the stdin stream will remain open across multiple attach sessions. If stdinOnce is set to true, stdin is opened on container start, is empty until the first client attaches to stdin, and then remains open and accepts data until the client disconnects, at which time stdin is closed and remains closed until the container is restarted. If this flag is false, a container process that reads from stdin will never receive an EOF. Default is false. targetContainerName string If set, the name of the container from PodSpec that this ephemeral container targets. The ephemeral container will be run in the namespaces (IPC, PID, etc) of this container. If not set then the ephemeral container uses the namespaces configured in the Pod spec. The container runtime must implement support for this feature. If the runtime does not support namespace targeting then the result of setting this field is undefined. terminationMessagePath string Optional: Path at which the file to which the container's termination message will be written is mounted into the container's filesystem. Message written is intended to be brief final status, such as an assertion failure message. Will be truncated by the node if greater than 4096 bytes. The total message length across all containers will be limited to 12kb. Defaults to /dev/termination-log. Cannot be updated. terminationMessagePolicy string Indicates how the termination message should be populated. File will use the contents of terminationMessagePath to populate the container status message on both success and failure. FallbackToLogsOnError will use the last chunk of container log output if the termination message file is empty and the container exited with an error. The log output is limited to 2048 bytes or 80 lines, whichever is smaller. Defaults to File. Cannot be updated. tty boolean Whether this container should allocate a TTY for itself, also requires 'stdin' to be true. Default is false. volumeDevices array volumeDevices is the list of block devices to be used by the container. volumeDevices[] object volumeDevice describes a mapping of a raw block device within a container. volumeMounts array Pod volumes to mount into the container's filesystem. Subpath mounts are not allowed for ephemeral containers. Cannot be updated. volumeMounts[] object VolumeMount describes a mounting of a Volume within a container. workingDir string Container's working directory. If not specified, the container runtime's default will be used, which might be configured in the container image. Cannot be updated. 3.1.179. .spec.install.spec.deployments[].spec.template.spec.ephemeralContainers[].env Description List of environment variables to set in the container. Cannot be updated. Type array 3.1.180. .spec.install.spec.deployments[].spec.template.spec.ephemeralContainers[].env[] Description EnvVar represents an environment variable present in a Container. Type object Required name Property Type Description name string Name of the environment variable. Must be a C_IDENTIFIER. value string Variable references USD(VAR_NAME) are expanded using the previously defined environment variables in the container and any service environment variables. If a variable cannot be resolved, the reference in the input string will be unchanged. Double USDUSD are reduced to a single USD, which allows for escaping the USD(VAR_NAME) syntax: i.e. "USDUSD(VAR_NAME)" will produce the string literal "USD(VAR_NAME)". Escaped references will never be expanded, regardless of whether the variable exists or not. 
Defaults to "". valueFrom object Source for the environment variable's value. Cannot be used if value is not empty. 3.1.181. .spec.install.spec.deployments[].spec.template.spec.ephemeralContainers[].env[].valueFrom Description Source for the environment variable's value. Cannot be used if value is not empty. Type object Property Type Description configMapKeyRef object Selects a key of a ConfigMap. fieldRef object Selects a field of the pod: supports metadata.name, metadata.namespace, metadata.labels['<KEY>'] , metadata.annotations['<KEY>'] , spec.nodeName, spec.serviceAccountName, status.hostIP, status.podIP, status.podIPs. resourceFieldRef object Selects a resource of the container: only resources limits and requests (limits.cpu, limits.memory, limits.ephemeral-storage, requests.cpu, requests.memory and requests.ephemeral-storage) are currently supported. secretKeyRef object Selects a key of a secret in the pod's namespace 3.1.182. .spec.install.spec.deployments[].spec.template.spec.ephemeralContainers[].env[].valueFrom.configMapKeyRef Description Selects a key of a ConfigMap. Type object Required key Property Type Description key string The key to select. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the ConfigMap or its key must be defined 3.1.183. .spec.install.spec.deployments[].spec.template.spec.ephemeralContainers[].env[].valueFrom.fieldRef Description Selects a field of the pod: supports metadata.name, metadata.namespace, metadata.labels['<KEY>'] , metadata.annotations['<KEY>'] , spec.nodeName, spec.serviceAccountName, status.hostIP, status.podIP, status.podIPs. Type object Required fieldPath Property Type Description apiVersion string Version of the schema the FieldPath is written in terms of, defaults to "v1". fieldPath string Path of the field to select in the specified API version. 3.1.184. .spec.install.spec.deployments[].spec.template.spec.ephemeralContainers[].env[].valueFrom.resourceFieldRef Description Selects a resource of the container: only resources limits and requests (limits.cpu, limits.memory, limits.ephemeral-storage, requests.cpu, requests.memory and requests.ephemeral-storage) are currently supported. Type object Required resource Property Type Description containerName string Container name: required for volumes, optional for env vars divisor integer-or-string Specifies the output format of the exposed resources, defaults to "1" resource string Required: resource to select 3.1.185. .spec.install.spec.deployments[].spec.template.spec.ephemeralContainers[].env[].valueFrom.secretKeyRef Description Selects a key of a secret in the pod's namespace Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the Secret or its key must be defined 3.1.186. .spec.install.spec.deployments[].spec.template.spec.ephemeralContainers[].envFrom Description List of sources to populate environment variables in the container. The keys defined within a source must be a C_IDENTIFIER. All invalid keys will be reported as an event when the container is starting. 
When a key exists in multiple sources, the value associated with the last source will take precedence. Values defined by an Env with a duplicate key will take precedence. Cannot be updated. Type array 3.1.187. .spec.install.spec.deployments[].spec.template.spec.ephemeralContainers[].envFrom[] Description EnvFromSource represents the source of a set of ConfigMaps Type object Property Type Description configMapRef object The ConfigMap to select from prefix string An optional identifier to prepend to each key in the ConfigMap. Must be a C_IDENTIFIER. secretRef object The Secret to select from 3.1.188. .spec.install.spec.deployments[].spec.template.spec.ephemeralContainers[].envFrom[].configMapRef Description The ConfigMap to select from Type object Property Type Description name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the ConfigMap must be defined 3.1.189. .spec.install.spec.deployments[].spec.template.spec.ephemeralContainers[].envFrom[].secretRef Description The Secret to select from Type object Property Type Description name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the Secret must be defined 3.1.190. .spec.install.spec.deployments[].spec.template.spec.ephemeralContainers[].lifecycle Description Lifecycle is not allowed for ephemeral containers. Type object Property Type Description postStart object PostStart is called immediately after a container is created. If the handler fails, the container is terminated and restarted according to its restart policy. Other management of the container blocks until the hook completes. More info: https://kubernetes.io/docs/concepts/containers/container-lifecycle-hooks/#container-hooks preStop object PreStop is called immediately before a container is terminated due to an API request or management event such as liveness/startup probe failure, preemption, resource contention, etc. The handler is not called if the container crashes or exits. The Pod's termination grace period countdown begins before the PreStop hook is executed. Regardless of the outcome of the handler, the container will eventually terminate within the Pod's termination grace period (unless delayed by finalizers). Other management of the container blocks until the hook completes or until the termination grace period is reached. More info: https://kubernetes.io/docs/concepts/containers/container-lifecycle-hooks/#container-hooks 3.1.191. .spec.install.spec.deployments[].spec.template.spec.ephemeralContainers[].lifecycle.postStart Description PostStart is called immediately after a container is created. If the handler fails, the container is terminated and restarted according to its restart policy. Other management of the container blocks until the hook completes. More info: https://kubernetes.io/docs/concepts/containers/container-lifecycle-hooks/#container-hooks Type object Property Type Description exec object Exec specifies the action to take. httpGet object HTTPGet specifies the http request to perform. tcpSocket object Deprecated. TCPSocket is NOT supported as a LifecycleHandler and kept for backward compatibility. There is no validation of this field and lifecycle hooks will fail at runtime when a tcp handler is specified. 3.1.192. 
.spec.install.spec.deployments[].spec.template.spec.ephemeralContainers[].lifecycle.postStart.exec Description Exec specifies the action to take. Type object Property Type Description command array (string) Command is the command line to execute inside the container, the working directory for the command is root ('/') in the container's filesystem. The command is simply exec'd, it is not run inside a shell, so traditional shell instructions ('|', etc) won't work. To use a shell, you need to explicitly call out to that shell. Exit status of 0 is treated as live/healthy and non-zero is unhealthy. 3.1.193. .spec.install.spec.deployments[].spec.template.spec.ephemeralContainers[].lifecycle.postStart.httpGet Description HTTPGet specifies the http request to perform. Type object Required port Property Type Description host string Host name to connect to, defaults to the pod IP. You probably want to set "Host" in httpHeaders instead. httpHeaders array Custom headers to set in the request. HTTP allows repeated headers. httpHeaders[] object HTTPHeader describes a custom header to be used in HTTP probes path string Path to access on the HTTP server. port integer-or-string Name or number of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. scheme string Scheme to use for connecting to the host. Defaults to HTTP. 3.1.194. .spec.install.spec.deployments[].spec.template.spec.ephemeralContainers[].lifecycle.postStart.httpGet.httpHeaders Description Custom headers to set in the request. HTTP allows repeated headers. Type array 3.1.195. .spec.install.spec.deployments[].spec.template.spec.ephemeralContainers[].lifecycle.postStart.httpGet.httpHeaders[] Description HTTPHeader describes a custom header to be used in HTTP probes Type object Required name value Property Type Description name string The header field name value string The header field value 3.1.196. .spec.install.spec.deployments[].spec.template.spec.ephemeralContainers[].lifecycle.postStart.tcpSocket Description Deprecated. TCPSocket is NOT supported as a LifecycleHandler and kept for backward compatibility. There is no validation of this field and lifecycle hooks will fail at runtime when a tcp handler is specified. Type object Required port Property Type Description host string Optional: Host name to connect to, defaults to the pod IP. port integer-or-string Number or name of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. 3.1.197. .spec.install.spec.deployments[].spec.template.spec.ephemeralContainers[].lifecycle.preStop Description PreStop is called immediately before a container is terminated due to an API request or management event such as liveness/startup probe failure, preemption, resource contention, etc. The handler is not called if the container crashes or exits. The Pod's termination grace period countdown begins before the PreStop hook is executed. Regardless of the outcome of the handler, the container will eventually terminate within the Pod's termination grace period (unless delayed by finalizers). Other management of the container blocks until the hook completes or until the termination grace period is reached. More info: https://kubernetes.io/docs/concepts/containers/container-lifecycle-hooks/#container-hooks Type object Property Type Description exec object Exec specifies the action to take. httpGet object HTTPGet specifies the http request to perform. tcpSocket object Deprecated. 
TCPSocket is NOT supported as a LifecycleHandler and kept for backward compatibility. There is no validation of this field and lifecycle hooks will fail at runtime when a tcp handler is specified. 3.1.198. .spec.install.spec.deployments[].spec.template.spec.ephemeralContainers[].lifecycle.preStop.exec Description Exec specifies the action to take. Type object Property Type Description command array (string) Command is the command line to execute inside the container, the working directory for the command is root ('/') in the container's filesystem. The command is simply exec'd, it is not run inside a shell, so traditional shell instructions ('|', etc) won't work. To use a shell, you need to explicitly call out to that shell. Exit status of 0 is treated as live/healthy and non-zero is unhealthy. 3.1.199. .spec.install.spec.deployments[].spec.template.spec.ephemeralContainers[].lifecycle.preStop.httpGet Description HTTPGet specifies the http request to perform. Type object Required port Property Type Description host string Host name to connect to, defaults to the pod IP. You probably want to set "Host" in httpHeaders instead. httpHeaders array Custom headers to set in the request. HTTP allows repeated headers. httpHeaders[] object HTTPHeader describes a custom header to be used in HTTP probes path string Path to access on the HTTP server. port integer-or-string Name or number of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. scheme string Scheme to use for connecting to the host. Defaults to HTTP. 3.1.200. .spec.install.spec.deployments[].spec.template.spec.ephemeralContainers[].lifecycle.preStop.httpGet.httpHeaders Description Custom headers to set in the request. HTTP allows repeated headers. Type array 3.1.201. .spec.install.spec.deployments[].spec.template.spec.ephemeralContainers[].lifecycle.preStop.httpGet.httpHeaders[] Description HTTPHeader describes a custom header to be used in HTTP probes Type object Required name value Property Type Description name string The header field name value string The header field value 3.1.202. .spec.install.spec.deployments[].spec.template.spec.ephemeralContainers[].lifecycle.preStop.tcpSocket Description Deprecated. TCPSocket is NOT supported as a LifecycleHandler and kept for backward compatibility. There is no validation of this field and lifecycle hooks will fail at runtime when a tcp handler is specified. Type object Required port Property Type Description host string Optional: Host name to connect to, defaults to the pod IP. port integer-or-string Number or name of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. 3.1.203. .spec.install.spec.deployments[].spec.template.spec.ephemeralContainers[].livenessProbe Description Probes are not allowed for ephemeral containers. Type object Property Type Description exec object Exec specifies the action to take. failureThreshold integer Minimum consecutive failures for the probe to be considered failed after having succeeded. Defaults to 3. Minimum value is 1. grpc object GRPC specifies an action involving a GRPC port. This is a beta field and requires enabling GRPCContainerProbe feature gate. httpGet object HTTPGet specifies the http request to perform. initialDelaySeconds integer Number of seconds after the container has started before liveness probes are initiated. 
More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes periodSeconds integer How often (in seconds) to perform the probe. Default to 10 seconds. Minimum value is 1. successThreshold integer Minimum consecutive successes for the probe to be considered successful after having failed. Defaults to 1. Must be 1 for liveness and startup. Minimum value is 1. tcpSocket object TCPSocket specifies an action involving a TCP port. terminationGracePeriodSeconds integer Optional duration in seconds the pod needs to terminate gracefully upon probe failure. The grace period is the duration in seconds after the processes running in the pod are sent a termination signal and the time when the processes are forcibly halted with a kill signal. Set this value longer than the expected cleanup time for your process. If this value is nil, the pod's terminationGracePeriodSeconds will be used. Otherwise, this value overrides the value provided by the pod spec. Value must be non-negative integer. The value zero indicates stop immediately via the kill signal (no opportunity to shut down). This is a beta field and requires enabling ProbeTerminationGracePeriod feature gate. Minimum value is 1. spec.terminationGracePeriodSeconds is used if unset. timeoutSeconds integer Number of seconds after which the probe times out. Defaults to 1 second. Minimum value is 1. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes 3.1.204. .spec.install.spec.deployments[].spec.template.spec.ephemeralContainers[].livenessProbe.exec Description Exec specifies the action to take. Type object Property Type Description command array (string) Command is the command line to execute inside the container, the working directory for the command is root ('/') in the container's filesystem. The command is simply exec'd, it is not run inside a shell, so traditional shell instructions ('|', etc) won't work. To use a shell, you need to explicitly call out to that shell. Exit status of 0 is treated as live/healthy and non-zero is unhealthy. 3.1.205. .spec.install.spec.deployments[].spec.template.spec.ephemeralContainers[].livenessProbe.grpc Description GRPC specifies an action involving a GRPC port. This is a beta field and requires enabling GRPCContainerProbe feature gate. Type object Required port Property Type Description port integer Port number of the gRPC service. Number must be in the range 1 to 65535. service string Service is the name of the service to place in the gRPC HealthCheckRequest (see https://github.com/grpc/grpc/blob/master/doc/health-checking.md ). If this is not specified, the default behavior is defined by gRPC. 3.1.206. .spec.install.spec.deployments[].spec.template.spec.ephemeralContainers[].livenessProbe.httpGet Description HTTPGet specifies the http request to perform. Type object Required port Property Type Description host string Host name to connect to, defaults to the pod IP. You probably want to set "Host" in httpHeaders instead. httpHeaders array Custom headers to set in the request. HTTP allows repeated headers. httpHeaders[] object HTTPHeader describes a custom header to be used in HTTP probes path string Path to access on the HTTP server. port integer-or-string Name or number of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. scheme string Scheme to use for connecting to the host. Defaults to HTTP. 3.1.207. 
.spec.install.spec.deployments[].spec.template.spec.ephemeralContainers[].livenessProbe.httpGet.httpHeaders Description Custom headers to set in the request. HTTP allows repeated headers. Type array 3.1.208. .spec.install.spec.deployments[].spec.template.spec.ephemeralContainers[].livenessProbe.httpGet.httpHeaders[] Description HTTPHeader describes a custom header to be used in HTTP probes Type object Required name value Property Type Description name string The header field name value string The header field value 3.1.209. .spec.install.spec.deployments[].spec.template.spec.ephemeralContainers[].livenessProbe.tcpSocket Description TCPSocket specifies an action involving a TCP port. Type object Required port Property Type Description host string Optional: Host name to connect to, defaults to the pod IP. port integer-or-string Number or name of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. 3.1.210. .spec.install.spec.deployments[].spec.template.spec.ephemeralContainers[].ports Description Ports are not allowed for ephemeral containers. Type array 3.1.211. .spec.install.spec.deployments[].spec.template.spec.ephemeralContainers[].ports[] Description ContainerPort represents a network port in a single container. Type object Required containerPort Property Type Description containerPort integer Number of port to expose on the pod's IP address. This must be a valid port number, 0 < x < 65536. hostIP string What host IP to bind the external port to. hostPort integer Number of port to expose on the host. If specified, this must be a valid port number, 0 < x < 65536. If HostNetwork is specified, this must match ContainerPort. Most containers do not need this. name string If specified, this must be an IANA_SVC_NAME and unique within the pod. Each named port in a pod must have a unique name. Name for the port that can be referred to by services. protocol string Protocol for port. Must be UDP, TCP, or SCTP. Defaults to "TCP". 3.1.212. .spec.install.spec.deployments[].spec.template.spec.ephemeralContainers[].readinessProbe Description Probes are not allowed for ephemeral containers. Type object Property Type Description exec object Exec specifies the action to take. failureThreshold integer Minimum consecutive failures for the probe to be considered failed after having succeeded. Defaults to 3. Minimum value is 1. grpc object GRPC specifies an action involving a GRPC port. This is a beta field and requires enabling GRPCContainerProbe feature gate. httpGet object HTTPGet specifies the http request to perform. initialDelaySeconds integer Number of seconds after the container has started before liveness probes are initiated. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes periodSeconds integer How often (in seconds) to perform the probe. Default to 10 seconds. Minimum value is 1. successThreshold integer Minimum consecutive successes for the probe to be considered successful after having failed. Defaults to 1. Must be 1 for liveness and startup. Minimum value is 1. tcpSocket object TCPSocket specifies an action involving a TCP port. terminationGracePeriodSeconds integer Optional duration in seconds the pod needs to terminate gracefully upon probe failure. The grace period is the duration in seconds after the processes running in the pod are sent a termination signal and the time when the processes are forcibly halted with a kill signal. 
Set this value longer than the expected cleanup time for your process. If this value is nil, the pod's terminationGracePeriodSeconds will be used. Otherwise, this value overrides the value provided by the pod spec. Value must be non-negative integer. The value zero indicates stop immediately via the kill signal (no opportunity to shut down). This is a beta field and requires enabling ProbeTerminationGracePeriod feature gate. Minimum value is 1. spec.terminationGracePeriodSeconds is used if unset. timeoutSeconds integer Number of seconds after which the probe times out. Defaults to 1 second. Minimum value is 1. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes 3.1.213. .spec.install.spec.deployments[].spec.template.spec.ephemeralContainers[].readinessProbe.exec Description Exec specifies the action to take. Type object Property Type Description command array (string) Command is the command line to execute inside the container, the working directory for the command is root ('/') in the container's filesystem. The command is simply exec'd, it is not run inside a shell, so traditional shell instructions ('|', etc) won't work. To use a shell, you need to explicitly call out to that shell. Exit status of 0 is treated as live/healthy and non-zero is unhealthy. 3.1.214. .spec.install.spec.deployments[].spec.template.spec.ephemeralContainers[].readinessProbe.grpc Description GRPC specifies an action involving a GRPC port. This is a beta field and requires enabling GRPCContainerProbe feature gate. Type object Required port Property Type Description port integer Port number of the gRPC service. Number must be in the range 1 to 65535. service string Service is the name of the service to place in the gRPC HealthCheckRequest (see https://github.com/grpc/grpc/blob/master/doc/health-checking.md ). If this is not specified, the default behavior is defined by gRPC. 3.1.215. .spec.install.spec.deployments[].spec.template.spec.ephemeralContainers[].readinessProbe.httpGet Description HTTPGet specifies the http request to perform. Type object Required port Property Type Description host string Host name to connect to, defaults to the pod IP. You probably want to set "Host" in httpHeaders instead. httpHeaders array Custom headers to set in the request. HTTP allows repeated headers. httpHeaders[] object HTTPHeader describes a custom header to be used in HTTP probes path string Path to access on the HTTP server. port integer-or-string Name or number of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. scheme string Scheme to use for connecting to the host. Defaults to HTTP. 3.1.216. .spec.install.spec.deployments[].spec.template.spec.ephemeralContainers[].readinessProbe.httpGet.httpHeaders Description Custom headers to set in the request. HTTP allows repeated headers. Type array 3.1.217. .spec.install.spec.deployments[].spec.template.spec.ephemeralContainers[].readinessProbe.httpGet.httpHeaders[] Description HTTPHeader describes a custom header to be used in HTTP probes Type object Required name value Property Type Description name string The header field name value string The header field value 3.1.218. .spec.install.spec.deployments[].spec.template.spec.ephemeralContainers[].readinessProbe.tcpSocket Description TCPSocket specifies an action involving a TCP port. Type object Required port Property Type Description host string Optional: Host name to connect to, defaults to the pod IP. 
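Because probes, ports, lifecycle hooks, and resources are all disallowed for ephemeral containers, a valid entry is typically minimal. The following sketch (the container names and image are hypothetical placeholders) shows the shape of an ephemeral debug container as it would be submitted through the pod's ephemeralcontainers subresource:

```yaml
# Hypothetical ephemeral debug container; names and image are placeholders.
ephemeralContainers:
- name: debugger                  # must be unique among all containers in the pod
  image: registry.example.com/tools/debug:latest
  targetContainerName: manager    # assumed target; shares its namespaces (IPC, PID, etc.)
  stdin: true                     # keep stdin open for an interactive session
  tty: true                       # requires stdin to be true
  # probe, port, lifecycle, and resources fields are intentionally absent:
  # the API rejects them for ephemeral containers
```

Assuming a recent kubectl, an equivalent one-liner would be kubectl debug -it <pod> --image=registry.example.com/tools/debug:latest --target=manager.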
port integer-or-string Number or name of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. 3.1.219. .spec.install.spec.deployments[].spec.template.spec.ephemeralContainers[].resources Description Resources are not allowed for ephemeral containers. Ephemeral containers use spare resources already allocated to the pod. Type object Property Type Description limits integer-or-string Limits describes the maximum amount of compute resources allowed. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ requests integer-or-string Requests describes the minimum amount of compute resources required. If Requests is omitted for a container, it defaults to Limits if that is explicitly specified, otherwise to an implementation-defined value. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ 3.1.220. .spec.install.spec.deployments[].spec.template.spec.ephemeralContainers[].securityContext Description Optional: SecurityContext defines the security options the ephemeral container should be run with. If set, the fields of SecurityContext override the equivalent fields of PodSecurityContext. Type object Property Type Description allowPrivilegeEscalation boolean AllowPrivilegeEscalation controls whether a process can gain more privileges than its parent process. This bool directly controls if the no_new_privs flag will be set on the container process. AllowPrivilegeEscalation is true always when the container is: 1) run as Privileged 2) has CAP_SYS_ADMIN Note that this field cannot be set when spec.os.name is windows. capabilities object The capabilities to add/drop when running containers. Defaults to the default set of capabilities granted by the container runtime. Note that this field cannot be set when spec.os.name is windows. privileged boolean Run container in privileged mode. Processes in privileged containers are essentially equivalent to root on the host. Defaults to false. Note that this field cannot be set when spec.os.name is windows. procMount string procMount denotes the type of proc mount to use for the containers. The default is DefaultProcMount which uses the container runtime defaults for readonly paths and masked paths. This requires the ProcMountType feature flag to be enabled. Note that this field cannot be set when spec.os.name is windows. readOnlyRootFilesystem boolean Whether this container has a read-only root filesystem. Default is false. Note that this field cannot be set when spec.os.name is windows. runAsGroup integer The GID to run the entrypoint of the container process. Uses runtime default if unset. May also be set in PodSecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. Note that this field cannot be set when spec.os.name is windows. runAsNonRoot boolean Indicates that the container must run as a non-root user. If true, the Kubelet will validate the image at runtime to ensure that it does not run as UID 0 (root) and fail to start the container if it does. If unset or false, no such validation will be performed. May also be set in PodSecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. runAsUser integer The UID to run the entrypoint of the container process. Defaults to user specified in image metadata if unspecified. May also be set in PodSecurityContext. 
If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. Note that this field cannot be set when spec.os.name is windows. seLinuxOptions object The SELinux context to be applied to the container. If unspecified, the container runtime will allocate a random SELinux context for each container. May also be set in PodSecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. Note that this field cannot be set when spec.os.name is windows. seccompProfile object The seccomp options to use by this container. If seccomp options are provided at both the pod & container level, the container options override the pod options. Note that this field cannot be set when spec.os.name is windows. windowsOptions object The Windows specific settings applied to all containers. If unspecified, the options from the PodSecurityContext will be used. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. Note that this field cannot be set when spec.os.name is linux. 3.1.221. .spec.install.spec.deployments[].spec.template.spec.ephemeralContainers[].securityContext.capabilities Description The capabilities to add/drop when running containers. Defaults to the default set of capabilities granted by the container runtime. Note that this field cannot be set when spec.os.name is windows. Type object Property Type Description add array (string) Added capabilities drop array (string) Removed capabilities 3.1.222. .spec.install.spec.deployments[].spec.template.spec.ephemeralContainers[].securityContext.seLinuxOptions Description The SELinux context to be applied to the container. If unspecified, the container runtime will allocate a random SELinux context for each container. May also be set in PodSecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. Note that this field cannot be set when spec.os.name is windows. Type object Property Type Description level string Level is SELinux level label that applies to the container. role string Role is a SELinux role label that applies to the container. type string Type is a SELinux type label that applies to the container. user string User is a SELinux user label that applies to the container. 3.1.223. .spec.install.spec.deployments[].spec.template.spec.ephemeralContainers[].securityContext.seccompProfile Description The seccomp options to use by this container. If seccomp options are provided at both the pod & container level, the container options override the pod options. Note that this field cannot be set when spec.os.name is windows. Type object Required type Property Type Description localhostProfile string localhostProfile indicates a profile defined in a file on the node should be used. The profile must be preconfigured on the node to work. Must be a descending path, relative to the kubelet's configured seccomp profile location. Must only be set if type is "Localhost". type string type indicates which kind of seccomp profile will be applied. Valid options are: Localhost - a profile defined in a file on the node should be used. RuntimeDefault - the container runtime default profile should be used. Unconfined - no profile should be applied. 3.1.224. .spec.install.spec.deployments[].spec.template.spec.ephemeralContainers[].securityContext.windowsOptions Description The Windows specific settings applied to all containers. 
If unspecified, the options from the PodSecurityContext will be used. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. Note that this field cannot be set when spec.os.name is linux. Type object Property Type Description gmsaCredentialSpec string GMSACredentialSpec is where the GMSA admission webhook ( https://github.com/kubernetes-sigs/windows-gmsa ) inlines the contents of the GMSA credential spec named by the GMSACredentialSpecName field. gmsaCredentialSpecName string GMSACredentialSpecName is the name of the GMSA credential spec to use. hostProcess boolean HostProcess determines if a container should be run as a 'Host Process' container. This field is alpha-level and will only be honored by components that enable the WindowsHostProcessContainers feature flag. Setting this field without the feature flag will result in errors when validating the Pod. All of a Pod's containers must have the same effective HostProcess value (it is not allowed to have a mix of HostProcess containers and non-HostProcess containers). In addition, if HostProcess is true then HostNetwork must also be set to true. runAsUserName string The UserName in Windows to run the entrypoint of the container process. Defaults to the user specified in image metadata if unspecified. May also be set in PodSecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. 3.1.225. .spec.install.spec.deployments[].spec.template.spec.ephemeralContainers[].startupProbe Description Probes are not allowed for ephemeral containers. Type object Property Type Description exec object Exec specifies the action to take. failureThreshold integer Minimum consecutive failures for the probe to be considered failed after having succeeded. Defaults to 3. Minimum value is 1. grpc object GRPC specifies an action involving a GRPC port. This is a beta field and requires enabling GRPCContainerProbe feature gate. httpGet object HTTPGet specifies the http request to perform. initialDelaySeconds integer Number of seconds after the container has started before liveness probes are initiated. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes periodSeconds integer How often (in seconds) to perform the probe. Default to 10 seconds. Minimum value is 1. successThreshold integer Minimum consecutive successes for the probe to be considered successful after having failed. Defaults to 1. Must be 1 for liveness and startup. Minimum value is 1. tcpSocket object TCPSocket specifies an action involving a TCP port. terminationGracePeriodSeconds integer Optional duration in seconds the pod needs to terminate gracefully upon probe failure. The grace period is the duration in seconds after the processes running in the pod are sent a termination signal and the time when the processes are forcibly halted with a kill signal. Set this value longer than the expected cleanup time for your process. If this value is nil, the pod's terminationGracePeriodSeconds will be used. Otherwise, this value overrides the value provided by the pod spec. Value must be non-negative integer. The value zero indicates stop immediately via the kill signal (no opportunity to shut down). This is a beta field and requires enabling ProbeTerminationGracePeriod feature gate. Minimum value is 1. spec.terminationGracePeriodSeconds is used if unset. timeoutSeconds integer Number of seconds after which the probe times out. 
Defaults to 1 second. Minimum value is 1. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes 3.1.226. .spec.install.spec.deployments[].spec.template.spec.ephemeralContainers[].startupProbe.exec Description Exec specifies the action to take. Type object Property Type Description command array (string) Command is the command line to execute inside the container, the working directory for the command is root ('/') in the container's filesystem. The command is simply exec'd, it is not run inside a shell, so traditional shell instructions ('|', etc) won't work. To use a shell, you need to explicitly call out to that shell. Exit status of 0 is treated as live/healthy and non-zero is unhealthy. 3.1.227. .spec.install.spec.deployments[].spec.template.spec.ephemeralContainers[].startupProbe.grpc Description GRPC specifies an action involving a GRPC port. This is a beta field and requires enabling GRPCContainerProbe feature gate. Type object Required port Property Type Description port integer Port number of the gRPC service. Number must be in the range 1 to 65535. service string Service is the name of the service to place in the gRPC HealthCheckRequest (see https://github.com/grpc/grpc/blob/master/doc/health-checking.md ). If this is not specified, the default behavior is defined by gRPC. 3.1.228. .spec.install.spec.deployments[].spec.template.spec.ephemeralContainers[].startupProbe.httpGet Description HTTPGet specifies the http request to perform. Type object Required port Property Type Description host string Host name to connect to, defaults to the pod IP. You probably want to set "Host" in httpHeaders instead. httpHeaders array Custom headers to set in the request. HTTP allows repeated headers. httpHeaders[] object HTTPHeader describes a custom header to be used in HTTP probes path string Path to access on the HTTP server. port integer-or-string Name or number of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. scheme string Scheme to use for connecting to the host. Defaults to HTTP. 3.1.229. .spec.install.spec.deployments[].spec.template.spec.ephemeralContainers[].startupProbe.httpGet.httpHeaders Description Custom headers to set in the request. HTTP allows repeated headers. Type array 3.1.230. .spec.install.spec.deployments[].spec.template.spec.ephemeralContainers[].startupProbe.httpGet.httpHeaders[] Description HTTPHeader describes a custom header to be used in HTTP probes Type object Required name value Property Type Description name string The header field name value string The header field value 3.1.231. .spec.install.spec.deployments[].spec.template.spec.ephemeralContainers[].startupProbe.tcpSocket Description TCPSocket specifies an action involving a TCP port. Type object Required port Property Type Description host string Optional: Host name to connect to, defaults to the pod IP. port integer-or-string Number or name of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. 3.1.232. .spec.install.spec.deployments[].spec.template.spec.ephemeralContainers[].volumeDevices Description volumeDevices is the list of block devices to be used by the container. Type array 3.1.233. .spec.install.spec.deployments[].spec.template.spec.ephemeralContainers[].volumeDevices[] Description volumeDevice describes a mapping of a raw block device within a container. 
Type object Required devicePath name Property Type Description devicePath string devicePath is the path inside of the container that the device will be mapped to. name string name must match the name of a persistentVolumeClaim in the pod 3.1.234. .spec.install.spec.deployments[].spec.template.spec.ephemeralContainers[].volumeMounts Description Pod volumes to mount into the container's filesystem. Subpath mounts are not allowed for ephemeral containers. Cannot be updated. Type array 3.1.235. .spec.install.spec.deployments[].spec.template.spec.ephemeralContainers[].volumeMounts[] Description VolumeMount describes a mounting of a Volume within a container. Type object Required mountPath name Property Type Description mountPath string Path within the container at which the volume should be mounted. Must not contain ':'. mountPropagation string mountPropagation determines how mounts are propagated from the host to container and the other way around. When not set, MountPropagationNone is used. This field is beta in 1.10. name string This must match the Name of a Volume. readOnly boolean Mounted read-only if true, read-write otherwise (false or unspecified). Defaults to false. subPath string Path within the volume from which the container's volume should be mounted. Defaults to "" (volume's root). subPathExpr string Expanded path within the volume from which the container's volume should be mounted. Behaves similarly to SubPath but environment variable references USD(VAR_NAME) are expanded using the container's environment. Defaults to "" (volume's root). SubPathExpr and SubPath are mutually exclusive. 3.1.236. .spec.install.spec.deployments[].spec.template.spec.hostAliases Description HostAliases is an optional list of hosts and IPs that will be injected into the pod's hosts file if specified. This is only valid for non-hostNetwork pods. Type array 3.1.237. .spec.install.spec.deployments[].spec.template.spec.hostAliases[] Description HostAlias holds the mapping between IP and hostnames that will be injected as an entry in the pod's hosts file. Type object Property Type Description hostnames array (string) Hostnames for the above IP address. ip string IP address of the host file entry. 3.1.238. .spec.install.spec.deployments[].spec.template.spec.imagePullSecrets Description ImagePullSecrets is an optional list of references to secrets in the same namespace to use for pulling any of the images used by this PodSpec. If specified, these secrets will be passed to individual puller implementations for them to use. More info: https://kubernetes.io/docs/concepts/containers/images#specifying-imagepullsecrets-on-a-pod Type array 3.1.239. .spec.install.spec.deployments[].spec.template.spec.imagePullSecrets[] Description LocalObjectReference contains enough information to let you locate the referenced object inside the same namespace. Type object Property Type Description name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? 3.1.240. .spec.install.spec.deployments[].spec.template.spec.initContainers Description List of initialization containers belonging to the pod. Init containers are executed in order prior to containers being started. If any init container fails, the pod is considered to have failed and is handled according to its restartPolicy. The name for an init container or normal container must be unique among all containers. 
Init containers may not have Lifecycle actions, Readiness probes, Liveness probes, or Startup probes. The resourceRequirements of an init container are taken into account during scheduling by finding the highest request/limit for each resource type, and then using the max of of that value or the sum of the normal containers. Limits are applied to init containers in a similar fashion. Init containers cannot currently be added or removed. Cannot be updated. More info: https://kubernetes.io/docs/concepts/workloads/pods/init-containers/ Type array 3.1.241. .spec.install.spec.deployments[].spec.template.spec.initContainers[] Description A single application container that you want to run within a pod. Type object Required name Property Type Description args array (string) Arguments to the entrypoint. The container image's CMD is used if this is not provided. Variable references USD(VAR_NAME) are expanded using the container's environment. If a variable cannot be resolved, the reference in the input string will be unchanged. Double are reduced to a single USD, which allows for escaping the USD(VAR_NAME) syntax: i.e. "(VAR_NAME)" will produce the string literal "USD(VAR_NAME)". Escaped references will never be expanded, regardless of whether the variable exists or not. Cannot be updated. More info: https://kubernetes.io/docs/tasks/inject-data-application/define-command-argument-container/#running-a-command-in-a-shell command array (string) Entrypoint array. Not executed within a shell. The container image's ENTRYPOINT is used if this is not provided. Variable references USD(VAR_NAME) are expanded using the container's environment. If a variable cannot be resolved, the reference in the input string will be unchanged. Double are reduced to a single USD, which allows for escaping the USD(VAR_NAME) syntax: i.e. "(VAR_NAME)" will produce the string literal "USD(VAR_NAME)". Escaped references will never be expanded, regardless of whether the variable exists or not. Cannot be updated. More info: https://kubernetes.io/docs/tasks/inject-data-application/define-command-argument-container/#running-a-command-in-a-shell env array List of environment variables to set in the container. Cannot be updated. env[] object EnvVar represents an environment variable present in a Container. envFrom array List of sources to populate environment variables in the container. The keys defined within a source must be a C_IDENTIFIER. All invalid keys will be reported as an event when the container is starting. When a key exists in multiple sources, the value associated with the last source will take precedence. Values defined by an Env with a duplicate key will take precedence. Cannot be updated. envFrom[] object EnvFromSource represents the source of a set of ConfigMaps image string Container image name. More info: https://kubernetes.io/docs/concepts/containers/images This field is optional to allow higher level config management to default or override container images in workload controllers like Deployments and StatefulSets. imagePullPolicy string Image pull policy. One of Always, Never, IfNotPresent. Defaults to Always if :latest tag is specified, or IfNotPresent otherwise. Cannot be updated. More info: https://kubernetes.io/docs/concepts/containers/images#updating-images lifecycle object Actions that the management system should take in response to container lifecycle events. Cannot be updated. livenessProbe object Periodic probe of container liveness. Container will be restarted if the probe fails. Cannot be updated. 
More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes name string Name of the container specified as a DNS_LABEL. Each container in a pod must have a unique name (DNS_LABEL). Cannot be updated. ports array List of ports to expose from the container. Not specifying a port here DOES NOT prevent that port from being exposed. Any port which is listening on the default "0.0.0.0" address inside a container will be accessible from the network. Modifying this array with strategic merge patch may corrupt the data. For more information, see https://github.com/kubernetes/kubernetes/issues/108255 . Cannot be updated. ports[] object ContainerPort represents a network port in a single container. readinessProbe object Periodic probe of container service readiness. Container will be removed from service endpoints if the probe fails. Cannot be updated. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes resources object Compute Resources required by this container. Cannot be updated. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ securityContext object SecurityContext defines the security options the container should be run with. If set, the fields of SecurityContext override the equivalent fields of PodSecurityContext. More info: https://kubernetes.io/docs/tasks/configure-pod-container/security-context/ startupProbe object StartupProbe indicates that the Pod has successfully initialized. If specified, no other probes are executed until this completes successfully. If this probe fails, the Pod will be restarted, just as if the livenessProbe failed. This can be used to provide different probe parameters at the beginning of a Pod's lifecycle, when it might take a long time to load data or warm a cache, than during steady-state operation. This cannot be updated. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes stdin boolean Whether this container should allocate a buffer for stdin in the container runtime. If this is not set, reads from stdin in the container will always result in EOF. Default is false. stdinOnce boolean Whether the container runtime should close the stdin channel after it has been opened by a single attach. When stdin is true the stdin stream will remain open across multiple attach sessions. If stdinOnce is set to true, stdin is opened on container start, is empty until the first client attaches to stdin, and then remains open and accepts data until the client disconnects, at which time stdin is closed and remains closed until the container is restarted. If this flag is false, a container process that reads from stdin will never receive an EOF. Default is false. terminationMessagePath string Optional: Path at which the file to which the container's termination message will be written is mounted into the container's filesystem. Message written is intended to be brief final status, such as an assertion failure message. Will be truncated by the node if greater than 4096 bytes. The total message length across all containers will be limited to 12kb. Defaults to /dev/termination-log. Cannot be updated. terminationMessagePolicy string Indicates how the termination message should be populated. File will use the contents of terminationMessagePath to populate the container status message on both success and failure.
FallbackToLogsOnError will use the last chunk of container log output if the termination message file is empty and the container exited with an error. The log output is limited to 2048 bytes or 80 lines, whichever is smaller. Defaults to File. Cannot be updated. tty boolean Whether this container should allocate a TTY for itself, also requires 'stdin' to be true. Default is false. volumeDevices array volumeDevices is the list of block devices to be used by the container. volumeDevices[] object volumeDevice describes a mapping of a raw block device within a container. volumeMounts array Pod volumes to mount into the container's filesystem. Cannot be updated. volumeMounts[] object VolumeMount describes a mounting of a Volume within a container. workingDir string Container's working directory. If not specified, the container runtime's default will be used, which might be configured in the container image. Cannot be updated. 3.1.242. .spec.install.spec.deployments[].spec.template.spec.initContainers[].env Description List of environment variables to set in the container. Cannot be updated. Type array 3.1.243. .spec.install.spec.deployments[].spec.template.spec.initContainers[].env[] Description EnvVar represents an environment variable present in a Container. Type object Required name Property Type Description name string Name of the environment variable. Must be a C_IDENTIFIER. value string Variable references $(VAR_NAME) are expanded using the previously defined environment variables in the container and any service environment variables. If a variable cannot be resolved, the reference in the input string will be unchanged. Double $$ are reduced to a single $, which allows for escaping the $(VAR_NAME) syntax: i.e. "$$(VAR_NAME)" will produce the string literal "$(VAR_NAME)". Escaped references will never be expanded, regardless of whether the variable exists or not. Defaults to "". valueFrom object Source for the environment variable's value. Cannot be used if value is not empty. 3.1.244. .spec.install.spec.deployments[].spec.template.spec.initContainers[].env[].valueFrom Description Source for the environment variable's value. Cannot be used if value is not empty. Type object Property Type Description configMapKeyRef object Selects a key of a ConfigMap. fieldRef object Selects a field of the pod: supports metadata.name, metadata.namespace, metadata.labels['<KEY>'], metadata.annotations['<KEY>'], spec.nodeName, spec.serviceAccountName, status.hostIP, status.podIP, status.podIPs. resourceFieldRef object Selects a resource of the container: only resources limits and requests (limits.cpu, limits.memory, limits.ephemeral-storage, requests.cpu, requests.memory and requests.ephemeral-storage) are currently supported. secretKeyRef object Selects a key of a secret in the pod's namespace 3.1.245. .spec.install.spec.deployments[].spec.template.spec.initContainers[].env[].valueFrom.configMapKeyRef Description Selects a key of a ConfigMap. Type object Required key Property Type Description key string The key to select. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names optional boolean Specify whether the ConfigMap or its key must be defined
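As an illustration of the valueFrom sources described above, the following hypothetical env entries draw one value from the downward API and one from a ConfigMap; the ConfigMap name and key are placeholders:

    env:
    - name: POD_NAMESPACE
      valueFrom:
        fieldRef:
          fieldPath: metadata.namespace   # one of the supported pod fields
    - name: LOG_LEVEL
      valueFrom:
        configMapKeyRef:
          name: app-config                # hypothetical ConfigMap in the pod's namespace
          key: log-level
          optional: true                  # container can start even if the key is absent

3.1.246.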
.spec.install.spec.deployments[].spec.template.spec.initContainers[].env[].valueFrom.fieldRef Description Selects a field of the pod: supports metadata.name, metadata.namespace, metadata.labels['<KEY>'], metadata.annotations['<KEY>'], spec.nodeName, spec.serviceAccountName, status.hostIP, status.podIP, status.podIPs. Type object Required fieldPath Property Type Description apiVersion string Version of the schema the FieldPath is written in terms of, defaults to "v1". fieldPath string Path of the field to select in the specified API version. 3.1.247. .spec.install.spec.deployments[].spec.template.spec.initContainers[].env[].valueFrom.resourceFieldRef Description Selects a resource of the container: only resources limits and requests (limits.cpu, limits.memory, limits.ephemeral-storage, requests.cpu, requests.memory and requests.ephemeral-storage) are currently supported. Type object Required resource Property Type Description containerName string Container name: required for volumes, optional for env vars divisor integer-or-string Specifies the output format of the exposed resources, defaults to "1" resource string Required: resource to select 3.1.248. .spec.install.spec.deployments[].spec.template.spec.initContainers[].env[].valueFrom.secretKeyRef Description Selects a key of a secret in the pod's namespace Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names optional boolean Specify whether the Secret or its key must be defined 3.1.249. .spec.install.spec.deployments[].spec.template.spec.initContainers[].envFrom Description List of sources to populate environment variables in the container. The keys defined within a source must be a C_IDENTIFIER. All invalid keys will be reported as an event when the container is starting. When a key exists in multiple sources, the value associated with the last source will take precedence. Values defined by an Env with a duplicate key will take precedence. Cannot be updated. Type array 3.1.250. .spec.install.spec.deployments[].spec.template.spec.initContainers[].envFrom[] Description EnvFromSource represents the source of a set of ConfigMaps Type object Property Type Description configMapRef object The ConfigMap to select from prefix string An optional identifier to prepend to each key in the ConfigMap. Must be a C_IDENTIFIER. secretRef object The Secret to select from 3.1.251. .spec.install.spec.deployments[].spec.template.spec.initContainers[].envFrom[].configMapRef Description The ConfigMap to select from Type object Property Type Description name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names optional boolean Specify whether the ConfigMap must be defined 3.1.252. .spec.install.spec.deployments[].spec.template.spec.initContainers[].envFrom[].secretRef Description The Secret to select from Type object Property Type Description name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names optional boolean Specify whether the Secret must be defined
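A short hypothetical envFrom sketch, importing every key from a ConfigMap (with a prefix) and from an optional Secret; both object names are placeholders:

    envFrom:
    - prefix: CFG_                 # optional; prepended to every imported key
      configMapRef:
        name: app-config           # hypothetical ConfigMap
    - secretRef:
        name: app-credentials      # hypothetical Secret
        optional: true             # tolerate the Secret being absent

3.1.253.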
.spec.install.spec.deployments[].spec.template.spec.initContainers[].lifecycle Description Actions that the management system should take in response to container lifecycle events. Cannot be updated. Type object Property Type Description postStart object PostStart is called immediately after a container is created. If the handler fails, the container is terminated and restarted according to its restart policy. Other management of the container blocks until the hook completes. More info: https://kubernetes.io/docs/concepts/containers/container-lifecycle-hooks/#container-hooks preStop object PreStop is called immediately before a container is terminated due to an API request or management event such as liveness/startup probe failure, preemption, resource contention, etc. The handler is not called if the container crashes or exits. The Pod's termination grace period countdown begins before the PreStop hook is executed. Regardless of the outcome of the handler, the container will eventually terminate within the Pod's termination grace period (unless delayed by finalizers). Other management of the container blocks until the hook completes or until the termination grace period is reached. More info: https://kubernetes.io/docs/concepts/containers/container-lifecycle-hooks/#container-hooks 3.1.254. .spec.install.spec.deployments[].spec.template.spec.initContainers[].lifecycle.postStart Description PostStart is called immediately after a container is created. If the handler fails, the container is terminated and restarted according to its restart policy. Other management of the container blocks until the hook completes. More info: https://kubernetes.io/docs/concepts/containers/container-lifecycle-hooks/#container-hooks Type object Property Type Description exec object Exec specifies the action to take. httpGet object HTTPGet specifies the http request to perform. tcpSocket object Deprecated. TCPSocket is NOT supported as a LifecycleHandler and is kept for backward compatibility. There is no validation of this field, and lifecycle hooks will fail at runtime when a tcp handler is specified. 3.1.255. .spec.install.spec.deployments[].spec.template.spec.initContainers[].lifecycle.postStart.exec Description Exec specifies the action to take. Type object Property Type Description command array (string) Command is the command line to execute inside the container, the working directory for the command is root ('/') in the container's filesystem. The command is simply exec'd, it is not run inside a shell, so traditional shell instructions ('|', etc) won't work. To use a shell, you need to explicitly call out to that shell. Exit status of 0 is treated as live/healthy and non-zero is unhealthy. 3.1.256. .spec.install.spec.deployments[].spec.template.spec.initContainers[].lifecycle.postStart.httpGet Description HTTPGet specifies the http request to perform. Type object Required port Property Type Description host string Host name to connect to, defaults to the pod IP. You probably want to set "Host" in httpHeaders instead. httpHeaders array Custom headers to set in the request. HTTP allows repeated headers. httpHeaders[] object HTTPHeader describes a custom header to be used in HTTP probes path string Path to access on the HTTP server. port integer-or-string Name or number of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. scheme string Scheme to use for connecting to the host. Defaults to HTTP.
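For illustration, a minimal lifecycle block combining an exec postStart hook with an httpGet preStop hook; the command, path, and port are hypothetical:

    lifecycle:
      postStart:
        exec:
          # exec'd directly, not run in a shell, so a shell is invoked explicitly
          command: ["/bin/sh", "-c", "echo started > /tmp/marker"]
      preStop:
        httpGet:
          path: /shutdown        # hypothetical drain endpoint
          port: 8080
          scheme: HTTP

3.1.257.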
.spec.install.spec.deployments[].spec.template.spec.initContainers[].lifecycle.postStart.httpGet.httpHeaders Description Custom headers to set in the request. HTTP allows repeated headers. Type array 3.1.258. .spec.install.spec.deployments[].spec.template.spec.initContainers[].lifecycle.postStart.httpGet.httpHeaders[] Description HTTPHeader describes a custom header to be used in HTTP probes Type object Required name value Property Type Description name string The header field name value string The header field value 3.1.259. .spec.install.spec.deployments[].spec.template.spec.initContainers[].lifecycle.postStart.tcpSocket Description Deprecated. TCPSocket is NOT supported as a LifecycleHandler and is kept for backward compatibility. There is no validation of this field, and lifecycle hooks will fail at runtime when a tcp handler is specified. Type object Required port Property Type Description host string Optional: Host name to connect to, defaults to the pod IP. port integer-or-string Number or name of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. 3.1.260. .spec.install.spec.deployments[].spec.template.spec.initContainers[].lifecycle.preStop Description PreStop is called immediately before a container is terminated due to an API request or management event such as liveness/startup probe failure, preemption, resource contention, etc. The handler is not called if the container crashes or exits. The Pod's termination grace period countdown begins before the PreStop hook is executed. Regardless of the outcome of the handler, the container will eventually terminate within the Pod's termination grace period (unless delayed by finalizers). Other management of the container blocks until the hook completes or until the termination grace period is reached. More info: https://kubernetes.io/docs/concepts/containers/container-lifecycle-hooks/#container-hooks Type object Property Type Description exec object Exec specifies the action to take. httpGet object HTTPGet specifies the http request to perform. tcpSocket object Deprecated. TCPSocket is NOT supported as a LifecycleHandler and is kept for backward compatibility. There is no validation of this field, and lifecycle hooks will fail at runtime when a tcp handler is specified. 3.1.261. .spec.install.spec.deployments[].spec.template.spec.initContainers[].lifecycle.preStop.exec Description Exec specifies the action to take. Type object Property Type Description command array (string) Command is the command line to execute inside the container, the working directory for the command is root ('/') in the container's filesystem. The command is simply exec'd, it is not run inside a shell, so traditional shell instructions ('|', etc) won't work. To use a shell, you need to explicitly call out to that shell. Exit status of 0 is treated as live/healthy and non-zero is unhealthy. 3.1.262. .spec.install.spec.deployments[].spec.template.spec.initContainers[].lifecycle.preStop.httpGet Description HTTPGet specifies the http request to perform. Type object Required port Property Type Description host string Host name to connect to, defaults to the pod IP. You probably want to set "Host" in httpHeaders instead. httpHeaders array Custom headers to set in the request. HTTP allows repeated headers. httpHeaders[] object HTTPHeader describes a custom header to be used in HTTP probes path string Path to access on the HTTP server. port integer-or-string Name or number of the port to access on the container.
Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. scheme string Scheme to use for connecting to the host. Defaults to HTTP. 3.1.263. .spec.install.spec.deployments[].spec.template.spec.initContainers[].lifecycle.preStop.httpGet.httpHeaders Description Custom headers to set in the request. HTTP allows repeated headers. Type array 3.1.264. .spec.install.spec.deployments[].spec.template.spec.initContainers[].lifecycle.preStop.httpGet.httpHeaders[] Description HTTPHeader describes a custom header to be used in HTTP probes Type object Required name value Property Type Description name string The header field name value string The header field value 3.1.265. .spec.install.spec.deployments[].spec.template.spec.initContainers[].lifecycle.preStop.tcpSocket Description Deprecated. TCPSocket is NOT supported as a LifecycleHandler and is kept for backward compatibility. There is no validation of this field, and lifecycle hooks will fail at runtime when a tcp handler is specified. Type object Required port Property Type Description host string Optional: Host name to connect to, defaults to the pod IP. port integer-or-string Number or name of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. 3.1.266. .spec.install.spec.deployments[].spec.template.spec.initContainers[].livenessProbe Description Periodic probe of container liveness. Container will be restarted if the probe fails. Cannot be updated. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes Type object Property Type Description exec object Exec specifies the action to take. failureThreshold integer Minimum consecutive failures for the probe to be considered failed after having succeeded. Defaults to 3. Minimum value is 1. grpc object GRPC specifies an action involving a GRPC port. This is a beta field and requires enabling GRPCContainerProbe feature gate. httpGet object HTTPGet specifies the http request to perform. initialDelaySeconds integer Number of seconds after the container has started before liveness probes are initiated. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes periodSeconds integer How often (in seconds) to perform the probe. Defaults to 10 seconds. Minimum value is 1. successThreshold integer Minimum consecutive successes for the probe to be considered successful after having failed. Defaults to 1. Must be 1 for liveness and startup. Minimum value is 1. tcpSocket object TCPSocket specifies an action involving a TCP port. terminationGracePeriodSeconds integer Optional duration in seconds the pod needs to terminate gracefully upon probe failure. The grace period is the duration in seconds between the time the processes running in the pod are sent a termination signal and the time when the processes are forcibly halted with a kill signal. Set this value longer than the expected cleanup time for your process. If this value is nil, the pod's terminationGracePeriodSeconds will be used. Otherwise, this value overrides the value provided by the pod spec. Value must be a non-negative integer. The value zero indicates stop immediately via the kill signal (no opportunity to shut down). This is a beta field and requires enabling ProbeTerminationGracePeriod feature gate. Minimum value is 1. spec.terminationGracePeriodSeconds is used if unset. timeoutSeconds integer Number of seconds after which the probe times out. Defaults to 1 second. Minimum value is 1.
More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes 3.1.267. .spec.install.spec.deployments[].spec.template.spec.initContainers[].livenessProbe.exec Description Exec specifies the action to take. Type object Property Type Description command array (string) Command is the command line to execute inside the container, the working directory for the command is root ('/') in the container's filesystem. The command is simply exec'd, it is not run inside a shell, so traditional shell instructions ('|', etc) won't work. To use a shell, you need to explicitly call out to that shell. Exit status of 0 is treated as live/healthy and non-zero is unhealthy. 3.1.268. .spec.install.spec.deployments[].spec.template.spec.initContainers[].livenessProbe.grpc Description GRPC specifies an action involving a GRPC port. This is a beta field and requires enabling GRPCContainerProbe feature gate. Type object Required port Property Type Description port integer Port number of the gRPC service. Number must be in the range 1 to 65535. service string Service is the name of the service to place in the gRPC HealthCheckRequest (see https://github.com/grpc/grpc/blob/master/doc/health-checking.md ). If this is not specified, the default behavior is defined by gRPC. 3.1.269. .spec.install.spec.deployments[].spec.template.spec.initContainers[].livenessProbe.httpGet Description HTTPGet specifies the http request to perform. Type object Required port Property Type Description host string Host name to connect to, defaults to the pod IP. You probably want to set "Host" in httpHeaders instead. httpHeaders array Custom headers to set in the request. HTTP allows repeated headers. httpHeaders[] object HTTPHeader describes a custom header to be used in HTTP probes path string Path to access on the HTTP server. port integer-or-string Name or number of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. scheme string Scheme to use for connecting to the host. Defaults to HTTP. 3.1.270. .spec.install.spec.deployments[].spec.template.spec.initContainers[].livenessProbe.httpGet.httpHeaders Description Custom headers to set in the request. HTTP allows repeated headers. Type array 3.1.271. .spec.install.spec.deployments[].spec.template.spec.initContainers[].livenessProbe.httpGet.httpHeaders[] Description HTTPHeader describes a custom header to be used in HTTP probes Type object Required name value Property Type Description name string The header field name value string The header field value 3.1.272. .spec.install.spec.deployments[].spec.template.spec.initContainers[].livenessProbe.tcpSocket Description TCPSocket specifies an action involving a TCP port. Type object Required port Property Type Description host string Optional: Host name to connect to, defaults to the pod IP. port integer-or-string Number or name of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME.
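To tie the probe fields together, the following is a minimal, hypothetical livenessProbe using the httpGet action and the common timing fields; the path, port, and header are placeholders:

    livenessProbe:
      httpGet:
        path: /healthz            # hypothetical health endpoint
        port: 8080
        httpHeaders:
        - name: X-Probe           # custom header; headers may repeat
          value: liveness
      initialDelaySeconds: 15     # wait before the first probe
      periodSeconds: 10           # probe every 10 seconds
      failureThreshold: 3         # restart after 3 consecutive failures

3.1.273. .spec.install.spec.deployments[].spec.template.spec.initContainers[].ports Description List of ports to expose from the container. Not specifying a port here DOES NOT prevent that port from being exposed. Any port which is listening on the default "0.0.0.0" address inside a container will be accessible from the network. Modifying this array with strategic merge patch may corrupt the data. For more information, see https://github.com/kubernetes/kubernetes/issues/108255 . Cannot be updated. Type array 3.1.274.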
.spec.install.spec.deployments[].spec.template.spec.initContainers[].ports[] Description ContainerPort represents a network port in a single container. Type object Required containerPort Property Type Description containerPort integer Number of port to expose on the pod's IP address. This must be a valid port number, 0 < x < 65536. hostIP string What host IP to bind the external port to. hostPort integer Number of port to expose on the host. If specified, this must be a valid port number, 0 < x < 65536. If HostNetwork is specified, this must match ContainerPort. Most containers do not need this. name string If specified, this must be an IANA_SVC_NAME and unique within the pod. Each named port in a pod must have a unique name. Name for the port that can be referred to by services. protocol string Protocol for port. Must be UDP, TCP, or SCTP. Defaults to "TCP". 3.1.275. .spec.install.spec.deployments[].spec.template.spec.initContainers[].readinessProbe Description Periodic probe of container service readiness. Container will be removed from service endpoints if the probe fails. Cannot be updated. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes Type object Property Type Description exec object Exec specifies the action to take. failureThreshold integer Minimum consecutive failures for the probe to be considered failed after having succeeded. Defaults to 3. Minimum value is 1. grpc object GRPC specifies an action involving a GRPC port. This is a beta field and requires enabling GRPCContainerProbe feature gate. httpGet object HTTPGet specifies the http request to perform. initialDelaySeconds integer Number of seconds after the container has started before liveness probes are initiated. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes periodSeconds integer How often (in seconds) to perform the probe. Defaults to 10 seconds. Minimum value is 1. successThreshold integer Minimum consecutive successes for the probe to be considered successful after having failed. Defaults to 1. Must be 1 for liveness and startup. Minimum value is 1. tcpSocket object TCPSocket specifies an action involving a TCP port. terminationGracePeriodSeconds integer Optional duration in seconds the pod needs to terminate gracefully upon probe failure. The grace period is the duration in seconds between the time the processes running in the pod are sent a termination signal and the time when the processes are forcibly halted with a kill signal. Set this value longer than the expected cleanup time for your process. If this value is nil, the pod's terminationGracePeriodSeconds will be used. Otherwise, this value overrides the value provided by the pod spec. Value must be a non-negative integer. The value zero indicates stop immediately via the kill signal (no opportunity to shut down). This is a beta field and requires enabling ProbeTerminationGracePeriod feature gate. Minimum value is 1. spec.terminationGracePeriodSeconds is used if unset. timeoutSeconds integer Number of seconds after which the probe times out. Defaults to 1 second. Minimum value is 1. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes 3.1.276. .spec.install.spec.deployments[].spec.template.spec.initContainers[].readinessProbe.exec Description Exec specifies the action to take.
Type object Property Type Description command array (string) Command is the command line to execute inside the container, the working directory for the command is root ('/') in the container's filesystem. The command is simply exec'd, it is not run inside a shell, so traditional shell instructions ('|', etc) won't work. To use a shell, you need to explicitly call out to that shell. Exit status of 0 is treated as live/healthy and non-zero is unhealthy. 3.1.277. .spec.install.spec.deployments[].spec.template.spec.initContainers[].readinessProbe.grpc Description GRPC specifies an action involving a GRPC port. This is a beta field and requires enabling GRPCContainerProbe feature gate. Type object Required port Property Type Description port integer Port number of the gRPC service. Number must be in the range 1 to 65535. service string Service is the name of the service to place in the gRPC HealthCheckRequest (see https://github.com/grpc/grpc/blob/master/doc/health-checking.md ). If this is not specified, the default behavior is defined by gRPC. 3.1.278. .spec.install.spec.deployments[].spec.template.spec.initContainers[].readinessProbe.httpGet Description HTTPGet specifies the http request to perform. Type object Required port Property Type Description host string Host name to connect to, defaults to the pod IP. You probably want to set "Host" in httpHeaders instead. httpHeaders array Custom headers to set in the request. HTTP allows repeated headers. httpHeaders[] object HTTPHeader describes a custom header to be used in HTTP probes path string Path to access on the HTTP server. port integer-or-string Name or number of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. scheme string Scheme to use for connecting to the host. Defaults to HTTP. 3.1.279. .spec.install.spec.deployments[].spec.template.spec.initContainers[].readinessProbe.httpGet.httpHeaders Description Custom headers to set in the request. HTTP allows repeated headers. Type array 3.1.280. .spec.install.spec.deployments[].spec.template.spec.initContainers[].readinessProbe.httpGet.httpHeaders[] Description HTTPHeader describes a custom header to be used in HTTP probes Type object Required name value Property Type Description name string The header field name value string The header field value 3.1.281. .spec.install.spec.deployments[].spec.template.spec.initContainers[].readinessProbe.tcpSocket Description TCPSocket specifies an action involving a TCP port. Type object Required port Property Type Description host string Optional: Host name to connect to, defaults to the pod IP. port integer-or-string Number or name of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. 3.1.282. .spec.install.spec.deployments[].spec.template.spec.initContainers[].resources Description Compute Resources required by this container. Cannot be updated. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ Type object Property Type Description limits integer-or-string Limits describes the maximum amount of compute resources allowed. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ requests integer-or-string Requests describes the minimum amount of compute resources required. If Requests is omitted for a container, it defaults to Limits if that is explicitly specified, otherwise to an implementation-defined value. 
More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/
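For context, a small hypothetical resources block as it would appear under a container; the quantities are placeholders, not recommended values:

    resources:
      requests:
        cpu: 100m          # minimum the scheduler reserves for the container
        memory: 128Mi
      limits:
        cpu: 500m          # maximum the container is allowed to consume
        memory: 256Mi

3.1.283. .spec.install.spec.deployments[].spec.template.spec.initContainers[].securityContext Description SecurityContext defines the security options the container should be run with. If set, the fields of SecurityContext override the equivalent fields of PodSecurityContext. More info: https://kubernetes.io/docs/tasks/configure-pod-container/security-context/ Type object Property Type Description allowPrivilegeEscalation boolean AllowPrivilegeEscalation controls whether a process can gain more privileges than its parent process. This bool directly controls if the no_new_privs flag will be set on the container process. AllowPrivilegeEscalation is always true when the container is: 1) run as Privileged 2) has CAP_SYS_ADMIN Note that this field cannot be set when spec.os.name is windows. capabilities object The capabilities to add/drop when running containers. Defaults to the default set of capabilities granted by the container runtime. Note that this field cannot be set when spec.os.name is windows. privileged boolean Run container in privileged mode. Processes in privileged containers are essentially equivalent to root on the host. Defaults to false. Note that this field cannot be set when spec.os.name is windows. procMount string procMount denotes the type of proc mount to use for the containers. The default is DefaultProcMount which uses the container runtime defaults for readonly paths and masked paths. This requires the ProcMountType feature flag to be enabled. Note that this field cannot be set when spec.os.name is windows. readOnlyRootFilesystem boolean Whether this container has a read-only root filesystem. Default is false. Note that this field cannot be set when spec.os.name is windows. runAsGroup integer The GID to run the entrypoint of the container process. Uses runtime default if unset. May also be set in PodSecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. Note that this field cannot be set when spec.os.name is windows. runAsNonRoot boolean Indicates that the container must run as a non-root user. If true, the Kubelet will validate the image at runtime to ensure that it does not run as UID 0 (root) and fail to start the container if it does. If unset or false, no such validation will be performed. May also be set in PodSecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. runAsUser integer The UID to run the entrypoint of the container process. Defaults to user specified in image metadata if unspecified. May also be set in PodSecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. Note that this field cannot be set when spec.os.name is windows. seLinuxOptions object The SELinux context to be applied to the container. If unspecified, the container runtime will allocate a random SELinux context for each container. May also be set in PodSecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. Note that this field cannot be set when spec.os.name is windows. seccompProfile object The seccomp options to use by this container. If seccomp options are provided at both the pod & container level, the container options override the pod options.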
Note that this field cannot be set when spec.os.name is windows. windowsOptions object The Windows specific settings applied to all containers. If unspecified, the options from the PodSecurityContext will be used. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. Note that this field cannot be set when spec.os.name is linux. 3.1.284. .spec.install.spec.deployments[].spec.template.spec.initContainers[].securityContext.capabilities Description The capabilities to add/drop when running containers. Defaults to the default set of capabilities granted by the container runtime. Note that this field cannot be set when spec.os.name is windows. Type object Property Type Description add array (string) Added capabilities drop array (string) Removed capabilities 3.1.285. .spec.install.spec.deployments[].spec.template.spec.initContainers[].securityContext.seLinuxOptions Description The SELinux context to be applied to the container. If unspecified, the container runtime will allocate a random SELinux context for each container. May also be set in PodSecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. Note that this field cannot be set when spec.os.name is windows. Type object Property Type Description level string Level is SELinux level label that applies to the container. role string Role is a SELinux role label that applies to the container. type string Type is a SELinux type label that applies to the container. user string User is a SELinux user label that applies to the container. 3.1.286. .spec.install.spec.deployments[].spec.template.spec.initContainers[].securityContext.seccompProfile Description The seccomp options to use by this container. If seccomp options are provided at both the pod & container level, the container options override the pod options. Note that this field cannot be set when spec.os.name is windows. Type object Required type Property Type Description localhostProfile string localhostProfile indicates a profile defined in a file on the node should be used. The profile must be preconfigured on the node to work. Must be a descending path, relative to the kubelet's configured seccomp profile location. Must only be set if type is "Localhost". type string type indicates which kind of seccomp profile will be applied. Valid options are: Localhost - a profile defined in a file on the node should be used. RuntimeDefault - the container runtime default profile should be used. Unconfined - no profile should be applied. 3.1.287. .spec.install.spec.deployments[].spec.template.spec.initContainers[].securityContext.windowsOptions Description The Windows specific settings applied to all containers. If unspecified, the options from the PodSecurityContext will be used. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. Note that this field cannot be set when spec.os.name is linux. Type object Property Type Description gmsaCredentialSpec string GMSACredentialSpec is where the GMSA admission webhook ( https://github.com/kubernetes-sigs/windows-gmsa ) inlines the contents of the GMSA credential spec named by the GMSACredentialSpecName field. gmsaCredentialSpecName string GMSACredentialSpecName is the name of the GMSA credential spec to use. hostProcess boolean HostProcess determines if a container should be run as a 'Host Process' container. 
This field is alpha-level and will only be honored by components that enable the WindowsHostProcessContainers feature flag. Setting this field without the feature flag will result in errors when validating the Pod. All of a Pod's containers must have the same effective HostProcess value (it is not allowed to have a mix of HostProcess containers and non-HostProcess containers). In addition, if HostProcess is true then HostNetwork must also be set to true. runAsUserName string The UserName in Windows to run the entrypoint of the container process. Defaults to the user specified in image metadata if unspecified. May also be set in PodSecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. 3.1.288. .spec.install.spec.deployments[].spec.template.spec.initContainers[].startupProbe Description StartupProbe indicates that the Pod has successfully initialized. If specified, no other probes are executed until this completes successfully. If this probe fails, the Pod will be restarted, just as if the livenessProbe failed. This can be used to provide different probe parameters at the beginning of a Pod's lifecycle, when it might take a long time to load data or warm a cache, than during steady-state operation. This cannot be updated. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes Type object Property Type Description exec object Exec specifies the action to take. failureThreshold integer Minimum consecutive failures for the probe to be considered failed after having succeeded. Defaults to 3. Minimum value is 1. grpc object GRPC specifies an action involving a GRPC port. This is a beta field and requires enabling GRPCContainerProbe feature gate. httpGet object HTTPGet specifies the http request to perform. initialDelaySeconds integer Number of seconds after the container has started before liveness probes are initiated. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes periodSeconds integer How often (in seconds) to perform the probe. Defaults to 10 seconds. Minimum value is 1. successThreshold integer Minimum consecutive successes for the probe to be considered successful after having failed. Defaults to 1. Must be 1 for liveness and startup. Minimum value is 1. tcpSocket object TCPSocket specifies an action involving a TCP port. terminationGracePeriodSeconds integer Optional duration in seconds the pod needs to terminate gracefully upon probe failure. The grace period is the duration in seconds between the time the processes running in the pod are sent a termination signal and the time when the processes are forcibly halted with a kill signal. Set this value longer than the expected cleanup time for your process. If this value is nil, the pod's terminationGracePeriodSeconds will be used. Otherwise, this value overrides the value provided by the pod spec. Value must be a non-negative integer. The value zero indicates stop immediately via the kill signal (no opportunity to shut down). This is a beta field and requires enabling ProbeTerminationGracePeriod feature gate. Minimum value is 1. spec.terminationGracePeriodSeconds is used if unset. timeoutSeconds integer Number of seconds after which the probe times out. Defaults to 1 second. Minimum value is 1. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes
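As a sketch, a hypothetical startupProbe that gives a slow-starting container up to five minutes before liveness checking takes over; the port and thresholds are placeholders:

    startupProbe:
      tcpSocket:
        port: 8080              # hypothetical application port
      periodSeconds: 10
      failureThreshold: 30      # up to 30 x 10s = 300s allowed for startup

3.1.289.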
.spec.install.spec.deployments[].spec.template.spec.initContainers[].startupProbe.exec Description Exec specifies the action to take. Type object Property Type Description command array (string) Command is the command line to execute inside the container, the working directory for the command is root ('/') in the container's filesystem. The command is simply exec'd, it is not run inside a shell, so traditional shell instructions ('|', etc) won't work. To use a shell, you need to explicitly call out to that shell. Exit status of 0 is treated as live/healthy and non-zero is unhealthy. 3.1.290. .spec.install.spec.deployments[].spec.template.spec.initContainers[].startupProbe.grpc Description GRPC specifies an action involving a GRPC port. This is a beta field and requires enabling GRPCContainerProbe feature gate. Type object Required port Property Type Description port integer Port number of the gRPC service. Number must be in the range 1 to 65535. service string Service is the name of the service to place in the gRPC HealthCheckRequest (see https://github.com/grpc/grpc/blob/master/doc/health-checking.md ). If this is not specified, the default behavior is defined by gRPC. 3.1.291. .spec.install.spec.deployments[].spec.template.spec.initContainers[].startupProbe.httpGet Description HTTPGet specifies the http request to perform. Type object Required port Property Type Description host string Host name to connect to, defaults to the pod IP. You probably want to set "Host" in httpHeaders instead. httpHeaders array Custom headers to set in the request. HTTP allows repeated headers. httpHeaders[] object HTTPHeader describes a custom header to be used in HTTP probes path string Path to access on the HTTP server. port integer-or-string Name or number of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. scheme string Scheme to use for connecting to the host. Defaults to HTTP. 3.1.292. .spec.install.spec.deployments[].spec.template.spec.initContainers[].startupProbe.httpGet.httpHeaders Description Custom headers to set in the request. HTTP allows repeated headers. Type array 3.1.293. .spec.install.spec.deployments[].spec.template.spec.initContainers[].startupProbe.httpGet.httpHeaders[] Description HTTPHeader describes a custom header to be used in HTTP probes Type object Required name value Property Type Description name string The header field name value string The header field value 3.1.294. .spec.install.spec.deployments[].spec.template.spec.initContainers[].startupProbe.tcpSocket Description TCPSocket specifies an action involving a TCP port. Type object Required port Property Type Description host string Optional: Host name to connect to, defaults to the pod IP. port integer-or-string Number or name of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. 3.1.295. .spec.install.spec.deployments[].spec.template.spec.initContainers[].volumeDevices Description volumeDevices is the list of block devices to be used by the container. Type array 3.1.296. .spec.install.spec.deployments[].spec.template.spec.initContainers[].volumeDevices[] Description volumeDevice describes a mapping of a raw block device within a container. Type object Required devicePath name Property Type Description devicePath string devicePath is the path inside of the container that the device will be mapped to. name string name must match the name of a persistentVolumeClaim in the pod 3.1.297. 
.spec.install.spec.deployments[].spec.template.spec.initContainers[].volumeMounts Description Pod volumes to mount into the container's filesystem. Cannot be updated. Type array 3.1.298. .spec.install.spec.deployments[].spec.template.spec.initContainers[].volumeMounts[] Description VolumeMount describes a mounting of a Volume within a container. Type object Required mountPath name Property Type Description mountPath string Path within the container at which the volume should be mounted. Must not contain ':'. mountPropagation string mountPropagation determines how mounts are propagated from the host to container and the other way around. When not set, MountPropagationNone is used. This field is beta in 1.10. name string This must match the Name of a Volume. readOnly boolean Mounted read-only if true, read-write otherwise (false or unspecified). Defaults to false. subPath string Path within the volume from which the container's volume should be mounted. Defaults to "" (volume's root). subPathExpr string Expanded path within the volume from which the container's volume should be mounted. Behaves similarly to SubPath but environment variable references $(VAR_NAME) are expanded using the container's environment. Defaults to "" (volume's root). SubPathExpr and SubPath are mutually exclusive. 3.1.299. .spec.install.spec.deployments[].spec.template.spec.os Description Specifies the OS of the containers in the pod. Some pod and container fields are restricted if this is set. If the OS field is set to linux, the following fields must be unset: - securityContext.windowsOptions If the OS field is set to windows, the following fields must be unset: - spec.hostPID - spec.hostIPC - spec.hostUsers - spec.securityContext.seLinuxOptions - spec.securityContext.seccompProfile - spec.securityContext.fsGroup - spec.securityContext.fsGroupChangePolicy - spec.securityContext.sysctls - spec.shareProcessNamespace - spec.securityContext.runAsUser - spec.securityContext.runAsGroup - spec.securityContext.supplementalGroups - spec.containers[*].securityContext.seLinuxOptions - spec.containers[*].securityContext.seccompProfile - spec.containers[*].securityContext.capabilities - spec.containers[*].securityContext.readOnlyRootFilesystem - spec.containers[*].securityContext.privileged - spec.containers[*].securityContext.allowPrivilegeEscalation - spec.containers[*].securityContext.procMount - spec.containers[*].securityContext.runAsUser - spec.containers[*].securityContext.runAsGroup Type object Required name Property Type Description name string Name is the name of the operating system. The currently supported values are linux and windows. Additional values may be defined in the future and can be one of: https://github.com/opencontainers/runtime-spec/blob/master/config.md#platform-specific-configuration Clients should expect to handle additional values and treat unrecognized values in this field as os: null
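A minimal, hypothetical sketch of the os field in a pod template; setting it to linux simply declares the pod's OS and restricts the windows-only fields listed above:

    spec:
      template:
        spec:
          os:
            name: linux    # declares the pod OS; windowsOptions must then be unset

3.1.300. .spec.install.spec.deployments[].spec.template.spec.readinessGates Description If specified, all readiness gates will be evaluated for pod readiness. A pod is ready when all its containers are ready AND all conditions specified in the readiness gates have status equal to "True". More info: https://git.k8s.io/enhancements/keps/sig-network/580-pod-readiness-gates Type array 3.1.301.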
.spec.install.spec.deployments[].spec.template.spec.readinessGates[] Description PodReadinessGate contains the reference to a pod condition Type object Required conditionType Property Type Description conditionType string ConditionType refers to a condition in the pod's condition list with matching type. 3.1.302. .spec.install.spec.deployments[].spec.template.spec.securityContext Description SecurityContext holds pod-level security attributes and common container settings. Optional: Defaults to empty. See type description for default values of each field. Type object Property Type Description fsGroup integer A special supplemental group that applies to all containers in a pod. Some volume types allow the Kubelet to change the ownership of that volume to be owned by the pod: 1. The owning GID will be the FSGroup 2. The setgid bit is set (new files created in the volume will be owned by FSGroup) 3. The permission bits are OR'd with rw-rw---- If unset, the Kubelet will not modify the ownership and permissions of any volume. Note that this field cannot be set when spec.os.name is windows. fsGroupChangePolicy string fsGroupChangePolicy defines behavior of changing ownership and permission of the volume before being exposed inside Pod. This field will only apply to volume types which support fsGroup based ownership (and permissions). It will have no effect on ephemeral volume types such as: secret, configmaps and emptydir. Valid values are "OnRootMismatch" and "Always". If not specified, "Always" is used. Note that this field cannot be set when spec.os.name is windows. runAsGroup integer The GID to run the entrypoint of the container process. Uses runtime default if unset. May also be set in SecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence for that container. Note that this field cannot be set when spec.os.name is windows. runAsNonRoot boolean Indicates that the container must run as a non-root user. If true, the Kubelet will validate the image at runtime to ensure that it does not run as UID 0 (root) and fail to start the container if it does. If unset or false, no such validation will be performed. May also be set in SecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. runAsUser integer The UID to run the entrypoint of the container process. Defaults to user specified in image metadata if unspecified. May also be set in SecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence for that container. Note that this field cannot be set when spec.os.name is windows. seLinuxOptions object The SELinux context to be applied to all containers. If unspecified, the container runtime will allocate a random SELinux context for each container. May also be set in SecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence for that container. Note that this field cannot be set when spec.os.name is windows. seccompProfile object The seccomp options to use by the containers in this pod. Note that this field cannot be set when spec.os.name is windows. supplementalGroups array (integer) A list of groups applied to the first process run in each container, in addition to the container's primary GID. If unspecified, no groups will be added to any container. Note that this field cannot be set when spec.os.name is windows.
sysctls array Sysctls hold a list of namespaced sysctls used for the pod. Pods with unsupported sysctls (by the container runtime) might fail to launch. Note that this field cannot be set when spec.os.name is windows. sysctls[] object Sysctl defines a kernel parameter to be set windowsOptions object The Windows specific settings applied to all containers. If unspecified, the options within a container's SecurityContext will be used. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. Note that this field cannot be set when spec.os.name is linux. 3.1.303. .spec.install.spec.deployments[].spec.template.spec.securityContext.seLinuxOptions Description The SELinux context to be applied to all containers. If unspecified, the container runtime will allocate a random SELinux context for each container. May also be set in SecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence for that container. Note that this field cannot be set when spec.os.name is windows. Type object Property Type Description level string Level is SELinux level label that applies to the container. role string Role is a SELinux role label that applies to the container. type string Type is a SELinux type label that applies to the container. user string User is a SELinux user label that applies to the container. 3.1.304. .spec.install.spec.deployments[].spec.template.spec.securityContext.seccompProfile Description The seccomp options to use by the containers in this pod. Note that this field cannot be set when spec.os.name is windows. Type object Required type Property Type Description localhostProfile string localhostProfile indicates a profile defined in a file on the node should be used. The profile must be preconfigured on the node to work. Must be a descending path, relative to the kubelet's configured seccomp profile location. Must only be set if type is "Localhost". type string type indicates which kind of seccomp profile will be applied. Valid options are: Localhost - a profile defined in a file on the node should be used. RuntimeDefault - the container runtime default profile should be used. Unconfined - no profile should be applied. 3.1.305. .spec.install.spec.deployments[].spec.template.spec.securityContext.sysctls Description Sysctls hold a list of namespaced sysctls used for the pod. Pods with unsupported sysctls (by the container runtime) might fail to launch. Note that this field cannot be set when spec.os.name is windows. Type array 3.1.306. .spec.install.spec.deployments[].spec.template.spec.securityContext.sysctls[] Description Sysctl defines a kernel parameter to be set Type object Required name value Property Type Description name string Name of a property to set value string Value of a property to set
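For illustration, a hypothetical pod-level securityContext combining the fields above; the GID, profile, and sysctl values are placeholders:

    securityContext:
      runAsNonRoot: true
      fsGroup: 2000                           # volume ownership adjusted to this GID where supported
      seccompProfile:
        type: RuntimeDefault                  # use the container runtime's default profile
      sysctls:
      - name: net.ipv4.ip_local_port_range    # namespaced sysctl; value is illustrative
        value: "32768 60999"

3.1.307. .spec.install.spec.deployments[].spec.template.spec.securityContext.windowsOptions Description The Windows specific settings applied to all containers. If unspecified, the options within a container's SecurityContext will be used. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. Note that this field cannot be set when spec.os.name is linux. Type object Property Type Description gmsaCredentialSpec string GMSACredentialSpec is where the GMSA admission webhook ( https://github.com/kubernetes-sigs/windows-gmsa ) inlines the contents of the GMSA credential spec named by the GMSACredentialSpecName field.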
gmsaCredentialSpecName string GMSACredentialSpecName is the name of the GMSA credential spec to use. hostProcess boolean HostProcess determines if a container should be run as a 'Host Process' container. This field is alpha-level and will only be honored by components that enable the WindowsHostProcessContainers feature flag. Setting this field without the feature flag will result in errors when validating the Pod. All of a Pod's containers must have the same effective HostProcess value (it is not allowed to have a mix of HostProcess containers and non-HostProcess containers). In addition, if HostProcess is true then HostNetwork must also be set to true. runAsUserName string The UserName in Windows to run the entrypoint of the container process. Defaults to the user specified in image metadata if unspecified. May also be set in PodSecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. 3.1.308. .spec.install.spec.deployments[].spec.template.spec.tolerations Description If specified, the pod's tolerations. Type array 3.1.309. .spec.install.spec.deployments[].spec.template.spec.tolerations[] Description The pod this Toleration is attached to tolerates any taint that matches the triple <key,value,effect> using the matching operator <operator>. Type object Property Type Description effect string Effect indicates the taint effect to match. Empty means match all taint effects. When specified, allowed values are NoSchedule, PreferNoSchedule and NoExecute. key string Key is the taint key that the toleration applies to. Empty means match all taint keys. If the key is empty, operator must be Exists; this combination means to match all values and all keys. operator string Operator represents a key's relationship to the value. Valid operators are Exists and Equal. Defaults to Equal. Exists is equivalent to wildcard for value, so that a pod can tolerate all taints of a particular category. tolerationSeconds integer TolerationSeconds represents the period of time the toleration (which must be of effect NoExecute, otherwise this field is ignored) tolerates the taint. By default, it is not set, which means tolerate the taint forever (do not evict). Zero and negative values will be treated as 0 (evict immediately) by the system. value string Value is the taint value the toleration matches to. If the operator is Exists, the value should be empty, otherwise just a regular string.
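A brief hypothetical tolerations sketch showing both operators and a bounded NoExecute toleration; the taint key and value are placeholders:

    tolerations:
    - key: "dedicated"               # hypothetical taint key
      operator: "Equal"
      value: "infra"
      effect: "NoSchedule"
    - key: "node.kubernetes.io/not-ready"
      operator: "Exists"             # wildcard for value
      effect: "NoExecute"
      tolerationSeconds: 300         # evict 300s after the taint appears

3.1.310. .spec.install.spec.deployments[].spec.template.spec.topologySpreadConstraints Description TopologySpreadConstraints describes how a group of pods ought to spread across topology domains. The scheduler will schedule pods in a way that abides by the constraints. All topologySpreadConstraints are ANDed. Type array 3.1.311. .spec.install.spec.deployments[].spec.template.spec.topologySpreadConstraints[] Description TopologySpreadConstraint specifies how to spread matching pods among the given topology. Type object Required maxSkew topologyKey whenUnsatisfiable Property Type Description labelSelector object LabelSelector is used to find matching pods. Pods that match this label selector are counted to determine the number of pods in their corresponding topology domain. matchLabelKeys array (string) MatchLabelKeys is a set of pod label keys to select the pods over which spreading will be calculated.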
3.1.310. .spec.install.spec.deployments[].spec.template.spec.topologySpreadConstraints Description TopologySpreadConstraints describes how a group of pods ought to spread across topology domains. Scheduler will schedule pods in a way which abides by the constraints. All topologySpreadConstraints are ANDed. Type array 3.1.311. .spec.install.spec.deployments[].spec.template.spec.topologySpreadConstraints[] Description TopologySpreadConstraint specifies how to spread matching pods among the given topology. Type object Required maxSkew topologyKey whenUnsatisfiable Property Type Description labelSelector object LabelSelector is used to find matching pods. Pods that match this label selector are counted to determine the number of pods in their corresponding topology domain. matchLabelKeys array (string) MatchLabelKeys is a set of pod label keys to select the pods over which spreading will be calculated. The keys are used to look up values from the incoming pod labels, and those key-value labels are ANDed with labelSelector to select the group of existing pods over which spreading will be calculated for the incoming pod. Keys that don't exist in the incoming pod labels will be ignored. A null or empty list means only match against labelSelector. maxSkew integer MaxSkew describes the degree to which pods may be unevenly distributed. When whenUnsatisfiable=DoNotSchedule, it is the maximum permitted difference between the number of matching pods in the target topology and the global minimum. The global minimum is the minimum number of matching pods in an eligible domain, or zero if the number of eligible domains is less than MinDomains. For example, in a 3-zone cluster, MaxSkew is set to 1, and pods with the same labelSelector spread as 2/2/1: In this case, the global minimum is 1. | zone1 | zone2 | zone3 | | P P | P P | P | - if MaxSkew is 1, the incoming pod can only be scheduled to zone3 to become 2/2/2; scheduling it onto zone1(zone2) would make the ActualSkew(3-1) on zone1(zone2) violate MaxSkew(1). - if MaxSkew is 2, the incoming pod can be scheduled onto any zone. When whenUnsatisfiable=ScheduleAnyway, it is used to give higher precedence to topologies that satisfy it. It's a required field. Default value is 1 and 0 is not allowed. minDomains integer MinDomains indicates a minimum number of eligible domains. When the number of eligible domains with matching topology keys is less than minDomains, Pod Topology Spread treats "global minimum" as 0, and then the calculation of Skew is performed. When the number of eligible domains with matching topology keys is equal to or greater than minDomains, this value has no effect on scheduling. As a result, when the number of eligible domains is less than minDomains, the scheduler won't schedule more than maxSkew Pods to those domains. If value is nil, the constraint behaves as if MinDomains is equal to 1. Valid values are integers greater than 0. When value is not nil, WhenUnsatisfiable must be DoNotSchedule. For example, in a 3-zone cluster, MaxSkew is set to 2, MinDomains is set to 5, and pods with the same labelSelector spread as 2/2/2: | zone1 | zone2 | zone3 | | P P | P P | P P | The number of domains is less than 5 (MinDomains), so the "global minimum" is treated as 0. In this situation, a new pod with the same labelSelector cannot be scheduled, because the computed skew would be 3 (3 - 0) if the new pod were scheduled to any of the three zones, which would violate MaxSkew. This is a beta field and requires the MinDomainsInPodTopologySpread feature gate to be enabled (enabled by default). nodeAffinityPolicy string NodeAffinityPolicy indicates how we will treat Pod's nodeAffinity/nodeSelector when calculating pod topology spread skew. Options are: - Honor: only nodes matching nodeAffinity/nodeSelector are included in the calculations. - Ignore: nodeAffinity/nodeSelector are ignored. All nodes are included in the calculations. If this value is nil, the behavior is equivalent to the Honor policy. This is an alpha-level feature enabled by the NodeInclusionPolicyInPodTopologySpread feature flag. nodeTaintsPolicy string NodeTaintsPolicy indicates how we will treat node taints when calculating pod topology spread skew. Options are: - Honor: nodes without taints, along with tainted nodes for which the incoming pod has a toleration, are included. - Ignore: node taints are ignored. All nodes are included. If this value is nil, the behavior is equivalent to the Ignore policy.
This is an alpha-level feature enabled by the NodeInclusionPolicyInPodTopologySpread feature flag. topologyKey string TopologyKey is the key of node labels. Nodes that have a label with this key and identical values are considered to be in the same topology. We consider each <key, value> as a "bucket", and try to put a balanced number of pods into each bucket. We define a domain as a particular instance of a topology. Also, we define an eligible domain as a domain whose nodes meet the requirements of nodeAffinityPolicy and nodeTaintsPolicy. e.g. If TopologyKey is "kubernetes.io/hostname", each Node is a domain of that topology. And, if TopologyKey is "topology.kubernetes.io/zone", each zone is a domain of that topology. It's a required field. whenUnsatisfiable string WhenUnsatisfiable indicates how to deal with a pod if it doesn't satisfy the spread constraint. - DoNotSchedule (default) tells the scheduler not to schedule it. - ScheduleAnyway tells the scheduler to schedule the pod in any location, but giving higher precedence to topologies that would help reduce the skew. A constraint is considered "Unsatisfiable" for an incoming pod if and only if every possible node assignment for that pod would violate "MaxSkew" on some topology. For example, in a 3-zone cluster, MaxSkew is set to 1, and pods with the same labelSelector spread as 3/1/1: | zone1 | zone2 | zone3 | | P P P | P | P | If WhenUnsatisfiable is set to DoNotSchedule, the incoming pod can only be scheduled to zone2(zone3) to become 3/2/1(3/1/2) as ActualSkew(2-1) on zone2(zone3) satisfies MaxSkew(1). In other words, the cluster can still be imbalanced, but the scheduler won't make it more imbalanced. It's a required field. 3.1.312. .spec.install.spec.deployments[].spec.template.spec.topologySpreadConstraints[].labelSelector Description LabelSelector is used to find matching pods. Pods that match this label selector are counted to determine the number of pods in their corresponding topology domain. Type object Property Type Description matchExpressions array matchExpressions is a list of label selector requirements. The requirements are ANDed. matchExpressions[] object A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. matchLabels object (string) matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed. 3.1.313. .spec.install.spec.deployments[].spec.template.spec.topologySpreadConstraints[].labelSelector.matchExpressions Description matchExpressions is a list of label selector requirements. The requirements are ANDed. Type array 3.1.314. .spec.install.spec.deployments[].spec.template.spec.topologySpreadConstraints[].labelSelector.matchExpressions[] Description A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Type object Required key operator Property Type Description key string key is the label key that the selector applies to. operator string operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. values array (string) values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch.
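Putting the preceding constraint fields together, a minimal sketch of a single topology spread constraint; the app: example label is hypothetical:

    topologySpreadConstraints:
    - maxSkew: 1
      topologyKey: topology.kubernetes.io/zone
      whenUnsatisfiable: ScheduleAnyway
      labelSelector:
        matchLabels:
          app: example        # hypothetical pod label

With ScheduleAnyway, the scheduler prefers zones that keep the skew within 1 but will still place the pod if none qualify.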
3.1.315. .spec.install.spec.deployments[].spec.template.spec.volumes Description List of volumes that can be mounted by containers belonging to the pod. More info: https://kubernetes.io/docs/concepts/storage/volumes Type array 3.1.316. .spec.install.spec.deployments[].spec.template.spec.volumes[] Description Volume represents a named volume in a pod that may be accessed by any container in the pod. Type object Required name Property Type Description awsElasticBlockStore object awsElasticBlockStore represents an AWS Disk resource that is attached to a kubelet's host machine and then exposed to the pod. More info: https://kubernetes.io/docs/concepts/storage/volumes#awselasticblockstore azureDisk object azureDisk represents an Azure Data Disk mount on the host and bind mount to the pod. azureFile object azureFile represents an Azure File Service mount on the host and bind mount to the pod. cephfs object cephFS represents a Ceph FS mount on the host that shares a pod's lifetime cinder object cinder represents a cinder volume attached and mounted on kubelets host machine. More info: https://examples.k8s.io/mysql-cinder-pd/README.md configMap object configMap represents a configMap that should populate this volume csi object csi (Container Storage Interface) represents ephemeral storage that is handled by certain external CSI drivers (Beta feature). downwardAPI object downwardAPI represents downward API about the pod that should populate this volume emptyDir object emptyDir represents a temporary directory that shares a pod's lifetime. More info: https://kubernetes.io/docs/concepts/storage/volumes#emptydir ephemeral object ephemeral represents a volume that is handled by a cluster storage driver. The volume's lifecycle is tied to the pod that defines it - it will be created before the pod starts, and deleted when the pod is removed. Use this if: a) the volume is only needed while the pod runs, b) features of normal volumes like restoring from snapshot or capacity tracking are needed, c) the storage driver is specified through a storage class, and d) the storage driver supports dynamic volume provisioning through a PersistentVolumeClaim (see EphemeralVolumeSource for more information on the connection between this volume type and PersistentVolumeClaim). Use PersistentVolumeClaim or one of the vendor-specific APIs for volumes that persist for longer than the lifecycle of an individual pod. Use CSI for light-weight local ephemeral volumes if the CSI driver is meant to be used that way - see the documentation of the driver for more information. A pod can use both types of ephemeral volumes and persistent volumes at the same time. fc object fc represents a Fibre Channel resource that is attached to a kubelet's host machine and then exposed to the pod. flexVolume object flexVolume represents a generic volume resource that is provisioned/attached using an exec based plugin. flocker object flocker represents a Flocker volume attached to a kubelet's host machine. This depends on the Flocker control service being running gcePersistentDisk object gcePersistentDisk represents a GCE Disk resource that is attached to a kubelet's host machine and then exposed to the pod. More info: https://kubernetes.io/docs/concepts/storage/volumes#gcepersistentdisk gitRepo object gitRepo represents a git repository at a particular revision. DEPRECATED: GitRepo is deprecated.
To provision a container with a git repo, mount an EmptyDir into an InitContainer that clones the repo using git, then mount the EmptyDir into the Pod's container. glusterfs object glusterfs represents a Glusterfs mount on the host that shares a pod's lifetime. More info: https://examples.k8s.io/volumes/glusterfs/README.md hostPath object hostPath represents a pre-existing file or directory on the host machine that is directly exposed to the container. This is generally used for system agents or other privileged things that are allowed to see the host machine. Most containers will NOT need this. More info: https://kubernetes.io/docs/concepts/storage/volumes#hostpath --- TODO(jonesdl) We need to restrict who can use host directory mounts and who can/can not mount host directories as read/write. iscsi object iscsi represents an ISCSI Disk resource that is attached to a kubelet's host machine and then exposed to the pod. More info: https://examples.k8s.io/volumes/iscsi/README.md name string name of the volume. Must be a DNS_LABEL and unique within the pod. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names nfs object nfs represents an NFS mount on the host that shares a pod's lifetime More info: https://kubernetes.io/docs/concepts/storage/volumes#nfs persistentVolumeClaim object persistentVolumeClaimVolumeSource represents a reference to a PersistentVolumeClaim in the same namespace. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#persistentvolumeclaims photonPersistentDisk object photonPersistentDisk represents a PhotonController persistent disk attached and mounted on kubelets host machine portworxVolume object portworxVolume represents a portworx volume attached and mounted on kubelets host machine projected object projected items for all in one resources secrets, configmaps, and downward API quobyte object quobyte represents a Quobyte mount on the host that shares a pod's lifetime rbd object rbd represents a Rados Block Device mount on the host that shares a pod's lifetime. More info: https://examples.k8s.io/volumes/rbd/README.md scaleIO object scaleIO represents a ScaleIO persistent volume attached and mounted on Kubernetes nodes. secret object secret represents a secret that should populate this volume. More info: https://kubernetes.io/docs/concepts/storage/volumes#secret storageos object storageOS represents a StorageOS volume attached and mounted on Kubernetes nodes. vsphereVolume object vsphereVolume represents a vSphere volume attached and mounted on kubelets host machine 3.1.317. .spec.install.spec.deployments[].spec.template.spec.volumes[].awsElasticBlockStore Description awsElasticBlockStore represents an AWS Disk resource that is attached to a kubelet's host machine and then exposed to the pod. More info: https://kubernetes.io/docs/concepts/storage/volumes#awselasticblockstore Type object Required volumeID Property Type Description fsType string fsType is the filesystem type of the volume that you want to mount. Tip: Ensure that the filesystem type is supported by the host operating system. Examples: "ext4", "xfs", "ntfs". Implicitly inferred to be "ext4" if unspecified. More info: https://kubernetes.io/docs/concepts/storage/volumes#awselasticblockstore TODO: how do we prevent errors in the filesystem from compromising the machine partition integer partition is the partition in the volume that you want to mount. If omitted, the default is to mount by volume name. 
Examples: For volume /dev/sda1, you specify the partition as "1". Similarly, the volume partition for /dev/sda is "0" (or you can leave the property empty). readOnly boolean readOnly value true will force the readOnly setting in VolumeMounts. More info: https://kubernetes.io/docs/concepts/storage/volumes#awselasticblockstore volumeID string volumeID is unique ID of the persistent disk resource in AWS (Amazon EBS volume). More info: https://kubernetes.io/docs/concepts/storage/volumes#awselasticblockstore 3.1.318. .spec.install.spec.deployments[].spec.template.spec.volumes[].azureDisk Description azureDisk represents an Azure Data Disk mount on the host and bind mount to the pod. Type object Required diskName diskURI Property Type Description cachingMode string cachingMode is the Host Caching mode: None, Read Only, Read Write. diskName string diskName is the Name of the data disk in the blob storage diskURI string diskURI is the URI of data disk in the blob storage fsType string fsType is Filesystem type to mount. Must be a filesystem type supported by the host operating system. Ex. "ext4", "xfs", "ntfs". Implicitly inferred to be "ext4" if unspecified. kind string kind expected values are Shared: multiple blob disks per storage account Dedicated: single blob disk per storage account Managed: azure managed data disk (only in managed availability set). defaults to shared readOnly boolean readOnly Defaults to false (read/write). ReadOnly here will force the ReadOnly setting in VolumeMounts. 3.1.319. .spec.install.spec.deployments[].spec.template.spec.volumes[].azureFile Description azureFile represents an Azure File Service mount on the host and bind mount to the pod. Type object Required secretName shareName Property Type Description readOnly boolean readOnly defaults to false (read/write). ReadOnly here will force the ReadOnly setting in VolumeMounts. secretName string secretName is the name of secret that contains Azure Storage Account Name and Key shareName string shareName is the azure share Name 3.1.320. .spec.install.spec.deployments[].spec.template.spec.volumes[].cephfs Description cephFS represents a Ceph FS mount on the host that shares a pod's lifetime Type object Required monitors Property Type Description monitors array (string) monitors is Required: Monitors is a collection of Ceph monitors More info: https://examples.k8s.io/volumes/cephfs/README.md#how-to-use-it path string path is Optional: Used as the mounted root, rather than the full Ceph tree, default is / readOnly boolean readOnly is Optional: Defaults to false (read/write). ReadOnly here will force the ReadOnly setting in VolumeMounts. More info: https://examples.k8s.io/volumes/cephfs/README.md#how-to-use-it secretFile string secretFile is Optional: SecretFile is the path to key ring for User, default is /etc/ceph/user.secret More info: https://examples.k8s.io/volumes/cephfs/README.md#how-to-use-it secretRef object secretRef is Optional: SecretRef is reference to the authentication secret for User, default is empty. More info: https://examples.k8s.io/volumes/cephfs/README.md#how-to-use-it user string user is optional: User is the rados user name, default is admin More info: https://examples.k8s.io/volumes/cephfs/README.md#how-to-use-it 3.1.321. .spec.install.spec.deployments[].spec.template.spec.volumes[].cephfs.secretRef Description secretRef is Optional: SecretRef is reference to the authentication secret for User, default is empty. 
More info: https://examples.k8s.io/volumes/cephfs/README.md#how-to-use-it Type object Property Type Description name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? 3.1.322. .spec.install.spec.deployments[].spec.template.spec.volumes[].cinder Description cinder represents a cinder volume attached and mounted on kubelets host machine. More info: https://examples.k8s.io/mysql-cinder-pd/README.md Type object Required volumeID Property Type Description fsType string fsType is the filesystem type to mount. Must be a filesystem type supported by the host operating system. Examples: "ext4", "xfs", "ntfs". Implicitly inferred to be "ext4" if unspecified. More info: https://examples.k8s.io/mysql-cinder-pd/README.md readOnly boolean readOnly defaults to false (read/write). ReadOnly here will force the ReadOnly setting in VolumeMounts. More info: https://examples.k8s.io/mysql-cinder-pd/README.md secretRef object secretRef is optional: points to a secret object containing parameters used to connect to OpenStack. volumeID string volumeID used to identify the volume in cinder. More info: https://examples.k8s.io/mysql-cinder-pd/README.md 3.1.323. .spec.install.spec.deployments[].spec.template.spec.volumes[].cinder.secretRef Description secretRef is optional: points to a secret object containing parameters used to connect to OpenStack. Type object Property Type Description name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? 3.1.324. .spec.install.spec.deployments[].spec.template.spec.volumes[].configMap Description configMap represents a configMap that should populate this volume Type object Property Type Description defaultMode integer defaultMode is optional: mode bits used to set permissions on created files by default. Must be an octal value between 0000 and 0777 or a decimal value between 0 and 511. YAML accepts both octal and decimal values, JSON requires decimal values for mode bits. Defaults to 0644. Directories within the path are not affected by this setting. This might be in conflict with other options that affect the file mode, like fsGroup, and the result can be other mode bits set. items array items if unspecified, each key-value pair in the Data field of the referenced ConfigMap will be projected into the volume as a file whose name is the key and content is the value. If specified, the listed keys will be projected into the specified paths, and unlisted keys will not be present. If a key is specified which is not present in the ConfigMap, the volume setup will error unless it is marked optional. Paths must be relative and may not contain the '..' path or start with '..'. items[] object Maps a string key to a path within a volume. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean optional specify whether the ConfigMap or its keys must be defined
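For instance, the configMap volume source documented in section 3.1.324 might be used as follows; a minimal sketch in which the ConfigMap name and key are hypothetical:

    volumes:
    - name: config
      configMap:
        name: example-config        # hypothetical ConfigMap in the pod's namespace
        defaultMode: 0644
        items:
        - key: settings.yaml        # hypothetical key in the ConfigMap's Data field
          path: settings.yaml

Only the listed key is projected; any other keys in the ConfigMap are omitted from the volume.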
3.1.325. .spec.install.spec.deployments[].spec.template.spec.volumes[].configMap.items Description items if unspecified, each key-value pair in the Data field of the referenced ConfigMap will be projected into the volume as a file whose name is the key and content is the value. If specified, the listed keys will be projected into the specified paths, and unlisted keys will not be present. If a key is specified which is not present in the ConfigMap, the volume setup will error unless it is marked optional. Paths must be relative and may not contain the '..' path or start with '..'. Type array 3.1.326. .spec.install.spec.deployments[].spec.template.spec.volumes[].configMap.items[] Description Maps a string key to a path within a volume. Type object Required key path Property Type Description key string key is the key to project. mode integer mode is Optional: mode bits used to set permissions on this file. Must be an octal value between 0000 and 0777 or a decimal value between 0 and 511. YAML accepts both octal and decimal values, JSON requires decimal values for mode bits. If not specified, the volume defaultMode will be used. This might be in conflict with other options that affect the file mode, like fsGroup, and the result can be other mode bits set. path string path is the relative path of the file to map the key to. May not be an absolute path. May not contain the path element '..'. May not start with the string '..'. 3.1.327. .spec.install.spec.deployments[].spec.template.spec.volumes[].csi Description csi (Container Storage Interface) represents ephemeral storage that is handled by certain external CSI drivers (Beta feature). Type object Required driver Property Type Description driver string driver is the name of the CSI driver that handles this volume. Consult with your admin for the correct name as registered in the cluster. fsType string fsType to mount. Ex. "ext4", "xfs", "ntfs". If not provided, the empty value is passed to the associated CSI driver which will determine the default filesystem to apply. nodePublishSecretRef object nodePublishSecretRef is a reference to the secret object containing sensitive information to pass to the CSI driver to complete the CSI NodePublishVolume and NodeUnpublishVolume calls. This field is optional, and may be empty if no secret is required. If the secret object contains more than one secret, all secret references are passed. readOnly boolean readOnly specifies a read-only configuration for the volume. Defaults to false (read/write). volumeAttributes object (string) volumeAttributes stores driver-specific properties that are passed to the CSI driver. Consult your driver's documentation for supported values. 3.1.328. .spec.install.spec.deployments[].spec.template.spec.volumes[].csi.nodePublishSecretRef Description nodePublishSecretRef is a reference to the secret object containing sensitive information to pass to the CSI driver to complete the CSI NodePublishVolume and NodeUnpublishVolume calls. This field is optional, and may be empty if no secret is required. If the secret object contains more than one secret, all secret references are passed. Type object Property Type Description name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid?
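As an illustration of the csi volume source and its nodePublishSecretRef, a minimal sketch, assuming a CSI driver that supports inline ephemeral volumes; the driver name, attribute, and Secret name are all hypothetical:

    volumes:
    - name: inline-csi
      csi:
        driver: csi.example.com            # hypothetical driver registered in the cluster
        fsType: ext4
        volumeAttributes:
          size: 1Gi                        # driver-specific property, hypothetical
        nodePublishSecretRef:
          name: csi-publish-secret         # hypothetical Secret passed to NodePublishVolume

Whether such attributes are honored, and whether a publish secret is required at all, depends entirely on the driver's documentation.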
3.1.329. .spec.install.spec.deployments[].spec.template.spec.volumes[].downwardAPI Description downwardAPI represents downward API about the pod that should populate this volume Type object Property Type Description defaultMode integer Optional: mode bits used to set permissions on created files by default. Must be an octal value between 0000 and 0777 or a decimal value between 0 and 511. YAML accepts both octal and decimal values, JSON requires decimal values for mode bits. Defaults to 0644. Directories within the path are not affected by this setting. This might be in conflict with other options that affect the file mode, like fsGroup, and the result can be other mode bits set. items array Items is a list of downward API volume files items[] object DownwardAPIVolumeFile represents information to create the file containing the pod field 3.1.330. .spec.install.spec.deployments[].spec.template.spec.volumes[].downwardAPI.items Description Items is a list of downward API volume files Type array 3.1.331. .spec.install.spec.deployments[].spec.template.spec.volumes[].downwardAPI.items[] Description DownwardAPIVolumeFile represents information to create the file containing the pod field Type object Required path Property Type Description fieldRef object Required: Selects a field of the pod: only annotations, labels, name and namespace are supported. mode integer Optional: mode bits used to set permissions on this file, must be an octal value between 0000 and 0777 or a decimal value between 0 and 511. YAML accepts both octal and decimal values, JSON requires decimal values for mode bits. If not specified, the volume defaultMode will be used. This might be in conflict with other options that affect the file mode, like fsGroup, and the result can be other mode bits set. path string Required: Path is the relative path name of the file to be created. Must not be absolute or contain the '..' path. Must be utf-8 encoded. The first item of the relative path must not start with '..' resourceFieldRef object Selects a resource of the container: only resources limits and requests (limits.cpu, limits.memory, requests.cpu and requests.memory) are currently supported. 3.1.332. .spec.install.spec.deployments[].spec.template.spec.volumes[].downwardAPI.items[].fieldRef Description Required: Selects a field of the pod: only annotations, labels, name and namespace are supported. Type object Required fieldPath Property Type Description apiVersion string Version of the schema the FieldPath is written in terms of, defaults to "v1". fieldPath string Path of the field to select in the specified API version. 3.1.333. .spec.install.spec.deployments[].spec.template.spec.volumes[].downwardAPI.items[].resourceFieldRef Description Selects a resource of the container: only resources limits and requests (limits.cpu, limits.memory, requests.cpu and requests.memory) are currently supported. Type object Required resource Property Type Description containerName string Container name: required for volumes, optional for env vars divisor integer-or-string Specifies the output format of the exposed resources, defaults to "1" resource string Required: resource to select 3.1.334. .spec.install.spec.deployments[].spec.template.spec.volumes[].emptyDir Description emptyDir represents a temporary directory that shares a pod's lifetime. More info: https://kubernetes.io/docs/concepts/storage/volumes#emptydir Type object Property Type Description medium string medium represents what type of storage medium should back this directory. The default is "" which means to use the node's default medium. Must be an empty string (default) or Memory. More info: https://kubernetes.io/docs/concepts/storage/volumes#emptydir sizeLimit integer-or-string sizeLimit is the total amount of local storage required for this EmptyDir volume. The size limit is also applicable for memory medium.
The maximum usage on memory medium EmptyDir would be the minimum value between the SizeLimit specified here and the sum of memory limits of all containers in a pod. The default is nil which means that the limit is undefined. More info: http://kubernetes.io/docs/user-guide/volumes#emptydir 3.1.335. .spec.install.spec.deployments[].spec.template.spec.volumes[].ephemeral Description ephemeral represents a volume that is handled by a cluster storage driver. The volume's lifecycle is tied to the pod that defines it - it will be created before the pod starts, and deleted when the pod is removed. Use this if: a) the volume is only needed while the pod runs, b) features of normal volumes like restoring from snapshot or capacity tracking are needed, c) the storage driver is specified through a storage class, and d) the storage driver supports dynamic volume provisioning through a PersistentVolumeClaim (see EphemeralVolumeSource for more information on the connection between this volume type and PersistentVolumeClaim). Use PersistentVolumeClaim or one of the vendor-specific APIs for volumes that persist for longer than the lifecycle of an individual pod. Use CSI for light-weight local ephemeral volumes if the CSI driver is meant to be used that way - see the documentation of the driver for more information. A pod can use both types of ephemeral volumes and persistent volumes at the same time. Type object Property Type Description volumeClaimTemplate object Will be used to create a stand-alone PVC to provision the volume. The pod in which this EphemeralVolumeSource is embedded will be the owner of the PVC, i.e. the PVC will be deleted together with the pod. The name of the PVC will be <pod name>-<volume name> where <volume name> is the name from the PodSpec.Volumes array entry. Pod validation will reject the pod if the concatenated name is not valid for a PVC (for example, too long). An existing PVC with that name that is not owned by the pod will not be used for the pod to avoid using an unrelated volume by mistake. Starting the pod is then blocked until the unrelated PVC is removed. If such a pre-created PVC is meant to be used by the pod, the PVC has to be updated with an owner reference to the pod once the pod exists. Normally this should not be necessary, but it may be useful when manually reconstructing a broken cluster. This field is read-only and no changes will be made by Kubernetes to the PVC after it has been created. Required, must not be nil. 3.1.336. .spec.install.spec.deployments[].spec.template.spec.volumes[].ephemeral.volumeClaimTemplate Description Will be used to create a stand-alone PVC to provision the volume. The pod in which this EphemeralVolumeSource is embedded will be the owner of the PVC, i.e. the PVC will be deleted together with the pod. The name of the PVC will be <pod name>-<volume name> where <volume name> is the name from the PodSpec.Volumes array entry. Pod validation will reject the pod if the concatenated name is not valid for a PVC (for example, too long). An existing PVC with that name that is not owned by the pod will not be used for the pod to avoid using an unrelated volume by mistake. Starting the pod is then blocked until the unrelated PVC is removed. If such a pre-created PVC is meant to be used by the pod, the PVC has to be updated with an owner reference to the pod once the pod exists. Normally this should not be necessary, but it may be useful when manually reconstructing a broken cluster.
This field is read-only and no changes will be made by Kubernetes to the PVC after it has been created. Required, must not be nil. Type object Required spec Property Type Description metadata object May contain labels and annotations that will be copied into the PVC when creating it. No other fields are allowed and will be rejected during validation. spec object The specification for the PersistentVolumeClaim. The entire content is copied unchanged into the PVC that gets created from this template. The same fields as in a PersistentVolumeClaim are also valid here. 3.1.337. .spec.install.spec.deployments[].spec.template.spec.volumes[].ephemeral.volumeClaimTemplate.metadata Description May contain labels and annotations that will be copied into the PVC when creating it. No other fields are allowed and will be rejected during validation. Type object 3.1.338. .spec.install.spec.deployments[].spec.template.spec.volumes[].ephemeral.volumeClaimTemplate.spec Description The specification for the PersistentVolumeClaim. The entire content is copied unchanged into the PVC that gets created from this template. The same fields as in a PersistentVolumeClaim are also valid here. Type object Property Type Description accessModes array (string) accessModes contains the desired access modes the volume should have. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#access-modes-1 dataSource object dataSource field can be used to specify either: * An existing VolumeSnapshot object (snapshot.storage.k8s.io/VolumeSnapshot) * An existing PVC (PersistentVolumeClaim) If the provisioner or an external controller can support the specified data source, it will create a new volume based on the contents of the specified data source. If the AnyVolumeDataSource feature gate is enabled, this field will always have the same contents as the DataSourceRef field. dataSourceRef object dataSourceRef specifies the object from which to populate the volume with data, if a non-empty volume is desired. This may be any local object from a non-empty API group (non core object) or a PersistentVolumeClaim object. When this field is specified, volume binding will only succeed if the type of the specified object matches some installed volume populator or dynamic provisioner. This field will replace the functionality of the DataSource field and as such if both fields are non-empty, they must have the same value. For backwards compatibility, both fields (DataSource and DataSourceRef) will be set to the same value automatically if one of them is empty and the other is non-empty. There are two important differences between DataSource and DataSourceRef: * While DataSource only allows two specific types of objects, DataSourceRef allows any non-core object, as well as PersistentVolumeClaim objects. * While DataSource ignores disallowed values (dropping them), DataSourceRef preserves all values, and generates an error if a disallowed value is specified. (Beta) Using this field requires the AnyVolumeDataSource feature gate to be enabled. resources object resources represents the minimum resources the volume should have. If RecoverVolumeExpansionFailure feature is enabled users are allowed to specify resource requirements that are lower than the previous value but must still be higher than capacity recorded in the status field of the claim. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#resources selector object selector is a label query over volumes to consider for binding. storageClassName string storageClassName is the name of the StorageClass required by the claim. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#class-1 volumeMode string volumeMode defines what type of volume is required by the claim. Value of Filesystem is implied when not included in claim spec. volumeName string volumeName is the binding reference to the PersistentVolume backing this claim.
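Tying the volumeClaimTemplate fields above together, a minimal sketch of a generic ephemeral volume; the StorageClass name is hypothetical:

    volumes:
    - name: scratch
      ephemeral:
        volumeClaimTemplate:
          spec:
            accessModes: ["ReadWriteOnce"]
            storageClassName: example-sc     # hypothetical StorageClass
            resources:
              requests:
                storage: 1Gi

The resulting PVC would be named <pod name>-scratch, owned by the pod, and deleted with it.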
3.1.339. .spec.install.spec.deployments[].spec.template.spec.volumes[].ephemeral.volumeClaimTemplate.spec.dataSource Description dataSource field can be used to specify either: * An existing VolumeSnapshot object (snapshot.storage.k8s.io/VolumeSnapshot) * An existing PVC (PersistentVolumeClaim) If the provisioner or an external controller can support the specified data source, it will create a new volume based on the contents of the specified data source. If the AnyVolumeDataSource feature gate is enabled, this field will always have the same contents as the DataSourceRef field. Type object Required kind name Property Type Description apiGroup string APIGroup is the group for the resource being referenced. If APIGroup is not specified, the specified Kind must be in the core API group. For any other third-party types, APIGroup is required. kind string Kind is the type of resource being referenced name string Name is the name of resource being referenced 3.1.340. .spec.install.spec.deployments[].spec.template.spec.volumes[].ephemeral.volumeClaimTemplate.spec.dataSourceRef Description dataSourceRef specifies the object from which to populate the volume with data, if a non-empty volume is desired. This may be any local object from a non-empty API group (non core object) or a PersistentVolumeClaim object. When this field is specified, volume binding will only succeed if the type of the specified object matches some installed volume populator or dynamic provisioner. This field will replace the functionality of the DataSource field and as such if both fields are non-empty, they must have the same value. For backwards compatibility, both fields (DataSource and DataSourceRef) will be set to the same value automatically if one of them is empty and the other is non-empty. There are two important differences between DataSource and DataSourceRef: * While DataSource only allows two specific types of objects, DataSourceRef allows any non-core object, as well as PersistentVolumeClaim objects. * While DataSource ignores disallowed values (dropping them), DataSourceRef preserves all values, and generates an error if a disallowed value is specified. (Beta) Using this field requires the AnyVolumeDataSource feature gate to be enabled. Type object Required kind name Property Type Description apiGroup string APIGroup is the group for the resource being referenced. If APIGroup is not specified, the specified Kind must be in the core API group. For any other third-party types, APIGroup is required. kind string Kind is the type of resource being referenced name string Name is the name of resource being referenced 3.1.341. .spec.install.spec.deployments[].spec.template.spec.volumes[].ephemeral.volumeClaimTemplate.spec.resources Description resources represents the minimum resources the volume should have. If RecoverVolumeExpansionFailure feature is enabled users are allowed to specify resource requirements that are lower than the previous value but must still be higher than capacity recorded in the status field of the claim.
More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#resources Type object Property Type Description limits integer-or-string Limits describes the maximum amount of compute resources allowed. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ requests integer-or-string Requests describes the minimum amount of compute resources required. If Requests is omitted for a container, it defaults to Limits if that is explicitly specified, otherwise to an implementation-defined value. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ 3.1.342. .spec.install.spec.deployments[].spec.template.spec.volumes[].ephemeral.volumeClaimTemplate.spec.selector Description selector is a label query over volumes to consider for binding. Type object Property Type Description matchExpressions array matchExpressions is a list of label selector requirements. The requirements are ANDed. matchExpressions[] object A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. matchLabels object (string) matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed. 3.1.343. .spec.install.spec.deployments[].spec.template.spec.volumes[].ephemeral.volumeClaimTemplate.spec.selector.matchExpressions Description matchExpressions is a list of label selector requirements. The requirements are ANDed. Type array 3.1.344. .spec.install.spec.deployments[].spec.template.spec.volumes[].ephemeral.volumeClaimTemplate.spec.selector.matchExpressions[] Description A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Type object Required key operator Property Type Description key string key is the label key that the selector applies to. operator string operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. values array (string) values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch. 3.1.345. .spec.install.spec.deployments[].spec.template.spec.volumes[].fc Description fc represents a Fibre Channel resource that is attached to a kubelet's host machine and then exposed to the pod. Type object Property Type Description fsType string fsType is the filesystem type to mount. Must be a filesystem type supported by the host operating system. Ex. "ext4", "xfs", "ntfs". Implicitly inferred to be "ext4" if unspecified. TODO: how do we prevent errors in the filesystem from compromising the machine lun integer lun is Optional: FC target lun number readOnly boolean readOnly is Optional: Defaults to false (read/write). ReadOnly here will force the ReadOnly setting in VolumeMounts. targetWWNs array (string) targetWWNs is Optional: FC target worldwide names (WWNs) wwids array (string) wwids Optional: FC volume world wide identifiers (wwids) Either wwids or a combination of targetWWNs and lun must be set, but not both simultaneously.
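Because wwids and the targetWWNs/lun pair are mutually exclusive, an fc volume takes one of two shapes; a minimal sketch with hypothetical identifiers:

    volumes:
    - name: fc-data
      fc:
        targetWWNs: ["500a0981891b8dc5"]   # hypothetical target WWN
        lun: 0
        fsType: ext4

or, selecting the device by world wide identifier instead:

    volumes:
    - name: fc-data
      fc:
        wwids: ["3600508b400105e210000900000490000"]   # hypothetical wwid
        fsType: ext4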
3.1.346. .spec.install.spec.deployments[].spec.template.spec.volumes[].flexVolume Description flexVolume represents a generic volume resource that is provisioned/attached using an exec based plugin. Type object Required driver Property Type Description driver string driver is the name of the driver to use for this volume. fsType string fsType is the filesystem type to mount. Must be a filesystem type supported by the host operating system. Ex. "ext4", "xfs", "ntfs". The default filesystem depends on FlexVolume script. options object (string) options is Optional: this field holds extra command options if any. readOnly boolean readOnly is Optional: defaults to false (read/write). ReadOnly here will force the ReadOnly setting in VolumeMounts. secretRef object secretRef is Optional: secretRef is reference to the secret object containing sensitive information to pass to the plugin scripts. This may be empty if no secret object is specified. If the secret object contains more than one secret, all secrets are passed to the plugin scripts. 3.1.347. .spec.install.spec.deployments[].spec.template.spec.volumes[].flexVolume.secretRef Description secretRef is Optional: secretRef is reference to the secret object containing sensitive information to pass to the plugin scripts. This may be empty if no secret object is specified. If the secret object contains more than one secret, all secrets are passed to the plugin scripts. Type object Property Type Description name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? 3.1.348. .spec.install.spec.deployments[].spec.template.spec.volumes[].flocker Description flocker represents a Flocker volume attached to a kubelet's host machine. This depends on the Flocker control service being running Type object Property Type Description datasetName string datasetName is the name of the dataset stored as metadata. The name on the dataset for Flocker should be considered deprecated. datasetUUID string datasetUUID is the UUID of the dataset. This is the unique identifier of a Flocker dataset 3.1.349. .spec.install.spec.deployments[].spec.template.spec.volumes[].gcePersistentDisk Description gcePersistentDisk represents a GCE Disk resource that is attached to a kubelet's host machine and then exposed to the pod. More info: https://kubernetes.io/docs/concepts/storage/volumes#gcepersistentdisk Type object Required pdName Property Type Description fsType string fsType is filesystem type of the volume that you want to mount. Tip: Ensure that the filesystem type is supported by the host operating system. Examples: "ext4", "xfs", "ntfs". Implicitly inferred to be "ext4" if unspecified. More info: https://kubernetes.io/docs/concepts/storage/volumes#gcepersistentdisk TODO: how do we prevent errors in the filesystem from compromising the machine partition integer partition is the partition in the volume that you want to mount. If omitted, the default is to mount by volume name. Examples: For volume /dev/sda1, you specify the partition as "1". Similarly, the volume partition for /dev/sda is "0" (or you can leave the property empty). More info: https://kubernetes.io/docs/concepts/storage/volumes#gcepersistentdisk pdName string pdName is unique name of the PD resource in GCE. Used to identify the disk in GCE. More info: https://kubernetes.io/docs/concepts/storage/volumes#gcepersistentdisk readOnly boolean readOnly here will force the ReadOnly setting in VolumeMounts. Defaults to false. More info: https://kubernetes.io/docs/concepts/storage/volumes#gcepersistentdisk
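A minimal sketch of the gcePersistentDisk source described in section 3.1.349; the disk name is hypothetical, and the disk must already exist in GCE before the pod starts:

    volumes:
    - name: gce-data
      gcePersistentDisk:
        pdName: example-disk     # hypothetical PD resource name in GCE
        fsType: ext4
        readOnly: true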
3.1.350. .spec.install.spec.deployments[].spec.template.spec.volumes[].gitRepo Description gitRepo represents a git repository at a particular revision. DEPRECATED: GitRepo is deprecated. To provision a container with a git repo, mount an EmptyDir into an InitContainer that clones the repo using git, then mount the EmptyDir into the Pod's container. Type object Required repository Property Type Description directory string directory is the target directory name. Must not contain or start with '..'. If '.' is supplied, the volume directory will be the git repository. Otherwise, if specified, the volume will contain the git repository in the subdirectory with the given name. repository string repository is the URL revision string revision is the commit hash for the specified revision. 3.1.351. .spec.install.spec.deployments[].spec.template.spec.volumes[].glusterfs Description glusterfs represents a Glusterfs mount on the host that shares a pod's lifetime. More info: https://examples.k8s.io/volumes/glusterfs/README.md Type object Required endpoints path Property Type Description endpoints string endpoints is the endpoint name that details Glusterfs topology. More info: https://examples.k8s.io/volumes/glusterfs/README.md#create-a-pod path string path is the Glusterfs volume path. More info: https://examples.k8s.io/volumes/glusterfs/README.md#create-a-pod readOnly boolean readOnly here will force the Glusterfs volume to be mounted with read-only permissions. Defaults to false. More info: https://examples.k8s.io/volumes/glusterfs/README.md#create-a-pod 3.1.352. .spec.install.spec.deployments[].spec.template.spec.volumes[].hostPath Description hostPath represents a pre-existing file or directory on the host machine that is directly exposed to the container. This is generally used for system agents or other privileged things that are allowed to see the host machine. Most containers will NOT need this. More info: https://kubernetes.io/docs/concepts/storage/volumes#hostpath --- TODO(jonesdl) We need to restrict who can use host directory mounts and who can/can not mount host directories as read/write. Type object Required path Property Type Description path string path of the directory on the host. If the path is a symlink, it will follow the link to the real path. More info: https://kubernetes.io/docs/concepts/storage/volumes#hostpath type string type for HostPath volume. Defaults to "". More info: https://kubernetes.io/docs/concepts/storage/volumes#hostpath 3.1.353. .spec.install.spec.deployments[].spec.template.spec.volumes[].iscsi Description iscsi represents an iSCSI Disk resource that is attached to a kubelet's host machine and then exposed to the pod. More info: https://examples.k8s.io/volumes/iscsi/README.md Type object Required iqn lun targetPortal Property Type Description chapAuthDiscovery boolean chapAuthDiscovery defines whether iSCSI Discovery CHAP authentication is supported chapAuthSession boolean chapAuthSession defines whether iSCSI Session CHAP authentication is supported fsType string fsType is the filesystem type of the volume that you want to mount. Tip: Ensure that the filesystem type is supported by the host operating system. Examples: "ext4", "xfs", "ntfs". Implicitly inferred to be "ext4" if unspecified.
More info: https://kubernetes.io/docs/concepts/storage/volumes#iscsi TODO: how do we prevent errors in the filesystem from compromising the machine initiatorName string initiatorName is the custom iSCSI Initiator Name. If initiatorName is specified with iscsiInterface simultaneously, new iSCSI interface <target portal>:<volume name> will be created for the connection. iqn string iqn is the target iSCSI Qualified Name. iscsiInterface string iscsiInterface is the interface Name that uses an iSCSI transport. Defaults to 'default' (tcp). lun integer lun represents iSCSI Target Lun number. portals array (string) portals is the iSCSI Target Portal List. The portal is either an IP or ip_addr:port if the port is other than default (typically TCP ports 860 and 3260). readOnly boolean readOnly here will force the ReadOnly setting in VolumeMounts. Defaults to false. secretRef object secretRef is the CHAP Secret for iSCSI target and initiator authentication targetPortal string targetPortal is iSCSI Target Portal. The Portal is either an IP or ip_addr:port if the port is other than default (typically TCP ports 860 and 3260). 3.1.354. .spec.install.spec.deployments[].spec.template.spec.volumes[].iscsi.secretRef Description secretRef is the CHAP Secret for iSCSI target and initiator authentication Type object Property Type Description name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? 3.1.355. .spec.install.spec.deployments[].spec.template.spec.volumes[].nfs Description nfs represents an NFS mount on the host that shares a pod's lifetime More info: https://kubernetes.io/docs/concepts/storage/volumes#nfs Type object Required path server Property Type Description path string path that is exported by the NFS server. More info: https://kubernetes.io/docs/concepts/storage/volumes#nfs readOnly boolean readOnly here will force the NFS export to be mounted with read-only permissions. Defaults to false. More info: https://kubernetes.io/docs/concepts/storage/volumes#nfs server string server is the hostname or IP address of the NFS server. More info: https://kubernetes.io/docs/concepts/storage/volumes#nfs 3.1.356. .spec.install.spec.deployments[].spec.template.spec.volumes[].persistentVolumeClaim Description persistentVolumeClaimVolumeSource represents a reference to a PersistentVolumeClaim in the same namespace. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#persistentvolumeclaims Type object Required claimName Property Type Description claimName string claimName is the name of a PersistentVolumeClaim in the same namespace as the pod using this volume. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#persistentvolumeclaims readOnly boolean readOnly will force the ReadOnly setting in VolumeMounts. Default false. 3.1.357. .spec.install.spec.deployments[].spec.template.spec.volumes[].photonPersistentDisk Description photonPersistentDisk represents a PhotonController persistent disk attached and mounted on kubelets host machine Type object Required pdID Property Type Description fsType string fsType is the filesystem type to mount. Must be a filesystem type supported by the host operating system. Ex. "ext4", "xfs", "ntfs". Implicitly inferred to be "ext4" if unspecified. pdID string pdID is the ID that identifies Photon Controller persistent disk
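Of the volume sources in this list, persistentVolumeClaim (section 3.1.356) is the most common; a minimal sketch with a hypothetical claim name:

    volumes:
    - name: data
      persistentVolumeClaim:
        claimName: example-pvc     # hypothetical PVC in the same namespace as the pod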
3.1.358. .spec.install.spec.deployments[].spec.template.spec.volumes[].portworxVolume Description portworxVolume represents a portworx volume attached and mounted on kubelets host machine Type object Required volumeID Property Type Description fsType string fsType represents the filesystem type to mount. Must be a filesystem type supported by the host operating system. Ex. "ext4", "xfs". Implicitly inferred to be "ext4" if unspecified. readOnly boolean readOnly defaults to false (read/write). ReadOnly here will force the ReadOnly setting in VolumeMounts. volumeID string volumeID uniquely identifies a Portworx volume 3.1.359. .spec.install.spec.deployments[].spec.template.spec.volumes[].projected Description projected items for all in one resources secrets, configmaps, and downward API Type object Property Type Description defaultMode integer defaultMode are the mode bits used to set permissions on created files by default. Must be an octal value between 0000 and 0777 or a decimal value between 0 and 511. YAML accepts both octal and decimal values, JSON requires decimal values for mode bits. Directories within the path are not affected by this setting. This might be in conflict with other options that affect the file mode, like fsGroup, and the result can be other mode bits set. sources array sources is the list of volume projections sources[] object Projection that may be projected along with other supported volume types 3.1.360. .spec.install.spec.deployments[].spec.template.spec.volumes[].projected.sources Description sources is the list of volume projections Type array 3.1.361. .spec.install.spec.deployments[].spec.template.spec.volumes[].projected.sources[] Description Projection that may be projected along with other supported volume types Type object Property Type Description configMap object configMap information about the configMap data to project downwardAPI object downwardAPI information about the downwardAPI data to project secret object secret information about the secret data to project serviceAccountToken object serviceAccountToken is information about the serviceAccountToken data to project 3.1.362. .spec.install.spec.deployments[].spec.template.spec.volumes[].projected.sources[].configMap Description configMap information about the configMap data to project Type object Property Type Description items array items if unspecified, each key-value pair in the Data field of the referenced ConfigMap will be projected into the volume as a file whose name is the key and content is the value. If specified, the listed keys will be projected into the specified paths, and unlisted keys will not be present. If a key is specified which is not present in the ConfigMap, the volume setup will error unless it is marked optional. Paths must be relative and may not contain the '..' path or start with '..'. items[] object Maps a string key to a path within a volume. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean optional specify whether the ConfigMap or its keys must be defined 3.1.363. .spec.install.spec.deployments[].spec.template.spec.volumes[].projected.sources[].configMap.items Description items if unspecified, each key-value pair in the Data field of the referenced ConfigMap will be projected into the volume as a file whose name is the key and content is the value.
If specified, the listed keys will be projected into the specified paths, and unlisted keys will not be present. If a key is specified which is not present in the ConfigMap, the volume setup will error unless it is marked optional. Paths must be relative and may not contain the '..' path or start with '..'. Type array 3.1.364. .spec.install.spec.deployments[].spec.template.spec.volumes[].projected.sources[].configMap.items[] Description Maps a string key to a path within a volume. Type object Required key path Property Type Description key string key is the key to project. mode integer mode is Optional: mode bits used to set permissions on this file. Must be an octal value between 0000 and 0777 or a decimal value between 0 and 511. YAML accepts both octal and decimal values, JSON requires decimal values for mode bits. If not specified, the volume defaultMode will be used. This might be in conflict with other options that affect the file mode, like fsGroup, and the result can be other mode bits set. path string path is the relative path of the file to map the key to. May not be an absolute path. May not contain the path element '..'. May not start with the string '..'. 3.1.365. .spec.install.spec.deployments[].spec.template.spec.volumes[].projected.sources[].downwardAPI Description downwardAPI information about the downwardAPI data to project Type object Property Type Description items array Items is a list of DownwardAPIVolume files items[] object DownwardAPIVolumeFile represents information to create the file containing the pod field 3.1.366. .spec.install.spec.deployments[].spec.template.spec.volumes[].projected.sources[].downwardAPI.items Description Items is a list of DownwardAPIVolume files Type array 3.1.367. .spec.install.spec.deployments[].spec.template.spec.volumes[].projected.sources[].downwardAPI.items[] Description DownwardAPIVolumeFile represents information to create the file containing the pod field Type object Required path Property Type Description fieldRef object Required: Selects a field of the pod: only annotations, labels, name and namespace are supported. mode integer Optional: mode bits used to set permissions on this file, must be an octal value between 0000 and 0777 or a decimal value between 0 and 511. YAML accepts both octal and decimal values, JSON requires decimal values for mode bits. If not specified, the volume defaultMode will be used. This might be in conflict with other options that affect the file mode, like fsGroup, and the result can be other mode bits set. path string Required: Path is the relative path name of the file to be created. Must not be absolute or contain the '..' path. Must be utf-8 encoded. The first item of the relative path must not start with '..' resourceFieldRef object Selects a resource of the container: only resources limits and requests (limits.cpu, limits.memory, requests.cpu and requests.memory) are currently supported. 3.1.368. .spec.install.spec.deployments[].spec.template.spec.volumes[].projected.sources[].downwardAPI.items[].fieldRef Description Required: Selects a field of the pod: only annotations, labels, name and namespace are supported. Type object Required fieldPath Property Type Description apiVersion string Version of the schema the FieldPath is written in terms of, defaults to "v1". fieldPath string Path of the field to select in the specified API version.
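Combining the projected sources documented in this group of sections, including the serviceAccountToken source described in section 3.1.373 below, a minimal sketch that projects a downward API item and a bound service account token into a single volume; the audience value is hypothetical:

    volumes:
    - name: projected-info
      projected:
        sources:
        - downwardAPI:
            items:
            - path: "labels"
              fieldRef:
                fieldPath: metadata.labels
        - serviceAccountToken:
            audience: example-audience   # hypothetical token audience
            expirationSeconds: 3600
            path: token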
.spec.install.spec.deployments[].spec.template.spec.volumes[].projected.sources[].downwardAPI.items[].resourceFieldRef Description Selects a resource of the container: only resources limits and requests (limits.cpu, limits.memory, requests.cpu and requests.memory) are currently supported. Type object Required resource Property Type Description containerName string Container name: required for volumes, optional for env vars divisor integer-or-string Specifies the output format of the exposed resources, defaults to "1" resource string Required: resource to select 3.1.370. .spec.install.spec.deployments[].spec.template.spec.volumes[].projected.sources[].secret Description secret information about the secret data to project Type object Property Type Description items array items if unspecified, each key-value pair in the Data field of the referenced Secret will be projected into the volume as a file whose name is the key and content is the value. If specified, the listed keys will be projected into the specified paths, and unlisted keys will not be present. If a key is specified which is not present in the Secret, the volume setup will error unless it is marked optional. Paths must be relative and may not contain the '..' path or start with '..'. items[] object Maps a string key to a path within a volume. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean optional field specifies whether the Secret or its key must be defined 3.1.371. .spec.install.spec.deployments[].spec.template.spec.volumes[].projected.sources[].secret.items Description items if unspecified, each key-value pair in the Data field of the referenced Secret will be projected into the volume as a file whose name is the key and content is the value. If specified, the listed keys will be projected into the specified paths, and unlisted keys will not be present. If a key is specified which is not present in the Secret, the volume setup will error unless it is marked optional. Paths must be relative and may not contain the '..' path or start with '..'. Type array 3.1.372. .spec.install.spec.deployments[].spec.template.spec.volumes[].projected.sources[].secret.items[] Description Maps a string key to a path within a volume. Type object Required key path Property Type Description key string key is the key to project. mode integer mode is Optional: mode bits used to set permissions on this file. Must be an octal value between 0000 and 0777 or a decimal value between 0 and 511. YAML accepts both octal and decimal values, JSON requires decimal values for mode bits. If not specified, the volume defaultMode will be used. This might be in conflict with other options that affect the file mode, like fsGroup, and the result can be other mode bits set. path string path is the relative path of the file to map the key to. May not be an absolute path. May not contain the path element '..'. May not start with the string '..'. 3.1.373. .spec.install.spec.deployments[].spec.template.spec.volumes[].projected.sources[].serviceAccountToken Description serviceAccountToken is information about the serviceAccountToken data to project Type object Required path Property Type Description audience string audience is the intended audience of the token. A recipient of a token must identify itself with an identifier specified in the audience of the token, and otherwise should reject the token. 
The audience defaults to the identifier of the apiserver. expirationSeconds integer expirationSeconds is the requested duration of validity of the service account token. As the token approaches expiration, the kubelet volume plugin will proactively rotate the service account token. The kubelet will start trying to rotate the token if the token is older than 80 percent of its time to live or if the token is older than 24 hours. Defaults to 1 hour and must be at least 10 minutes. path string path is the path relative to the mount point of the file to project the token into. 3.1.374. .spec.install.spec.deployments[].spec.template.spec.volumes[].quobyte Description quobyte represents a Quobyte mount on the host that shares a pod's lifetime Type object Required registry volume Property Type Description group string group to map volume access to. Default is no group. readOnly boolean readOnly here will force the Quobyte volume to be mounted with read-only permissions. Defaults to false. registry string registry represents a single or multiple Quobyte Registry services specified as a string as host:port pair (multiple entries are separated with commas) which acts as the central registry for volumes tenant string tenant owning the given Quobyte volume in the Backend. Used with dynamically provisioned Quobyte volumes; the value is set by the plugin. user string user to map volume access to. Defaults to the serviceaccount user. volume string volume is a string that references an already created Quobyte volume by name. 3.1.375. .spec.install.spec.deployments[].spec.template.spec.volumes[].rbd Description rbd represents a Rados Block Device mount on the host that shares a pod's lifetime. More info: https://examples.k8s.io/volumes/rbd/README.md Type object Required image monitors Property Type Description fsType string fsType is the filesystem type of the volume that you want to mount. Tip: Ensure that the filesystem type is supported by the host operating system. Examples: "ext4", "xfs", "ntfs". Implicitly inferred to be "ext4" if unspecified. More info: https://kubernetes.io/docs/concepts/storage/volumes#rbd TODO: how do we prevent errors in the filesystem from compromising the machine image string image is the rados image name. More info: https://examples.k8s.io/volumes/rbd/README.md#how-to-use-it keyring string keyring is the path to key ring for RBDUser. Default is /etc/ceph/keyring. More info: https://examples.k8s.io/volumes/rbd/README.md#how-to-use-it monitors array (string) monitors is a collection of Ceph monitors. More info: https://examples.k8s.io/volumes/rbd/README.md#how-to-use-it pool string pool is the rados pool name. Default is rbd. More info: https://examples.k8s.io/volumes/rbd/README.md#how-to-use-it readOnly boolean readOnly here will force the ReadOnly setting in VolumeMounts. Defaults to false. More info: https://examples.k8s.io/volumes/rbd/README.md#how-to-use-it secretRef object secretRef is the name of the authentication secret for RBDUser. If provided, it overrides keyring. Default is nil. More info: https://examples.k8s.io/volumes/rbd/README.md#how-to-use-it user string user is the rados user name. Default is admin. More info: https://examples.k8s.io/volumes/rbd/README.md#how-to-use-it 3.1.376. .spec.install.spec.deployments[].spec.template.spec.volumes[].rbd.secretRef Description secretRef is the name of the authentication secret for RBDUser. If provided, it overrides keyring. Default is nil. 
More info: https://examples.k8s.io/volumes/rbd/README.md#how-to-use-it Type object Property Type Description name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? 3.1.377. .spec.install.spec.deployments[].spec.template.spec.volumes[].scaleIO Description scaleIO represents a ScaleIO persistent volume attached and mounted on Kubernetes nodes. Type object Required gateway secretRef system Property Type Description fsType string fsType is the filesystem type to mount. Must be a filesystem type supported by the host operating system. Ex. "ext4", "xfs", "ntfs". Default is "xfs". gateway string gateway is the host address of the ScaleIO API Gateway. protectionDomain string protectionDomain is the name of the ScaleIO Protection Domain for the configured storage. readOnly boolean readOnly Defaults to false (read/write). ReadOnly here will force the ReadOnly setting in VolumeMounts. secretRef object secretRef references the secret for the ScaleIO user and other sensitive information. If this is not provided, the Login operation will fail. sslEnabled boolean sslEnabled is a flag that enables/disables SSL communication with the Gateway; default is false. storageMode string storageMode indicates whether the storage for a volume should be ThickProvisioned or ThinProvisioned. Default is ThinProvisioned. storagePool string storagePool is the ScaleIO Storage Pool associated with the protection domain. system string system is the name of the storage system as configured in ScaleIO. volumeName string volumeName is the name of a volume already created in the ScaleIO system that is associated with this volume source. 3.1.378. .spec.install.spec.deployments[].spec.template.spec.volumes[].scaleIO.secretRef Description secretRef references the secret for the ScaleIO user and other sensitive information. If this is not provided, the Login operation will fail. Type object Property Type Description name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? 3.1.379. .spec.install.spec.deployments[].spec.template.spec.volumes[].secret Description secret represents a secret that should populate this volume. More info: https://kubernetes.io/docs/concepts/storage/volumes#secret Type object Property Type Description defaultMode integer defaultMode is Optional: mode bits used to set permissions on created files by default. Must be an octal value between 0000 and 0777 or a decimal value between 0 and 511. YAML accepts both octal and decimal values, JSON requires decimal values for mode bits. Defaults to 0644. Directories within the path are not affected by this setting. This might be in conflict with other options that affect the file mode, like fsGroup, and the result can be other mode bits set. items array items If unspecified, each key-value pair in the Data field of the referenced Secret will be projected into the volume as a file whose name is the key and content is the value. If specified, the listed keys will be projected into the specified paths, and unlisted keys will not be present. If a key is specified which is not present in the Secret, the volume setup will error unless it is marked optional. Paths must be relative and may not contain the '..' path or start with '..'. items[] object Maps a string key to a path within a volume. 
optional boolean optional field specifies whether the Secret or its keys must be defined secretName string secretName is the name of the secret in the pod's namespace to use. More info: https://kubernetes.io/docs/concepts/storage/volumes#secret 3.1.380. .spec.install.spec.deployments[].spec.template.spec.volumes[].secret.items Description items If unspecified, each key-value pair in the Data field of the referenced Secret will be projected into the volume as a file whose name is the key and content is the value. If specified, the listed keys will be projected into the specified paths, and unlisted keys will not be present. If a key is specified which is not present in the Secret, the volume setup will error unless it is marked optional. Paths must be relative and may not contain the '..' path or start with '..'. Type array 3.1.381. .spec.install.spec.deployments[].spec.template.spec.volumes[].secret.items[] Description Maps a string key to a path within a volume. Type object Required key path Property Type Description key string key is the key to project. mode integer mode is Optional: mode bits used to set permissions on this file. Must be an octal value between 0000 and 0777 or a decimal value between 0 and 511. YAML accepts both octal and decimal values, JSON requires decimal values for mode bits. If not specified, the volume defaultMode will be used. This might be in conflict with other options that affect the file mode, like fsGroup, and the result can be other mode bits set. path string path is the relative path of the file to map the key to. May not be an absolute path. May not contain the path element '..'. May not start with the string '..'. 3.1.382. .spec.install.spec.deployments[].spec.template.spec.volumes[].storageos Description storageOS represents a StorageOS volume attached and mounted on Kubernetes nodes. Type object Property Type Description fsType string fsType is the filesystem type to mount. Must be a filesystem type supported by the host operating system. Ex. "ext4", "xfs", "ntfs". Implicitly inferred to be "ext4" if unspecified. readOnly boolean readOnly defaults to false (read/write). ReadOnly here will force the ReadOnly setting in VolumeMounts. secretRef object secretRef specifies the secret to use for obtaining the StorageOS API credentials. If not specified, default values will be attempted. volumeName string volumeName is the human-readable name of the StorageOS volume. Volume names are only unique within a namespace. volumeNamespace string volumeNamespace specifies the scope of the volume within StorageOS. If no namespace is specified then the Pod's namespace will be used. This allows the Kubernetes name scoping to be mirrored within StorageOS for tighter integration. Set VolumeName to any name to override the default behaviour. Set to "default" if you are not using namespaces within StorageOS. Namespaces that do not pre-exist within StorageOS will be created. 3.1.383. .spec.install.spec.deployments[].spec.template.spec.volumes[].storageos.secretRef Description secretRef specifies the secret to use for obtaining the StorageOS API credentials. If not specified, default values will be attempted. Type object Property Type Description name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? 3.1.384. 
.spec.install.spec.deployments[].spec.template.spec.volumes[].vsphereVolume Description vsphereVolume represents a vSphere volume attached and mounted on the kubelet's host machine Type object Required volumePath Property Type Description fsType string fsType is the filesystem type to mount. Must be a filesystem type supported by the host operating system. Ex. "ext4", "xfs", "ntfs". Implicitly inferred to be "ext4" if unspecified. storagePolicyID string storagePolicyID is the storage Policy Based Management (SPBM) profile ID associated with the StoragePolicyName. storagePolicyName string storagePolicyName is the storage Policy Based Management (SPBM) profile name. volumePath string volumePath is the path that identifies the vSphere volume vmdk 3.1.385. .spec.install.spec.permissions Description Type array 3.1.386. .spec.install.spec.permissions[] Description StrategyDeploymentPermissions describe the rbac rules and service account needed by the install strategy Type object Required rules serviceAccountName Property Type Description rules array rules[] object PolicyRule holds information that describes a policy rule, but does not contain information about who the rule applies to or which namespace the rule applies to. serviceAccountName string 3.1.387. .spec.install.spec.permissions[].rules Description Type array 3.1.388. .spec.install.spec.permissions[].rules[] Description PolicyRule holds information that describes a policy rule, but does not contain information about who the rule applies to or which namespace the rule applies to. Type object Required verbs Property Type Description apiGroups array (string) APIGroups is the name of the APIGroup that contains the resources. If multiple API groups are specified, any action requested against one of the enumerated resources in any API group will be allowed. "" represents the core API group and "*" represents all API groups. nonResourceURLs array (string) NonResourceURLs is a set of partial urls that a user should have access to. *s are allowed, but only as the full, final step in the path. Since non-resource URLs are not namespaced, this field is only applicable for ClusterRoles referenced from a ClusterRoleBinding. Rules can either apply to API resources (such as "pods" or "secrets") or non-resource URL paths (such as "/api"), but not both. resourceNames array (string) ResourceNames is an optional white list of names that the rule applies to. An empty set means that everything is allowed. resources array (string) Resources is a list of resources this rule applies to. '*' represents all resources. verbs array (string) Verbs is a list of Verbs that apply to ALL the ResourceKinds contained in this rule. '*' represents all verbs. 3.1.389. .spec.installModes Description InstallModes specify supported installation types Type array 3.1.390. .spec.installModes[] Description InstallMode associates an InstallModeType with a flag representing if the CSV supports it Type object Required supported type Property Type Description supported boolean type string InstallModeType is a supported type of install mode for CSV installation 3.1.391. .spec.links Description Type array 3.1.392. .spec.links[] Description Type object Property Type Description name string url string 3.1.393. .spec.maintainers Description Type array 3.1.394. .spec.maintainers[] Description Type object Property Type Description email string name string 3.1.395. .spec.nativeAPIs Description Type array 3.1.396. .spec.nativeAPIs[] Description GroupVersionKind unambiguously identifies a kind. 
It doesn't anonymously include GroupVersion to avoid automatic coercion. It doesn't use a GroupVersion to avoid custom marshalling Type object Required group kind version Property Type Description group string kind string version string 3.1.397. .spec.provider Description Type object Property Type Description name string url string 3.1.398. .spec.relatedImages Description List any related images, or other container images that your Operator might require to perform its functions. This list should also include operand images. All image references should be specified by digest (SHA) and not by tag. This field is only used during catalog creation and plays no part in cluster runtime. Type array 3.1.399. .spec.relatedImages[] Description Type object Required image name Property Type Description image string name string 3.1.400. .spec.selector Description Label selector for related resources. Type object Property Type Description matchExpressions array matchExpressions is a list of label selector requirements. The requirements are ANDed. matchExpressions[] object A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. matchLabels object (string) matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed. 3.1.401. .spec.selector.matchExpressions Description matchExpressions is a list of label selector requirements. The requirements are ANDed. Type array 3.1.402. .spec.selector.matchExpressions[] Description A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Type object Required key operator Property Type Description key string key is the label key that the selector applies to. operator string operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. values array (string) values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch. 3.1.403. .spec.webhookdefinitions Description Type array 3.1.404. .spec.webhookdefinitions[] Description WebhookDescription provides details to OLM about required webhooks Type object Required admissionReviewVersions generateName sideEffects type Property Type Description admissionReviewVersions array (string) containerPort integer conversionCRDs array (string) deploymentName string failurePolicy string FailurePolicyType specifies a failure policy that defines how unrecognized errors from the admission endpoint are handled. generateName string matchPolicy string MatchPolicyType specifies the type of match policy. objectSelector object A label selector is a label query over a set of resources. The result of matchLabels and matchExpressions are ANDed. An empty label selector matches all objects. A null label selector matches no objects. reinvocationPolicy string ReinvocationPolicyType specifies what type of policy the admission hook uses. rules array rules[] object RuleWithOperations is a tuple of Operations and Resources. It is recommended to make sure that all the tuple expansions are valid. sideEffects string SideEffectClass specifies the types of side effects a webhook may have. 
targetPort integer-or-string timeoutSeconds integer type string WebhookAdmissionType is the type of admission webhooks supported by OLM webhookPath string 3.1.405. .spec.webhookdefinitions[].objectSelector Description A label selector is a label query over a set of resources. The result of matchLabels and matchExpressions are ANDed. An empty label selector matches all objects. A null label selector matches no objects. Type object Property Type Description matchExpressions array matchExpressions is a list of label selector requirements. The requirements are ANDed. matchExpressions[] object A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. matchLabels object (string) matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed. 3.1.406. .spec.webhookdefinitions[].objectSelector.matchExpressions Description matchExpressions is a list of label selector requirements. The requirements are ANDed. Type array 3.1.407. .spec.webhookdefinitions[].objectSelector.matchExpressions[] Description A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Type object Required key operator Property Type Description key string key is the label key that the selector applies to. operator string operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. values array (string) values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch. 3.1.408. .spec.webhookdefinitions[].rules Description Type array 3.1.409. .spec.webhookdefinitions[].rules[] Description RuleWithOperations is a tuple of Operations and Resources. It is recommended to make sure that all the tuple expansions are valid. Type object Property Type Description apiGroups array (string) APIGroups is the API groups the resources belong to. '*' is all groups. If '*' is present, the length of the slice must be one. Required. apiVersions array (string) APIVersions is the API versions the resources belong to. '*' is all versions. If '*' is present, the length of the slice must be one. Required. operations array (string) Operations is the operations the admission hook cares about - CREATE, UPDATE, DELETE, CONNECT or * for all of those operations and any future admission operations that are added. If '*' is present, the length of the slice must be one. Required. resources array (string) Resources is a list of resources this rule applies to. For example: 'pods' means pods. 'pods/log' means the log subresource of pods. '*' means all resources, but not subresources. 'pods/*' means all subresources of pods. '*/scale' means all scale subresources. '*/*' means all resources and their subresources. If wildcard is present, the validation rule will ensure resources do not overlap with each other. Depending on the enclosing object, subresources might not be allowed. Required. scope string scope specifies the scope of this rule. Valid values are "Cluster", "Namespaced", and "*". "Cluster" means that only cluster-scoped resources will match this rule. Namespace API objects are cluster-scoped. 
"Namespaced" means that only namespaced resources will match this rule. " " means that there are no scope restrictions. Subresources match the scope of their parent resource. Default is "*". 3.1.410. .status Description ClusterServiceVersionStatus represents information about the status of a CSV. Status may trail the actual state of a system. Type object Property Type Description certsLastUpdated string Last time the owned APIService certs were updated certsRotateAt string Time the owned APIService certs will rotate cleanup object CleanupStatus represents information about the status of cleanup while a CSV is pending deletion conditions array List of conditions, a history of state transitions conditions[] object Conditions appear in the status as a record of state transitions on the ClusterServiceVersion lastTransitionTime string Last time the status transitioned from one status to another. lastUpdateTime string Last time we updated the status message string A human readable message indicating details about why the ClusterServiceVersion is in this condition. phase string Current condition of the ClusterServiceVersion reason string A brief CamelCase message indicating details about why the ClusterServiceVersion is in this state. e.g. 'RequirementsNotMet' requirementStatus array The status of each requirement for this CSV requirementStatus[] object 3.1.411. .status.cleanup Description CleanupStatus represents information about the status of cleanup while a CSV is pending deletion Type object Property Type Description pendingDeletion array PendingDeletion is the list of custom resource objects that are pending deletion and blocked on finalizers. This indicates the progress of cleanup that is blocking CSV deletion or operator uninstall. pendingDeletion[] object ResourceList represents a list of resources which are of the same Group/Kind 3.1.412. .status.cleanup.pendingDeletion Description PendingDeletion is the list of custom resource objects that are pending deletion and blocked on finalizers. This indicates the progress of cleanup that is blocking CSV deletion or operator uninstall. Type array 3.1.413. .status.cleanup.pendingDeletion[] Description ResourceList represents a list of resources which are of the same Group/Kind Type object Required group instances kind Property Type Description group string instances array instances[] object kind string 3.1.414. .status.cleanup.pendingDeletion[].instances Description Type array 3.1.415. .status.cleanup.pendingDeletion[].instances[] Description Type object Required name Property Type Description name string namespace string Namespace can be empty for cluster-scoped resources 3.1.416. .status.conditions Description List of conditions, a history of state transitions Type array 3.1.417. .status.conditions[] Description Conditions appear in the status as a record of state transitions on the ClusterServiceVersion Type object Property Type Description lastTransitionTime string Last time the status transitioned from one status to another. lastUpdateTime string Last time we updated the status message string A human readable message indicating details about why the ClusterServiceVersion is in this condition. phase string Condition of the ClusterServiceVersion reason string A brief CamelCase message indicating details about why the ClusterServiceVersion is in this state. e.g. 'RequirementsNotMet' 3.1.418. .status.requirementStatus Description The status of each requirement for this CSV Type array 3.1.419. 
.status.requirementStatus[] Description Type object Required group kind message name status version Property Type Description dependents array dependents[] object DependentStatus is the status for a dependent requirement (to prevent infinite nesting) group string kind string message string name string status string StatusReason is a camelcased reason for the status of a RequirementStatus or DependentStatus uuid string version string 3.1.420. .status.requirementStatus[].dependents Description Type array 3.1.421. .status.requirementStatus[].dependents[] Description DependentStatus is the status for a dependent requirement (to prevent infinite nesting) Type object Required group kind status version Property Type Description group string kind string message string status string StatusReason is a camelcased reason for the status of a RequirementStatus or DependentStatus uuid string version string 3.2. API endpoints The following API endpoints are available: /apis/operators.coreos.com/v1alpha1/clusterserviceversions GET : list objects of kind ClusterServiceVersion /apis/operators.coreos.com/v1alpha1/namespaces/{namespace}/clusterserviceversions DELETE : delete collection of ClusterServiceVersion GET : list objects of kind ClusterServiceVersion POST : create a ClusterServiceVersion /apis/operators.coreos.com/v1alpha1/namespaces/{namespace}/clusterserviceversions/{name} DELETE : delete a ClusterServiceVersion GET : read the specified ClusterServiceVersion PATCH : partially update the specified ClusterServiceVersion PUT : replace the specified ClusterServiceVersion /apis/operators.coreos.com/v1alpha1/namespaces/{namespace}/clusterserviceversions/{name}/status GET : read status of the specified ClusterServiceVersion PATCH : partially update status of the specified ClusterServiceVersion PUT : replace status of the specified ClusterServiceVersion 3.2.1. /apis/operators.coreos.com/v1alpha1/clusterserviceversions Table 3.1. Global query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the next key, but from the latest snapshot, which is inconsistent from the previous list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the "next key". This field is not supported when watch is true. 
Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the next set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. pretty string If 'true', then the output is pretty printed. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. HTTP method GET Description list objects of kind ClusterServiceVersion Table 3.2. HTTP responses HTTP code Response body 200 - OK ClusterServiceVersionList schema 401 - Unauthorized Empty 3.2.2. /apis/operators.coreos.com/v1alpha1/namespaces/{namespace}/clusterserviceversions Table 3.3. Global path parameters Parameter Type Description namespace string object name and auth scope, such as for teams and projects Table 3.4. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method DELETE Description delete collection of ClusterServiceVersion Table 3.5. Query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. 
Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the next key, but from the latest snapshot, which is inconsistent from the previous list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the "next key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the next set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. 
Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. Table 3.6. HTTP responses HTTP code Response body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list objects of kind ClusterServiceVersion Table 3.7. Query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the next key, but from the latest snapshot, which is inconsistent from the previous list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the "next key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the next set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. 
This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. Table 3.8. HTTP responses HTTP code Response body 200 - OK ClusterServiceVersionList schema 401 - Unauthorized Empty HTTP method POST Description create a ClusterServiceVersion Table 3.9. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or equal to 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 3.10. Body parameters Parameter Type Description body ClusterServiceVersion schema Table 3.11. HTTP responses HTTP code Response body 200 - OK ClusterServiceVersion schema 201 - Created ClusterServiceVersion schema 202 - Accepted ClusterServiceVersion schema 401 - Unauthorized Empty 3.2.3. /apis/operators.coreos.com/v1alpha1/namespaces/{namespace}/clusterserviceversions/{name} Table 3.12. 
Global path parameters Parameter Type Description name string name of the ClusterServiceVersion namespace string object name and auth scope, such as for teams and projects Table 3.13. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method DELETE Description delete a ClusterServiceVersion Table 3.14. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed gracePeriodSeconds integer The duration in seconds before the object should be deleted. Value must be a non-negative integer. The value zero indicates delete immediately. If this value is nil, the default grace period for the specified type will be used. Defaults to a per object value if not specified. Zero means delete immediately. orphanDependents boolean Deprecated: please use the PropagationPolicy, this field will be deprecated in 1.7. Should the dependent objects be orphaned. If true/false, the "orphan" finalizer will be added to/removed from the object's finalizers list. Either this field or PropagationPolicy may be set, but not both. propagationPolicy string Whether and how garbage collection will be performed. Either this field or OrphanDependents may be set, but not both. The default policy is decided by the existing finalizer set in the metadata.finalizers and the resource-specific default policy. Acceptable values are: 'Orphan' - orphan the dependents; 'Background' - allow the garbage collector to delete the dependents in the background; 'Foreground' - a cascading policy that deletes all dependents in the foreground. Table 3.15. Body parameters Parameter Type Description body DeleteOptions schema Table 3.16. HTTP responses HTTP code Response body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified ClusterServiceVersion Table 3.17. Query parameters Parameter Type Description resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset Table 3.18. HTTP responses HTTP code Response body 200 - OK ClusterServiceVersion schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified ClusterServiceVersion Table 3.19. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or equal to 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. 
Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 3.20. Body parameters Parameter Type Description body Patch schema Table 3.21. HTTP responses HTTP code Response body 200 - OK ClusterServiceVersion schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified ClusterServiceVersion Table 3.22. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or equal to 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 3.23. Body parameters Parameter Type Description body ClusterServiceVersion schema Table 3.24. HTTP responses HTTP code Response body 200 - OK ClusterServiceVersion schema 201 - Created ClusterServiceVersion schema 401 - Unauthorized Empty 3.2.4. /apis/operators.coreos.com/v1alpha1/namespaces/{namespace}/clusterserviceversions/{name}/status Table 3.25. Global path parameters Parameter Type Description name string name of the ClusterServiceVersion namespace string object name and auth scope, such as for teams and projects Table 3.26. 
Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method GET Description read status of the specified ClusterServiceVersion Table 3.27. Query parameters Parameter Type Description resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset Table 3.28. HTTP responses HTTP code Response body 200 - OK ClusterServiceVersion schema 401 - Unauthorized Empty HTTP method PATCH Description partially update status of the specified ClusterServiceVersion Table 3.29. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or equal to 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 3.30. Body parameters Parameter Type Description body Patch schema Table 3.31. HTTP responses HTTP code Response body 200 - OK ClusterServiceVersion schema 401 - Unauthorized Empty HTTP method PUT Description replace status of the specified ClusterServiceVersion Table 3.32. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or equal to 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. 
Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 3.33. Body parameters Parameter Type Description body ClusterServiceVersion schema Table 3.34. HTTP responses HTTP code Response body 200 - OK ClusterServiceVersion schema 201 - Created ClusterServiceVersion schema 401 - Unauthorized Empty
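As a usage sketch for the endpoints listed above, the namespaced list operation can be exercised through the OpenShift CLI's raw API passthrough; the openshift-operators namespace is illustrative, and any namespace that holds ClusterServiceVersions works the same way:
# GET /apis/operators.coreos.com/v1alpha1/namespaces/{namespace}/clusterserviceversions,
# issued here against the openshift-operators namespace.
oc get --raw /apis/operators.coreos.com/v1alpha1/namespaces/openshift-operators/clusterserviceversions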
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.12/html/operatorhub_apis/clusterserviceversion-operators-coreos-com-v1alpha1
Making open source more inclusive
Making open source more inclusive Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright's message .
null
https://docs.redhat.com/en/documentation/red_hat_build_of_openjdk/17/html/release_notes_for_red_hat_build_of_openjdk_17.0.2/making-open-source-more-inclusive
11.4. Setting up a Kerberos Client for Smart Cards
11.4. Setting up a Kerberos Client for Smart Cards Smart cards can be used with Kerberos, but doing so requires additional configuration so that Kerberos can recognize the X.509 (SSL) user certificates on the smart cards: Install the required PKI/OpenSSL package, along with the other client packages: Edit the /etc/krb5.conf configuration file to add a parameter for the public key infrastructure (PKI) to the [realms] section of the configuration. The pkinit_anchors parameter sets the location of the CA certificate bundle file. Add the PKI module information to the PAM configuration for both smart card authentication ( /etc/pam.d/smartcard-auth ) and system authentication ( /etc/pam.d/system-auth ). The line to be added to both files is as follows: If the OpenSC module does not work as expected, use the module from the coolkey package: /usr/lib64/pkcs11/libcoolkeypk11.so . In this case, consider contacting Red Hat Technical Support or filing a Bugzilla report about the problem.
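Once the packages, krb5.conf, and PAM files are configured, a quick end-to-end check is to request a ticket directly with the smart card identity. This is a sketch only: the principal is a placeholder, and the PKCS#11 module path is the OpenSC path assumed in the PAM line above.

kinit -X X509_user_identity=PKCS11:/usr/lib64/pkcs11/opensc-pkcs11.so user@EXAMPLE.COM
klist

If the card is readable, kinit prompts for the smart card PIN instead of a password, and klist then shows the resulting ticket-granting ticket.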
[ "yum install krb5-pkinit yum install krb5-workstation krb5-libs", "[realms] EXAMPLE.COM = { kdc = kdc.example.com.:88 admin_server = kdc.example.com default_domain = example.com pkinit_anchors = FILE:/usr/local/example.com.crt }", "auth optional pam_krb5.so use_first_pass no_subsequent_prompt preauth_options=X509_user_identity=PKCS11:/usr/lib64/pkcs11/opensc-pkcs11.so" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/system-level_authentication_guide/krb-smart-cards
22.6. Retrieving Phase 3 of the Installation Program
22.6. Retrieving Phase 3 of the Installation Program The loader then retrieves phase 3 of the installation program from the network into its RAM disk. This may take some time. Figure 22.8. Retrieving phase 3 of the installation program
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/installation_guide/ch22s06
Chapter 3. Red Hat build of OpenJDK features
Chapter 3. Red Hat build of OpenJDK features The latest Red Hat build of OpenJDK 11 release might include new features. Additionally, the latest release might enhance, deprecate, or remove features that originated from previous Red Hat build of OpenJDK 11 releases. Note For all the other changes and security fixes, see OpenJDK 11.0.21 Released . 3.1. Red Hat build of OpenJDK new features and enhancements Review the following release notes to understand new features and feature enhancements that Red Hat build of OpenJDK 11.0.21 provides: Increased default group size of TLS Diffie-Hellman In Red Hat build of OpenJDK 11.0.21, the JDK implementation of Transport Layer Security (TLS) 1.2 uses a default Diffie-Hellman key size of 2048 bits. This supersedes the behavior in previous releases where the default Diffie-Hellman key size was 1024 bits. This enhancement is relevant when a TLS_DHE cipher suite is negotiated and either the client or the server does not support Finite Field Diffie-Hellman Ephemeral (FFDHE) parameters. The JDK TLS implementation supports FFDHE, which is enabled by default and can negotiate a stronger key size. As a workaround, you can revert to the previous key size by setting the jdk.tls.ephemeralDHKeySize system property to 1024 . However, to mitigate risk, consider using the default key size of 2048 bits. Note This change does not affect TLS 1.3, which already uses a minimum Diffie-Hellman key size of 2048 bits. See JDK-8301700 (JDK Bug System) . Server-side cipher suite preferences used by default In Red Hat build of OpenJDK 11.0.21, the SunJSSE provider uses the local server-side cipher suite preferences by default. This supersedes the behavior in previous releases where the server used the preferences that the connecting client specified. You can revert to the previous behavior by using SSLParameters.setUseCipherSuitesOrder(false) on the server side. See JDK-8168261 (JDK Bug System) . Support for RSA keys in PKCS#1 format JDK providers can now accept Rivest-Shamir-Adleman (RSA) private and public keys in PKCS#1 format, such as the RSA KeyFactory.impl from the SunRsaSign provider. This feature requires that the RSA private or public key object has a PKCS#1 format and an encoding that matches the ASN.1 syntax for a PKCS#1 RSA private key and public key. See JDK-8023980 (JDK Bug System) . -XshowSettings:locale output includes tzdata version In Red Hat build of OpenJDK 11.0.21, the -XshowSettings launcher option also prints the tzdata version that the JDK uses. The tzdata version is displayed as part of the output for the -XshowSettings:locale option. For example: See JDK-8305950 (JDK Bug System) . Certigna root CA certificate added In Red Hat build of OpenJDK 11.0.21, the cacerts truststore includes the Certigna root certificate: Name: Certigna (Dhimyotis) Alias name: certignarootca Distinguished name: CN=Certigna Root CA, OU=0002 48146308100036, O=Dhimyotis, C=FR See JDK-8314960 (JDK Bug System) . Error thrown if default java.security file fails to load In previous releases, if the java.security file failed to load successfully, Red Hat build of OpenJDK used a hardcoded set of security properties. However, this set of properties was poorly maintained and it was not obvious to users that the JDK was using these properties. To address this issue, if the java.security file fails to load successfully, Red Hat build of OpenJDK 11.0.21 throws an InternalError instead. See JDK-8155246 (JDK Bug System) .
Arrays cloned in several JAAS callback classes In previous releases, in the ChoiceCallback and ConfirmationCallback JAAS classes, when arrays were passed into a constructor or returned, these arrays were not cloned. This behavior allowed an external program to gain access to the internal fields of these classes. In Red Hat build of OpenJDK 11.0.21, the JAAS classes return cloned arrays. See JDK-8242330 (JDK Bug System) . 3.2. Red Hat build of OpenJDK deprecated features Review the following release notes to understand pre-existing features that have been either deprecated or removed in Red Hat build of OpenJDK 11.0.21: SECOM Trust Systems root CA1 certificate removed From Red Hat build of OpenJDK 11.0.21 onward, the cacerts truststore no longer includes the SECOM Trust Systems root certificate: Alias name: secomscrootca1 [jdk] Distinguished name: OU=Security Communication RootCA1, O=SECOM Trust.net, C=JP See JDK-8295894 (JDK Bug System) .
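Two of the items above are easy to exercise from a shell. The following is a sketch only: it assumes the Red Hat build of OpenJDK 11.0.21 binaries are on the PATH, and MyApp is a hypothetical application class used purely for illustration:

# Revert the TLS_DHE key size to the old 1024-bit default (weaker; not recommended)
java -Djdk.tls.ephemeralDHKeySize=1024 MyApp

# Confirm the newly added Certigna root is present in the cacerts truststore
keytool -list -cacerts -alias certignarootca

# Print the locale settings, including the tzdata version
java -XshowSettings:locale -version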
[ "Locale settings: default locale = English default display locale = English default format locale = English tzdata version = 2023c" ]
https://docs.redhat.com/en/documentation/red_hat_build_of_openjdk/11/html/release_notes_for_red_hat_build_of_openjdk_11.0.21/rn-openjdk11021-features_openjdk
Chapter 2. Security enhancements
Chapter 2. Security enhancements The following sections provide some suggestions to harden the security of your overcloud. 2.1. Using secure root user access The overcloud image automatically contains hardened security for the root user. For example, each deployed overcloud node automatically disables direct SSH access to the root user. You can still access the root user on overcloud nodes. Each overcloud node has a tripleo-admin user account. This user account contains the undercloud public SSH key, which provides SSH access without a password from the undercloud to the overcloud node. Prerequisites You have an installed Red Hat OpenStack Platform director environment. You are logged in to the director as stack. Procedure On the undercloud node, log in to an overcloud node through SSH as the tripleo-admin user. Switch to the root user with sudo -i . 2.2. Adding services to the overcloud firewall When you deploy Red Hat OpenStack Platform, each core service is deployed with a default set of firewall rules on each overcloud node. You can use the ExtraFirewallRules parameter to create rules to open ports for additional services, or create rules to restrict services. Each rule name becomes the comment for the respective iptables rule. Each rule name starts with a three-digit prefix to help Puppet order the rules in the final iptables file. The default Red Hat OpenStack Platform rules use prefixes in the 000 to 200 range. When you create rules for new services, prefix the name with a three-digit number higher than 200. Procedure Use a string to define each rule name under the ExtraFirewallRules parameter. You can use the following parameters under the rule name to define the rule: dport:: The destination port associated with the rule. proto:: The protocol associated with the rule. Defaults to tcp . action:: The action policy associated with the rule. Defaults to accept . source:: The source IP address associated with the rule. The following example shows how to use rules to open additional ports for custom applications: Note When you do not set the action parameter, the result is accept . You can only set the action parameter to drop , insert , or append . Include the ~/templates/firewall.yaml file in the openstack overcloud deploy command. Include all templates that are necessary for your deployment: 2.3. Removing services from the overcloud firewall You can use rules to restrict services. The number that you use in the rule name determines where in iptables the rule will be inserted. The following procedure shows how to restrict the rabbitmq service to the InternalAPI network. Procedure On a Controller node, find the number of the default iptables rule for rabbitmq : [tripleo-admin@overcloud-controller-2 ~]$ sudo iptables -L | grep rabbitmq ACCEPT tcp -- anywhere anywhere multiport dports vtr-emulator,epmd,amqp,25672,25673:25683 state NEW /* 109 rabbitmq-bundle ipv4 */ In an environment file under parameter_defaults , use the ExtraFirewallRules parameter to restrict rabbitmq to the InternalApi network. The rule is given a lower number than the default rabbitmq rule number of 109: Note When you do not set the action parameter, the result is accept . You can only set the action parameter to drop , insert , or append . Include the ~/templates/firewall.yaml file in the openstack overcloud deploy command. Include all templates that are necessary for your deployment: 2.4.
Changing the Simple Network Management Protocol (SNMP) strings Director provides a default read-only SNMP configuration for your overcloud. It is advisable to change the SNMP strings to mitigate the risk of unauthorized users learning about your network devices. Note When you configure the ExtraConfig interface with a string parameter, you must use the following syntax to ensure that heat and Hiera do not interpret the string as a Boolean value: '"<VALUE>"' . Set the following hieradata using the ExtraConfig hook in an environment file for your overcloud: SNMP traditional access control settings snmp::ro_community IPv4 read-only SNMP community string. The default value is public . snmp::ro_community6 IPv6 read-only SNMP community string. The default value is public . snmp::ro_network Network that is allowed to RO query the daemon. This value can be a string or an array. The default value is 127.0.0.1 . snmp::ro_network6 Network that is allowed to RO query the daemon with IPv6. This value can be a string or an array. The default value is ::1/128 . tripleo::profile::base::snmp::snmpd_config Array of lines to add to the snmpd.conf file as a safety valve. The default value is [] . See the SNMP Configuration File web page for all available options. For example: This changes the read-only SNMP community string on all nodes. SNMP view-based access control settings (VACM) snmp::com2sec An array of VACM com2sec mappings. Must provide SECNAME, SOURCE and COMMUNITY. snmp::com2sec6 An array of VACM com2sec6 mappings. Must provide SECNAME, SOURCE and COMMUNITY. For example: This changes the read-only SNMP community string on all nodes. For more information, see the snmpd.conf man page. 2.5. Using the Open vSwitch firewall You can configure security groups to use the Open vSwitch (OVS) firewall driver in Red Hat OpenStack Platform director. Use the NeutronOVSFirewallDriver parameter to specify the firewall driver that you want to use: iptables_hybrid - Configures the Networking service (neutron) to use the iptables/hybrid based implementation. openvswitch - Configures the Networking service to use the OVS firewall flow-based driver. The openvswitch firewall driver provides higher performance and reduces the number of interfaces and bridges used to connect guests to the project network. Important Multicast traffic is handled differently by the Open vSwitch (OVS) firewall driver than by the iptables firewall driver. With iptables, by default, VRRP traffic is denied, and you must enable VRRP in the security group rules for any VRRP traffic to reach an endpoint. With OVS, all ports share the same OpenFlow context, and multicast traffic cannot be processed individually per port. Because security groups do not apply to all ports (for example, the ports on a router), OVS uses the NORMAL action and forwards multicast traffic to all ports as specified by RFC 4541. Note The iptables_hybrid option is not compatible with OVS-DPDK. The openvswitch option is not compatible with OVS Hardware Offload. Configure the NeutronOVSFirewallDriver parameter in the network-environment.yaml file: NeutronOVSFirewallDriver: openvswitch NeutronOVSFirewallDriver : Configures the name of the firewall driver that you want to use when you implement security groups. Possible values depend on your system configuration. Some examples are noop , openvswitch , and iptables_hybrid . The default value of an empty string results in a supported configuration.
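After a deployment that includes the ExtraFirewallRules examples above, one quick spot check is to grep the generated iptables rules on an overcloud node for the rule-name comments. This is a sketch; the comments shown are the hypothetical rule names used in those examples:

sudo iptables -L -n | grep 'allow custom application'
sudo iptables -L -n | grep 'drop other rabbit access'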
[ "cat > ~/templates/firewall.yaml <<EOF parameter_defaults: ExtraFirewallRules: '300 allow custom application 1': dport: 999 proto: udp '301 allow custom application 2': dport: 8081 proto: tcp EOF", "openstack overcloud deploy --templates / -e /home/stack/templates/firewall.yaml / .", "[tripleo-admin@overcloud-controller-2 ~]USD sudo iptables -L | grep rabbitmq ACCEPT tcp -- anywhere anywhere multiport dports vtr-emulator,epmd,amqp,25672,25673:25683 state NEW /* 109 rabbitmq-bundle ipv4 */", "cat > ~/templates/firewall.yaml <<EOF parameter_defaults: ExtraFirewallRules: '098 allow rabbit from internalapi network': dport: - 4369 - 5672 - 25672 proto: tcp source: 10.0.0.0/24 '099 drop other rabbit access': dport: - 4369 - 5672 - 25672 proto: tcp action: drop EOF", "openstack overcloud deploy --templates / -e /home/stack/templates/firewall.yaml / .", "parameter_defaults: ExtraConfig: snmp::ro_community: mysecurestring snmp::ro_community6: myv6securestring", "parameter_defaults: ExtraConfig: snmp::com2sec: [\"notConfigUser default mysecurestring\"] snmp::com2sec6: [\"notConfigUser default myv6securestring\"]", "NeutronOVSFirewallDriver: openvswitch" ]
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/17.0/html/security_and_hardening_guide/assembly_security-enhancements_security_and_hardening
Providing feedback on JBoss EAP documentation
Providing feedback on JBoss EAP documentation To report an error or to improve our documentation, log in to your Red Hat Jira account and submit an issue. If you do not have a Red Hat Jira account, you will be prompted to create one. Procedure Click the following link to create a ticket . Include the Document URL , the section number, and a description of the issue. Enter a brief description of the issue in the Summary . Provide a detailed description of the issue or enhancement in the Description . Include a URL to where the issue occurs in the documentation. Clicking Submit creates and routes the issue to the appropriate documentation team.
null
https://docs.redhat.com/en/documentation/red_hat_jboss_enterprise_application_platform/7.4/html/developing_hibernate_applications/proc_providing-feedback-on-red-hat-documentation_default
Chapter 8. About the installer inventory file
Chapter 8. About the installer inventory file Red Hat Ansible Automation Platform works against a list of managed nodes or hosts in your infrastructure that are logically organized, using an inventory file. You can use the Red Hat Ansible Automation Platform installer inventory file to specify your installation scenario and describe host deployments to Ansible. By using an inventory file, Ansible can manage a large number of hosts with a single command. Inventories also help you use Ansible more efficiently by reducing the number of command line options you have to specify. The inventory file can be in one of many formats, depending on the inventory plugins that you have. The most common formats are INI and YAML . Inventory files listed in this document are shown in INI format. The location of the inventory file depends on the installer you used. The following table shows possible locations: Installer Location Bundle tar /ansible-automation-platform-setup-bundle-<latest-version> Non-bundle tar /ansible-automation-platform-setup-<latest-version> RPM /opt/ansible-automation-platform/installer You can verify the hosts in your inventory using the command: ansible all -i <path-to-inventory-file> --list-hosts Example inventory file The first part of the inventory file specifies the hosts or groups that Ansible can work with. 8.1. Guidelines for hosts and groups Databases When using an external database, ensure the [database] sections of your inventory file are properly set up. To improve performance, do not colocate the database and the automation controller on the same server. Automation hub If there is an [automationhub] group, you must include the variables automationhub_pg_host and automationhub_pg_port . Add Ansible automation hub information in the [automationhub] group. Do not install Ansible automation hub and automation controller on the same node. Provide a reachable IP address or fully qualified domain name (FQDN) for the [automationhub] and [automationcontroller] hosts to ensure that users can synchronize and install content from Ansible automation hub and automation controller from a different node. The FQDN must not contain either the - or the _ symbols, as it will not be processed correctly. Do not use localhost . Private automation hub Do not install private automation hub and automation controller on the same node. You can use the same PostgreSQL (database) instance, but they must use a different (database) name. If you install private automation hub from an internal address, and have a certificate which only encompasses the external address, this can result in an installation that you cannot use as a container registry without certificate issues. Important You must separate the installation of automation controller and Ansible automation hub because the [database] group does not distinguish between the two if both are installed at the same time. If you use one value in [database] and both automation controller and Ansible automation hub define it, they would use the same database. Automation controller Automation controller does not configure replication or failover for the database that it uses. Automation controller works with any replication that you have. Clustered installations When upgrading an existing cluster, you can also reconfigure your cluster to omit existing instances or instance groups. Omitting the instance or the instance group from the inventory file is not enough to remove them from the cluster.
In addition to omitting instances or instance groups from the inventory file, you must also deprovision instances or instance groups before starting the upgrade. See Deprovisioning nodes or groups . Otherwise, omitted instances or instance groups continue to communicate with the cluster, which can cause issues with automation controller services during the upgrade. If you are creating a clustered installation setup, you must replace [localhost] with the hostname or IP address of all instances. Installers for automation controller, automation hub, and automation services catalog do not accept [localhost] . All nodes and instances must be able to reach any others by using this hostname or address. You cannot use the localhost ansible_connection=local on one of the nodes. Use the same format for the host names of all the nodes. Therefore, this does not work: [automationhub] localhost ansible_connection=local hostA hostB.example.com 172.27.0.4 Instead, use these formats: [automationhub] hostA hostB hostC or [automationhub] hostA.example.com hostB.example.com hostC.example.com 8.2. Deprovisioning nodes or groups You can deprovision nodes and instance groups using the Ansible Automation Platform installer. Running the installer will remove all configuration files and logs attached to the nodes in the group. Note You can deprovision any hosts in your inventory except for the first host specified in the [automationcontroller] group. To deprovision nodes, append node_state=deprovision to the node or group within the inventory file. For example: To remove a single node from a deployment: [automationcontroller] host1.example.com host2.example.com host4.example.com node_state=deprovision or To remove an entire instance group from a deployment: [instance_group_restrictedzone] host4.example.com host5.example.com [instance_group_restrictedzone:vars] node_state=deprovision 8.3. Inventory variables The second part of the example inventory file, following [all:vars] , is a list of variables used by the installer. Using all means the variables apply to all hosts. To apply variables to a particular host, use [hostname:vars] . For example, [automationhub:vars] . 8.4. Rules for declaring variables in inventory files The values of string variables are declared in quotes. For example: pg_database='awx' pg_username='awx' pg_password='<password>' When declared in a :vars section, INI values are interpreted as strings. For example, var=FALSE creates a string equal to FALSE . Unlike host lines, :vars sections accept only a single entry per line, so everything after the = must be the value for the entry. Host lines accept multiple key=value parameters per line. Therefore they need a way to indicate that a space is part of a value rather than a separator. Values that contain whitespace can be quoted (single or double). See the Python shlex parsing rules for details. If a variable value set in an INI inventory must be a certain type (for example, a string or a boolean value), always specify the type with a filter in your task. Do not rely on types set in INI inventories when consuming variables. Note Consider using YAML format for inventory sources to avoid confusion on the actual type of a variable. The YAML inventory plugin processes variable values consistently and correctly. If a parameter value in the Ansible inventory file contains special characters, such as #, {, or }, you must double-escape the value (that is, enclose the value in both single and double quotation marks).
For example, to use mypasswordwith#hashsigns as a value for the variable pg_password , declare it as pg_password='"mypasswordwith#hashsigns"' in the Ansible host inventory file. 8.5. Securing secrets in the inventory file You can encrypt sensitive or secret variables with Ansible Vault. However, encrypting the variable names as well as the variable values makes it hard to find the source of the values. To circumvent this, you can encrypt the variables individually using ansible-vault encrypt_string , or encrypt a file containing the variables. Procedure Create a file named credentials.yml to store the encrypted credentials. $ cat credentials.yml admin_password: my_long_admin_pw pg_password: my_long_pg_pw registry_password: my_long_registry_pw Encrypt the credentials.yml file using ansible-vault . $ ansible-vault encrypt credentials.yml New Vault password: Confirm New Vault password: Encryption successful Important Store your encrypted vault password in a safe place. Verify that the credentials.yml file is encrypted. $ cat credentials.yml $ANSIBLE_VAULT;1.1; AES256363836396535623865343163333339613833363064653364656138313534353135303764646165393765393063303065323466663330646232363065316666310a373062303133376339633831303033343135343839626136323037616366326239326530623438396136396536356433656162333133653636616639313864300a353239373433313339613465326339313035633565353464356538653631633464343835346432376638623533613666326136343332313163343639393964613265616433363430633534303935646264633034383966336232303365383763 Run setup.sh to install Ansible Automation Platform 2.3, passing both credentials.yml and the --ask-vault-pass option. $ ANSIBLE_BECOME_METHOD='sudo' ANSIBLE_BECOME=True ANSIBLE_HOST_KEY_CHECKING=False ./setup.sh -e @credentials.yml -- --ask-vault-pass 8.6. Additional inventory file variables You can further configure your Red Hat Ansible Automation Platform installation by including additional variables in the inventory file. These configurations add optional features for managing your Red Hat Ansible Automation Platform. Add these variables by editing the inventory file using a text editor. A table of predefined values for inventory file variables can be found in Inventory File Variables in the Red Hat Ansible Automation Platform Installation Guide .
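When only one or two values need protection, an alternative to encrypting the whole file is the ansible-vault encrypt_string subcommand mentioned above. A minimal sketch, where registry_password is just an example variable name:

$ ansible-vault encrypt_string --stdin-name 'registry_password'

Type or paste the secret, press Ctrl+D to end input, and copy the emitted !vault block into the inventory or credentials file as the value of that variable.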
[ "ansible all -i <path-to-inventory-file. --list-hosts", "[automationcontroller] host1.example.com host2.example.com Host4.example.com [automationhub] host3.example.com [database] Host5.example.com [all:vars] admin_password='<password>' pg_host='' pg_port='' pg_database='awx' pg_username='awx' pg_password='<password>' registry_url='registry.redhat.io' registry_username='<registry username>' registry_password='<registry password>'", "[automationhub] localhost ansible_connection=local hostA hostB.example.com 172.27.0.4", "[automationhub] hostA hostB hostC", "[automationhub] hostA.example.com hostB.example.com hostC.example.com", "[automationcontroller] host1.example.com host2.example.com host4.example.com node_state=deprovision", "[instance_group_restrictedzone] host4.example.com host5.example.com [instance_group_restrictedzone:vars] node_state=deprovision", "pg_database='awx' pg_username='awx' pg_password='<password>'", "cat credentials.yml admin_password: my_long_admin_pw pg_password: my_long_pg_pw registry_password: my_long_registry_pw", "ansible-vault encrypt credentials.yml New Vault password: Confirm New Vault password: Encryption successful", "cat credentials.yml USDANSIBLE_VAULT;1.1; AES256363836396535623865343163333339613833363064653364656138313534353135303764646165393765393063303065323466663330646232363065316666310a373062303133376339633831303033343135343839626136323037616366326239326530623438396136396536356433656162333133653636616639313864300a353239373433313339613465326339313035633565353464356538653631633464343835346432376638623533613666326136343332313163343639393964613265616433363430633534303935646264633034383966336232303365383763", "ANSIBLE_BECOME_METHOD='sudo' ANSIBLE_BECOME=True ANSIBLE_HOST_KEY_CHECKING=False ./setup.sh -e @credentials.yml -- --ask-vault-pass" ]
https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.3/html/red_hat_ansible_automation_platform_planning_guide/about_the_installer_inventory_file
7.172. python-nss
7.172. python-nss 7.172.1. RHBA-2015:1324 - python-nss bug fix and enhancement update Updated python-nss packages that fix several bugs and add various enhancements are now available for Red Hat Enterprise Linux 6. The python-nss packages provide bindings for Network Security Services (NSS) that allow Python programs to use the NSS cryptographic libraries for SSL/TLS and PKI certificate management. Note The python-nss packages have been upgraded to upstream version 0.16.0, which provides a number of bug fixes and enhancements over the previous version. (BZ# 1154776 ) Bug Fix BZ# 1154776 Added support for setting trust attributes on a certificate. Added support for the SSL version range API, information on the SSL cipher suites, and information on the SSL connection. Users of python-nss are advised to upgrade to these updated packages, which fix these bugs and add these enhancements.
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.7_technical_notes/package-python-nss
Chapter 2. Installation
Chapter 2. Installation This chapter guides you through the steps to install AMQ .NET in your environment. 2.1. Prerequisites You must have a subscription to access AMQ release files and repositories. To use AMQ .NET on Red Hat Enterprise Linux, you must install the .NET Core 3.1 developer tools. For more information, see the .NET Core 3.1 getting started guide . To build programs using AMQ .NET on Microsoft Windows, you must install Visual Studio. 2.2. Installing on Red Hat Enterprise Linux Procedure Open a browser and log in to the Red Hat Customer Portal Product Downloads page at access.redhat.com/downloads . Locate the Red Hat AMQ Clients entry in the INTEGRATION AND AUTOMATION category. Click Red Hat AMQ Clients . The Software Downloads page opens. Download the AMQ Clients 2.8.0 .NET Core .zip file. Use the unzip command to extract the file contents into a directory of your choosing. $ unzip amq-clients-2.8.0-dotnet-core.zip When you extract the contents of the .zip file, a directory named amq-clients-2.8.0-dotnet-core is created. This is the top-level directory of the installation and is referred to as <install-dir> throughout this document. Use a text editor to create the file $HOME/.nuget/NuGet/NuGet.Config and add the following content: <?xml version="1.0" encoding="utf-8"?> <configuration> <packageSources> <add key="nuget.org" value="https://api.nuget.org/v3/index.json" protocolVersion="3"/> <add key="amq-clients" value=" <install-dir> /nupkg"/> </packageSources> </configuration> If you already have a NuGet.Config file, add the amq-clients line to it. Alternatively, you can move the .nupkg file inside the <install-dir> /nupkg directory to an existing package source location. 2.3. Installing on Microsoft Windows Procedure Open a browser and log in to the Red Hat Customer Portal Product Downloads page at access.redhat.com/downloads . Locate the Red Hat AMQ Clients entry in the INTEGRATION AND AUTOMATION category. Click Red Hat AMQ Clients . The Software Downloads page opens. Download the AMQ Clients 2.8.0 .NET .zip file. Extract the file contents into a directory of your choosing by right-clicking on the zip file and selecting Extract All . When you extract the contents of the .zip file, a directory named amq-clients-2.8.0-dotnet is created. This is the top-level directory of the installation and is referred to as <install-dir> throughout this document.
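To confirm that the new package source is visible to the .NET tooling, a quick check from a shell (a sketch; it assumes the .NET Core SDK is on the PATH) is:

$ dotnet nuget list source

The amq-clients entry should appear in the output alongside nuget.org.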
[ "unzip amq-clients-2.8.0-dotnet-core.zip", "<?xml version=\"1.0\" encoding=\"utf-8\"?> <configuration> <packageSources> <add key=\"nuget.org\" value=\"https://api.nuget.org/v3/index.json\" protocolVersion=\"3\"/> <add key=\"amq-clients\" value=\" <install-dir> /nupkg\"/> </packageSources> </configuration>" ]
https://docs.redhat.com/en/documentation/red_hat_amq/2020.q4/html/using_the_amq_.net_client/installation
Chapter 301. Servlet Component
Chapter 301. Servlet Component Available as of Camel version 2.0 The servlet: component provides HTTP based endpoints for consuming HTTP requests that arrive at an HTTP endpoint that is bound to a published Servlet. Maven users will need to add the following dependency to their pom.xml for this component: <dependency> <groupId>org.apache.camel</groupId> <artifactId>camel-servlet</artifactId> <version>x.x.x</version> <!-- use the same version as your Camel core version --> </dependency> Note The Servlet component is stream based, which means the input it receives is submitted to Camel as a stream. That means you will only be able to read the content of the stream once . If you find a situation where the message body appears to be empty or you need to access the data multiple times (eg: doing multicasting, or redelivery error handling) you should use Stream caching or convert the message body to a String which is safe to be read multiple times. 301.1. URI format servlet://relative_path[?options] You can append query options to the URI in the following format: ?option=value&option=value&... 301.2. Options The Servlet component supports 9 options, which are listed below. Name Description Default Type servletName (consumer) Default name of servlet to use. The default name is CamelServlet. CamelServlet String httpRegistry (consumer) To use a custom org.apache.camel.component.servlet.HttpRegistry. HttpRegistry attachmentMultipartBinding (consumer) Whether to automatically bind multipart/form-data as attachments on the Camel Exchange. The options attachmentMultipartBinding=true and disableStreamCache=false cannot work together. Remove disableStreamCache to use AttachmentMultipartBinding. This is turned off by default, as it may require servlet-specific configuration to enable when using Servlets. false boolean fileNameExtWhitelist (consumer) Whitelist of accepted filename extensions for accepting uploaded files. Multiple extensions can be separated by comma, such as txt,xml. String httpBinding (advanced) To use a custom HttpBinding to control the mapping between Camel message and HttpClient. HttpBinding httpConfiguration (advanced) To use the shared HttpConfiguration as base configuration. HttpConfiguration allowJavaSerializedObject (advanced) Whether to allow java serialization when a request uses content-type=application/x-java-serialized-object. This is by default turned off. If you enable this then be aware that Java will deserialize the incoming data from the request to Java and that can be a potential security risk. false boolean headerFilterStrategy (filter) To use a custom org.apache.camel.spi.HeaderFilterStrategy to filter header to and from Camel message. HeaderFilterStrategy resolvePropertyPlaceholders (advanced) Whether the component should resolve property placeholders on itself when starting. Only properties which are of String type can use property placeholders. true boolean The Servlet endpoint is configured using URI syntax: servlet:contextPath with the following path and query parameters: 301.2.1. Path Parameters (1 parameter): Name Description Default Type contextPath Required The context-path to use String 301.2.2. Query Parameters (22 parameters): Name Description Default Type disableStreamCache (common) Determines whether the raw input stream from Servlet is cached (Camel will read the stream into an in-memory cache, overflowing to file, see Stream caching).
By default Camel will cache the Servlet input stream to support reading it multiple times, to ensure that Camel can retrieve all data from the stream. However, you can set this option to true when you, for example, need to access the raw stream, such as when streaming it directly to a file or other persistent store. DefaultHttpBinding will copy the request input stream into a stream cache and put it into message body if this option is false to support reading the stream multiple times. If you use Servlet to bridge/proxy an endpoint then consider enabling this option to improve performance, in case you do not need to read the message payload multiple times. The http/http4 producer will by default cache the response body stream. If setting this option to true, then the producers will not cache the response body stream but use the response stream as-is as the message body. false boolean headerFilterStrategy (common) To use a custom HeaderFilterStrategy to filter header to and from Camel message. HeaderFilterStrategy httpBinding (common) To use a custom HttpBinding to control the mapping between Camel message and HttpClient. HttpBinding async (consumer) Configure the consumer to work in async mode false boolean bridgeErrorHandler (consumer) Allows for bridging the consumer to the Camel routing Error Handler, which means any exceptions that occur while the consumer is trying to pick up incoming messages, or the like, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false boolean chunked (consumer) If this option is false, the Servlet will disable HTTP streaming and set the content-length header on the response true boolean httpMethodRestrict (consumer) Used to only allow consuming if the HttpMethod matches, such as GET/POST/PUT etc. Multiple methods can be specified separated by comma. String matchOnUriPrefix (consumer) Whether or not the consumer should try to find a target consumer by matching the URI prefix if no exact match is found. false boolean responseBufferSize (consumer) To use a custom buffer size on the javax.servlet.ServletResponse. Integer servletName (consumer) Name of the servlet to use CamelServlet String transferException (consumer) If enabled, and an Exchange failed processing on the consumer side, the caused Exception is sent back serialized in the response as an application/x-java-serialized-object content type. On the producer side the exception will be deserialized and thrown as is, instead of the HttpOperationFailedException. The caused exception is required to be serialized. This is by default turned off. If you enable this then be aware that Java will deserialize the incoming data from the request to Java and that can be a potential security risk. false boolean attachmentMultipartBinding (consumer) Whether to automatically bind multipart/form-data as attachments on the Camel Exchange. The options attachmentMultipartBinding=true and disableStreamCache=false cannot work together. Remove disableStreamCache to use AttachmentMultipartBinding. This is turned off by default, as it may require servlet-specific configuration to enable when using Servlets. false boolean eagerCheckContentAvailable (consumer) Whether to eagerly check whether the HTTP request has content if the content-length header is 0 or not present. This can be turned on in case HTTP clients do not send streamed data.
false boolean exceptionHandler (consumer) To let the consumer use a custom ExceptionHandler. Notice that if the option bridgeErrorHandler is enabled, this option is not in use. By default the consumer will deal with exceptions, that will be logged at WARN or ERROR level and ignored. ExceptionHandler exchangePattern (consumer) Sets the exchange pattern when the consumer creates an exchange. ExchangePattern fileNameExtWhitelist (consumer) Whitelist of accepted filename extensions for accepting uploaded files. Multiple extensions can be separated by comma, such as txt,xml. String optionsEnabled (consumer) Specifies whether to enable HTTP OPTIONS for this Servlet consumer. By default OPTIONS is turned off. false boolean traceEnabled (consumer) Specifies whether to enable HTTP TRACE for this Servlet consumer. By default TRACE is turned off. false boolean mapHttpMessageBody (advanced) If this option is true, the IN exchange Body will be mapped to the HTTP body. Setting this to false will avoid the HTTP mapping. true boolean mapHttpMessageFormUrlEncodedBody (advanced) If this option is true, the IN exchange Form Encoded body will be mapped to HTTP. Setting this to false will avoid the HTTP Form Encoded body mapping. true boolean mapHttpMessageHeaders (advanced) If this option is true, the IN exchange Headers will be mapped to HTTP headers. Setting this to false will avoid the HTTP Headers mapping. true boolean synchronous (advanced) Sets whether synchronous processing should be strictly used, or Camel is allowed to use asynchronous processing (if supported). false boolean 301.3. Spring Boot Auto-Configuration The component supports 13 options, which are listed below. Name Description Default Type camel.component.servlet.allow-java-serialized-object Whether to allow java serialization when a request uses content-type=application/x-java-serialized-object. This is by default turned off. If you enable this then be aware that Java will deserialize the incoming data from the request to Java and that can be a potential security risk. false Boolean camel.component.servlet.attachment-multipart-binding Whether to automatically bind multipart/form-data as attachments on the Camel Exchange. The options attachmentMultipartBinding=true and disableStreamCache=false cannot work together. Remove disableStreamCache to use AttachmentMultipartBinding. This is turned off by default, as it may require servlet-specific configuration to enable when using Servlets. false Boolean camel.component.servlet.enabled Enable servlet component true Boolean camel.component.servlet.file-name-ext-whitelist Whitelist of accepted filename extensions for accepting uploaded files. Multiple extensions can be separated by comma, such as txt,xml. String camel.component.servlet.header-filter-strategy To use a custom org.apache.camel.spi.HeaderFilterStrategy to filter header to and from Camel message. The option is a org.apache.camel.spi.HeaderFilterStrategy type. String camel.component.servlet.http-binding To use a custom HttpBinding to control the mapping between Camel message and HttpClient. The option is a org.apache.camel.http.common.HttpBinding type. String camel.component.servlet.http-configuration To use the shared HttpConfiguration as base configuration. The option is a org.apache.camel.http.common.HttpConfiguration type. String camel.component.servlet.http-registry To use a custom org.apache.camel.component.servlet.HttpRegistry.
The option is a org.apache.camel.component.servlet.HttpRegistry type. String camel.component.servlet.mapping.context-path Context path used by the servlet component for automatic mapping. /camel/* String camel.component.servlet.mapping.enabled Enables the automatic mapping of the servlet component into the Spring web context. true Boolean camel.component.servlet.mapping.servlet-name The name of the Camel servlet. CamelServlet String camel.component.servlet.resolve-property-placeholders Whether the component should resolve property placeholders on itself when starting. Only properties which are of String type can use property placeholders. true Boolean camel.component.servlet.servlet-name Default name of servlet to use. The default name is CamelServlet. CamelServlet String 301.4. Message Headers Camel will apply the same Message Headers as the HTTP component. Camel will also populate all request.parameter and request.headers . For example, if a client request has the URL, http://myserver/myserver?orderid=123 , the exchange will contain a header named orderid with the value 123. 301.5. Usage You can consume only from endpoints generated by the Servlet component. Therefore, it should be used only as input into your Camel routes. To issue HTTP requests against other HTTP endpoints, use the HTTP Component . 301.6. Putting Camel JARs in the app server boot classpath If you put the Camel JARs such as camel-core , camel-servlet , etc. in the boot classpath of your application server (eg usually in its lib directory), then mind that the servlet mapping list is now shared between multiple deployed Camel applications in the app server. Mind that putting Camel JARs in the boot classpath of the application server is generally not best practice! So in those situations you must define a custom and unique servlet name in each of your Camel applications, eg in the web.xml define: <servlet> <servlet-name>MyServlet</servlet-name> <servlet-class>org.apache.camel.component.servlet.CamelHttpTransportServlet</servlet-class> <load-on-startup>1</load-on-startup> </servlet> <servlet-mapping> <servlet-name>MyServlet</servlet-name> <url-pattern>/*</url-pattern> </servlet-mapping> And in your Camel endpoints then include the servlet name as well <route> <from uri="servlet://foo?servletName=MyServlet"/> ... </route> From Camel 2.11 onwards Camel will detect this duplicate and fail to start the application. You can choose to ignore this duplicate by setting the servlet init-parameter ignoreDuplicateServletName to true as follows: <servlet> <servlet-name>CamelServlet</servlet-name> <display-name>Camel Http Transport Servlet</display-name> <servlet-class>org.apache.camel.component.servlet.CamelHttpTransportServlet</servlet-class> <init-param> <param-name>ignoreDuplicateServletName</param-name> <param-value>true</param-value> </init-param> </servlet> But it is strongly advised to use a unique servlet-name for each Camel application to avoid this duplication clash, as well as any unforeseen side-effects. 301.7. Sample Note From Camel 2.7 onwards it's easier to use Servlet in Spring web applications. See Servlet Tomcat Example for details. In this sample, we define a route that exposes an HTTP service at http://localhost:8080/camel/services/hello . First, you need to publish the CamelHttpTransportServlet through the normal Web Container, or OSGi Service.
Use the web.xml file to publish the CamelHttpTransportServlet as follows: <web-app> <servlet> <servlet-name>CamelServlet</servlet-name> <display-name>Camel Http Transport Servlet</display-name> <servlet-class>org.apache.camel.component.servlet.CamelHttpTransportServlet</servlet-class> </servlet> <servlet-mapping> <servlet-name>CamelServlet</servlet-name> <url-pattern>/services/*</url-pattern> </servlet-mapping> </web-app> Then you can define your route as follows: from("servlet:hello?matchOnUriPrefix=true").process(new Processor() { public void process(Exchange exchange) throws Exception { String contentType = exchange.getIn().getHeader(Exchange.CONTENT_TYPE, String.class); String path = exchange.getIn().getHeader(Exchange.HTTP_URI, String.class); path = path.substring(path.lastIndexOf("/")); assertEquals("Get a wrong content type", CONTENT_TYPE, contentType); // assert camel http header String charsetEncoding = exchange.getIn().getHeader(Exchange.HTTP_CHARACTER_ENCODING, String.class); assertEquals("Get a wrong charset name from the message header", "UTF-8", charsetEncoding); // assert exchange charset assertEquals("Get a wrong charset name from the exchange property", "UTF-8", exchange.getProperty(Exchange.CHARSET_NAME)); exchange.getOut().setHeader(Exchange.CONTENT_TYPE, contentType + "; charset=UTF-8"); exchange.getOut().setHeader("PATH", path); exchange.getOut().setBody("<b>Hello World</b>"); } }); Note Specify the relative path for camel-servlet endpoint Since we are binding the HTTP transport with a published servlet, and we don't know the servlet's application context path, the camel-servlet endpoint uses the relative path to specify the endpoint's URL. A client can access the camel-servlet endpoint through the servlet publish address: ("http://localhost:8080/camel/services") + RELATIVE_PATH("/hello") 301.7.1. Sample when using Spring 3.x See Servlet Tomcat Example . 301.7.2. Sample when using Spring 2.x When using the Servlet component in a Camel/Spring application it's often required to load the Spring ApplicationContext after the Servlet component has started. This can be accomplished by using Spring's ContextLoaderServlet instead of ContextLoaderListener . In that case you'll need to start ContextLoaderServlet after CamelHttpTransportServlet like this: <web-app> <servlet> <servlet-name>CamelServlet</servlet-name> <servlet-class> org.apache.camel.component.servlet.CamelHttpTransportServlet </servlet-class> <load-on-startup>1</load-on-startup> </servlet> <servlet> <servlet-name>SpringApplicationContext</servlet-name> <servlet-class> org.springframework.web.context.ContextLoaderServlet </servlet-class> <load-on-startup>2</load-on-startup> </servlet> </web-app> 301.7.3. Sample when using OSGi From Camel 2.6.0 , you can publish the CamelHttpTransportServlet as an OSGi service with Blueprint like this: <blueprint xmlns="http://www.osgi.org/xmlns/blueprint/v1.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation=" http://www.osgi.org/xmlns/blueprint/v1.0.0 https://www.osgi.org/xmlns/blueprint/v1.0.0/blueprint.xsd"> <bean id="camelServlet" class="org.apache.camel.component.servlet.CamelHttpTransportServlet" /> <!-- Enlist it in OSGi service registry.
This will cause two things: 1) As the pax web whiteboard extender is running, the CamelServlet will be registered with the OSGi HTTP Service 2) It will trigger the HttpRegistry in other bundles so the servlet is made known there too --> <service ref="camelServlet"> <interfaces> <value>javax.servlet.Servlet</value> <value>org.apache.camel.http.common.CamelServlet</value> </interfaces> <service-properties> <entry key="alias" value="/camel/services" /> <entry key="matchOnUriPrefix" value="true" /> <entry key="servlet-name" value="CamelServlet" /> </service-properties> </service> </blueprint> Then use this service in your Camel route like this: <blueprint xmlns="http://www.osgi.org/xmlns/blueprint/v1.0.0" xmlns:ext="http://aries.apache.org/blueprint/xmlns/blueprint-ext/v1.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation=" http://www.osgi.org/xmlns/blueprint/v1.0.0 https://www.osgi.org/xmlns/blueprint/v1.0.0/blueprint.xsd"> <reference id="servletref" ext:proxy-method="classes" interface="org.apache.camel.http.common.CamelServlet"> <reference-listener ref="httpRegistry" bind-method="register" unbind-method="unregister" /> </reference> <bean id="httpRegistry" class="org.apache.camel.component.servlet.DefaultHttpRegistry" /> <bean id="servlet" class="org.apache.camel.component.servlet.ServletComponent"> <property name="httpRegistry" ref="httpRegistry" /> </bean> <bean id="servletProcessor" class="org.apache.camel.example.servlet.ServletProcessor" /> <camelContext xmlns="http://camel.apache.org/schema/blueprint"> <route> <!-- Notice how we can use the servlet scheme which is the reference above --> <from uri="servlet://hello" /> <process ref="servletProcessor" /> </route> </camelContext> </blueprint> For versions prior to Camel 2.6 you can use an Activator to publish the CamelHttpTransportServlet on the OSGi platform: import java.util.Dictionary; import java.util.Hashtable; import org.apache.camel.component.servlet.CamelHttpTransportServlet; import org.osgi.framework.BundleActivator; import org.osgi.framework.BundleContext; import org.osgi.framework.ServiceReference; import org.osgi.service.http.HttpContext; import org.osgi.service.http.HttpService; import org.slf4j.Logger; import org.slf4j.LoggerFactory; import org.springframework.osgi.context.BundleContextAware; public final class ServletActivator implements BundleActivator, BundleContextAware { private static final Logger LOG = LoggerFactory.getLogger(ServletActivator.class); private static boolean registerService; /** * HttpService reference.
*/ private ServiceReference<?> httpServiceRef; /** * Called when the OSGi framework starts our bundle */ public void start(BundleContext bc) throws Exception { registerServlet(bc); } /** * Called when the OSGi framework stops our bundle */ public void stop(BundleContext bc) throws Exception { if (httpServiceRef != null) { bc.ungetService(httpServiceRef); httpServiceRef = null; } } protected void registerServlet(BundleContext bundleContext) throws Exception { httpServiceRef = bundleContext.getServiceReference(HttpService.class.getName()); if (httpServiceRef != null && !registerService) { LOG.info("Register the servlet service"); final HttpService httpService = (HttpService)bundleContext.getService(httpServiceRef); if (httpService != null) { // create a default context to share between registrations final HttpContext httpContext = httpService.createDefaultHttpContext(); // register the hello world servlet final Dictionary<String, String> initParams = new Hashtable<String, String>(); initParams.put("matchOnUriPrefix", "false"); initParams.put("servlet-name", "CamelServlet"); httpService.registerServlet("/camel/services", // alias new CamelHttpTransportServlet(), // register servlet initParams, // init params httpContext // http context ); registerService = true; } } } public void setBundleContext(BundleContext bc) { try { registerServlet(bc); } catch (Exception e) { LOG.error("Cannot register the servlet, the reason is " + e); } } } 301.7.4. Usage with Spring-Boot From Camel 2.19.0 onwards, the camel-servlet-starter library automatically binds all the rest endpoints under the /camel/* context path. The following table summarizes the additional configuration properties available in the camel-servlet-starter library. The automatic mapping of the Camel servlet can also be disabled. Spring-Boot Property Default Description camel.component.servlet.mapping.enabled true Enables the automatic mapping of the servlet component into the Spring web context camel.component.servlet.mapping.context-path /camel/* Context path used by the servlet component for automatic mapping camel.component.servlet.mapping.servlet-name CamelServlet The name of the Camel servlet 301.8. See Also Configuring Camel Component Endpoint Getting Started Servlet Tomcat Example Servlet Tomcat No Spring Example HTTP Jetty 301.9. ServletListener Component Available as of Camel 2.11 This component is used for bootstrapping Camel applications in web applications. For example, beforehand people would have to find their own way of bootstrapping Camel, or rely on 3rd party frameworks such as Spring to do it. Note This component supports Servlet 2.x onwards, which means it also works in older web containers, which is the goal of this component. Though Servlet 2.x requires the use of a web.xml file as configuration. For Servlet 3.x containers you can use annotation driven configuration to bootstrap Camel using the @WebListener, and implement your own class, where you bootstrap Camel. Doing this still leaves the challenge of how to let end users easily configure Camel, which you get for free with the old school web.xml file. Maven users will need to add the following dependency to their pom.xml for this component: <dependency> <groupId>org.apache.camel</groupId> <artifactId>camel-servletlistener</artifactId> <version>x.x.x</version> <!-- use the same version as your Camel core version --> </dependency> 301.9.1.
Using You would need to choose one of the following implementations of the abstract class org.apache.camel.component.servletlistener.CamelServletContextListener . JndiCamelServletContextListener which uses the JndiRegistry to leverage JNDI for its registry. SimpleCamelServletContextListener which uses the SimpleRegistry to leverage a java.util.Map as its registry. To use this you need to configure the org.apache.camel.component.servletlistener.CamelServletContextListener in the WEB-INF/web.xml file as shown below: 301.9.2. Options The org.apache.camel.component.servletlistener.CamelServletContextListener supports the following options which can be configured as context-param in the web.xml file. Option Type Description propertyPlaceholder.XXX To configure property placeholders in Camel. You should prefix the option with "propertyPlaceholder.", for example to configure the location, use propertyPlaceholder.location as name. You can configure all the options from the Properties component. jmx.XXX To configure JMX. You should prefix the option with "jmx.", for example to disable JMX, use jmx.disabled as name. You can configure all the options from org.apache.camel.spi.ManagementAgent . As well as the options mentioned on the JMX page. name String To configure the name of the CamelContext. messageHistory Boolean Camel 2.12.2: Whether to enable or disable Message History (enabled by default). streamCache Boolean Whether to enable Stream caching. trace Boolean Whether to enable Tracer. delayer Long To set a delay value for Delay Interceptor. handleFault Boolean Whether to enable handle fault. errorHandlerRef String Refers to a context scoped Error Handler to be used. autoStartup Boolean Whether to start all routes when starting Camel. useMDCLogging Boolean Whether to use MDC logging. useBreadcrumb Boolean Whether to use breadcrumb. managementNamePattern String To set a custom naming pattern for JMX MBeans. threadNamePattern String To set a custom naming pattern for threads. properties.XXX To set custom properties on CamelContext.getProperties . This is seldom used. routebuilder.XXX To configure routes to be used. See below for more details. CamelContextLifecycle Refers to an FQN classname of an implementation of org.apache.camel.component.servletlistener.CamelContextLifecycle . This allows you to execute custom code before and after the CamelContext has been started or stopped. See below for further details. XXX To set any option on CamelContext. 301.9.3. Examples See Servlet Tomcat No Spring Example . 301.9.4. Accessing the created CamelContext Available as of Camel 2.14/2.13.3/2.12.5 The created CamelContext is stored on the ServletContext as an attribute with the key "CamelContext". You can get hold of the CamelContext if you can get hold of the ServletContext as shown below: ServletContext sc = ... CamelContext camel = (CamelContext) sc.getAttribute("CamelContext"); 301.9.5. Configuring routes You need to configure which routes to use in the web.xml file. You can do this in a number of ways, though all the parameters must be prefixed with "routeBuilder". 301.9.5.1.
Using a RouteBuilder class By default Camel will assume the param-value is a FQN classname for a Camel RouteBuilder class, as shown below: <context-param> <param-name>routeBuilder-MyRoute</param-name> <param-value>org.apache.camel.component.servletlistener.MyRoute</param-value> </context-param> You can specify multiple classes in the same param-value as shown below: <context-param> <param-name>routeBuilder-routes</param-name> <!-- we can define multiple values separated by comma --> <param-value> org.apache.camel.component.servletlistener.MyRoute, org.apache.camel.component.servletlistener.routes.BarRouteBuilder </param-value> </context-param> The name of the parameter does not have a meaning at runtime. It just needs to be unique and start with "routeBuilder". In the example above we have "routeBuilder-routes". But you could just as well have named it "routeBuilder.foo". 301.9.5.2. Using package scanning You can also tell Camel to use package scanning, which means it will look in the given package for all classes of RouteBuilder types and automatically add them as Camel routes. To do that you need to prefix the value with "packagescan:" as shown below: <context-param> <param-name>routeBuilder-MyRoute</param-name> <!-- define the routes using package scanning by prefixing with packagescan: --> <param-value>packagescan:org.apache.camel.component.servletlistener.routes</param-value> </context-param> 301.9.5.3. Using an XML file You can also define Camel routes using the XML DSL, though because we are not using Spring or Blueprint, the XML file can only contain Camel route(s). In the web.xml you refer to the XML file, which can be loaded from "classpath", "file", or an "http" URL, as shown below: <context-param> <param-name>routeBuilder-MyRoute</param-name> <param-value>classpath:routes/myRoutes.xml</param-value> </context-param> And the XML file is: routes/myRoutes.xml <?xml version="1.0" encoding="UTF-8"?> <!-- the xmlns="http://camel.apache.org/schema/spring" is needed --> <routes xmlns="http://camel.apache.org/schema/spring"> <route id="foo"> <from uri="direct:foo"/> <to uri="mock:foo"/> </route> <route id="bar"> <from uri="direct:bar"/> <to uri="mock:bar"/> </route> </routes> Notice that in the XML file the root tag is <routes> , which must use the namespace "http://camel.apache.org/schema/spring". The namespace has spring in its name for historical reasons, as Spring was the first and only XML DSL at the time. No Spring JARs are needed at runtime. Maybe in Camel 3.0 the namespace can be renamed to a generic name. 301.9.5.4. Configuring property placeholders Here is a snippet of a web.xml configuration for setting up property placeholders to load myproperties.properties from the classpath: <!-- setup property placeholder to load properties from classpath --> <!-- we do this by setting the param-name with propertyPlaceholder. as prefix and then any options such as location, cache etc --> <context-param> <param-name>propertyPlaceholder.location</param-name> <param-value>classpath:myproperties.properties</param-value> </context-param> <!-- for example to disable cache on properties component, you do --> <context-param> <param-name>propertyPlaceholder.cache</param-name> <param-value>false</param-value> </context-param> 301.9.5.5. Configuring JMX Here is a snippet of a web.xml configuration for configuring JMX, such as disabling JMX: <!-- configure JMX by using names that is prefixed with jmx.
--> <!-- in this example we disable JMX --> <context-param> <param-name>jmx.disabled</param-name> <param-value>true</param-value> </context-param> JNDI or Simple as Camel Registry This component uses either JNDI or Simple as the Registry. This allows you to look up beans and other services in JNDI, as well as to bind and unbind your own beans. This is done from Java code by implementing the org.apache.camel.component.servletlistener.CamelContextLifecycle . 301.9.5.6. Using custom CamelContextLifecycle In the code below (a minimal sketch appears at the end of this section) we use the callbacks beforeStart and afterStop to enlist our custom bean in the Simple Registry, and to clean up when we stop. Then we need to register this class in the web.xml file as shown below, using the parameter name "CamelContextLifecycle". The value must be an FQN which refers to the class implementing the org.apache.camel.component.servletlistener.CamelContextLifecycle interface. <context-param> <param-name>CamelContextLifecycle</param-name> <param-value>org.apache.camel.component.servletlistener.MyLifecycle</param-value> </context-param> As we enlisted our HelloBean bean using the name "myBean", we can refer to this bean in the Camel routes as shown below: public class MyBeanRoute extends RouteBuilder { @Override public void configure() throws Exception { from("seda:foo").routeId("foo") .to("bean:myBean") .to("mock:foo"); } } Important: If you use org.apache.camel.component.servletlistener.JndiCamelServletContextListener , then the CamelContextLifecycle must use the JndiRegistry as well. Likewise, if the listener is org.apache.camel.component.servletlistener.SimpleCamelServletContextListener , then the CamelContextLifecycle must use the SimpleRegistry . 301.9.6. See Also SERVLET Servlet Tomcat Example Servlet Tomcat No Spring Example
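The MyLifecycle class referenced above is not shown in this excerpt; the following is a minimal sketch of what such a class might look like. The generic parameter, the exact callback signatures, and the HelloBean class are assumptions based on the CamelContextLifecycle contract described in this section, not code taken from this document.

import org.apache.camel.component.servletlistener.CamelContextLifecycle;
import org.apache.camel.component.servletlistener.ServletCamelContext;
import org.apache.camel.impl.SimpleRegistry;

public class MyLifecycle implements CamelContextLifecycle<SimpleRegistry> {

    @Override
    public void beforeStart(ServletCamelContext camelContext, SimpleRegistry registry) throws Exception {
        // enlist our custom bean in the Simple Registry under the name "myBean"
        registry.put("myBean", new HelloBean());
    }

    @Override
    public void afterStop(ServletCamelContext camelContext, SimpleRegistry registry) throws Exception {
        // clean up by removing our bean when Camel has been stopped
        registry.remove("myBean");
    }

    // the remaining lifecycle callbacks (afterStart, beforeStop, and so on)
    // would be empty no-op implementations in this sketch
}

With this in place, the MyBeanRoute route shown above can resolve "bean:myBean" from the Simple Registry.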
[ "<dependency> <groupId>org.apache.camel</groupId> <artifactId>camel-servlet</artifactId> <version>x.x.x</version> <!-- use the same version as your Camel core version --> </dependency>", "servlet://relative_path[?options]", "servlet:contextPath", "<servlet> <servlet-name>MyServlet</servlet-name> <servlet-class>org.apache.camel.component.servlet.CamelHttpTransportServlet</servlet-class> <load-on-startup>1</load-on-startup> </servlet> <servlet-mapping> <servlet-name>MyServlet</servlet-name> <url-pattern>/*</url-pattern> </servlet-mapping>", "<route> <from uri=\"servlet://foo?servletName=MyServlet\"/> </route>", "<servlet> <servlet-name>CamelServlet</servlet-name> <display-name>Camel Http Transport Servlet</display-name> <servlet-class>org.apache.camel.component.servlet.CamelHttpTransportServlet</servlet-class> <init-param> <param-name>ignoreDuplicateServletName</param-name> <param-value>true</param-value> </init-param> </servlet>", "<web-app> <servlet> <servlet-name>CamelServlet</servlet-name> <display-name>Camel Http Transport Servlet</display-name> <servlet-class>org.apache.camel.component.servlet.CamelHttpTransportServlet</servlet-class> </servlet> <servlet-mapping> <servlet-name>CamelServlet</servlet-name> <url-pattern>/services/*</url-pattern> </servlet-mapping> </web-app>", "from(\"servlet:hello?matchOnUriPrefix=true\").process(new Processor() { public void process(Exchange exchange) throws Exception { String contentType = exchange.getIn().getHeader(Exchange.CONTENT_TYPE, String.class); String path = exchange.getIn().getHeader(Exchange.HTTP_URI, String.class); path = path.substring(path.lastIndexOf(\"/\")); assertEquals(\"Get a wrong content type\", CONTENT_TYPE, contentType); // assert camel http header String charsetEncoding = exchange.getIn().getHeader(Exchange.HTTP_CHARACTER_ENCODING, String.class); assertEquals(\"Get a wrong charset name from the message heaer\", \"UTF-8\", charsetEncoding); // assert exchange charset assertEquals(\"Get a wrong charset naem from the exchange property\", \"UTF-8\", exchange.getProperty(Exchange.CHARSET_NAME)); exchange.getOut().setHeader(Exchange.CONTENT_TYPE, contentType + \"; charset=UTF-8\"); exchange.getOut().setHeader(\"PATH\", path); exchange.getOut().setBody(\"<b>Hello World</b>\"); } });", "<web-app> <servlet> <servlet-name>CamelServlet</servlet-name> <servlet-class> org.apache.camel.component.servlet.CamelHttpTransportServlet </servlet-class> <load-on-startup>1</load-on-startup> </servlet> <servlet> <servlet-name>SpringApplicationContext</servlet-name> <servlet-class> org.springframework.web.context.ContextLoaderServlet </servlet-class> <load-on-startup>2</load-on-startup> </servlet> <web-app>", "<blueprint xmlns=\"http://www.osgi.org/xmlns/blueprint/v1.0.0\" xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\" xsi:schemaLocation=\" http://www.osgi.org/xmlns/blueprint/v1.0.0 https://www.osgi.org/xmlns/blueprint/v1.0.0/blueprint.xsd\"> <bean id=\"camelServlet\" class=\"org.apache.camel.component.servlet.CamelHttpTransportServlet\" /> <!-- Enlist it in OSGi service registry. 
This will cause two things: 1) As the pax web whiteboard extender is running the CamelServlet will be registered with the OSGi HTTP Service 2) It will trigger the HttpRegistry in other bundles so the servlet is made known there too --> <service ref=\"camelServlet\"> <interfaces> <value>javax.servlet.Servlet</value> <value>org.apache.camel.http.common.CamelServlet</value> </interfaces> <service-properties> <entry key=\"alias\" value=\"/camel/services\" /> <entry key=\"matchOnUriPrefix\" value=\"true\" /> <entry key=\"servlet-name\" value=\"CamelServlet\" /> </service-properties> </service> </blueprint>", "<blueprint xmlns=\"http://www.osgi.org/xmlns/blueprint/v1.0.0\" xmlns:ext=\"http://aries.apache.org/blueprint/xmlns/blueprint-ext/v1.0.0\" xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\" xsi:schemaLocation=\" http://www.osgi.org/xmlns/blueprint/v1.0.0 https://www.osgi.org/xmlns/blueprint/v1.0.0/blueprint.xsd\"> <reference id=\"servletref\" ext:proxy-method=\"classes\" interface=\"org.apache.camel.http.common.CamelServlet\"> <reference-listener ref=\"httpRegistry\" bind-method=\"register\" unbind-method=\"unregister\" /> </reference> <bean id=\"httpRegistry\" class=\"org.apache.camel.component.servlet.DefaultHttpRegistry\" /> <bean id=\"servlet\" class=\"org.apache.camel.component.servlet.ServletComponent\"> <property name=\"httpRegistry\" ref=\"httpRegistry\" /> </bean> <bean id=\"servletProcessor\" class=\"org.apache.camel.example.servlet.ServletProcessor\" /> <camelContext xmlns=\"http://camel.apache.org/schema/blueprint\"> <route> <!-- Notice how we can use the servlet scheme which is that reference above --> <from uri=\"servlet://hello\" /> <process ref=\"servletProcessor\" /> </route> </camelContext> </blueprint>", "import java.util.Dictionary; import java.util.Hashtable; import org.apache.camel.component.servlet.CamelHttpTransportServlet; import org.osgi.framework.BundleActivator; import org.osgi.framework.BundleContext; import org.osgi.framework.ServiceReference; import org.osgi.service.http.HttpContext; import org.osgi.service.http.HttpService; import org.slf4j.Logger; import org.slf4j.LoggerFactory; import org.springframework.osgi.context.BundleContextAware; public final class ServletActivator implements BundleActivator, BundleContextAware { private static final Logger LOG = LoggerFactory.getLogger(ServletActivator.class); private static boolean registerService; /** * HttpService reference. 
*/ private ServiceReference<?> httpServiceRef; /** * Called when the OSGi framework starts our bundle */ public void start(BundleContext bc) throws Exception { registerServlet(bc); } /** * Called when the OSGi framework stops our bundle */ public void stop(BundleContext bc) throws Exception { if (httpServiceRef != null) { bc.ungetService(httpServiceRef); httpServiceRef = null; } } protected void registerServlet(BundleContext bundleContext) throws Exception { httpServiceRef = bundleContext.getServiceReference(HttpService.class.getName()); if (httpServiceRef != null && !registerService) { LOG.info(\"Register the servlet service\"); final HttpService httpService = (HttpService)bundleContext.getService(httpServiceRef); if (httpService != null) { // create a default context to share between registrations final HttpContext httpContext = httpService.createDefaultHttpContext(); // register the hello world servlet final Dictionary<String, String> initParams = new Hashtable<String, String>(); initParams.put(\"matchOnUriPrefix\", \"false\"); initParams.put(\"servlet-name\", \"CamelServlet\"); httpService.registerServlet(\"/camel/services\", // alias new CamelHttpTransportServlet(), // register servlet initParams, // init params httpContext // http context ); registerService = true; } } } public void setBundleContext(BundleContext bc) { try { registerServlet(bc); } catch (Exception e) { LOG.error(\"Cannot register the servlet, the reason is \" + e); } } }", "<dependency> <groupId>org.apache.camel</groupId> <artifactId>camel-servletlistener</artifactId> <version>x.x.x</version> <!-- use the same version as your Camel core version --> </dependency>", "ServletContext sc = CamelContext camel = (CamelContext) sc.getAttribute(\"CamelContext\");", "<context-param> <param-name>routeBuilder-MyRoute</param-name> <param-value>org.apache.camel.component.servletlistener.MyRoute</param-value> </context-param>", "<context-param> <param-name>routeBuilder-routes</param-name> <!-- we can define multiple values separated by comma --> <param-value> org.apache.camel.component.servletlistener.MyRoute, org.apache.camel.component.servletlistener.routes.BarRouteBuilder </param-value> </context-param>", "<context-param> <param-name>routeBuilder-MyRoute</param-name> <!-- define the routes using package scanning by prefixing with packagescan: --> <param-value>packagescan:org.apache.camel.component.servletlistener.routes</param-value> </context-param>", "<context-param> <param-name>routeBuilder-MyRoute</param-name> <param-value>classpath:routes/myRoutes.xml</param-value> </context-param>", "<?xml version=\"1.0\" encoding=\"UTF-8\"?> <!-- the xmlns=\"http://camel.apache.org/schema/spring\" is needed --> <routes xmlns=\"http://camel.apache.org/schema/spring\"> <route id=\"foo\"> <from uri=\"direct:foo\"/> <to uri=\"mock:foo\"/> </route> <route id=\"bar\"> <from uri=\"direct:bar\"/> <to uri=\"mock:bar\"/> </route> </routes>", "<!-- setup property placeholder to load properties from classpath --> <!-- we do this by setting the param-name with propertyPlaceholder. as prefix and then any options such as location, cache etc --> <context-param> <param-name>propertyPlaceholder.location</param-name> <param-value>classpath:myproperties.properties</param-value> </context-param> <!-- for example to disable cache on properties component, you do --> <context-param> <param-name>propertyPlaceholder.cache</param-name> <param-value>false</param-value> </context-param>", "<!-- configure JMX by using names that is prefixed with jmx. 
--> <!-- in this example we disable JMX --> <context-param> <param-name>jmx.disabled</param-name> <param-value>true</param-value> </context-param>", "<context-param> <param-name>CamelContextLifecycle</param-name> <param-value>org.apache.camel.component.servletlistener.MyLifecycle</param-value> </context-param>", "public class MyBeanRoute extends RouteBuilder { @Override public void configure() throws Exception { from(\"seda:foo\").routeId(\"foo\") .to(\"bean:myBean\") .to(\"mock:foo\"); } }" ]
https://docs.redhat.com/en/documentation/red_hat_fuse/7.13/html/apache_camel_component_reference/servlet-component
Chapter 4. Knative CLI for use with OpenShift Serverless
Chapter 4. Knative CLI for use with OpenShift Serverless The Knative ( kn ) CLI enables simple interaction with Knative components on OpenShift Container Platform. 4.1. Key features The Knative ( kn ) CLI is designed to make serverless computing tasks simple and concise. Key features of the Knative CLI include: Deploy serverless applications from the command line. Manage features of Knative Serving, such as services, revisions, and traffic-splitting. Create and manage Knative Eventing components, such as event sources and triggers. Create sink bindings to connect existing Kubernetes applications and Knative services. Extend the Knative CLI with a flexible plugin architecture, similar to the kubectl CLI. Configure autoscaling parameters for Knative services. Use the CLI in scripts, for example to wait for the results of an operation, or to deploy custom rollout and rollback strategies. 4.2. Installing the Knative CLI See Installing the Knative CLI .
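To illustrate the deployment and traffic-splitting features listed above, the following is a minimal sketch of a typical kn workflow. The service name, container image, and revision names are placeholder values chosen for this example, not values from this document.

# deploy a serverless application from the command line
kn service create hello --image gcr.io/knative-samples/helloworld-go

# inspect the deployed service and its revisions
kn service list
kn revision list

# split traffic between two revisions (revision names are hypothetical)
kn service update hello --traffic hello-00001=50 --traffic hello-00002=50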
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.10/html/cli_tools/kn-cli-tools
1.8. Making the Kickstart File Available
1.8. Making the Kickstart File Available A kickstart file must be placed in one of the following locations: On a boot diskette On a boot CD-ROM On a network Normally a kickstart file is copied to the boot diskette, or made available on the network. The network-based approach is most commonly used, as most kickstart installations tend to be performed on networked computers. Let us take a more in-depth look at where the kickstart file may be placed. 1.8.1. Creating Kickstart Boot Media Diskette-based booting is no longer supported in Red Hat Enterprise Linux. Installations must use CD-ROM or flash memory products for booting. However, the kickstart file may still reside on a diskette's top-level directory, and must be named ks.cfg . To perform a CD-ROM-based kickstart installation, the kickstart file must be named ks.cfg and must be located in the boot CD-ROM's top-level directory. Since a CD-ROM is read-only, the file must be added to the directory used to create the image that is written to the CD-ROM. Refer to the Installation Guide for instructions on creating boot media; however, before making the file.iso image file, copy the ks.cfg kickstart file to the isolinux/ directory. To perform a pen-based flash memory kickstart installation, the kickstart file must be named ks.cfg and must be located in the flash memory's top-level directory. Create the boot image first, and then copy the ks.cfg file. For example, the following transfers a boot image to the pen drive ( /dev/sda ) using the dd command: Note Creation of USB flash memory pen drives for booting is possible, but is heavily dependent on system hardware BIOS settings. Refer to your hardware manufacturer to see if your system supports booting to alternate devices.
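Putting the flash-memory steps together, the following sketch writes the boot image and then copies the ks.cfg file to the device's top-level directory. The device name /dev/sda and the mount point /mnt/usb are assumptions for this example; substitute the correct values for your system.

# write the boot image to the pen drive
dd if=diskboot.img of=/dev/sda bs=1M

# mount the pen drive and copy the kickstart file to its top-level directory
mkdir -p /mnt/usb
mount /dev/sda /mnt/usb
cp ks.cfg /mnt/usb/
umount /mnt/usb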
[ "dd if=diskboot.img of=/dev/sda bs=1M" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/system_administration_guide/kickstart_installations-making_the_kickstart_file_available
Chapter 4. Example applications with Red Hat build of Kogito microservices
Chapter 4. Example applications with Red Hat build of Kogito microservices Red Hat build of Kogito microservices include example applications in the rhpam-7.13.5-kogito-and-optaplanner-quickstarts.zip file. These example applications contain various types of services on Red Hat build of Quarkus or Spring Boot to help you develop your own applications. The services use one or more Decision Model and Notation (DMN) decision models, Drools Rule Language (DRL) rule units, Predictive Model Markup Language (PMML) models, or Java classes to define the service logic. For information about each example application and instructions for using them, see the README file in the relevant application folder. Note When you run examples in a local environment, ensure that the environment matches the requirements that are listed in the README file of the relevant application folder. Also, this might require making the necessary network ports available, as configured for Red Hat build of Quarkus, Spring Boot, and docker-compose where applicable. The following list describes some of the examples provided with Red Hat build of Kogito microservices: Note These quick start examples showcase a supported setup. Other quickstarts not listed might use technology that is provided by the upstream community only and therefore not fully supported by Red Hat. Decision services dmn-quarkus-example and dmn-springboot-example : A decision service (on Red Hat build of Quarkus or Spring Boot) that uses DMN to determine driver penalty and suspension based on traffic violations. rules-quarkus-helloworld : A Hello World decision service on Red Hat build of Quarkus with a single DRL rule unit. ruleunit-quarkus-example and ruleunit-springboot-example : A decision service (on Red Hat build of Quarkus or Spring Boot) that uses DRL with rule units to validate a loan application and that exposes REST operations to view application status. dmn-pmml-quarkus-example and dmn-pmml-springboot-example : A decision service (on Red Hat build of Quarkus or Spring Boot) that uses DMN and PMML to determine driver penalty and suspension based on traffic violations. dmn-drools-quarkus-metrics and dmn-drools-springboot-metrics : A decision service (on Red Hat build of Quarkus or Spring Boot) that enables and consumes the runtime metrics monitoring feature in Red Hat build of Kogito. pmml-quarkus-example and pmml-springboot-example : A decision service (on Red Hat build of Quarkus or Spring Boot) that uses PMML. For more information, see Designing a decision service using DMN models , Designing a decision service using DRL rules , and Designing a decision service using PMML models .
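If you want to try one of the Red Hat build of Quarkus examples locally, a typical workflow, assuming the environment requirements in the example's README file are met, is to extract the quickstarts archive and start the example in development mode. The example folder below is one of the decision services listed above; the exact directory layout inside the archive may differ.

unzip rhpam-7.13.5-kogito-and-optaplanner-quickstarts.zip
cd dmn-quarkus-example
mvn clean compile quarkus:dev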
null
https://docs.redhat.com/en/documentation/red_hat_process_automation_manager/7.13/html/getting_started_with_red_hat_build_of_kogito_in_red_hat_process_automation_manager/ref-kogito-microservices-app-examples_getting-started-kogito-microservices
Chapter 1. Introduction to Hammer
Chapter 1. Introduction to Hammer Hammer is a powerful command-line tool provided with Red Hat Satellite 6. You can use Hammer to configure and manage a Satellite Server either through CLI commands or automation in shell scripts. Hammer also provides an interactive shell. 1.1. Hammer compared to Satellite web UI Compared to navigating the Satellite web UI, using Hammer can result in much faster interaction with the Satellite Server, as common shell features such as environment variables and aliases are at your disposal. You can also incorporate Hammer commands into reusable scripts for automating tasks of various complexity. Output from Hammer commands can be redirected to other tools, which allows for integration with your existing environment. You can issue Hammer commands directly on the base operating system running Red Hat Satellite. Access to the base operating system on Satellite Server is required to issue Hammer commands, which can limit the number of potential users compared to the Satellite web UI. Although the parity between Hammer and the Satellite web UI is almost complete, the Satellite web UI has development priority and can be ahead, especially for newly introduced features. 1.2. Hammer compared to Satellite API For many tasks, both Hammer and the Satellite API are equally applicable. Hammer can be used as a human-friendly interface to the Satellite API, for example to test responses to API calls before applying them in a script (use the -d option to inspect API calls issued by Hammer, for example hammer -d organization list ). Changes in the API are automatically reflected in Hammer, while scripts using the API directly have to be updated manually. In the background, each Hammer command first establishes a binding to the API, then sends a request. This can have performance implications when executing a large number of Hammer commands in sequence. In contrast, a script communicating directly with the API establishes the binding only once. For more information, see Using the Satellite REST API . 1.3. Getting help View the full list of hammer options and subcommands by executing: Use --help to inspect any subcommand, for example: You can search the help output using grep , or redirect it to a text viewer, for example:
[ "hammer --help", "hammer organization --help", "hammer | less" ]
https://docs.redhat.com/en/documentation/red_hat_satellite/6.16/html/using_the_hammer_cli_tool/introduction-to-hammer
Making open source more inclusive
Making open source more inclusive Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright's message .
null
https://docs.redhat.com/en/documentation/red_hat_decision_manager/7.13/html/getting_started_with_red_hat_build_of_kogito_in_red_hat_decision_manager/snip-conscious-language_getting-started-kogito
Business continuity
Business continuity Red Hat Advanced Cluster Management for Kubernetes 2.11 Business continuity
null
https://docs.redhat.com/en/documentation/red_hat_advanced_cluster_management_for_kubernetes/2.11/html/business_continuity/index
Part IV. Manage
Part IV. Manage
null
https://docs.redhat.com/en/documentation/red_hat_gluster_storage/3.5/html/administration_guide/part-advanced_administration
Chapter 17. Web Servers and Services
Chapter 17. Web Servers and Services Apache HTTP Server 2.4 Version 2.4 of the Apache HTTP Server ( httpd ) is included in Red Hat Enterprise Linux 7, and offers a range of new features: an enhanced version of the "Event" processing module, improving asynchronous request processing and performance; native FastCGI support in the mod_proxy module; support for embedded scripting using the Lua language. More information about the features and changes in httpd 2.4 can be found at http://httpd.apache.org/docs/2.4/new_features_2_4.html . A guide to adapting configuration files is also available: http://httpd.apache.org/docs/2.4/upgrading.html . MariaDB 5.5 MariaDB is the default implementation of MySQL in Red Hat Enterprise Linux 7. MariaDB is a community-developed fork of the MySQL database project, and provides a replacement for MySQL. MariaDB preserves API and ABI compatibility with MySQL and adds several new features; for example, a non-blocking client API library, the Aria and XtraDB storage engines with enhanced performance, better server status variables, and enhanced replication. Detailed information about MariaDB can be found at https://mariadb.com/kb/en/what-is-mariadb-55/ . PostgreSQL 9.2 PostgreSQL is an advanced Object-Relational database management system (DBMS). The postgresql packages include the PostgreSQL server package, client programs, and libraries needed to access a PostgreSQL DBMS server. Red Hat Enterprise Linux 7 features version 9.2 of PostgreSQL. For a list of new features, bug fixes and possible incompatibilities against version 8.4 packaged in Red Hat Enterprise Linux 6, please refer to the upstream release notes: http://www.postgresql.org/docs/9.2/static/release-9-0.html http://www.postgresql.org/docs/9.2/static/release-9-1.html http://www.postgresql.org/docs/9.2/static/release-9-2.html Or the PostgreSQL wiki pages: http://wiki.postgresql.org/wiki/What's_new_in_PostgreSQL_9.0 http://wiki.postgresql.org/wiki/What's_new_in_PostgreSQL_9.1 http://wiki.postgresql.org/wiki/What's_new_in_PostgreSQL_9.2
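As a concrete sketch of the native FastCGI support mentioned above, the following minimal httpd 2.4 configuration forwards requests to a FastCGI application server. The URL path and backend address are illustrative assumptions, not values from these release notes.

# requires mod_proxy and mod_proxy_fcgi to be loaded
LoadModule proxy_module modules/mod_proxy.so
LoadModule proxy_fcgi_module modules/mod_proxy_fcgi.so

# forward requests under /app/ to a FastCGI backend listening on port 9000
ProxyPass "/app/" "fcgi://localhost:9000/"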
null
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/7.0_release_notes/chap-red_hat_enterprise_linux-7.0_release_notes-web_servers_and_services
Chapter 13. Configuring distributed virtual routing (DVR)
Chapter 13. Configuring distributed virtual routing (DVR) 13.1. Understanding distributed virtual routing (DVR) When you deploy Red Hat OpenStack Platform you can choose between a centralized routing model and DVR. Each model has advantages and disadvantages. Use this document to carefully plan whether centralized routing or DVR better suits your needs. New default RHOSP deployments use DVR and the Modular Layer 2 plug-in with the Open Virtual Network mechanism driver (ML2/OVN). DVR is disabled by default in ML2/OVS deployments. 13.1.1. Overview of Layer 3 routing The Red Hat OpenStack Platform Networking service (neutron) provides routing services for project networks. Without a router, VM instances in a project network can communicate with other instances over a shared L2 broadcast domain. Creating a router and assigning it to a project network allows the instances in that network to communicate with other project networks or upstream (if an external gateway is defined for the router). 13.1.2. Routing flows Routing services in Red Hat OpenStack Platform (RHOSP) can be categorized into three main flows: East-West routing - routing of traffic between different networks in the same project. This traffic does not leave the RHOSP deployment. This definition applies to both IPv4 and IPv6 subnets. North-South routing with floating IPs - Floating IP addressing is a one-to-one network address translation (NAT) that can be modified and that floats between VM instances. While floating IPs are modeled as a one-to-one association between the floating IP and a Networking service (neutron) port, they are implemented by association with a Networking service router that performs the NAT translation. The floating IPs themselves are taken from the uplink network that provides the router with external connectivity. As a result, instances can communicate with external resources (such as endpoints on the internet) or the other way around. Floating IPs are an IPv4 concept and do not apply to IPv6. It is assumed that the IPv6 addressing used by projects uses Global Unicast Addresses (GUAs) with no overlap across the projects, and therefore can be routed without NAT. North-South routing without floating IPs (also known as SNAT ) - The Networking service offers a default port address translation (PAT) service for instances that do not have allocated floating IPs. With this service, instances can communicate with external endpoints through the router, but not the other way around. For example, an instance can browse a website on the internet, but a web browser outside cannot browse a website hosted within the instance. SNAT is applied for IPv4 traffic only. In addition, Networking service networks that are assigned GUAs prefixes do not require NAT on the Networking service router external gateway port to access the outside world. 13.1.3. Centralized routing Originally, the Networking service (neutron) was designed with a centralized routing model where a project's virtual routers, managed by the neutron L3 agent, are all deployed in a dedicated node or cluster of nodes (referred to as the Network node, or Controller node). This means that each time a routing function is required (east/west, floating IPs or SNAT), traffic traverses a dedicated node in the topology. This introduced multiple challenges and resulted in sub-optimal traffic flows.
For example: Traffic between instances flows through a Controller node - when two instances need to communicate with each other using L3, traffic has to hit the Controller node. Even if the instances are scheduled on the same Compute node, traffic still has to leave the Compute node, flow through the Controller, and route back to the Compute node. This negatively impacts performance. Instances with floating IPs receive and send packets through the Controller node - the external network gateway interface is available only at the Controller node, so whether the traffic originates from an instance or is destined to an instance from the external network, it has to flow through the Controller node. Consequently, in large environments the Controller node is subject to heavy traffic load. This affects performance and scalability, and also requires careful planning to accommodate enough bandwidth in the external network gateway interface. The same requirement applies for SNAT traffic. To better scale the L3 agent, the Networking service can use the L3 HA feature, which distributes the virtual routers across multiple nodes. In the event that a Controller node is lost, the HA router fails over to a standby on another node and there is packet loss until the HA router failover completes. 13.2. DVR overview Distributed Virtual Routing (DVR) offers an alternative routing design. DVR isolates the failure domain of the Controller node and optimizes network traffic by deploying the L3 agent and scheduling routers on every Compute node. DVR has these characteristics: East-West traffic is routed directly on the Compute nodes in a distributed fashion. North-South traffic with floating IP is distributed and routed on the Compute nodes. This requires the external network to be connected to every Compute node. North-South traffic without floating IP is not distributed and still requires a dedicated Controller node. The L3 agent on the Controller node uses the dvr_snat mode so that the node serves only SNAT traffic. The neutron metadata agent is distributed and deployed on all Compute nodes. The metadata proxy service is hosted on all the distributed routers. 13.3. DVR known issues and caveats Support for DVR is limited to the ML2 core plug-in and the Open vSwitch (OVS) mechanism driver or ML2/OVN mechanism driver. Other back ends are not supported. On ML2/OVS DVR deployments, network traffic for the Red Hat OpenStack Platform Load-balancing service (octavia) goes through the Controller and network nodes, instead of the compute nodes. With an ML2/OVS mechanism driver network back end and DVR, it is possible to create VIPs. However, the IP address assigned to a bound port using allowed_address_pairs , should match the virtual port IP address (/32). If you use a CIDR format IP address for the bound port allowed_address_pairs instead, port forwarding is not configured in the back end, and traffic fails for any IP in the CIDR expecting to reach the bound IP port. SNAT (source network address translation) traffic is not distributed, even when DVR is enabled. SNAT does work, but all ingress/egress traffic must traverse through the centralized Controller node. In ML2/OVS deployments, IPv6 traffic is not distributed, even when DVR is enabled. All ingress/egress traffic goes through the centralized Controller node. If you use IPv6 routing extensively with ML2/OVS, do not use DVR.
Note that in ML2/OVN deployments, all east/west traffic is always distributed, and north/south traffic is distributed when DVR is configured. In ML2/OVS deployments, DVR is not supported in conjunction with L3 HA. If you use DVR with Red Hat OpenStack Platform 17.1 director, L3 HA is disabled. This means that routers are still scheduled on the Network nodes (and load-shared between the L3 agents), but if one agent fails, all routers hosted by this agent fail as well. This affects only SNAT traffic. The allow_automatic_l3agent_failover feature is recommended in such cases, so that if one network node fails, the routers are rescheduled to a different node. For ML2/OVS environments, the DHCP server is not distributed and is deployed on a Controller node. The ML2/OVS neutron DHCP agent, which manages the DHCP server, is deployed in a highly available configuration on the Controller nodes, regardless of the routing design (centralized or DVR). Compute nodes require an interface on the external network attached to an external bridge. They use this interface to attach to a VLAN or flat network for an external router gateway, to host floating IPs, and to perform SNAT for VMs that use floating IPs. In ML2/OVS deployments, each Compute node requires one additional IP address. This is due to the implementation of the external gateway port and the floating IP network namespace. VLAN, GRE, and VXLAN are all supported for project data separation. When you use GRE or VXLAN, you must enable the L2 Population feature. The Red Hat OpenStack Platform director enforces L2 Population during installation. 13.4. Supported routing architectures Red Hat OpenStack Platform (RHOSP) supports both centralized, high-availability (HA) routing and distributed virtual routing (DVR) in the RHOSP versions listed: RHOSP centralized HA routing support began in RHOSP 8. RHOSP distributed routing support began in RHOSP 12. 13.5. Migrating centralized routers to distributed routing This section contains information about upgrading to distributed routing for Red Hat OpenStack Platform deployments that use L3 HA centralized routing. Procedure Upgrade your deployment and validate that it is working correctly. Run the director stack update to configure DVR. Confirm that routing functions correctly through the existing routers. You cannot transition an L3 HA router to distributed directly. Instead, for each router, disable the L3 HA option, and then enable the distributed option: Disable the router: Example Clear high availability: Example Configure the router to use DVR: Example Enable the router: Example Confirm that distributed routing functions correctly. Additional resources Deploying DVR with ML2 OVS 13.6. Deploying ML2/OVN OpenStack with distributed virtual routing (DVR) disabled New Red Hat OpenStack Platform (RHOSP) deployments default to the neutron Modular Layer 2 plug-in with the Open Virtual Network mechanism driver (ML2/OVN) and DVR. In a DVR topology, compute nodes with floating IP addresses route traffic between virtual machine instances and the network that provides the router with external connectivity (north-south traffic). Traffic between instances (east-west traffic) is also distributed. You can optionally deploy with DVR disabled. This disables north-south DVR, requiring north-south traffic to traverse a controller or networker node. East-west routing is always distributed in an ML2/OVN deployment, even when DVR is disabled. Prerequisites RHOSP 17.1 distribution ready for customization and deployment.
Procedure Create a custom environment file, and add the following configuration: To apply this configuration, deploy the overcloud, adding your custom environment file to the stack along with your other environment files. For example: 13.6.1. Additional resources Understanding distributed virtual routing (DVR) in the Configuring Red Hat OpenStack Platform networking guide.
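As a sanity check after migrating a router or changing the deployment, you can inspect a router's flags to confirm whether it is distributed or highly available. The router name router1 is carried over from the migration example above; the -c options simply restrict the output to the columns of interest.

# show only the distributed and ha flags for the router
openstack router show router1 -c distributed -c ha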
[ "openstack router set --disable router1", "openstack router set --no-ha router1", "openstack router set --distributed router1", "openstack router set --enable router1", "parameter_defaults: NeutronEnableDVR: false", "(undercloud) USD openstack overcloud deploy --templates -e [your environment files] -e /home/stack/templates/<custom-environment-file>.yaml" ]
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/17.1/html/configuring_red_hat_openstack_platform_networking/config-dvr_rhosp-network
Authorization
Authorization Red Hat Developer Hub 1.4 Configuring authorization by using role based access control (RBAC) in Red Hat Developer Hub Red Hat Customer Content Services
null
https://docs.redhat.com/en/documentation/red_hat_developer_hub/1.4/html/authorization/index
Chapter 4. Configuration
Chapter 4. Configuration This chapter describes the process for binding the Red Hat build of Apache Qpid JMS implementation to your JMS application and setting configuration options. JMS uses the Java Naming and Directory Interface (JNDI) to register and look up API implementations and other resources. This enables you to write code to the JMS API without tying it to a particular implementation. Configuration options are exposed as query parameters on the connection URI. 4.1. Configuring the JNDI initial context JMS applications use a JNDI InitialContext object obtained from an InitialContextFactory to look up JMS objects such as the connection factory. Red Hat build of Apache Qpid JMS provides an implementation of the InitialContextFactory in the org.apache.qpid.jms.jndi.JmsInitialContextFactory class. The InitialContextFactory implementation is discovered when the InitialContext object is instantiated: javax.naming.Context context = new javax.naming.InitialContext(); To find an implementation, JNDI must be configured in your environment. There are three ways of achieving this: using a jndi.properties file, using a system property, or using the initial context API. Using a jndi.properties file Create a file named jndi.properties and place it on the Java classpath. Add a property with the key java.naming.factory.initial . Example: Setting the JNDI initial context factory using a jndi.properties file java.naming.factory.initial = org.apache.qpid.jms.jndi.JmsInitialContextFactory In Maven-based projects, the jndi.properties file is placed in the <project-dir> /src/main/resources directory. Using a system property Set the java.naming.factory.initial system property. Example: Setting the JNDI initial context factory using a system property USD java -Djava.naming.factory.initial=org.apache.qpid.jms.jndi.JmsInitialContextFactory ... Using the initial context API Use the JNDI initial context API to set properties programmatically. Example: Setting JNDI properties programmatically Hashtable<Object, Object> env = new Hashtable<>(); env.put("java.naming.factory.initial", "org.apache.qpid.jms.jndi.JmsInitialContextFactory"); InitialContext context = new InitialContext(env); Note that you can use the same API to set the JNDI properties for connection factories, queues, and topics. 4.2. Configuring the connection factory The JMS connection factory is the entry point for creating connections. It uses a connection URI that encodes your application-specific configuration settings. To set the factory name and connection URI, create a property in the format below. You can store this configuration in a jndi.properties file or set the corresponding system property. The JNDI property format for connection factories connectionFactory. <lookup-name> = <connection-uri> For example, this is how you might configure a factory named app1 : Example: Setting the connection factory in a jndi.properties file connectionFactory.app1 = amqp://example.net:5672?jms.clientID=backend You can then use the JNDI context to look up your configured connection factory using the name app1 : ConnectionFactory factory = (ConnectionFactory) context.lookup("app1"); 4.3. Connection URIs Connections are configured using a connection URI. The connection URI specifies the remote host, port, and a set of configuration options, which are set as query parameters. For more information about the available options, see Chapter 5, Configuration options .
The connection URI format The scheme is amqp for unencrypted connections and amqps for SSL/TLS connections. For example, the following is a connection URI that connects to host example.net at port 5672 and sets the client ID to backend : Example: A connection URI Failover URIs When failover is configured, the client can reconnect to another server automatically if the connection to the current server is lost. Failover URIs have the prefix failover: and contain a comma-separated list of connection URIs inside parentheses. Additional options are specified at the end. The failover URI format For example, the following is a failover URI that can connect to either of two hosts, host1 or host2 : Example: A failover URI As with the connection URI example, the client can be configured with a number of different settings using the URI in a failover configuration. These settings are detailed in Chapter 5, Configuration options , with the Section 5.5, "Failover options" section being of particular interest. SSL/TLS Server Name Indication When the amqps scheme is used to specify an SSL/TLS connection, the host segment from the URI can be used by the JVM's TLS Server Name Indication (SNI) extension to communicate the desired server hostname during a TLS handshake. The SNI extension is automatically included if a fully qualified domain name (for example, "myhost.mydomain") is specified, but not when an unqualified name (for example, "myhost") or a bare IP address is used. 4.4. Configuring queue and topic names JMS provides the option of using JNDI to look up deployment-specific queue and topic resources. To set queue and topic names in JNDI, create properties in the following format. Either place this configuration in a jndi.properties file or set corresponding system properties. The JNDI property format for queues and topics queue. <lookup-name> = <queue-name> topic. <lookup-name> = <topic-name> For example, the following properties define the names jobs and notifications for two deployment-specific resources: Example: Setting queue and topic names in a jndi.properties file queue.jobs = app1/work-items topic.notifications = app1/updates You can then look up the resources by their JNDI names: Queue queue = (Queue) context.lookup("jobs"); Topic topic = (Topic) context.lookup("notifications"); 4.5. Variable expansion in JNDI properties JNDI property values can contain variables of the form USD{ <variable-name> } . The library resolves the variable value by searching in order in the following locations: Java system properties OS environment variables The JNDI properties file or environment Hashtable For example, on Linux USD{HOME} resolves to the HOME environment variable, the current user's home directory. A default value can be supplied using the syntax USD{ <variable-name> :- <default-value> } . If no value for <variable-name> is found, the default value is used instead.
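Tying the last two sections together, here is a sketch of a jndi.properties entry that uses variable expansion with defaults inside a connection URI. The AMQP_HOST and AMQP_PORT variable names are invented for this illustration; any system property or environment variable name can be used.

# resolves AMQP_HOST and AMQP_PORT from Java system properties or the OS
# environment, falling back to localhost:5672 when they are not set
connectionFactory.app1 = amqp://${AMQP_HOST:-localhost}:${AMQP_PORT:-5672}?jms.clientID=backend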
[ "javax.naming.Context context = new javax.naming.InitialContext();", "java.naming.factory.initial = org.apache.qpid.jms.jndi.JmsInitialContextFactory", "java -Djava.naming.factory.initial=org.apache.qpid.jms.jndi.JmsInitialContextFactory", "Hashtable<Object, Object> env = new Hashtable<>(); env.put(\"java.naming.factory.initial\", \"org.apache.qpid.jms.jndi.JmsInitialContextFactory\"); InitialContext context = new InitialContext(env);", "connectionFactory. <lookup-name> = <connection-uri>", "connectionFactory.app1 = amqp://example.net:5672?jms.clientID=backend", "ConnectionFactory factory = (ConnectionFactory) context.lookup(\"app1\");", "<scheme>://<host>:<port>[?<option>=<value>[&<option>=<value>...]]", "amqp://example.net:5672?jms.clientID=backend", "failover:(<connection-uri>[,<connection-uri>...])[?<option>=<value>[&<option>=<value>...]]", "failover:(amqp://host1:5672,amqp://host2:5672)?jms.clientID=backend", "queue. <lookup-name> = <queue-name> topic. <lookup-name> = <topic-name>", "queue.jobs = app1/work-items topic.notifications = app1/updates", "Queue queue = (Queue) context.lookup(\"jobs\"); Topic topic = (Topic) context.lookup(\"notifications\");" ]
https://docs.redhat.com/en/documentation/red_hat_build_of_apache_qpid_jms/2.4/html/using_qpid_jms/configuration
A.3. Enabling and Disabling the Feature
A.3. Enabling and Disabling the Feature To disable the consistent network device naming on Dell systems that would normally have it on by default, pass the following option on the boot command line, both during and after installation: To enable this feature on other system types that meet the minimum requirements (see Section A.2, "System Requirements" ), pass the following option on the boot command line, both during and after installation: Unless the system meets the minimum requirements, this option will be ignored and the system will boot with the traditional network interface name format. If the biosdevname install option is specified, it must remain as a boot option for the lifetime of the system.
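To make the setting persist after installation on Red Hat Enterprise Linux 6, append the option to the kernel line in the GRUB configuration. The kernel version and root device in this sketch are placeholders; use the values already present in your /boot/grub/grub.conf file.

# excerpt from /boot/grub/grub.conf (kernel version and root device are examples)
title Red Hat Enterprise Linux 6
        root (hd0,0)
        kernel /vmlinuz-2.6.32-71.el6.x86_64 ro root=/dev/VolGroup/lv_root biosdevname=0
        initrd /initramfs-2.6.32-71.el6.x86_64.img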
[ "biosdevname=0", "biosdevname=1" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/deployment_guide/sect-Consistent_Network_Device_Naming-Enabling_and_Disabling
Appendix F. Glossary
Appendix F. Glossary F.1. A access control The process of controlling what particular users are allowed to do. For example, access control to servers is typically based on an identity, established by a password or a certificate, and on rules regarding what that entity can do. See also access control list (ACL). access control instructions (ACI) An access rule that specifies how subjects requesting access are to be identified or what rights are allowed or denied for a particular subject. See access control list (ACL). access control list (ACL) A collection of access control entries that define a hierarchy of access rules to be evaluated when a server receives a request for access to a particular resource. See access control instructions (ACI). administrator The person who installs and configures one or more Certificate System managers and sets up privileged users, or agents, for them. See also agent. Advanced Encryption Standard (AES) The Advanced Encryption Standard (AES), like its predecessor Data Encryption Standard (DES), is a FIPS-approved symmetric-key encryption standard. AES was adopted by the US government in 2002. It defines three block ciphers, AES-128, AES-192 and AES-256. The National Institute of Standards and Technology (NIST) defined the AES standard in U.S. FIPS PUB 197. For more information, see http://csrc.nist.gov/publications/fips/fips197/fips-197.pdf . agent A user who belongs to a group authorized to manage agent services for a Certificate System manager. See also Certificate Manager agent, Key Recovery Authority agent. agent-approved enrollment An enrollment that requires an agent to approve the request before the certificate is issued. agent services Services that can be administered by a Certificate System agent through HTML pages served by the Certificate System subsystem for which the agent has been assigned the necessary privileges. The HTML pages for administering such services. APDU Application protocol data unit. A communication unit (analogous to a byte) that is used in communications between a smart card and a smart card reader. attribute value assertion (AVA) An assertion of the form attribute = value , where attribute is a tag, such as o (organization) or uid (user ID), and value is a value such as "Red Hat, Inc." or a login name. AVAs are used to form the distinguished name (DN) that identifies the subject of a certificate, called the subject name of the certificate. audit log A log that records various system events. This log can be signed, providing proof that it was not tampered with, and can only be read by an auditor user. auditor A privileged user who can view the signed audit logs. authentication Confident identification; assurance that a party to some computerized transaction is not an impostor. Authentication typically involves the use of a password, certificate, PIN, or other information to validate identity over a computer network. See also certificate-based authentication, client authentication. authentication module A set of rules (implemented as a Java class) for authenticating an end entity, agent, administrator, or any other entity that needs to interact with a Certificate System subsystem. In the case of typical end-user enrollment, after the user has supplied the information requested by the enrollment form, the enrollment servlet uses an authentication module associated with that form to validate the information and authenticate the user's identity. See authentication. authorization Permission to access a resource controlled by a server.
Authorization typically takes place after the ACLs associated with a resource have been evaluated by a server. See also access control. automated enrollment A way of configuring a Certificate System subsystem that allows automatic authentication for end-entity enrollment, without human intervention. With this form of authentication, a certificate request that completes authentication module processing successfully is automatically approved for profile processing and certificate issuance. F.2. B bind DN A user ID, in the form of a distinguished name (DN), used with a password to authenticate to Red Hat Directory Server. F.3. C CA certificate A certificate that identifies a certificate authority. See also certificate authority (CA), root CA, subordinate CA. CA hierarchy A hierarchy of CAs in which a root CA delegates the authority to issue certificates to subordinate CAs. Subordinate CAs can also expand the hierarchy by delegating issuing status to other CAs. See also certificate authority (CA), root CA, subordinate CA. CA server key The SSL server key of the server providing a CA service. CA signing key The private key that corresponds to the public key in the CA certificate. A CA uses its signing key to sign certificates and CRLs. certificate Digital data, formatted according to the X.509 standard, that specifies the name of an individual, company, or other entity (the subject name of the certificate) and certifies that a public key, which is also included in the certificate, belongs to that entity. A certificate is issued and digitally signed by a certificate authority (CA). A certificate's validity can be verified by checking the CA's digital signature through public-key cryptography techniques. To be trusted within a public-key infrastructure (PKI), a certificate must be issued and signed by a CA that is trusted by other entities enrolled in the PKI. certificate authority (CA) A trusted entity that issues a certificate after verifying the identity of the person or entity the certificate is intended to identify. A CA also renews and revokes certificates and generates CRLs. The entity named in the issuer field of a certificate is always a CA. Certificate authorities can be independent third parties or a person or organization using certificate-issuing server software, such as Red Hat Certificate System. certificate-based authentication Authentication based on certificates and public-key cryptography. See also authentication. certificate chain A hierarchical series of certificates signed by successive certificate authorities. A CA certificate identifies a certificate authority (CA) and is used to sign certificates issued by that authority. A CA certificate can in turn be signed by the CA certificate of a parent CA, and so on up to a root CA. Certificate System allows any end entity to retrieve all the certificates in a certificate chain. certificate extensions An X.509 v3 certificate contains an extensions field that permits any number of additional fields to be added to the certificate. Certificate extensions provide a way of adding information such as alternative subject names and usage restrictions to certificates. A number of standard extensions have been defined by the PKIX working group. certificate fingerprint A one-way hash associated with a certificate. The number is not part of the certificate itself, but is produced by applying a hash function to the contents of the certificate. If the contents of the certificate changes, even by a single character, the same function produces a different number.
Certificate fingerprints can therefore be used to verify that certificates have not been tampered with. Certificate Management Messages over Cryptographic Message Syntax (CMC) Message format used to convey a request for a certificate to a Certificate Manager. A proposed standard from the Internet Engineering Task Force (IETF) PKIX working group. For detailed information, see https://tools.ietf.org/html/draft-ietf-pkix-cmc-02 . Certificate Management Message Formats (CMMF) Message formats used to convey certificate requests and revocation requests from end entities to a Certificate Manager and to send a variety of information to end entities. A proposed standard from the Internet Engineering Task Force (IETF) PKIX working group. CMMF is subsumed by another proposed standard, Certificate Management Messages over Cryptographic Message Syntax (CMC). For detailed information, see https://tools.ietf.org/html/draft-ietf-pkix-cmmf-02 . Certificate Manager An independent Certificate System subsystem that acts as a certificate authority. A Certificate Manager instance issues, renews, and revokes certificates, which it can publish along with CRLs to an LDAP directory. It accepts requests from end entities. See also certificate authority (CA). Certificate Manager agent A user who belongs to a group authorized to manage agent services for a Certificate Manager. These services include the ability to access and modify (approve and reject) certificate requests and issue certificates. certificate profile A set of configuration settings that defines a certain type of enrollment. The certificate profile sets policies for a particular type of enrollment along with an authentication method in a certificate profile. Certificate Request Message Format (CRMF) Format used for messages related to management of X.509 certificates. This format is a subset of CMMF. See also Certificate Management Message Formats (CMMF). For detailed information, see https://tools.ietf.org/html/rfc2511 . certificate revocation list (CRL) As defined by the X.509 standard, a list of revoked certificates by serial number, generated and signed by a certificate authority (CA). Certificate System See Red Hat Certificate System, Certificate System subsystem. Certificate System subsystem One of the five Certificate System managers: Certificate Manager, Online Certificate Status Manager, Key Recovery Authority, Token Key Service, or Token Processing System. Certificate System console A console that can be opened for any single Certificate System instance. A Certificate System console allows the Certificate System administrator to control configuration settings for the corresponding Certificate System instance. chain of trust See certificate chain. chained CA See linked CA. cipher See cryptographic algorithm. client authentication The process of identifying a client to a server, such as with a name and password or with a certificate and some digitally signed data. See also certificate-based authentication, client SSL certificate. client SSL certificate A certificate used to identify a client to a server using the SSL protocol. See also client authentication. CMC See Certificate Management Messages over Cryptographic Message Syntax (CMC). CMC Enrollment Features that allow either signed enrollment or signed revocation requests to be sent to a Certificate Manager using an agent's signing certificate. These requests are then automatically processed by the Certificate Manager. CMMF See Certificate Management Message Formats (CMMF). CRL See certificate revocation list (CRL). cross-pair certificate A certificate issued by one CA to another CA which is then stored by both CAs to form a circle of trust. The two CAs issue certificates to each other, and then store both cross-pair certificates as a certificate pair. CRMF See Certificate Request Message Format (CRMF).
cross-certification The exchange of certificates by two CAs in different certification hierarchies, or chains. Cross-certification extends the chain of trust so that it encompasses both hierarchies. See also cross-pair certificate. cryptographic algorithm A set of rules or directions used to perform cryptographic operations such as encryption and decryption. Cryptographic Message Syntax (CMS) The syntax used to digitally sign, digest, authenticate, or encrypt arbitrary messages, such as CMMF. cryptographic module See PKCS #11 module. cryptographic service provider (CSP) A cryptographic module that performs cryptographic services, such as key generation, key storage, and encryption, on behalf of software that uses a standard interface such as that defined by PKCS #11 to request such services. CSP See cryptographic service provider (CSP). F.4. D Key Recovery Authority An optional, independent Certificate System subsystem that manages the long-term archival and recovery of RSA encryption keys for end entities. A Certificate Manager can be configured to archive end entities' encryption keys with a Key Recovery Authority before issuing new certificates. The Key Recovery Authority is useful only if end entities are encrypting data, such as sensitive email, that the organization may need to recover someday. It can be used only with end entities that support dual key pairs: two separate key pairs, one for encryption and one for digital signatures. Key Recovery Authority agent A user who belongs to a group authorized to manage agent services for a Key Recovery Authority, including managing the request queue and authorizing recovery operation using HTML-based administration pages. Key Recovery Authority recovery agent One of the m of n people who own portions of the storage key for the Key Recovery Authority. Key Recovery Authority storage key Special key used by the Key Recovery Authority to encrypt the end entity's encryption key after it has been decrypted with the Key Recovery Authority's private transport key. The storage key never leaves the Key Recovery Authority. Key Recovery Authority transport certificate Certifies the public key used by an end entity to encrypt the entity's encryption key for transport to the Key Recovery Authority. The Key Recovery Authority uses the private key corresponding to the certified public key to decrypt the end entity's key before encrypting it with the storage key. decryption Unscrambling data that has been encrypted. See encryption. delta CRL A CRL containing a list of those certificates that have been revoked since the last full CRL was issued. digital ID See certificate. digital signature To create a digital signature, the signing software first creates a one-way hash from the data to be signed, such as a newly issued certificate. The one-way hash is then encrypted with the private key of the signer. The resulting digital signature is unique for each piece of data signed. Even a single comma added to a message changes the digital signature for that message. Successful decryption of the digital signature with the signer's public key and comparison with another hash of the same data provides tamper detection. Verification of the certificate chain for the certificate containing the public key provides authentication of the signer. See also nonrepudiation, encryption. distribution points Used for CRLs to define a set of certificates. Each distribution point is defined by a set of certificates that are issued. A CRL can be created for a particular distribution point. 
distinguished name (DN) A series of AVAs that identify the subject of a certificate. See attribute value assertion (AVA). dual key pair Two public-private key pairs, four keys altogether, corresponding to two separate certificates. The private key of one pair is used for signing operations, and the public and private keys of the other pair are used for encryption and decryption operations. Each pair corresponds to a separate certificate. See also encryption key, public-key cryptography, signing key. F.5. E eavesdropping Surreptitious interception of information sent over a network by an entity for which the information is not intended. Elliptic Curve Cryptography (ECC) A cryptographic algorithm which uses elliptic curves to create additive logarithms for the mathematical problems which are the basis of the cryptographic keys. ECC ciphers are more efficient to use than RSA ciphers and, because of their intrinsic complexity, are stronger at smaller bits than RSA ciphers. encryption Scrambling information in a way that disguises its meaning. See decryption. encryption key A private key used for encryption only. An encryption key and its equivalent public key, plus a signing key and its equivalent public key, constitute a dual key pair. enrollment The process of requesting and receiving an X.509 certificate for use in a public-key infrastructure (PKI). Also known as registration. end entity In a public-key infrastructure (PKI), a person, router, server, or other entity that uses a certificate to identify itself. extensions field See certificate extensions. F.6. F Federal Bridge Certificate Authority (FBCA) A configuration where two CAs form a circle of trust by issuing cross-pair certificates to each other and storing the two cross-pair certificates as a single certificate pair. fingerprint See certificate fingerprint. FIPS PUBS 140 Federal Information Processing Standards Publications (FIPS PUBS) 140 is a US government standard for implementations of cryptographic modules, hardware or software that encrypts and decrypts data or performs other cryptographic operations, such as creating or verifying digital signatures. Many products sold to the US government must comply with one or more of the FIPS standards. See http://www.nist.gov/itl/fipscurrent.cfm . firewall A system or combination of systems that enforces a boundary between two or more networks. F.7. H Hypertext Transport Protocol (HTTP) and Hypertext Transport Protocol Secure (HTTPS) Protocols used to communicate with web servers. HTTPS consists of communication over HTTP (Hypertext Transfer Protocol) within a connection encrypted by Transport Layer Security (TLS). The main purpose of HTTPS is authentication of the visited website and protection of privacy and integrity of the exchanged data. F.8. I impersonation The act of posing as the intended recipient of information sent over a network. Impersonation can take two forms: spoofing and misrepresentation. input In the context of the certificate profile feature, it defines the enrollment form for a particular certificate profile. Each input is set, which then dynamically creates the enrollment form from all inputs configured for this enrollment. intermediate CA A CA whose certificate is located between the root CA and the issued certificate in a certificate chain. IP spoofing The forgery of client IP addresses. IPv4 and IPv6 Certificate System supports both IPv4 and IPv6 address namespaces for communications and operations with all subsystems and tools, as well as for clients, subsystem creation, and token and certificate enrollment. F.9. 
J JAR file A digital envelope for a compressed collection of files organized according to the Java™ archive (JAR) format. Java™ archive (JAR) format A set of conventions for associating digital signatures, installer scripts, and other information with files in a directory. Java™ Cryptography Architecture (JCA) The API specification and reference developed by Sun Microsystems for cryptographic services. See http://java.sun.com/products/jdk/1.2/docs/guide/security/CryptoSpec.Introduction . Java™ Development Kit (JDK) Software development kit provided by Sun Microsystems for developing applications and applets using the Java™ programming language. Java™ Native Interface (JNI) A standard programming interface that provides binary compatibility across different implementations of the Java™ Virtual Machine (JVM) on a given platform, allowing existing code written in a language such as C or C++ for a single platform to bind to Java™. See http://java.sun.com/products/jdk/1.2/docs/guide/jni/index.html . Java™ Security Services (JSS) A Java™ interface for controlling security operations performed by Network Security Services (NSS). F.10. K KEA See Key Exchange Algorithm (KEA). key A large number used by a cryptographic algorithm to encrypt or decrypt data. A person's public key, for example, allows other people to encrypt messages intended for that person. The messages must then be decrypted by using the corresponding private key. key exchange A procedure followed by a client and server to determine the symmetric keys they will both use during an SSL session. Key Exchange Algorithm (KEA) An algorithm used for key exchange by the US Government. KEYGEN tag An HTML tag that generates a key pair for use with a certificate. F.11. L Lightweight Directory Access Protocol (LDAP) A directory service protocol designed to run over TCP/IP and across multiple platforms. LDAP is a simplified version of Directory Access Protocol (DAP), used to access X.500 directories. LDAP is under IETF change control and has evolved to meet Internet requirements. linked CA An internally deployed certificate authority (CA) whose certificate is signed by a public, third-party CA. The internal CA acts as the root CA for certificates it issues, and the third-party CA acts as the root CA for certificates issued by other CAs that are linked to the same third-party root CA. Also known as "chained CA" and by other terms used by different public CAs. F.12. M manual authentication A way of configuring a Certificate System subsystem that requires human approval of each certificate request. With this form of authentication, a servlet forwards a certificate request to a request queue after successful authentication module processing. An agent with appropriate privileges must then approve each request individually before profile processing and certificate issuance can proceed. MD5 A message digest algorithm that was developed by Ronald Rivest. See also one-way hash. message digest See one-way hash. misrepresentation The presentation of an entity as a person or organization that it is not. For example, a website might pretend to be a furniture store when it is really a site that takes credit-card payments but never sends any goods. Misrepresentation is one form of impersonation. See also spoofing. F.13. N Network Security Services (NSS) A set of libraries designed to support cross-platform development of security-enabled communications applications. Applications built using the NSS libraries support the Secure Sockets Layer (SSL) 
protocol for authentication, tamper detection, and encryption, and the PKCS #11 protocol for cryptographic token interfaces. NSS is also available separately as a software development kit. nonrepudiation The inability by the sender of a message to deny having sent the message. A digital signature provides one form of nonrepudiation. non-TMS Non-token management system. Refers to a configuration of subsystems (the CA and, optionally, KRA and OCSP) which do not handle smart cards directly. See also token management system (TMS). F.14. O object signing A method of file signing that allows software developers to sign Java code, JavaScript scripts, or any kind of file and allows users to identify the signers and control access by signed code to local system resources. object-signing certificate A certificate whose associated private key is used to sign objects; related to object signing. OCSP Online Certificate Status Protocol. one-way hash A fixed-length number generated from data of arbitrary length with the aid of a hashing algorithm. The number, also called a message digest, is unique to the hashed data. Any change in the data, even deleting or altering a single character, results in a different value. The content of the hashed data cannot be deduced from the hash. operation The specific operation, such as read or write, that is being allowed or denied in an access control instruction. output In the context of the certificate profile feature, it defines the resulting form from a successful certificate enrollment for a particular certificate profile. Each output is set, which then dynamically creates the form from all outputs configured for this enrollment. F.15. P password-based authentication Confident identification by means of a name and password. See also authentication, certificate-based authentication. PKCS #7 The public-key cryptography standard that governs signing and encryption. PKCS #10 The public-key cryptography standard that governs certificate requests. PKCS #11 The public-key cryptography standard that governs cryptographic tokens such as smart cards. PKCS #11 module A driver for a cryptographic device that provides cryptographic services, such as encryption and decryption, through the PKCS #11 interface. A PKCS #11 module, also called a cryptographic module or cryptographic service provider, can be implemented in either hardware or software. A PKCS #11 module always has one or more slots, which may be implemented as physical hardware slots in some form of physical reader, such as for smart cards, or as conceptual slots in software. Each slot for a PKCS #11 module can in turn contain a token, which is the hardware or software device that actually provides cryptographic services and optionally stores certificates and keys. Red Hat provides a built-in PKCS #11 module with Certificate System. PKCS #12 The public-key cryptography standard that governs key portability. private key One of a pair of keys used in public-key cryptography. The private key is kept secret and is used to decrypt data encrypted with the corresponding public key. proof-of-archival (POA) Data signed with the private Key Recovery Authority transport key that contains information about an archived end-entity key, including key serial number, name of the Key Recovery Authority, subject name of the corresponding certificate, and date of archival. The signed proof-of-archival data are the response returned by the Key Recovery Authority to the Certificate Manager after a successful key archival operation. See also Key Recovery Authority transport certificate. 
public key One of a pair of keys used in public-key cryptography. The public key is distributed freely and published as part of a certificate. It is typically used to encrypt data sent to the public key's owner, who then decrypts the data with the corresponding private key. public-key cryptography A set of well-established techniques and standards that allow an entity to verify its identity electronically or to sign and encrypt electronic data. Two keys are involved, a public key and a private key. A public key is published as part of a certificate, which associates that key with a particular identity. The corresponding private key is kept secret. Data encrypted with the public key can be decrypted only with the private key. public-key infrastructure (PKI) The standards and services that facilitate the use of public-key cryptography and X.509 v3 certificates in a networked environment. F.16. R RC2, RC4 Cryptographic algorithms developed for RSA Data Security by Rivest. See also cryptographic algorithm. Red Hat Certificate System A highly configurable set of software components and tools for creating, deploying, and managing certificates. Certificate System is comprised of five major subsystems that can be installed in different Certificate System instances in different physical locations: Certificate Manager, Online Certificate Status Manager, Key Recovery Authority, Token Key Service, and Token Processing System. registration See enrollment. root CA The certificate authority (CA) with a self-signed certificate at the top of a certificate chain. See also intermediate CA, subordinate CA. RSA algorithm Short for Rivest-Shamir-Adleman, a public-key algorithm for both encryption and authentication. It was developed by Ronald Rivest, Adi Shamir, and Leonard Adleman and introduced in 1978. RSA key exchange A key-exchange algorithm for SSL based on the RSA algorithm. F.17. S sandbox A Java™ term for the carefully defined limits within which Java™ code must operate. Simple Certificate Enrollment Protocol (SCEP) A protocol designed by Cisco to specify a way for a router to communicate with a CA for router certificate enrollment. Certificate System supports SCEP's CA mode of operation, where the request is encrypted with the CA signing certificate. secure channel A security association between the TPS and the smart card which allows encrypted communication based on a shared master key generated by the TKS and the smart card APDUs. Secure Sockets Layer (SSL) A protocol that allows mutual authentication between a client and server and the establishment of an authenticated and encrypted connection. SSL runs above TCP/IP and below HTTP, LDAP, IMAP, NNTP, and other high-level network protocols. security domain A centralized repository or inventory of PKI subsystems. Its primary purpose is to facilitate the installation and configuration of new PKI services by automatically establishing trusted relationships between subsystems. Security-Enhanced Linux (SELinux) Security-enhanced Linux (SELinux) is a set of security protocols enforcing mandatory access control on Linux system kernels. SELinux was developed by the United States National Security Agency to keep applications from accessing confidential or protected files through lenient or flawed access controls. self tests A feature that tests a Certificate System instance both when the instance starts up and on-demand. server authentication The process of identifying a server to a client. See also client authentication. 
server SSL certificate A certificate used to identify a server to a client using the Secure Sockets Layer (SSL) protocol. servlet Java™ code that handles a particular kind of interaction with end entities on behalf of a Certificate System subsystem. For example, certificate enrollment, revocation, and key recovery requests are each handled by separate servlets. SHA Secure Hash Algorithm, a hash function used by the US government. signature algorithm A cryptographic algorithm used to create digital signatures. Certificate System supports the MD5 and SHA signing algorithms. See also digital signature, cryptographic algorithm. signed audit log See audit log. signing certificate A certificate whose public key corresponds to a private key used to create digital signatures. For example, a Certificate Manager must have a signing certificate whose public key corresponds to the private key it uses to sign the certificates it issues. signing key A private key used for signing only. A signing key and its equivalent public key, plus an encryption key and its equivalent public key, constitute a dual key pair. single sign-on In Certificate System, a password that simplifies the way to sign on to Red Hat Certificate System by storing the passwords for the internal database and tokens. Each time a user logs on, he is required to enter this single password. The ability for a user to log in once to a single computer and be authenticated automatically by a variety of servers within a network. Partial single sign-on solutions can take many forms, including mechanisms for automatically tracking passwords used with different servers. Certificates support single sign-on within a public-key infrastructure (PKI). A user can log in once to a local client's private-key database and, as long as the client software is running, rely on certificate-based authentication to access each server within an organization that the user is allowed to access. slot The portion of a PKCS #11 module, implemented in either hardware or software, that contains a token. smart card A small device that contains a microprocessor and stores cryptographic information, such as keys and certificates, and performs cryptographic operations. Smart cards implement some or all of the PKCS #11 interface. spoofing Pretending to be someone else. For example, a person can pretend to have someone else's email address, or a computer can identify itself as a site called www.redhat.com when it is not. Spoofing is one form of impersonation. See also misrepresentation. SSL See Secure Sockets Layer (SSL). subject The entity identified by a certificate. In particular, the subject field of a certificate contains a subject name that uniquely describes the certified entity. subject name A distinguished name (DN) that uniquely describes the subject of a certificate. subordinate CA A certificate authority whose certificate is signed by another subordinate CA or by the root CA. See certificate authority (CA), root CA. symmetric encryption An encryption method that uses the same cryptographic key to encrypt and decrypt a given message. F.18. T tamper detection A mechanism ensuring that data received in electronic form entirely corresponds with the original version of the same data. token A hardware or software device that is associated with a slot in a PKCS #11 module. It provides cryptographic services and optionally stores certificates and keys. 
token key service (TKS) A subsystem in the token management system which derives specific, separate keys for every smart card based on the smart card APDUs and other shared information, like the token CUID. token management system (TMS) The interrelated subsystems - CA, TKS, TPS, and, optionally, the KRA - which are used to manage certificates on smart cards (tokens). transport layer security (TLS) A set of rules governing server authentication, client authentication, and encrypted communication between servers and clients. token processing system (TPS) A subsystem which interacts directly with the Enterprise Security Client and smart cards to manage the keys and certificates on those smart cards. tree hierarchy The hierarchical structure of an LDAP directory. trust Confident reliance on a person or other entity. In a public-key infrastructure (PKI), trust refers to the relationship between the user of a certificate and the certificate authority (CA) that issued the certificate. If a CA is trusted, then valid certificates issued by that CA can be trusted. F.19. U UTF-8 The certificate enrollment pages support all UTF-8 characters for specific fields (common name, organizational unit, requester name, and additional notes). The UTF-8 strings are searchable and correctly display in the CA, OCSP, and KRA end user and agents services pages. However, the UTF-8 support does not extend to internationalized domain names, such as those used in email addresses. F.20. V virtual private network (VPN) A way of connecting geographically distant divisions of an enterprise. The VPN allows the divisions to communicate over an encrypted channel, allowing authenticated, confidential transactions that would normally be restricted to a private network. F.21. X X.509 version 1 and version 3 Digital certificate formats recommended by the International Telecommunications Union (ITU).
null
https://docs.redhat.com/en/documentation/red_hat_certificate_system/10/html/administration_guide_common_criteria_edition/glossary
5.11. The multipathd Commands
5.11. The multipathd Commands The multipathd commands can be used to administer the multipathd daemon. For information on the available multipathd commands, see the multipathd (8) man page. The following command shows the standard default format for the output of the multipathd show maps command. Some multipathd commands include a format option followed by a wildcard. You can display a list of available wildcards with the following command. As of Red Hat Enterprise Linux release 7.3, the multipathd command supports new format commands that show the status of multipath devices and paths in "raw" format. In raw format, no headers are printed and the fields are not padded to align the columns with the headers. Instead, the fields print exactly as specified in the format string. This output can then be more easily used for scripting. You can display the wildcards used in the format string with the multipathd show wildcards command. The following multipathd commands show the multipath devices that multipathd is monitoring, using a format string with multipath wildcards, in regular and raw format. The following multipathd commands show the paths that multipathd is monitoring, using a format string with multipath wildcards, in regular and raw format. The following commands show the difference between the non-raw and raw formats for the multipathd show maps command. Note that in raw format there are no headers and only a single space between the columns.
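Because raw-format output has no headers and no column padding, it is well suited for consumption by shell scripts. The following is a minimal sketch of that pattern, assuming the %n and %d wildcards shown in the examples below; the loop body is a hypothetical placeholder for whatever per-device action a script needs.

#!/bin/bash
# Walk every multipath map that multipathd is monitoring, in raw format.
# "%n %d" prints the map name and its sysfs device, one map per line.
multipathd show maps raw format "%n %d" | while read -r name sysfs; do
    # Hypothetical per-map action: report the backing kernel device.
    echo "multipath map ${name} is backed by /dev/${sysfs}"
done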
[ "multipathd show maps name sysfs uuid mpathc dm-0 360a98000324669436c2b45666c567942", "multipathd show wildcards", "list|show maps|multipaths format USDformat list|show maps|multipaths raw format USDformat", "list|show paths format USDformat list|show paths raw format USDformat", "multipathd show maps format \"%n %w %d %s\" name uuid sysfs vend/prod/rev mpathc 360a98000324669436c2b45666c567942 dm-0 NETAPP,LUN multipathd show maps raw format \"%n %w %d %s\" mpathc 360a98000324669436c2b45666c567942 dm-0 NETAPP,LUN" ]
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/dm_multipath/multipathd_commands
Chapter 3. Installing RHEL AI on AWS
Chapter 3. Installing RHEL AI on AWS There are multiple ways you can install and deploy Red Hat Enterprise Linux AI on AWS. You can purchase RHEL AI from the AWS Marketplace . You can download the RHEL AI RAW file on the RHEL AI download page and convert it to an AWS image. For installing and deploying RHEL AI using the RAW file, you must first convert the RHEL AI image into an Amazon Machine Image (AMI). 3.1. Converting the RHEL AI image to an AWS AMI Before deploying RHEL AI on an AWS machine, you must set up an S3 bucket and convert the RHEL AI image to an AWS AMI. In the following process, you create the following resources: An S3 bucket with the RHEL AI image AWS EC2 snapshots An AWS AMI An AWS instance Prerequisites You have an Access Key ID configured in the AWS IAM account manager . Procedure Install the AWS command-line tool by following the AWS documentation You need to create an S3 bucket and set the permissions to allow image file conversion to AWS snapshots. Create the necessary environment variables by running the following commands: USD export BUCKET=<custom_bucket_name> USD export RAW_AMI=nvidia-bootc.ami USD export AMI_NAME="rhel-ai" USD export DEFAULT_VOLUME_SIZE=1000 Note On AWS, the DEFAULT_VOLUME_SIZE is measured in GBs. You can create an S3 bucket by running the following command: USD aws s3 mb s3://USDBUCKET You must create a trust-policy.json file with the necessary configurations for generating an S3 role for your bucket: USD printf '{ "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Principal": { "Service": "vmie.amazonaws.com" }, "Action": "sts:AssumeRole", "Condition": { "StringEquals":{ "sts:Externalid": "vmimport" } } } ] }' > trust-policy.json Create an S3 role for your bucket that you can name. In the following example command, vmimport is the name of the role. 
USD aws iam create-role --role-name vmimport --assume-role-policy-document file://trust-policy.json You must create a role-policy.json file with the necessary configurations for generating a policy for your bucket: USD printf '{ "Version":"2012-10-17", "Statement":[ { "Effect":"Allow", "Action":[ "s3:GetBucketLocation", "s3:GetObject", "s3:ListBucket" ], "Resource":[ "arn:aws:s3:::%s", "arn:aws:s3:::%s/*" ] }, { "Effect":"Allow", "Action":[ "ec2:ModifySnapshotAttribute", "ec2:CopySnapshot", "ec2:RegisterImage", "ec2:Describe*" ], "Resource":"*" } ] }' USDBUCKET USDBUCKET > role-policy.json Create a policy for your bucket by running the following command: USD aws iam put-role-policy --role-name vmimport --policy-name vmimport-USDBUCKET --policy-document file://role-policy.json Now that your S3 bucket is set up, you need to download the RAW image from the Red Hat Enterprise Linux AI download page. Copy the RAW image link and add it to the following command: USD curl -Lo disk.raw <link-to-raw-file> Upload the image to the S3 bucket with the following command: USD aws s3 cp disk.raw s3://USDBUCKET/USDRAW_AMI Convert the image to a snapshot and store it in the task_id variable name by running the following commands: USD printf '{ "Description": "my-image", "Format": "raw", "UserBucket": { "S3Bucket": "%s", "S3Key": "%s" } }' USDBUCKET USDRAW_AMI > containers.json USD task_id=USD(aws ec2 import-snapshot --disk-container file://containers.json | jq -r .ImportTaskId) You can check the progress of the disk image to snapshot conversion job with the following command: USD aws ec2 describe-import-snapshot-tasks --filters Name=task-state,Values=active Once the conversion job is complete, you can get the snapshot ID and store it in a variable called snapshot_id by running the following command: USD snapshot_id=USD(aws ec2 describe-import-snapshot-tasks | jq -r '.ImportSnapshotTasks[] | select(.ImportTaskId=="'USD{task_id}'") | .SnapshotTaskDetail.SnapshotId') Add a tag name to the snapshot, so it's easier to identify, by running the following command: USD aws ec2 create-tags --resources USDsnapshot_id --tags Key=Name,Value="USDAMI_NAME" Register an AMI from the snapshot with the following command: USD ami_id=USD(aws ec2 register-image \ --name "USDAMI_NAME" \ --description "USDAMI_NAME" \ --architecture x86_64 \ --root-device-name /dev/sda1 \ --block-device-mappings "DeviceName=/dev/sda1,Ebs={VolumeSize=USD{DEFAULT_VOLUME_SIZE},SnapshotId=USD{snapshot_id}}" \ --virtualization-type hvm \ --ena-support \ | jq -r .ImageId) You can add another tag name to identify the AMI by running the following command: USD aws ec2 create-tags --resources USDami_id --tags Key=Name,Value="USDAMI_NAME" 3.2. Deploying your instance on AWS using the CLI You can launch the AWS instance with your new RHEL AI AMI from the AWS web console or the CLI. You can use whichever method of deployment you want to launch your instance. The following procedure shows how you can use the CLI to launch your AWS instance with the custom AMI. If you choose to use the CLI as a deployment option, there are several configurations you have to create, as shown in "Prerequisites". Prerequisites You created your RHEL AI AMI. For more information, see "Converting the RHEL AI image to an AWS AMI". You have the AWS command-line tool installed and properly configured with your aws_access_key_id and aws_secret_access_key. You configured your Virtual Private Cloud (VPC). You created a subnet for your instance. You created an SSH key pair. 
You created a security group on AWS. Procedure For various parameters, you need to gather the IDs of the resources you created. To access the image ID, run the following command: USD aws ec2 describe-images --owners self To access the security group ID, run the following command: USD aws ec2 describe-security-groups To access the subnet ID, run the following command: USD aws ec2 describe-subnets Populate the environment variables for when you create the instance, including the subnet ID that the run-instances command references: USD instance_name=rhel-ai-instance USD ami=<ami-id> USD instance_type=<instance-type-size> USD key_name=<key-pair-name> USD security_group=<sg-id> USD subnet=<subnet-id> USD disk_size=<size-of-disk> Create your instance using the variables by running the following command: USD aws ec2 run-instances \ --image-id USDami \ --instance-type USDinstance_type \ --key-name USDkey_name \ --security-group-ids USDsecurity_group \ --subnet-id USDsubnet \ --block-device-mappings DeviceName=/dev/sda1,Ebs='{VolumeSize='USDdisk_size'}' \ --tag-specifications 'ResourceType=instance,Tags=[{Key=Name,Value='USDinstance_name'}]' User account The default user account in the RHEL AI AMI is cloud-user. It has full sudo permissions without a password. Verification To verify that your Red Hat Enterprise Linux AI tools are installed correctly, run the ilab command: USD ilab Example output USD ilab Usage: ilab [OPTIONS] COMMAND [ARGS]... CLI for interacting with InstructLab. If this is your first time running ilab, it's best to start with `ilab config init` to create the environment. Options: --config PATH Path to a configuration file. [default: /home/<user>/.config/instructlab/config.yaml] -v, --verbose Enable debug logging (repeat for even more verbosity) --version Show the version and exit. --help Show this message and exit. Commands: config Command Group for Interacting with the Config of InstructLab. data Command Group for Interacting with the Data generated by... model Command Group for Interacting with the Models in InstructLab. system Command group for all system-related command calls taxonomy Command Group for Interacting with the Taxonomy of InstructLab. Aliases: chat model chat generate data generate serve model serve train model train
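To actually run ilab you first need a shell on the new instance. The following is a minimal sketch, not part of the official procedure: it assumes the instance was tagged rhel-ai-instance as above, that your key pair file is available locally (the .pem path is a placeholder), and that the security group permits inbound SSH.

# Look up the public IP of the running instance by its Name tag.
ip=$(aws ec2 describe-instances \
  --filters "Name=tag:Name,Values=rhel-ai-instance" \
            "Name=instance-state-name,Values=running" \
  --query 'Reservations[].Instances[].PublicIpAddress' --output text)

# Log in as the default cloud-user account.
ssh -i <key-pair-name>.pem cloud-user@"$ip"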
[ "export BUCKET=<custom_bucket_name> export RAW_AMI=nvidia-bootc.ami export AMI_NAME=\"rhel-ai\" export DEFAULT_VOLUME_SIZE=1000", "aws s3 mb s3://USDBUCKET", "printf '{ \"Version\": \"2012-10-17\", \"Statement\": [ { \"Effect\": \"Allow\", \"Principal\": { \"Service\": \"vmie.amazonaws.com\" }, \"Action\": \"sts:AssumeRole\", \"Condition\": { \"StringEquals\":{ \"sts:Externalid\": \"vmimport\" } } } ] }' > trust-policy.json", "aws iam create-role --role-name vmimport --assume-role-policy-document file://trust-policy.json", "printf '{ \"Version\":\"2012-10-17\", \"Statement\":[ { \"Effect\":\"Allow\", \"Action\":[ \"s3:GetBucketLocation\", \"s3:GetObject\", \"s3:ListBucket\" ], \"Resource\":[ \"arn:aws:s3:::%s\", \"arn:aws:s3:::%s/*\" ] }, { \"Effect\":\"Allow\", \"Action\":[ \"ec2:ModifySnapshotAttribute\", \"ec2:CopySnapshot\", \"ec2:RegisterImage\", \"ec2:Describe*\" ], \"Resource\":\"*\" } ] }' USDBUCKET USDBUCKET > role-policy.json", "aws iam put-role-policy --role-name vmimport --policy-name vmimport-USDBUCKET --policy-document file://role-policy.json", "curl -Lo disk.raw <link-to-raw-file>", "aws s3 cp disk.raw s3://USDBUCKET/USDRAW_AMI", "printf '{ \"Description\": \"my-image\", \"Format\": \"raw\", \"UserBucket\": { \"S3Bucket\": \"%s\", \"S3Key\": \"%s\" } }' USDBUCKET USDRAW_AMI > containers.json", "task_id=USD(aws ec2 import-snapshot --disk-container file://containers.json | jq -r .ImportTaskId)", "aws ec2 describe-import-snapshot-tasks --filters Name=task-state,Values=active", "snapshot_id=USD(aws ec2 describe-import-snapshot-tasks | jq -r '.ImportSnapshotTasks[] | select(.ImportTaskId==\"'USD{task_id}'\") | .SnapshotTaskDetail.SnapshotId')", "aws ec2 create-tags --resources USDsnapshot_id --tags Key=Name,Value=\"USDAMI_NAME\"", "ami_id=USD(aws ec2 register-image --name \"USDAMI_NAME\" --description \"USDAMI_NAME\" --architecture x86_64 --root-device-name /dev/sda1 --block-device-mappings \"DeviceName=/dev/sda1,Ebs={VolumeSize=USD{DEFAULT_VOLUME_SIZE},SnapshotId=USD{snapshot_id}}\" --virtualization-type hvm --ena-support | jq -r .ImageId)", "aws ec2 create-tags --resources USDami_id --tags Key=Name,Value=\"USDAMI_NAME\"", "aws ec2 describe-images --owners self", "aws ec2 describe-security-groups", "aws ec2 describe-subnets", "instance_name=rhel-ai-instance ami=<ami-id> instance_type=<instance-type-size> key_name=<key-pair-name> security_group=<sg-id> disk_size=<size-of-disk>", "aws ec2 run-instances --image-id USDami --instance-type USDinstance_type --key-name USDkey_name --security-group-ids USDsecurity_group --subnet-id USDsubnet --block-device-mappings DeviceName=/dev/sda1,Ebs='{VolumeSize='USDdisk_size'}' --tag-specifications 'ResourceType=instance,Tags=[{Key=Name,Value='USDinstance_name'}]'", "ilab", "ilab Usage: ilab [OPTIONS] COMMAND [ARGS] CLI for interacting with InstructLab. If this is your first time running ilab, it's best to start with `ilab config init` to create the environment. Options: --config PATH Path to a configuration file. [default: /home/<user>/.config/instructlab/config.yaml] -v, --verbose Enable debug logging (repeat for even more verbosity) --version Show the version and exit. --help Show this message and exit. Commands: config Command Group for Interacting with the Config of InstructLab. data Command Group for Interacting with the Data generated by model Command Group for Interacting with the Models in InstructLab. system Command group for all system-related command calls taxonomy Command Group for Interacting with the Taxonomy of InstructLab. 
Aliases: chat model chat generate data generate serve model serve train model train" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux_ai/1.3/html/installing/installing_on_aws
Chapter 6. APNS Component
Chapter 6. APNS Component Available as of Camel version 2.8 The apns component is used for sending notifications to iOS devices. The apns component uses the javapns library. The component supports sending notifications to Apple Push Notification Servers (APNS) and consuming feedback from the servers. The consumer is configured with 3600 seconds for polling by default because it is a best practice to consume the feedback stream from Apple Push Notification Servers only from time to time, for example, every hour, to avoid flooding the servers. The feedback stream gives information about inactive devices. You only need to fetch this information every few hours if your mobile application is not a heavily used one. Maven users will need to add the following dependency to their pom.xml for this component: <dependency> <groupId>org.apache.camel</groupId> <artifactId>camel-apns</artifactId> <version>x.x.x</version> <!-- use the same version as your Camel core version --> </dependency> 6.1. URI format To send notifications: apns:notify[?options] To consume feedback: apns:consumer[?options] 6.2. Options The APNS component supports 2 options, which are listed below. Name Description Default Type apnsService (common) Required The ApnsService to use. The org.apache.camel.component.apns.factory.ApnsServiceFactory can be used to build an ApnsService ApnsService resolvePropertyPlaceholders (advanced) Whether the component should resolve property placeholders on itself when starting. Only properties which are of String type can use property placeholders. true boolean The APNS endpoint is configured using URI syntax: with the following path and query parameters: 6.2.1. Path Parameters (1 parameter): Name Description Default Type name Name of the endpoint String 6.2.2. Query Parameters (20 parameters): Name Description Default Type tokens (common) Configure this property in case you want to statically declare tokens related to devices you want to notify. Tokens are separated by comma. String bridgeErrorHandler (consumer) Allows for bridging the consumer to the Camel routing Error Handler, which means any exceptions that occur while the consumer is trying to pick up incoming messages, or the like, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, which will be logged at WARN or ERROR level and ignored. false boolean sendEmptyMessageWhenIdle (consumer) If the polling consumer did not poll any files, you can enable this option to send an empty message (no body) instead. false boolean exceptionHandler (consumer) To let the consumer use a custom ExceptionHandler. Notice if the option bridgeErrorHandler is enabled then this option is not in use. By default the consumer will deal with exceptions, which will be logged at WARN or ERROR level and ignored. ExceptionHandler exchangePattern (consumer) Sets the exchange pattern when the consumer creates an exchange. ExchangePattern pollStrategy (consumer) A pluggable org.apache.camel.PollingConsumerPollingStrategy allowing you to provide your custom implementation to control error handling usually occurring during the poll operation before an Exchange has been created and routed in Camel. PollingConsumerPollStrategy synchronous (advanced) Sets whether synchronous processing should be strictly used, or Camel is allowed to use asynchronous processing (if supported). 
false boolean backoffErrorThreshold (scheduler) The number of subsequent error polls (failed due to some error) that should happen before the backoffMultiplier should kick in. int backoffIdleThreshold (scheduler) The number of subsequent idle polls that should happen before the backoffMultiplier should kick in. int backoffMultiplier (scheduler) To let the scheduled polling consumer back off if there has been a number of subsequent idles/errors in a row. The multiplier is then the number of polls that will be skipped before the next actual attempt happens. When this option is in use then backoffIdleThreshold and/or backoffErrorThreshold must also be configured. int delay (scheduler) Milliseconds before the next poll. You can also specify time values using units, such as 60s (60 seconds), 5m30s (5 minutes and 30 seconds), and 1h (1 hour). 500 long greedy (scheduler) If greedy is enabled, then the ScheduledPollConsumer will run immediately again, if the previous run polled 1 or more messages. false boolean initialDelay (scheduler) Milliseconds before the first poll starts. You can also specify time values using units, such as 60s (60 seconds), 5m30s (5 minutes and 30 seconds), and 1h (1 hour). 1000 long runLoggingLevel (scheduler) The consumer logs a start/complete log line when it polls. This option allows you to configure the logging level for that. TRACE LoggingLevel scheduledExecutorService (scheduler) Allows for configuring a custom/shared thread pool to use for the consumer. By default each consumer has its own single threaded thread pool. ScheduledExecutorService scheduler (scheduler) To use a cron scheduler from either camel-spring or camel-quartz2 component none ScheduledPollConsumerScheduler schedulerProperties (scheduler) To configure additional properties when using a custom scheduler or any of the Quartz2, Spring based schedulers. Map startScheduler (scheduler) Whether the scheduler should be auto started. true boolean timeUnit (scheduler) Time unit for initialDelay and delay options. MILLISECONDS TimeUnit useFixedDelay (scheduler) Controls if fixed delay or fixed rate is used. See ScheduledExecutorService in JDK for details. true boolean 6.3. Spring Boot Auto-Configuration The component supports 3 options, which are listed below. Name Description Default Type camel.component.apns.apns-service The ApnsService to use. The org.apache.camel.component.apns.factory.ApnsServiceFactory can be used to build an ApnsService. The option is a com.notnoop.apns.ApnsService type. String camel.component.apns.enabled Enable apns component true Boolean camel.component.apns.resolve-property-placeholders Whether the component should resolve property placeholders on itself when starting. Only properties which are of String type can use property placeholders. true Boolean You can append query options to the URI in the following format, ?option=value&option=value&... 6.3.1. Component The ApnsComponent must be configured with a com.notnoop.apns.ApnsService . The service can be created and configured using the org.apache.camel.component.apns.factory.ApnsServiceFactory . See further below for an example, as well as the test source code. 6.3.1.1. SSL Setting In order to use a secure connection, an instance of org.apache.camel.util.jsse.SSLContextParameters should be injected into org.apache.camel.component.apns.factory.ApnsServiceFactory , which is used to configure the component. See the test resources for an SSL example. 6.4. 
Exchange data format When Camel fetches feedback data corresponding to inactive devices, it retrieves a List of InactiveDevice objects. Each InactiveDevice object of the retrieved list is set as the In body, and then processed by the consumer endpoint. 6.5. Message Headers Camel Apns uses these headers. Property Default Description CamelApnsTokens Empty by default. CamelApnsMessageType STRING, PAYLOAD, APNS_NOTIFICATION In case you choose PAYLOAD for the message type, the message is treated as an APNS payload and sent as is. In case you choose STRING, the message is converted into an APNS payload. From Camel 2.16 onwards APNS_NOTIFICATION is used for sending message bodies as com.notnoop.apns.ApnsNotification types. 6.6. ApnsServiceFactory builder callback ApnsServiceFactory comes with an empty callback method that can be used to configure (or even replace) the default ApnsServiceBuilder instance. The signature of the method looks as follows: protected ApnsServiceBuilder configureServiceBuilder(ApnsServiceBuilder serviceBuilder); It can be used as follows: ApnsServiceFactory proxiedApnsServiceFactory = new ApnsServiceFactory(){ @Override protected ApnsServiceBuilder configureServiceBuilder(ApnsServiceBuilder serviceBuilder) { return serviceBuilder.withSocksProxy("my.proxy.com", 6666); } }; 6.7. Samples 6.7.1. Camel Xml route <beans xmlns="http://www.springframework.org/schema/beans" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:camel="http://camel.apache.org/schema/spring" xsi:schemaLocation=" http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans-2.5.xsd http://camel.apache.org/schema/spring http://camel.apache.org/schema/spring/camel-spring.xsd"> <!-- Replace by desired values --> <bean id="apnsServiceFactory" class="org.apache.camel.component.apns.factory.ApnsServiceFactory"> <!-- Optional configuration of feedback host and port --> <!-- <property name="feedbackHost" value="localhost" /> --> <!-- <property name="feedbackPort" value="7843" /> --> <!-- Optional configuration of gateway host and port --> <!-- <property name="gatewayHost" value="localhost" /> --> <!-- <property name="gatewayPort" value="7654" /> --> <!-- Declaration of certificate used --> <!-- from Camel 2.11 onwards you can use prefix: classpath:, file: to refer to load the certificate from classpath or file. 
Default it classpath --> <property name="certificatePath" value="certificate.p12" /> <property name="certificatePassword" value="MyCertPassword" /> <!-- Optional connection strategy - By Default: No need to configure --> <!-- Possible options: NON_BLOCKING, QUEUE, POOL or Nothing --> <!-- <property name="connectionStrategy" value="POOL" /> --> <!-- Optional pool size --> <!-- <property name="poolSize" value="15" /> --> <!-- Optional connection strategy - By Default: No need to configure --> <!-- Possible options: EVERY_HALF_HOUR, EVERY_NOTIFICATION or Nothing (Corresponds to NEVER javapns option) --> <!-- <property name="reconnectionPolicy" value="EVERY_HALF_HOUR" /> --> </bean> <bean id="apnsService" factory-bean="apnsServiceFactory" factory-method="getApnsService" /> <!-- Replace this declaration by wanted configuration --> <bean id="apns" class="org.apache.camel.component.apns.ApnsComponent"> <property name="apnsService" ref="apnsService" /> </bean> <camelContext id="camel-apns-test" xmlns="http://camel.apache.org/schema/spring"> <route id="apns-test"> <from uri="apns:consumer?initialDelay=10&amp;delay=3600&amp;timeUnit=SECONDS" /> <to uri="log:org.apache.camel.component.apns?showAll=true&amp;multiline=true" /> <to uri="mock:result" /> </route> </camelContext> </beans> 6.7.2. Camel Java route Create camel context and declare apns component programmatically protected CamelContext createCamelContext() throws Exception { CamelContext camelContext = super.createCamelContext(); ApnsServiceFactory apnsServiceFactory = new ApnsServiceFactory(); apnsServiceFactory.setCertificatePath("classpath:/certificate.p12"); apnsServiceFactory.setCertificatePassword("MyCertPassword"); ApnsService apnsService = apnsServiceFactory.getApnsService(camelContext); ApnsComponent apnsComponent = new ApnsComponent(apnsService); camelContext.addComponent("apns", apnsComponent); return camelContext; } ApnsProducer - iOS target device dynamically configured via header: "CamelApnsTokens" protected RouteBuilder createRouteBuilder() throws Exception { return new RouteBuilder() { public void configure() throws Exception { from("direct:test") .setHeader(ApnsConstants.HEADER_TOKENS, constant(IOS_DEVICE_TOKEN)) .to("apns:notify"); } } } ApnsProducer - iOS target device statically configured via uri protected RouteBuilder createRouteBuilder() throws Exception { return new RouteBuilder() { public void configure() throws Exception { from("direct:test"). to("apns:notify?tokens=" + IOS_DEVICE_TOKEN); } }; } ApnsConsumer from("apns:consumer?initialDelay=10&delay=3600&timeUnit=SECONDS") .to("log:com.apache.camel.component.apns?showAll=true&multiline=true") .to("mock:result"); 6.8. See Also Component Endpoint * Blog about using APNS (in French)
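To round out the samples above, the following is a minimal sketch of pushing a notification from application code rather than from a route. It assumes a CamelContext configured as in createCamelContext(), with a producer route such as from("direct:test").to("apns:notify") already added; the device token value is a placeholder you must supply.

import org.apache.camel.CamelContext;
import org.apache.camel.ProducerTemplate;

// Sends one notification body through the direct:test route defined earlier.
public class ApnsSendExample {
    public static void send(CamelContext camelContext, String deviceToken) {
        ProducerTemplate template = camelContext.createProducerTemplate();
        // With the default STRING message type, the body is converted into
        // an APNS payload; the CamelApnsTokens header selects the target device.
        template.sendBodyAndHeader("direct:test", "Hello from Camel!",
                "CamelApnsTokens", deviceToken);
    }
}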
[ "<dependency> <groupId>org.apache.camel</groupId> <artifactId>camel-apns</artifactId> <version>x.x.x</version> <!-- use the same version as your Camel core version --> </dependency>", "apns:notify[?options]", "apns:consumer[?options]", "apns:name", "protected ApnsServiceBuilder configureServiceBuilder(ApnsServiceBuilder serviceBuilder);", "ApnsServiceFactory proxiedApnsServiceFactory = new ApnsServiceFactory(){ @Override protected ApnsServiceBuilder configureServiceBuilder(ApnsServiceBuilder serviceBuilder) { return serviceBuilder.withSocksProxy(\"my.proxy.com\", 6666); } };", "<beans xmlns=\"http://www.springframework.org/schema/beans\" xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\" xmlns:camel=\"http://camel.apache.org/schema/spring\" xsi:schemaLocation=\" http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans-2.5.xsd http://camel.apache.org/schema/spring http://camel.apache.org/schema/spring/camel-spring.xsd\"> <!-- Replace by desired values --> <bean id=\"apnsServiceFactory\" class=\"org.apache.camel.component.apns.factory.ApnsServiceFactory\"> <!-- Optional configuration of feedback host and port --> <!-- <property name=\"feedbackHost\" value=\"localhost\" /> --> <!-- <property name=\"feedbackPort\" value=\"7843\" /> --> <!-- Optional configuration of gateway host and port --> <!-- <property name=\"gatewayHost\" value=\"localhost\" /> --> <!-- <property name=\"gatewayPort\" value=\"7654\" /> --> <!-- Declaration of certificate used --> <!-- from Camel 2.11 onwards you can use prefix: classpath:, file: to refer to load the certificate from classpath or file. Default it classpath --> <property name=\"certificatePath\" value=\"certificate.p12\" /> <property name=\"certificatePassword\" value=\"MyCertPassword\" /> <!-- Optional connection strategy - By Default: No need to configure --> <!-- Possible options: NON_BLOCKING, QUEUE, POOL or Nothing --> <!-- <property name=\"connectionStrategy\" value=\"POOL\" /> --> <!-- Optional pool size --> <!-- <property name=\"poolSize\" value=\"15\" /> --> <!-- Optional connection strategy - By Default: No need to configure --> <!-- Possible options: EVERY_HALF_HOUR, EVERY_NOTIFICATION or Nothing (Corresponds to NEVER javapns option) --> <!-- <property name=\"reconnectionPolicy\" value=\"EVERY_HALF_HOUR\" /> --> </bean> <bean id=\"apnsService\" factory-bean=\"apnsServiceFactory\" factory-method=\"getApnsService\" /> <!-- Replace this declaration by wanted configuration --> <bean id=\"apns\" class=\"org.apache.camel.component.apns.ApnsComponent\"> <property name=\"apnsService\" ref=\"apnsService\" /> </bean> <camelContext id=\"camel-apns-test\" xmlns=\"http://camel.apache.org/schema/spring\"> <route id=\"apns-test\"> <from uri=\"apns:consumer?initialDelay=10&amp;delay=3600&amp;timeUnit=SECONDS\" /> <to uri=\"log:org.apache.camel.component.apns?showAll=true&amp;multiline=true\" /> <to uri=\"mock:result\" /> </route> </camelContext> </beans>", "protected CamelContext createCamelContext() throws Exception { CamelContext camelContext = super.createCamelContext(); ApnsServiceFactory apnsServiceFactory = new ApnsServiceFactory(); apnsServiceFactory.setCertificatePath(\"classpath:/certificate.p12\"); apnsServiceFactory.setCertificatePassword(\"MyCertPassword\"); ApnsService apnsService = apnsServiceFactory.getApnsService(camelContext); ApnsComponent apnsComponent = new ApnsComponent(apnsService); camelContext.addComponent(\"apns\", apnsComponent); return camelContext; }", "protected RouteBuilder 
createRouteBuilder() throws Exception { return new RouteBuilder() { public void configure() throws Exception { from(\"direct:test\") .setHeader(ApnsConstants.HEADER_TOKENS, constant(IOS_DEVICE_TOKEN)) .to(\"apns:notify\"); } } }", "protected RouteBuilder createRouteBuilder() throws Exception { return new RouteBuilder() { public void configure() throws Exception { from(\"direct:test\"). to(\"apns:notify?tokens=\" + IOS_DEVICE_TOKEN); } }; }", "from(\"apns:consumer?initialDelay=10&delay=3600&timeUnit=SECONDS\") .to(\"log:com.apache.camel.component.apns?showAll=true&multiline=true\") .to(\"mock:result\");" ]
https://docs.redhat.com/en/documentation/red_hat_fuse/7.13/html/apache_camel_component_reference/apns-component
Chapter 1. Architectures
Chapter 1. Architectures Red Hat Enterprise Linux 7 is available as a single kit on the following architectures [1] : 64-bit AMD 64-bit Intel IBM POWER7 IBM System z [2] In this release, Red Hat brings together improvements for servers and systems, as well as for the overall Red Hat open source experience. [1] Note that the Red Hat Enterprise Linux 7 installation is only supported on 64-bit hardware. Red Hat Enterprise Linux 7 is able to run 32-bit operating systems, including versions of Red Hat Enterprise Linux, as virtual machines. [2] Note that Red Hat Enterprise Linux 7 supports IBM zEnterprise 196 hardware or later; IBM System z10 mainframe systems are no longer supported and will not boot Red Hat Enterprise Linux 7.
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/7.0_release_notes/chap-red_hat_enterprise_linux-7.0_release_notes-architectures
Chapter 1. Introduction
Chapter 1. Introduction 1.1. Why Performance Optimization Matters in Virtualization In KVM virtualization, guests are represented by processes on the host machine. This means that processing power, memory, and other resources of the host are used to emulate the functions and capabilities of the guest's virtual hardware. However, a guest's virtual hardware can be less efficient at using these resources than the host's physical hardware. Therefore, you may need to adjust the amount of allocated host resources for the guest to perform its tasks at the expected speed. In addition, various types of virtual hardware may have different levels of overhead, so an appropriate virtual hardware configuration can have a significant impact on guest performance. Finally, depending on the circumstances, specific configurations enable virtual machines to use host resources more efficiently.
null
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/virtualization_tuning_and_optimization_guide/chap-virtualization_tuning_optimization_guide-introduction
Chapter 10. Caching policy for object buckets
Chapter 10. Caching policy for object buckets A cache bucket is a namespace bucket with a hub target and a cache target. The hub target is an S3 compatible large object storage bucket. The cache bucket is the local Multicloud Object Gateway (MCG) bucket. You can create a cache bucket that caches an AWS bucket or an IBM COS bucket. AWS S3 IBM COS 10.1. Creating an AWS cache bucket Prerequisites Download the Multicloud Object Gateway (MCG) command-line interface. Note Specify the appropriate architecture for enabling the repositories using the subscription manager. In the case of IBM Z infrastructure, use the following command: Alternatively, you can install the MCG package from the OpenShift Data Foundation RPMs found here https://access.redhat.com/downloads/content/547/ver=4/rhel---8/4/x86_64/package . Note Choose the correct Product Variant according to your architecture. Procedure Create a NamespaceStore resource. A NamespaceStore represents an underlying storage to be used as a read or write target for the data in the MCG namespace buckets. From the MCG command-line interface, run the following command: Replace <namespacestore> with the name of the namespacestore. Replace <AWS ACCESS KEY> and <AWS SECRET ACCESS KEY> with an AWS access key ID and secret access key you created for this purpose. Replace <bucket-name> with an existing AWS bucket name. This argument tells the MCG which bucket to use as a target bucket for its backing store, and subsequently, data storage and administration. You can also add storage resources by applying a YAML. First, create a secret with the credentials: You must supply and encode your own AWS access key ID and secret access key using Base64, and use the results in place of <AWS ACCESS KEY ID ENCODED IN BASE64> and <AWS SECRET ACCESS KEY ENCODED IN BASE64>. Replace <namespacestore-secret-name> with a unique name. Then apply the following YAML: Replace <namespacestore> with a unique name. Replace <namespacestore-secret-name> with the secret created in the previous step. Replace <namespace-secret> with the namespace used to create the secret in the previous step. Replace <target-bucket> with the AWS S3 bucket you created for the namespacestore. Run the following command to create a bucket class: Replace <my-cache-bucket-class> with a unique bucket class name. Replace <backing-store> with the relevant backing store. You can list one or more backingstores separated by commas in this field. Replace <namespacestore> with the namespacestore created in the previous step. Run the following command to create a bucket using an Object Bucket Claim (OBC) resource that uses the bucket class defined in step 2. Replace <my-bucket-claim> with a unique name. Replace <custom-bucket-class> with the name of the bucket class created in step 2. 10.2. Creating an IBM COS cache bucket Prerequisites Download the Multicloud Object Gateway (MCG) command-line interface. Note Specify the appropriate architecture for enabling the repositories using the subscription manager. For IBM Power, use the following command: For IBM Z infrastructure, use the following command: Alternatively, you can install the MCG package from the OpenShift Data Foundation RPMs found here https://access.redhat.com/downloads/content/547/ver=4/rhel---8/4/x86_64/package . Note Choose the correct Product Variant according to your architecture. Procedure Create a NamespaceStore resource. A NamespaceStore represents an underlying storage to be used as a read or write target for the data in the MCG namespace buckets. 
From the MCG command-line interface, run the following command: Replace <namespacestore> with the name of the NamespaceStore. Replace <IBM ACCESS KEY> , <IBM SECRET ACCESS KEY> , and <IBM COS ENDPOINT> with an IBM access key ID, secret access key, and the appropriate regional endpoint that corresponds to the location of the existing IBM bucket. Replace <bucket-name> with an existing IBM bucket name. This argument tells the MCG which bucket to use as a target bucket for its backing store, and subsequently, data storage and administration. You can also add storage resources by applying a YAML. First, create a secret with the credentials: You must supply and encode your own IBM COS access key ID and secret access key using Base64, and use the results in place of <IBM COS ACCESS KEY ID ENCODED IN BASE64> and <IBM COS SECRET ACCESS KEY ENCODED IN BASE64> . Replace <namespacestore-secret-name> with a unique name. Then apply the following YAML: Replace <namespacestore> with a unique name. Replace <IBM COS ENDPOINT> with the appropriate IBM COS endpoint. Replace <backingstore-secret-name> with the secret created in the previous step. Replace <namespace-secret> with the namespace used to create the secret in the previous step. Replace <target-bucket> with the IBM COS bucket you created for the namespacestore. Run the following command to create a bucket class: Replace <my-bucket-class> with a unique bucket class name. Replace <backing-store> with the relevant backing store. You can list one or more backingstores separated by commas in this field. Replace <namespacestore> with the namespacestore created in the previous step. Run the following command to create a bucket using an Object Bucket Claim resource that uses the bucket class defined in step 2. Replace <my-bucket-claim> with a unique name. Replace <custom-bucket-class> with the name of the bucket class created in step 2.
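For reference, the noobaa obc create commands shown in the listing below generate an ObjectBucketClaim (OBC) resource on your behalf. The following is a minimal sketch of an equivalent claim applied directly as YAML; the resource names are illustrative, and the openshift-storage.noobaa.io storage class name is an assumption that you should verify in your cluster before applying it:

apiVersion: objectbucket.io/v1alpha1
kind: ObjectBucketClaim
metadata:
  name: my-bucket-claim                          # illustrative claim name
  namespace: openshift-storage
spec:
  generateBucketName: my-cache-bucket            # prefix for the generated bucket name
  storageClassName: openshift-storage.noobaa.io  # assumed MCG storage class; verify in your cluster
  additionalConfig:
    bucketclass: my-cache-bucket-class           # the cache bucket class created above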
[ "subscription-manager repos --enable=rh-odf-4-for-rhel-8-x86_64-rpms yum install mcg", "subscription-manager repos --enable=rh-odf-4-for-rhel-8-s390x-rpms", "noobaa namespacestore create aws-s3 <namespacestore> --access-key <AWS ACCESS KEY> --secret-key <AWS SECRET ACCESS KEY> --target-bucket <bucket-name>", "apiVersion: v1 kind: Secret metadata: name: <namespacestore-secret-name> type: Opaque data: AWS_ACCESS_KEY_ID: <AWS ACCESS KEY ID ENCODED IN BASE64> AWS_SECRET_ACCESS_KEY: <AWS SECRET ACCESS KEY ENCODED IN BASE64>", "apiVersion: noobaa.io/v1alpha1 kind: NamespaceStore metadata: finalizers: - noobaa.io/finalizer labels: app: noobaa name: <namespacestore> namespace: openshift-storage spec: awsS3: secret: name: <namespacestore-secret-name> namespace: <namespace-secret> targetBucket: <target-bucket> type: aws-s3", "noobaa bucketclass create namespace-bucketclass cache <my-cache-bucket-class> --backingstores <backing-store> --hub-resource <namespacestore>", "noobaa obc create <my-bucket-claim> my-app --bucketclass <custom-bucket-class>", "subscription-manager repos --enable=rh-odf-4-for-rhel-8-x86_64-rpms yum install mcg", "subscription-manager repos --enable=rh-odf-4-for-rhel-8-ppc64le-rpms", "subscription-manager repos --enable=rh-odf-4-for-rhel-8-s390x-rpms", "noobaa namespacestore create ibm-cos <namespacestore> --endpoint <IBM COS ENDPOINT> --access-key <IBM ACCESS KEY> --secret-key <IBM SECRET ACCESS KEY> --target-bucket <bucket-name>", "apiVersion: v1 kind: Secret metadata: name: <namespacestore-secret-name> type: Opaque data: IBM_COS_ACCESS_KEY_ID: <IBM COS ACCESS KEY ID ENCODED IN BASE64> IBM_COS_SECRET_ACCESS_KEY: <IBM COS SECRET ACCESS KEY ENCODED IN BASE64>", "apiVersion: noobaa.io/v1alpha1 kind: NamespaceStore metadata: finalizers: - noobaa.io/finalizer labels: app: noobaa name: <namespacestore> namespace: openshift-storage spec: s3Compatible: endpoint: <IBM COS ENDPOINT> secret: name: <backingstore-secret-name> namespace: <namespace-secret> signatureVersion: v2 targetBucket: <target-bucket> type: ibm-cos", "noobaa bucketclass create namespace-bucketclass cache <my-bucket-class> --backingstores <backing-store> --hubResource <namespacestore>", "noobaa obc create <my-bucket-claim> my-app --bucketclass <custom-bucket-class>" ]
https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.9/html/managing_hybrid_and_multicloud_resources/caching-policy-for-object-buckets_rhodf
Chapter 13. Event-Driven Ansible logging strategy
Chapter 13. Event-Driven Ansible logging strategy Event-Driven Ansible offers an audit logging solution for its resources. Each supported create, read, update, and delete (CRUD) operation is logged against rulebook activations, event streams, decision environments, and projects, as well as activation start and stop events. Some of these resources support further operations, such as sync, enable, disable, restart, start, and stop; logging is supported for these operations as well. These logs are retained only for the lifecycle of their associated container. See the following sample logs for each supported logging operation. 13.1. Logging samples When the following APIs are called for each operation, you see the following audit logs: Rulebook activation EventStream Logs Decision Environment Project Activation Start/Stop
[ "1. Create 1. 2024-08-15 14:13:20,384 aap_eda.api.views.activation INFO Action: Create / ResourceType: RulebookActivation / ResourceName: quick_start_project / ResourceID: 53 / Organization: Default 2. Read 1. 2024-08-15 14:21:26,844 aap_eda.api.views.activation INFO Action: Read / ResourceType: RulebookActivation / ResourceName: quick_start_activation / ResourceID: 1 / Organization: Default 3. Disable 1. 2024-08-15 14:23:57,798 aap_eda.api.views.activation INFO Action: Disable / ResourceType: RulebookActivation / ResourceName: quick_start_activation / ResourceID: 1 / Organization: Default 4. Enable 1. 2024-08-15 14:24:16,472 aap_eda.api.views.activation INFO Action: Enable / ResourceType: RulebookActivation / ResourceName: quick_start_activation / ResourceID: 1 / Organization: Default 5. Delete 1. 2024-08-15 14:24:53,847 aap_eda.api.views.activation INFO Action: Delete / ResourceType: RulebookActivation / ResourceName: quick_start_activation / ResourceID: 1 / Organization: Default 6. Restart 2024-08-15 14:24:34,169 aap_eda.api.views.activation INFO Action: Restart / ResourceType: RulebookActivation / ResourceName: quick_start_activation / ResourceID: 1 / Organization: Default", "1. Create 1. 2024-08-15 13:46:26,903 aap_eda.api.views.webhook INFO Action: Create / ResourceType: EventStream / ResourceName: ZackTest / ResourceID: 1 / Organization: Default 2. Update 1. 2024-08-15 13:56:17,440 aap_eda.api.views.webhook INFO Action: Update / ResourceType: EventStream / ResourceName: ZackTest / ResourceID: 1 / Organization: Default 3. Read 1. 2024-08-15 13:56:56,271 aap_eda.api.views.webhook INFO Action: Read / ResourceType: EventStream / ResourceName: ZackTest / ResourceID: 1 / Organization: Default 4. List 1. 2024-08-15 13:56:17,492 aap_eda.api.views.webhook INFO Action: List / ResourceType: EventStream / ResourceName: * / ResourceID: * / Organization: * 5. Delete 1. 2024-08-15 13:57:13,124 aap_eda.api.views.webhook INFO Action: Delete / ResourceType: EventStream / ResourceName: ZackTest / ResourceID: None / Organization: Default", "1. Create 1. 2024-08-15 14:10:53,311 aap_eda.api.views.decision_environment INFO Action: Create / ResourceType: DecisionEnvironment / ResourceName: quick_start_de / ResourceID: 86 / Organization: Default 2. Read 1. 2024-08-15 14:10:53,349 aap_eda.api.views.decision_environment INFO Action: Read / ResourceType: DecisionEnvironment / ResourceName: quick_start_de / ResourceID: 86 / Organization: Default 3. Update 2024-08-15 14:11:20,970 aap_eda.api.views.decision_environment INFO Action: Update / ResourceType: DecisionEnvironment / ResourceName: quick_start_de / ResourceID: 86 / Organization: Default 4. Delete 2024-08-15 14:11:42,369 aap_eda.api.views.decision_environment INFO Action: Delete / ResourceType: DecisionEnvironment / ResourceName: quick_start_de / ResourceID: None / Organization: Default", "1. Create 1. 2024-08-15 14:05:26,874 aap_eda.api.views.project INFO Action: Create / ResourceType: Project / ResourceName: quick_start_project / ResourceID: 86 / Organization: Default 2. Read 1. 2024-08-15 14:05:26,913 aap_eda.api.views.project INFO Action: Read / ResourceType: Project / ResourceName: quick_start_project / ResourceID: 86 / Organization: Default 3. Update 1. 2024-08-15 14:06:08,255 aap_eda.api.views.project INFO Action: Update / ResourceType: Project / ResourceName: quick_start_project / ResourceID: 86 / Organization: Default 4. Sync 1. 
2024-08-15 14:06:30,580 aap_eda.api.views.project INFO Action: Sync / ResourceType: Project / ResourceName: quick_start_project / ResourceID: 86 / Organization: Default 5. Delete 1. 2024-08-15 14:06:49,481 aap_eda.api.views.project INFO Action: Delete / ResourceType: Project / ResourceName: quick_start_project / ResourceID: 86 / Organization: Default", "1. Start 1. 2024-08-15 14:21:29,076 aap_eda.services.activation.activation_manager INFO Requested to start activation 1, starting. 2024-08-15 14:21:29,093 aap_eda.services.activation.activation_manager INFO Creating a new activation instance for activation: 1 2024-08-15 14:21:29,104 aap_eda.services.activation.activation_manager INFO Starting container for activation instance: 1 2. Stop 1. eda-activation-worker-1 | 2024-08-15 14:40:52,547 aap_eda.services.activation.activation_manager INFO Stop operation requested for activation id: 2 Stopping activation. eda-activation-worker-1 | 2024-08-15 14:40:52,550 aap_eda.services.activation.activation_manager INFO Activation 2 is already stopped. eda-activation-worker-1 | 2024-08-15 14:40:52,550 aap_eda.services.activation.activation_manager INFO Activation manager activation id: 2 Activation restart scheduled for 1 second. eda-activation-worker-1 | 2024-08-15 14:40:52,562 rq.worker INFO activation: Job OK (activation-2)" ]
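Because every entry follows the same Action / ResourceType / ResourceName / ResourceID / Organization pattern shown above, you can filter the audit trail out of a container's output with standard tools. A minimal sketch, assuming the API service runs in an OpenShift deployment named eda-api (a hypothetical name; adjust it to your environment):

# Extract all audit actions recorded against rulebook activations
oc logs deployment/eda-api | grep 'INFO Action:' | grep 'ResourceType: RulebookActivation'

Because these logs live only as long as the associated container, forward them to an external log aggregator if you need a durable audit trail.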
https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.5/html/using_automation_decisions/eda-logging-strategy
Chapter 4. Configuring Operator-based broker deployments
Chapter 4. Configuring Operator-based broker deployments 4.1. How the Operator generates the broker configuration Before you use Custom Resource (CR) instances to configure your broker deployment, you should understand how the Operator generates the broker configuration. When you create an Operator-based broker deployment, a Pod for each broker runs in a StatefulSet in your OpenShift project. An application container for the broker runs within each Pod. The Operator runs a type of container called an Init Container when initializing each Pod. In OpenShift Container Platform, Init Containers are specialized containers that run before application containers. Init Containers can include utilities or setup scripts that are not present in the application image. By default, the AMQ Broker Operator uses a built-in Init Container. The Init Container uses the main CR instance for your deployment to generate the configuration used by each broker application container. If you have specified address settings in the CR, the Operator generates a default configuration and then merges or replaces that configuration with the configuration specified in the CR. This process is described in the section that follows. 4.1.1. How the Operator generates the address settings configuration If you have included an address settings configuration in the main Custom Resource (CR) instance for your deployment, the Operator generates the address settings configuration for each broker as described below. The Operator runs the Init Container before the broker application container. The Init Container generates a default address settings configuration. The default address settings configuration is shown below. <address-settings> <!-- if you define auto-create on certain queues, management has to be auto-create --> <address-setting match="activemq.management#"> <dead-letter-address>DLQ</dead-letter-address> <expiry-address>ExpiryQueue</expiry-address> <redelivery-delay>0</redelivery-delay> <!-- with -1 only the global-max-size is in use for limiting --> <max-size-bytes>-1</max-size-bytes> <message-counter-history-day-limit>10</message-counter-history-day-limit> <address-full-policy>PAGE</address-full-policy> <auto-create-queues>true</auto-create-queues> <auto-create-addresses>true</auto-create-addresses> <auto-create-jms-queues>true</auto-create-jms-queues> <auto-create-jms-topics>true</auto-create-jms-topics> </address-setting> <!-- default for catch all --> <address-setting match="#"> <dead-letter-address>DLQ</dead-letter-address> <expiry-address>ExpiryQueue</expiry-address> <redelivery-delay>0</redelivery-delay> <!-- with -1 only the global-max-size is in use for limiting --> <max-size-bytes>-1</max-size-bytes> <message-counter-history-day-limit>10</message-counter-history-day-limit> <address-full-policy>PAGE</address-full-policy> <auto-create-queues>true</auto-create-queues> <auto-create-addresses>true</auto-create-addresses> <auto-create-jms-queues>true</auto-create-jms-queues> <auto-create-jms-topics>true</auto-create-jms-topics> </address-setting> </address-settings> If you have also specified an address settings configuration in your Custom Resource (CR) instance, the Init Container processes that configuration and converts it to XML. Based on the value of the applyRule property in the CR, the Init Container merges or replaces the default address settings configuration shown above with the configuration that you have specified in the CR. 
The result of this merge or replacement is the final address settings configuration that the broker will use. When the Init Container has finished generating the broker configuration (including address settings), the broker application container starts. When starting, the broker container copies its configuration from the installation directory previously used by the Init Container. You can inspect the address settings configuration in the broker.xml configuration file. For a running broker, this file is located in the /home/jboss/amq-broker/etc directory. Additional resources For an example of using the applyRule property in a CR, see Section 4.2.3, "Matching address settings to configured addresses in an Operator-based broker deployment" . 4.1.2. Directory structure of a broker Pod When you create an Operator-based broker deployment, a Pod for each broker runs in a StatefulSet in your OpenShift project. An application container for the broker runs within each Pod. The Operator runs a type of container called an Init Container when initializing each Pod. In OpenShift Container Platform, Init Containers are specialized containers that run before application containers. Init Containers can include utilities or setup scripts that are not present in the application image. When generating the configuration for a broker instance, the Init Container uses files contained in a default installation directory. This installation directory is on a volume that the Operator mounts to the broker Pod and which the Init Container and broker container share. The path that the Init Container uses to mount the shared volume is defined in an environment variable called CONFIG_INSTANCE_DIR . The default value of CONFIG_INSTANCE_DIR is /amq/init/config . In the documentation, this directory is referred to as <install_dir> . Note You cannot change the value of the CONFIG_INSTANCE_DIR environment variable. By default, the installation directory has the following sub-directories: Sub-directory Contents <install_dir> /bin Binaries and scripts needed to run the broker. <install_dir> /etc Configuration files. <install_dir> /data The broker journal. <install_dir> /lib JARs and libraries needed to run the broker. <install_dir> /log Broker log files. <install_dir> /tmp Temporary web application files. When the Init Container has finished generating the broker configuration, the broker application container starts. When starting, the broker container copies its configuration from the installation directory previously used by the Init Container. When the broker Pod is initialized and running, the broker configuration is located in the /home/jboss/amq-broker directory (and subdirectories) of the broker. Additional resources For more information about how the Operator chooses a container image for the built-in Init Container, see Section 2.4, "How the Operator chooses container images" . To learn how to build and specify a custom Init Container image, see Section 4.7, "Specifying a custom Init Container image" . 4.2. Configuring addresses and queues for Operator-based broker deployments For an Operator-based broker deployment, you use two separate Custom Resource (CR) instances to configure addresses and queues and their associated settings. To create addresses and queues on your brokers, you deploy a CR instance based on the address Custom Resource Definition (CRD). 
If you used the OpenShift command-line interface (CLI) to install the Operator, the address CRD is the broker_activemqartemisaddress_crd.yaml file that was included in the deploy/crds directory of the Operator installation archive that you downloaded and extracted. If you used OperatorHub to install the Operator, the address CRD is the ActiveMQArtemisAddress CRD listed under Administration Custom Resource Definitions in the OpenShift Container Platform web console. To configure address and queue settings that you then match to specific addresses, you include configuration in the main Custom Resource (CR) instance used to create your broker deployment . If you used the OpenShift CLI to install the Operator, the main broker CRD is the broker_activemqartemis_crd.yaml file that was included in the deploy/crds directory of the Operator installation archive that you downloaded and extracted. If you used OperatorHub to install the Operator, the main broker CRD is the ActiveMQArtemis CRD listed under Administration Custom Resource Definitions in the OpenShift Container Platform web console. In general, the address and queue settings that you can configure for a broker deployment on OpenShift Container Platform are fully equivalent to those of standalone broker deployments on Linux or Windows. However, you should be aware of some differences in how those settings are configured. Those differences are described in the following sub-section. 4.2.1. Differences in configuration of address and queue settings between OpenShift and standalone broker deployments To configure address and queue settings for broker deployments on OpenShift Container Platform, you add configuration to an addressSettings section of the main Custom Resource (CR) instance for the broker deployment. This contrasts with standalone deployments on Linux or Windows, for which you add configuration to an address-settings element in the broker.xml configuration file. The format used for the names of configuration items differs between OpenShift Container Platform and standalone broker deployments. For OpenShift Container Platform deployments, configuration item names are in camel case , for example, defaultQueueRoutingType . By contrast, configuration item names for standalone deployments are in lower case and use a dash ( - ) separator, for example, default-queue-routing-type . The following table shows some further examples of this naming difference. Configuration item for standalone broker deployment Configuration item for OpenShift broker deployment address-full-policy addressFullPolicy auto-create-queues autoCreateQueues default-queue-routing-type defaultQueueRoutingType last-value-queue lastValueQueue Additional resources For examples of creating addresses and queues and matching settings for OpenShift Container Platform broker deployments, see: Creating addresses and queues for a broker deployment on OpenShift Container Platform Matching address settings to configured addresses for a broker deployment on OpenShift Container Platform To learn about all of the configuration options for addresses, queues, and address settings for OpenShift Container Platform broker deployments, see Section 8.1, "Custom Resource configuration reference" . For comprehensive information about configuring addresses, queues, and associated address settings for standalone broker deployments, see Configuring addresses and queues in Configuring AMQ Broker . You can use this information to create equivalent configurations for broker deployments on OpenShift Container Platform. 
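To make the naming difference concrete, the following sketch shows the same dead letter configuration expressed both ways; the address name and values are illustrative. In broker.xml for a standalone deployment:

<address-setting match="myAddress">
  <dead-letter-address>myDeadLetterAddress</dead-letter-address>
  <max-delivery-attempts>5</max-delivery-attempts>
</address-setting>

And the equivalent camel case form in the main broker CR for an OpenShift Container Platform deployment:

addressSettings:
  addressSetting:
    - match: myAddress
      deadLetterAddress: myDeadLetterAddress
      maxDeliveryAttempts: 5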
4.2.2. Creating addresses and queues for an Operator-based broker deployment The following procedure shows how to use a Custom Resource (CR) instance to add an address and associated queue to an Operator-based broker deployment. Note To create multiple addresses and/or queues in your broker deployment, you need to create separate CR files and deploy them individually, specifying new address and/or queue names in each case. In addition, the name attribute of each CR instance must be unique. Prerequisites You must have already installed the AMQ Broker Operator, including the dedicated Custom Resource Definition (CRD) required to create addresses and queues on your brokers. For information on two alternative ways to install the Operator, see: Section 3.2, "Installing the Operator using the CLI" . Section 3.3, "Installing the Operator using OperatorHub" . You should be familiar with how to use a CR instance to create a basic broker deployment. For more information, see Section 3.4.1, "Deploying a basic broker instance" . Procedure Start configuring a Custom Resource (CR) instance to define addresses and queues for the broker deployment. Using the OpenShift command-line interface: Log in to OpenShift as a user that has privileges to deploy CRs in the project for the broker deployment. Open the sample CR file called broker_activemqartemisaddress_cr.yaml that was included in the deploy/crs directory of the Operator installation archive that you downloaded and extracted. Using the OpenShift Container Platform web console: Log in to the console as a user that has privileges to deploy CRs in the project for the broker deployment. Start a new CR instance based on the address CRD. In the left pane, click Administration Custom Resource Definitions . Click the ActiveMQArtemisAddress CRD. Click the Instances tab. Click Create ActiveMQArtemisAddress . Within the console, a YAML editor opens, enabling you to configure a CR instance. In the spec section of the CR, add lines to define an address, queue, and routing type. For example: apiVersion: broker.amq.io/v1beta1 kind: ActiveMQArtemisAddress metadata: name: myAddressDeployment0 namespace: myProject spec: ... addressName: myAddress0 queueName: myQueue0 routingType: anycast ... The preceding configuration defines an address named myAddress0 with a queue named myQueue0 and an anycast routing type. Note In the metadata section, you need to include the namespace property and specify a value only if you are using the OpenShift Container Platform web console to create your CR instance. The value that you should specify is the name of the OpenShift project for your broker deployment. Deploy the CR instance. Using the OpenShift command-line interface: Save the CR file. Switch to the project for the broker deployment. Create the CR instance. Using the OpenShift web console: When you have finished configuring the CR, click Create . (Optional) To delete an address and queue previously added to your deployment using a CR instance, use the following command: $ oc delete -f <path/to/address_custom_resource_instance> .yaml 4.2.3. Matching address settings to configured addresses in an Operator-based broker deployment If delivery of a message to a client is unsuccessful, you might not want the broker to make ongoing attempts to deliver the message. To prevent infinite delivery attempts, you can define a dead letter address and an associated dead letter queue . 
After a specified number of delivery attempts, the broker removes an undelivered message from its original queue and sends the message to the configured dead letter address. A system administrator can later consume undelivered messages from a dead letter queue to inspect the messages. The following example shows how to configure a dead letter address and queue for an Operator-based broker deployment. The example demonstrates how to: Use the addressSettings section of the main broker Custom Resource (CR) instance to configure address settings. Match those address settings to addresses in your broker deployment. Prerequisites You should be familiar with how to use a CR instance to create a basic broker deployment. For more information, see Section 3.4.1, "Deploying a basic broker instance" . You should be familiar with the default address settings configuration that the Operator merges or replaces with the configuration specified in your CR instance. For more information, see Section 4.1.1, "How the Operator generates the address settings configuration" . Procedure Start configuring a CR instance to add a dead letter address and queue to receive undelivered messages for each broker in the deployment. Using the OpenShift command-line interface: Log in to OpenShift as a user that has privileges to deploy CRs in the project for the broker deployment. Open the sample CR file called broker_activemqartemisaddress_cr.yaml that was included in the deploy/crs directory of the Operator installation archive that you downloaded and extracted. Using the OpenShift Container Platform web console: Log in to the console as a user that has privileges to deploy CRs in the project for the broker deployment. Start a new CR instance based on the address CRD. In the left pane, click Administration Custom Resource Definitions . Click the ActiveMQArtemisAddress CRD. Click the Instances tab. Click Create ActiveMQArtemisAddress . Within the console, a YAML editor opens, enabling you to configure a CR instance. In the spec section of the CR, add lines to specify a dead letter address and queue to receive undelivered messages. For example: apiVersion: broker.amq.io/v1beta1 kind: ActiveMQArtemisAddress metadata: name: ex-aaoaddress spec: ... addressName: myDeadLetterAddress queueName: myDeadLetterQueue routingType: anycast ... The preceding configuration defines a dead letter address named myDeadLetterAddress with a dead letter queue named myDeadLetterQueue and an anycast routing type. Note In the metadata section, you need to include the namespace property and specify a value only if you are using the OpenShift Container Platform web console to create your CR instance. The value that you should specify is the name of the OpenShift project for your broker deployment. Deploy the address CR instance. Using the OpenShift command-line interface: Save the CR file. Switch to the project for the broker deployment. Create the address CR. Using the OpenShift web console: When you have finished configuring the CR, click Create . Start configuring a Custom Resource (CR) instance for a broker deployment. From a sample CR file: Open the sample CR file called broker_activemqartemis_cr.yaml that was included in the deploy/crs directory of the Operator installation archive that you downloaded and extracted. Using the OpenShift Container Platform web console: Start a new CR instance based on the main broker CRD. In the left pane, click Administration Custom Resource Definitions . Click the ActiveMQArtemis CRD. Click the Instances tab. 
Click Create ActiveMQArtemis . Within the console, a YAML editor opens, enabling you to configure a CR instance. For a basic broker deployment, a configuration might resemble that shown below. apiVersion: broker.amq.io/v1beta1 kind: ActiveMQArtemis metadata: name: ex-aao application: ex-aao-app spec: deploymentPlan: size: 1 image: placeholder requireLogin: false persistenceEnabled: true journalType: nio messageMigration: true Observe that in the broker_activemqartemis_cr.yaml sample CR file, the image property is set to a default value of placeholder . This value indicates that, by default, the image property does not specify a broker container image to use for the deployment. To learn how the Operator determines the appropriate broker container image to use, see Section 2.4, "How the Operator chooses container images" . Note In the metadata section, you need to include the namespace property and specify a value only if you are using the OpenShift Container Platform web console to create your CR instance. The value that you should specify is the name of the OpenShift project for your broker deployment. In the spec section of the CR, add a new addressSettings section that contains a single addressSetting section, as shown below. spec: deploymentPlan: size: 1 image: placeholder requireLogin: false persistenceEnabled: true journalType: nio messageMigration: true addressSettings: addressSetting: Add a single instance of the match property to the addressSetting block. Specify an address-matching expression. For example: spec: deploymentPlan: size: 1 image: placeholder requireLogin: false persistenceEnabled: true journalType: nio messageMigration: true addressSettings: addressSetting: - match: myAddress match Specifies the address, or set of addresses, to which the broker applies the configuration that follows. In this example, the value of the match property corresponds to a single address called myAddress . Add properties related to undelivered messages and specify values. For example: spec: deploymentPlan: size: 1 image: placeholder requireLogin: false persistenceEnabled: true journalType: nio messageMigration: true addressSettings: addressSetting: - match: myAddress deadLetterAddress: myDeadLetterAddress maxDeliveryAttempts: 5 deadLetterAddress Address to which the broker sends undelivered messages. maxDeliveryAttempts Maximum number of delivery attempts that a broker makes before moving a message to the configured dead letter address. In the preceding example, if the broker makes five unsuccessful attempts to deliver a message to the address myAddress , the broker moves the message to the specified dead letter address, myDeadLetterAddress . (Optional) Apply similar configuration to another address or set of addresses. For example: spec: deploymentPlan: size: 1 image: placeholder requireLogin: false persistenceEnabled: true journalType: nio messageMigration: true addressSettings: addressSetting: - match: myAddress deadLetterAddress: myDeadLetterAddress maxDeliveryAttempts: 5 - match: 'myOtherAddresses*' deadLetterAddress: myDeadLetterAddress maxDeliveryAttempts: 3 In this example, the value of the second match property includes an asterisk wildcard character. The wildcard character means that the preceding configuration is applied to any address that begins with the string myOtherAddresses . Note If you use a wildcard expression as a value for the match property, you must enclose the value in single quotation marks, for example, 'myOtherAddresses*' . 
At the beginning of the addressSettings section, add the applyRule property and specify a value. For example: spec: deploymentPlan: size: 1 image: placeholder requireLogin: false persistenceEnabled: true journalType: nio messageMigration: true addressSettings: applyRule: merge_all addressSetting: - match: myAddress deadLetterAddress: myDeadLetterAddress maxDeliveryAttempts: 5 - match: 'myOtherAddresses*' deadLetterAddress: myDeadLetterAddress maxDeliveryAttempts: 3 The applyRule property specifies how the Operator applies the configuration that you add to the CR for each matching address or set of addresses. The values that you can specify are: merge_all For address settings specified in both the CR and the default configuration that match the same address or set of addresses: Replace any property values specified in the default configuration with those specified in the CR. Keep any property values that are specified uniquely in the CR or the default configuration. Include each of these in the final, merged configuration. For address settings specified in either the CR or the default configuration that uniquely match a particular address or set of addresses, include these in the final, merged configuration. merge_replace For address settings specified in both the CR and the default configuration that match the same address or set of addresses, include the settings specified in the CR in the final, merged configuration. Do not include any properties specified in the default configuration, even if these are not specified in the CR. For address settings specified in either the CR or the default configuration that uniquely match a particular address or set of addresses, include these in the final, merged configuration. replace_all Replace all address settings specified in the default configuration with those specified in the CR. The final, merged configuration corresponds exactly to that specified in the CR. Note If you do not explicitly include the applyRule property in your CR, the Operator uses a default value of merge_all . Deploy the broker CR instance. Using the OpenShift command-line interface: Save the CR file. Create the CR instance. Using the OpenShift web console: When you have finished configuring the CR, click Create . Additional resources To learn about all of the configuration options for addresses, queues, and address settings for OpenShift Container Platform broker deployments, see Section 8.1, "Custom Resource configuration reference" . If you installed the AMQ Broker Operator using the OpenShift command-line interface (CLI), the installation archive that you downloaded and extracted contains some additional examples of configuring address settings. In the deploy/examples folder of the installation archive, see: artemis-basic-address-settings-deployment.yaml artemis-merge-replace-address-settings-deployment.yaml artemis-replace-address-settings-deployment.yaml For comprehensive information about configuring addresses, queues, and associated address settings for standalone broker deployments, see Configuring addresses and queues in Configuring AMQ Broker . You can use this information to create equivalent configurations for broker deployments on OpenShift Container Platform. For more information about Init Containers in OpenShift Container Platform, see Using Init Containers to perform tasks before a pod is deployed in the OpenShift Container Platform documentation. 4.3. Creating a security configuration for an Operator-based broker deployment 4.3.1. 
Creating a security configuration for an Operator-based broker deployment The following procedure shows how to use a Custom Resource (CR) instance to add users and associated security configuration to an Operator-based broker deployment. Prerequisites You must have already installed the AMQ Broker Operator. For information on two alternative ways to install the Operator, see: Section 3.2, "Installing the Operator using the CLI" . Section 3.3, "Installing the Operator using OperatorHub" . You should be familiar with broker security as described in Securing brokers . You should be familiar with how to use a CR instance to create a basic broker deployment. For more information, see Section 3.4.1, "Deploying a basic broker instance" . Procedure You can deploy the security CR before or after you create a broker deployment. However, if you deploy the security CR after creating the broker deployment, the broker pod is restarted to accept the new configuration. Start configuring a Custom Resource (CR) instance to define users and associated security configuration for the broker deployment. Using the OpenShift command-line interface: Log in to OpenShift as a user that has privileges to deploy CRs in the project for the broker deployment. Open the sample CR file called broker_activemqartemissecurity_cr.yaml that was included in the deploy/crs directory of the Operator installation archive that you downloaded and extracted. Using the OpenShift Container Platform web console: Log in to the console as a user that has privileges to deploy CRs in the project for the broker deployment. Start a new CR instance based on the security CRD. In the left pane, click Administration Custom Resource Definitions . Click the ActiveMQArtemisSecurity CRD. Click the Instances tab. Click Create ActiveMQArtemisSecurity . Within the console, a YAML editor opens, enabling you to configure a CR instance. In the spec section of the CR, add lines to define users and roles. For example: apiVersion: broker.amq.io/v1beta1 kind: ActiveMQArtemisSecurity metadata: name: ex-prop spec: loginModules: propertiesLoginModules: - name: "prop-module" users: - name: "sam" password: "samspassword" roles: - "sender" - name: "rob" password: "robspassword" roles: - "receiver" securityDomains: brokerDomain: name: "activemq" loginModules: - name: "prop-module" flag: "sufficient" securitySettings: broker: - match: "#" permissions: - operationType: "send" roles: - "sender" - operationType: "createAddress" roles: - "sender" - operationType: "createDurableQueue" roles: - "sender" - operationType: "consume" roles: - "receiver" ... Note Always specify values for the elements in the preceding example. For example, if you do not specify values for securityDomains.brokerDomain or values for roles, the resulting configuration might cause unexpected results. The preceding configuration defines two users: a propertiesLoginModule named prop-module that defines a user named sam with a role named sender . a propertiesLoginModule named prop-module that defines a user named rob with a role named receiver . The properties of these roles are defined in the brokerDomain and broker sections of the securityDomains section. For example, the sender role is defined to allow users with that role to create a durable queue on any address. By default, the configuration applies to all deployed brokers defined by CRs in the current namespace. 
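For example, a minimal sketch of scoping the configuration to a single deployment, assuming a broker CR named ex-aao, adds a list of CR names to the spec section of the security CR:

spec:
  applyToCrNames:
    - ex-aao   # assumed name of the broker CR whose deployment should receive these rules

If the list is omitted, the configuration continues to apply to all broker CRs in the namespace.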
To limit the configuration to particular broker deployments, use the applyToCrNames option described in Section 8.1.3, "Security Custom Resource configuration reference" . Note In the metadata section, you need to include the namespace property and specify a value only if you are using the OpenShift Container Platform web console to create your CR instance. The value that you should specify is the name of the OpenShift project for your broker deployment. Deploy the CR instance. Using the OpenShift command-line interface: Save the CR file. Switch to the project for the broker deployment. Create the CR instance. Using the OpenShift web console: When you have finished configuring the CR, click Create . Additional resources Section 8.1.3, "Security Custom Resource configuration reference" Section 3.4.1, "Deploying a basic broker instance" 4.3.2. Storing user passwords in a secret In the Creating a security configuration for an Operator-based broker deployment procedure, user passwords are stored in clear text in the ActiveMQArtemisSecurity CR. If you do not want to store passwords in clear text in the CR, you can exclude the passwords from the CR and store them in a secret. When you apply the CR, the Operator retrieves each user's password from the secret and inserts it in the artemis-users.properties file on the broker pod. Procedure Use the oc create secret command to create a secret and add each user's name and password. The secret name must follow a naming convention of security-properties-<module-name> , where <module-name> is the name of the login module configured in the CR. For example: In the spec section of the CR, add the user names that you specified in the secret along with the role information, but do not include each user's password. For example: apiVersion: broker.amq.io/v1beta1 kind: ActiveMQArtemisSecurity metadata: name: ex-prop spec: loginModules: propertiesLoginModules: - name: "prop-module" users: - name: "sam" roles: - "sender" - name: "rob" roles: - "receiver" securityDomains: brokerDomain: name: "activemq" loginModules: - name: "prop-module" flag: "sufficient" securitySettings: broker: - match: "#" permissions: - operationType: "send" roles: - "sender" - operationType: "createAddress" roles: - "sender" - operationType: "createDurableQueue" roles: - "sender" - operationType: "consume" roles: - "receiver" ... Deploy the CR instance. Using the OpenShift command-line interface: Save the CR file. Switch to the project for the broker deployment. Create the CR instance. Using the OpenShift web console: When you finish configuring the CR, click Create . Additional resources For more information about secrets in OpenShift Container Platform, see Providing sensitive data to pods in the OpenShift Container Platform documentation. 4.4. Configuring broker storage requirements To use persistent storage in an Operator-based broker deployment, you set persistenceEnabled to true in the Custom Resource (CR) instance used to create the deployment. If you do not have container-native storage in your OpenShift cluster, you need to manually provision Persistent Volumes (PVs) and ensure that these are available to be claimed by the Operator using a Persistent Volume Claim (PVC). If you want to create a cluster of two brokers with persistent storage, for example, then you need to have two PVs available. Important When you manually provision PVs in OpenShift Container Platform, ensure that you set the reclaim policy for each PV to Retain . 
If the reclaim policy for a PV is not set to Retain and the PVC that the Operator used to claim the PV is deleted, the PV is also deleted. Deleting a PV results in the loss of any data on the volume. For more information about setting the reclaim policy, see Understanding persistent storage in the OpenShift Container Platform documentation. By default, a PVC obtains 2 GiB of storage for each broker from the default storage class configured for the cluster. You can override the default size and storage class requested in the PVC, but only by configuring new values in the CR before deploying the CR for the first time. 4.4.1. Configuring broker storage size and storage class The following procedure shows how to configure the Custom Resource (CR) instance for your broker deployment to specify the size and storage class of the Persistent Volume Claim (PVC) required by each broker for persistent message storage. Important You must add the configuration for broker storage size and storage class to the main CR for your broker deployment before deploying the CR for the first time. You cannot add the configuration to a broker deployment that is already running. Prerequisites You should be familiar with how to use a CR instance to create a basic broker deployment. See Section 3.4.1, "Deploying a basic broker instance" . You must have already provisioned Persistent Volumes (PVs) and made these available to be claimed by the Operator. For example, if you want to create a cluster of two brokers with persistent storage, you need to have two PVs available. For more information about provisioning persistent storage, see Understanding persistent storage in the OpenShift Container Platform documentation. Procedure Start configuring a Custom Resource (CR) instance for the broker deployment. Using the OpenShift command-line interface: Log in to OpenShift as a user that has privileges to deploy CRs in the project in which you are creating the deployment. Open the sample CR file called broker_activemqartemis_cr.yaml that was included in the deploy/crs directory of the Operator installation archive that you downloaded and extracted. Using the OpenShift Container Platform web console: Log in to the console as a user that has privileges to deploy CRs in the project in which you are creating the deployment. Start a new CR instance based on the main broker CRD. In the left pane, click Administration Custom Resource Definitions . Click the ActiveMQArtemis CRD. Click the Instances tab. Click Create ActiveMQArtemis . Within the console, a YAML editor opens, enabling you to configure a CR instance. For a basic broker deployment, a configuration might resemble that shown below. apiVersion: broker.amq.io/v1beta1 kind: ActiveMQArtemis metadata: name: ex-aao application: ex-aao-app spec: deploymentPlan: size: 1 image: placeholder requireLogin: false persistenceEnabled: true journalType: nio messageMigration: true Observe that in the broker_activemqartemis_cr.yaml sample CR file, the image property is set to a default value of placeholder . This value indicates that, by default, the image property does not specify a broker container image to use for the deployment. To learn how the Operator determines the appropriate broker container image to use, see Section 2.4, "How the Operator chooses container images" . To specify the broker storage size, in the deploymentPlan section of the CR, add a storage section. Add a size property and specify a value. 
For example: spec: deploymentPlan: size: 1 image: placeholder requireLogin: false persistenceEnabled: true journalType: nio messageMigration: true storage: size: 4Gi storage.size Size, in bytes, of the Persistent Volume Claim (PVC) that each broker Pod requires for persistent storage. This property applies only when persistenceEnabled is set to true . The value that you specify must include a unit using byte notation (for example, K, M, G), or the binary equivalents (Ki, Mi, Gi). To specify the storage class that each broker Pod requires for persistent storage, in the storage section, add a storageClassName property and specify a value. For example: spec: deploymentPlan: size: 1 image: placeholder requireLogin: false persistenceEnabled: true journalType: nio messageMigration: true storage: size: 4Gi storageClassName: gp3 storage.storageClassName The name of the storage class to request in the Persistent Volume Claim (PVC). Storage classes provide a way for administrators to describe and classify the available storage. For example, different storage classes might map to specific quality-of-service levels, backup policies and so on. If you do not specify a storage class, a persistent volume with the default storage class configured for the cluster is claimed by the PVC. Note If you specify a storage class, a persistent volume is claimed by the PVC only if the volume's storage class matches the specified storage class. Deploy the CR instance. Using the OpenShift command-line interface: Save the CR file. Switch to the project in which you are creating the broker deployment. Create the CR instance. Using the OpenShift web console: When you have finished configuring the CR, click Create . 4.5. Configuring resource limits and requests for Operator-based broker deployments When you create an Operator-based broker deployment, the broker Pods in the deployment run in a StatefulSet on a node in your OpenShift cluster. You can configure the Custom Resource (CR) instance for the deployment to specify the host-node compute resources used by the broker container that runs in each Pod. By specifying limit and request values for CPU and memory (RAM), you can ensure satisfactory performance of the broker Pods. Important You must add configuration for limits and requests to the CR instance for your broker deployment before deploying the CR for the first time. You cannot add the configuration to a broker deployment that is already running. It is not possible for Red Hat to recommend values for limits and requests because these are based on your specific messaging system use-cases and the resulting architecture that you have implemented. However, it is recommended that you test and tune these values in a development environment before configuring them for your production environment. The Operator runs a type of container called an Init Container when initializing each broker Pod. Any resource limits and requests that you configure for each broker container also apply to each Init Container. For more information about the use of Init Containers in broker deployments, see Section 4.1, "How the Operator generates the broker configuration" . You can specify the following limit and request values: CPU limit For each broker container running in a Pod, this value is the maximum amount of host-node CPU that the container can consume. If a broker container attempts to exceed the specified CPU limit, OpenShift throttles the container. 
This ensures that containers have consistent performance, regardless of the number of Pods running on a node. Memory limit For each broker container running in a Pod, this value is the maximum amount of host-node memory that the container can consume. If a broker container attempts to exceed the specified memory limit, OpenShift terminates the container. The broker Pod restarts. CPU request For each broker container running in a Pod, this value is the amount of host-node CPU that the container requests. The OpenShift scheduler considers the CPU request value during Pod placement, to bind the broker Pod to a node with sufficient compute resources. The CPU request value is the minimum amount of CPU that the broker container requires to run. However, if there is no contention for CPU on the node, the container can use all available CPU. If you have specified a CPU limit, the container cannot exceed that amount of CPU usage. If there is CPU contention on the node, CPU request values provide a way for OpenShift to weigh CPU usage across all containers. Memory request For each broker container running in a Pod, this value is the amount of host-node memory that the container requests. The OpenShift scheduler considers the memory request value during Pod placement, to bind the broker Pod to a node with sufficient compute resources. The memory request value is the minimum amount of memory that the broker container requires to run. However, the container can consume as much available memory as possible. If you have specified a memory limit, the broker container cannot exceed that amount of memory usage. CPU is measured in units called millicores. Each node in an OpenShift cluster inspects the operating system to determine the number of CPU cores on the node. Then, the node multiplies that value by 1000 to express the total capacity. For example, if a node has two cores, the CPU capacity of the node is expressed as 2000m . Therefore, if you want to use one-tenth of a single core, you specify a value of 100m . Memory is measured in bytes. You can specify the value using byte notation (E, P, T, G, M, K) or the binary equivalents (Ei, Pi, Ti, Gi, Mi, Ki). The value that you specify must include a unit. 4.5.1. Configuring broker resource limits and requests The following example shows how to configure the main Custom Resource (CR) instance for your broker deployment to set limits and requests for CPU and memory for each broker container that runs in a Pod in the deployment. Important You must add configuration for limits and requests to the CR instance for your broker deployment before deploying the CR for the first time. You cannot add the configuration to a broker deployment that is already running. It is not possible for Red Hat to recommend values for limits and requests because these are based on your specific messaging system use-cases and the resulting architecture that you have implemented. However, it is recommended that you test and tune these values in a development environment before configuring them for your production environment. Prerequisites You should be familiar with how to use a CR instance to create a basic broker deployment. See Section 3.4.1, "Deploying a basic broker instance" . Procedure Start configuring a Custom Resource (CR) instance for the broker deployment. Using the OpenShift command-line interface: Log in to OpenShift as a user that has privileges to deploy CRs in the project in which you are creating the deployment. 
Open the sample CR file called broker_activemqartemis_cr.yaml that was included in the deploy/crs directory of the Operator installation archive that you downloaded and extracted. Using the OpenShift Container Platform web console: Log in to the console as a user that has privileges to deploy CRs in the project in which you are creating the deployment. Start a new CR instance based on the main broker CRD. In the left pane, click Administration Custom Resource Definitions . Click the ActiveMQArtemis CRD. Click the Instances tab. Click Create ActiveMQArtemis . Within the console, a YAML editor opens, enabling you to configure a CR instance. For a basic broker deployment, a configuration might resemble that shown below. apiVersion: broker.amq.io/v1beta1 kind: ActiveMQArtemis metadata: name: ex-aao application: ex-aao-app spec: deploymentPlan: size: 1 image: placeholder requireLogin: false persistenceEnabled: true journalType: nio messageMigration: true Observe that in the broker_activemqartemis_cr.yaml sample CR file, the image property is set to a default value of placeholder . This value indicates that, by default, the image property does not specify a broker container image to use for the deployment. To learn how the Operator determines the appropriate broker container image to use, see Section 2.4, "How the Operator chooses container images" . In the deploymentPlan section of the CR, add a resources section. Add limits and requests sub-sections. In each sub-section, add a cpu and memory property and specify values. For example: spec: deploymentPlan: size: 1 image: placeholder requireLogin: false persistenceEnabled: true journalType: nio messageMigration: true resources: limits: cpu: "500m" memory: "1024M" requests: cpu: "250m" memory: "512M" limits.cpu Each broker container running in a Pod in the deployment cannot exceed this amount of host-node CPU usage. limits.memory Each broker container running in a Pod in the deployment cannot exceed this amount of host-node memory usage. requests.cpu Each broker container running in a Pod in the deployment requests this amount of host-node CPU. This value is the minimum amount of CPU required for the broker container to run. requests.memory Each broker container running in a Pod in the deployment requests this amount of host-node memory. This value is the minimum amount of memory required for the broker container to run. Deploy the CR instance. Using the OpenShift command-line interface: Save the CR file. Switch to the project in which you are creating the broker deployment. Create the CR instance. Using the OpenShift web console: When you have finished configuring the CR, click Create . 4.6. Overriding the default memory limit for a broker You can override the default memory limit that is set for a broker. By default, a broker is assigned half of the maximum memory that is available to the broker's Java Virtual Machine. The following procedure shows how to configure the Custom Resource (CR) instance for your broker deployment to override the default memory limit. Prerequisites You should be familiar with how to use a CR instance to create a basic broker deployment. See Section 3.4.1, "Deploying a basic broker instance" . Procedure Start configuring a Custom Resource (CR) instance to create a basic broker deployment. Using the OpenShift command-line interface: Log in to OpenShift as a user that has privileges to deploy CRs in the project for the broker deployment. 
Open the sample CR file called broker_activemqartemis_cr.yaml that was included in the deploy/crs directory of the Operator installation archive that you downloaded and extracted. Using the OpenShift Container Platform web console: Log in to the console as a user that has privileges to deploy CRs in the project for the broker deployment. Start a new CR instance based on the main broker CRD. In the left pane, click Administration Custom Resource Definitions . Click the ActiveMQArtemis CRD. Click the Instances tab. Click Create ActiveMQArtemis . Within the console, a YAML editor opens, enabling you to configure a CR instance. For example, the CR for a basic broker deployment might resemble the following: apiVersion: broker.amq.io/v1beta1 kind: ActiveMQArtemis metadata: name: ex-aao application: ex-aao-app spec: deploymentPlan: size: 1 image: placeholder requireLogin: false persistenceEnabled: true journalType: nio messageMigration: true In the spec section of the CR, add a brokerProperties section. Within the brokerProperties section, add a globalMaxSize property and specify a memory limit. For example: spec: ... brokerProperties: - globalMaxSize=500m ... The default unit for the globalMaxSize property is bytes. To change the default unit, add a suffix of m (for MB) or g (for GB) to the value. Apply the changes to the CR. Using the OpenShift command-line interface: Save the CR file. Switch to the project for the broker deployment. Apply the CR. Using the OpenShift web console: When you finish editing the CR, click Save . (Optional) Verify that the new value you set for the globalMaxSize property overrides the default memory limit assigned to the broker. Connect to the AMQ Management Console. For more information, see Chapter 5, Connecting to AMQ Management Console for an Operator-based broker deployment . From the menu, select JMX . Select org.apache.activemq.artemis . Search for global . In the table that is displayed, confirm that the value in the Global max column is the same as the value that you configured for the globalMaxSize property. 4.7. Specifying a custom Init Container image As described in Section 4.1, "How the Operator generates the broker configuration" , the AMQ Broker Operator uses a default, built-in Init Container to generate the broker configuration. To generate the configuration, the Init Container uses the main Custom Resource (CR) instance for your deployment. The only items that you can specify in the CR are those that are exposed in the main broker Custom Resource Definition (CRD). However, there might be a case where you need to include configuration that is not exposed in the CRD. In this case, in your main CR instance, you can specify a custom Init Container. The custom Init Container can modify or add to the configuration that has already been created by the Operator. For example, you might use a custom Init Container to modify the broker logging settings. Or, you might use a custom Init Container to include extra runtime dependencies (that is, .jar files) in the broker installation directory. When you build a custom Init Container image, you must follow these important guidelines: In the build script (for example, a Docker Dockerfile or Podman Containerfile) that you create for the custom image, the FROM instruction must specify the latest version of the AMQ Broker Operator built-in Init Container as the base image. 
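For illustration only, a Containerfile that follows these guidelines might resemble the following sketch. The base image path and tag are placeholders, not definitive values; check the Red Hat Ecosystem Catalog for the current AMQ Broker Init Container image before building:

# Placeholder base image; must be the AMQ Broker Operator built-in Init Container
FROM registry.redhat.io/amq7/amq-broker-init-rhel8:latest

# The mandatory post-config.sh script, copied to the expected location
COPY post-config.sh /amq/scripts/post-config.sh

# Optional extra resources (for example, .jar files) that post-config.sh
# can copy into ${CONFIG_INSTANCE_DIR}/lib when the Init Container runs
COPY extra-libs/ /amq/extra-libs/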
As shown in the preceding sketch, the FROM line in your build script must reference this base image. The custom image must include a script called post-config.sh that you include in a directory called /amq/scripts . The post-config.sh script is where you can modify or add to the initial configuration that the Operator generates. When you specify a custom Init Container, the Operator runs the post-config.sh script after it uses your CR instance to generate a configuration, but before it starts the broker application container. As described in Section 4.1.2, "Directory structure of a broker Pod" , the path to the installation directory used by the Init Container is defined in an environment variable called CONFIG_INSTANCE_DIR . The post-config.sh script should use this environment variable name when referencing the installation directory (for example, ${CONFIG_INSTANCE_DIR}/lib ) and not the actual value of this variable (for example, /amq/init/config/lib ). If you want to include additional resources (for example, .xml or .jar files) in your custom broker configuration, you must ensure that these are included in the custom image and accessible to the post-config.sh script. The following procedure describes how to specify a custom Init Container image. Prerequisites You must have built a custom Init Container image that meets the guidelines described above. For a complete example of building and specifying a custom Init Container image for the ArtemisCloud Operator, see custom Init Container image for JDBC-based persistence . To provide a custom Init Container image for the AMQ Broker Operator, you need to be able to add the image to a repository in a container registry such as the Quay container registry . You should understand how the Operator uses an Init Container to generate the broker configuration. For more information, see Section 4.1, "How the Operator generates the broker configuration" . You should be familiar with how to use a CR to create a broker deployment. For more information, see Section 3.4, "Creating Operator-based broker deployments" . Procedure Start configuring a Custom Resource (CR) instance for the broker deployment. Using the OpenShift command-line interface: Log in to OpenShift as a user that has privileges to deploy CRs in the project in which you are creating the deployment. Open the sample CR file called broker_activemqartemis_cr.yaml that was included in the deploy/crs directory of the Operator installation archive that you downloaded and extracted. Using the OpenShift Container Platform web console: Log in to the console as a user that has privileges to deploy CRs in the project in which you are creating the deployment. Start a new CR instance based on the main broker CRD. In the left pane, click Administration Custom Resource Definitions . Click the ActiveMQArtemis CRD. Click the Instances tab. Click Create ActiveMQArtemis . Within the console, a YAML editor opens, enabling you to configure a CR instance. For a basic broker deployment, a configuration might resemble that shown below. apiVersion: broker.amq.io/v1beta1 kind: ActiveMQArtemis metadata: name: ex-aao application: ex-aao-app spec: deploymentPlan: size: 1 image: placeholder requireLogin: false persistenceEnabled: true journalType: nio messageMigration: true Observe that in the broker_activemqartemis_cr.yaml sample CR file, the image property is set to a default value of placeholder . This value indicates that, by default, the image property does not specify a broker container image to use for the deployment.
To learn how the Operator determines the appropriate broker container image to use, see Section 2.4, "How the Operator chooses container images" . In the deploymentPlan section of the CR, add the initImage property. apiVersion: broker.amq.io/v1beta1 kind: ActiveMQArtemis metadata: name: ex-aao application: ex-aao-app spec: deploymentPlan: size: 1 image: placeholder initImage: requireLogin: false persistenceEnabled: true journalType: nio messageMigration: true Set the value of the initImage property to the URL of your custom Init Container image. apiVersion: broker.amq.io/v1beta1 kind: ActiveMQArtemis metadata: name: ex-aao application: ex-aao-app spec: deploymentPlan: size: 1 image: placeholder initImage: <custom_init_container_image_url> requireLogin: false persistenceEnabled: true journalType: nio messageMigration: true initImage Specifies the full URL for your custom Init Container image, which you must have added to a repository in a container registry. Deploy the CR instance. Using the OpenShift command-line interface: Save the CR file. Switch to the project in which you are creating the broker deployment. Create the CR instance. Using the OpenShift web console: When you have finished configuring the CR, click Create . Additional resources For a complete example of building and specifying a custom Init Container image for the ArtemisCloud Operator, see custom Init Container image for JDBC-based persistence . 4.8. Configuring Operator-based broker deployments for client connections 4.8.1. Configuring acceptors To enable client connections to broker Pods in your OpenShift deployment, you define acceptors for your deployment. Acceptors define how a broker Pod accepts connections. You define acceptors in the main Custom Resource (CR) used for your broker deployment. When you create an acceptor, you specify information such as the messaging protocols to enable on the acceptor, and the port on the broker Pod to use for these protocols. The following procedure shows how to define a new acceptor in the CR for your broker deployment. Procedure In the deploy/crs directory of the Operator archive that you downloaded and extracted during your initial installation, open the broker_activemqartemis_cr.yaml Custom Resource (CR) file. In the acceptors element, add a named acceptor. Add the protocols and port parameters. Set values to specify the messaging protocols to be used by the acceptor and the port on each broker Pod to expose for those protocols. For example: spec: ... acceptors: - name: my-acceptor protocols: amqp port: 5672 ... The configured acceptor exposes port 5672 to AMQP clients. The full set of values that you can specify for the protocols parameter is: core (Core Protocol), amqp (AMQP), openwire (OpenWire), mqtt (MQTT), stomp (STOMP), and all (all supported protocols). Note For each broker Pod in your deployment, the Operator also creates a default acceptor that uses port 61616. This default acceptor is required for broker clustering and has Core Protocol enabled. By default, the AMQ Broker management console uses port 8161 on the broker Pod. Each broker Pod in your deployment has a dedicated Service that provides access to the console. For more information, see Chapter 5, Connecting to AMQ Management Console for an Operator-based broker deployment . To use another protocol on the same acceptor, modify the protocols parameter. Specify a comma-separated list of protocols. For example: spec: ... acceptors: - name: my-acceptor protocols: amqp,openwire port: 5672 ...
The configured acceptor now exposes port 5672 to AMQP and OpenWire clients. To specify the number of concurrent client connections that the acceptor allows, add the connectionsAllowed parameter and set a value. For example: spec: ... acceptors: - name: my-acceptor protocols: amqp,openwire port: 5672 connectionsAllowed: 5 ... By default, an acceptor is exposed only to clients in the same OpenShift cluster as the broker deployment. To also expose the acceptor to clients outside OpenShift, add the expose parameter and set the value to true . spec: ... acceptors: - name: my-acceptor protocols: amqp,openwire port: 5672 connectionsAllowed: 5 expose: true ... ... When you expose an acceptor to clients outside OpenShift, the Operator automatically creates a dedicated Service and Route for each broker Pod in the deployment. To enable secure connections to the acceptor from clients outside OpenShift, add the sslEnabled parameter and set the value to true . spec: ... acceptors: - name: my-acceptor protocols: amqp,openwire port: 5672 connectionsAllowed: 5 expose: true sslEnabled: true ... ... When you enable SSL (that is, Secure Sockets Layer) security on an acceptor (or connector), you can add related configuration, such as: The secret name used to store authentication credentials in your OpenShift cluster. A secret is required when you enable SSL on the acceptor. For more information on generating this secret, see Section 4.8.2, "Securing broker-client connections" . The Transport Layer Security (TLS) protocols to use for secure network communication. TLS is an updated, more secure version of SSL. You specify the TLS protocols in the enabledProtocols parameter. Whether the acceptor uses two-way TLS, also known as mutual authentication , between the broker and the client. You specify this by setting the value of the needClientAuth parameter to true . Additional resources To learn how to configure TLS to secure broker-client connections, including generating a secret to store authentication credentials, see Section 4.8.2, "Securing broker-client connections" . For a complete Custom Resource configuration reference, including configuration of acceptors and connectors, see Section 8.1, "Custom Resource configuration reference" . 4.8.2. Securing broker-client connections If you have enabled security on your acceptor or connector (that is, by setting sslEnabled to true ), you must configure Transport Layer Security (TLS) to allow certificate-based authentication between the broker and clients. TLS is an updated, more secure version of SSL. There are two primary TLS configurations: One-way TLS Only the broker presents a certificate. The certificate is used by the client to authenticate the broker. This is the most common configuration. Two-way TLS Both the broker and the client present certificates. This is sometimes called mutual authentication . The sections that follow describe: Configuration requirements for the broker certificate used by one-way and two-way TLS How to configure one-way TLS How to configure two-way TLS For both one-way and two-way TLS, you complete the configuration by generating a secret that stores the credentials required for a successful TLS handshake between the broker and the client. This is the secret name that you must specify in the sslSecret parameter of your secured acceptor or connector. 
The secret must contain a Base64-encoded broker key store (both one-way and two-way TLS), a Base64-encoded broker trust store (two-way TLS only), and the corresponding passwords for these files, also Base64-encoded. The one-way and two-way TLS configuration procedures show how to generate this secret. Note If you do not explicitly specify a secret name in the sslSecret parameter of a secured acceptor or connector, the acceptor or connector assumes a default secret name. The default secret name uses the format <custom_resource_name> - <acceptor_name> -secret or <custom_resource_name> - <connector_name> -secret . For example, my-broker-deployment-my-acceptor-secret . Even if the acceptor or connector assumes a default secret name, you must still generate this secret yourself. It is not automatically created. 4.8.2.1. Configuring a broker certificate for host name verification Note This section describes some requirements for the broker certificate that you must generate when configuring one-way or two-way TLS. When a client tries to connect to a broker Pod in your deployment, the verifyHost option in the client connection URL determines whether the client compares the Common Name (CN) of the broker's certificate to its host name, to verify that they match. The client performs this verification if you specify verifyHost=true or similar in the client connection URL. You might omit this verification in rare cases where you have no concerns about the security of the connection, for example, if the brokers are deployed on an OpenShift cluster in an isolated network. Otherwise, for a secure connection, it is advisable for a client to perform this verification. In this case, correct configuration of the broker key store certificate is essential to ensure successful client connections. In general, when a client is using host verification, the CN that you specify when generating the broker certificate must match the full host name for the Route on the broker Pod that the client is connecting to. For example, if you have a deployment with a single broker Pod, the CN might look like the following: To ensure that the CN can resolve to any broker Pod in a deployment with multiple brokers, you can specify an asterisk ( * ) wildcard character in place of the ordinal of the broker Pod. For example: The CN shown in the preceding example successfully resolves to any broker Pod in the my-broker-deployment deployment. In addition, the Subject Alternative Name (SAN) that you specify when generating the broker certificate must individually list all broker Pods in the deployment, as a comma-separated list. For example: 4.8.2.2. Configuring one-way TLS The procedure in this section shows how to configure one-way Transport Layer Security (TLS) to secure a broker-client connection. In one-way TLS, only the broker presents a certificate. This certificate is used by the client to authenticate the broker. Prerequisites You should understand the requirements for broker certificate generation when clients use host name verification. For more information, see Section 4.8.2.1, "Configuring a broker certificate for host name verification" . Procedure Generate a self-signed certificate for the broker key store. $ keytool -genkey -alias broker -keyalg RSA -keystore ~/broker.ks Export the certificate from the broker key store, so that it can be shared with clients. Export the certificate in the Base64-encoded .pem format.
For example: $ keytool -export -alias broker -keystore ~/broker.ks -file ~/broker_cert.pem On the client, create a client trust store that imports the broker certificate. $ keytool -import -alias broker -keystore ~/client.ts -file ~/broker_cert.pem Log in to OpenShift Container Platform as an administrator. For example: $ oc login -u system:admin Switch to the project that contains your broker deployment. For example: $ oc project <my_openshift_project> Create a secret to store the TLS credentials. For example: $ oc create secret generic my-tls-secret \ --from-file=broker.ks=~/broker.ks \ --from-file=client.ts=~/broker.ks \ --from-literal=keyStorePassword= <password> \ --from-literal=trustStorePassword= <password> Note When generating a secret, OpenShift requires you to specify both a key store and a trust store. The trust store key is generically named client.ts . For one-way TLS between the broker and a client, a trust store is not actually required. However, to successfully generate the secret, you need to specify some valid store file as a value for client.ts . The preceding step provides a "dummy" value for client.ts by reusing the previously-generated broker key store file. This is sufficient to generate a secret with all of the credentials required for one-way TLS. Link the secret to the service account that you created when installing the Operator. For example: $ oc secrets link sa/amq-broker-operator secret/my-tls-secret Specify the secret name in the sslSecret parameter of your secured acceptor or connector. For example: spec: ... acceptors: - name: my-acceptor protocols: amqp,openwire port: 5672 sslEnabled: true sslSecret: my-tls-secret expose: true connectionsAllowed: 5 ... 4.8.2.3. Configuring two-way TLS The procedure in this section shows how to configure two-way Transport Layer Security (TLS) to secure a broker-client connection. In two-way TLS, both the broker and client present certificates. The broker and client use these certificates to authenticate each other in a process sometimes called mutual authentication . Prerequisites You should understand the requirements for broker certificate generation when clients use host name verification. For more information, see Section 4.8.2.1, "Configuring a broker certificate for host name verification" . Procedure Generate a self-signed certificate for the broker key store. $ keytool -genkey -alias broker -keyalg RSA -keystore ~/broker.ks Export the certificate from the broker key store, so that it can be shared with clients. Export the certificate in the Base64-encoded .pem format. For example: $ keytool -export -alias broker -keystore ~/broker.ks -file ~/broker_cert.pem On the client, create a client trust store that imports the broker certificate. $ keytool -import -alias broker -keystore ~/client.ts -file ~/broker_cert.pem On the client, generate a self-signed certificate for the client key store. $ keytool -genkey -alias broker -keyalg RSA -keystore ~/client.ks On the client, export the certificate from the client key store, so that it can be shared with the broker. Export the certificate in the Base64-encoded .pem format. For example: $ keytool -export -alias broker -keystore ~/client.ks -file ~/client_cert.pem Create a broker trust store that imports the client certificate. $ keytool -import -alias broker -keystore ~/broker.ts -file ~/client_cert.pem Log in to OpenShift Container Platform as an administrator.
For example: $ oc login -u system:admin Switch to the project that contains your broker deployment. For example: $ oc project <my_openshift_project> Create a secret to store the TLS credentials. For example: $ oc create secret generic my-tls-secret \ --from-file=broker.ks=~/broker.ks \ --from-file=client.ts=~/broker.ts \ --from-literal=keyStorePassword= <password> \ --from-literal=trustStorePassword= <password> Note When generating a secret, OpenShift requires you to specify both a key store and a trust store. The trust store key is generically named client.ts . For two-way TLS between the broker and a client, you must generate a secret that includes the broker trust store, because this holds the client certificate. Therefore, in the preceding step, the value that you specify for the client.ts key is actually the broker trust store file. Link the secret to the service account that you created when installing the Operator. For example: $ oc secrets link sa/amq-broker-operator secret/my-tls-secret Specify the secret name in the sslSecret parameter of your secured acceptor or connector. For example: spec: ... acceptors: - name: my-acceptor protocols: amqp,openwire port: 5672 sslEnabled: true sslSecret: my-tls-secret expose: true connectionsAllowed: 5 ... 4.8.3. Networking services in your broker deployments On the Networking pane of the OpenShift Container Platform web console for your broker deployment, there are two running services: a headless service and a ping service. The default name of the headless service uses the format <custom_resource_name> -hdls-svc , for example, my-broker-deployment-hdls-svc . The default name of the ping service uses a format of <custom_resource_name> -ping-svc , for example, my-broker-deployment-ping-svc . The headless service provides access to port 61616, which is used for internal broker clustering. The ping service is used by the brokers for discovery, and enables brokers to form a cluster within the OpenShift environment. Internally, this service exposes port 8888. 4.8.4. Connecting to the broker from internal and external clients The examples in this section show how to connect to the broker from internal clients (that is, clients in the same OpenShift cluster as the broker deployment) and external clients (that is, clients outside the OpenShift cluster). 4.8.4.1. Connecting to the broker from internal clients To connect an internal client to a broker, in the client connection details, specify the DNS-resolvable name of the broker pod. For example: If the internal client is using the Core protocol and the useTopologyForLoadBalancing=false key was not set in the connection URL, after the client connects to the broker for the first time, the broker can inform the client of the addresses of all the brokers in the cluster. The client can then load balance connections across all brokers. If your brokers have durable subscription queues or request/reply queues, be aware of the caveats associated with using these queues when client connections are load balanced. For more information, see Section 4.8.4.4, "Caveats to load balancing client connections when you have durable subscription queues or reply/request queues" . 4.8.4.2. Connecting to the broker from external clients When you expose an acceptor to external clients (that is, by setting the value of the expose parameter to true ), the Operator automatically creates a dedicated service and route for each broker pod in the deployment.
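For example, you can list these routes to discover the full host name that an external client must use; this is a minimal sketch, and the route names in your cluster depend on your CR and acceptor names:

$ oc get routes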
An external client can connect to the broker by specifying the full host name of the route created for the broker pod. You can use a basic curl command to test external access to this full host name. For example: The full host name of the route for the broker pod must resolve to the node that is hosting the OpenShift router. The OpenShift router uses the host name to determine where to send the traffic inside the OpenShift internal network. By default, the OpenShift router listens on port 80 for non-secured (that is, non-SSL) traffic and port 443 for secured (that is, SSL-encrypted) traffic. For an HTTP connection, the router automatically directs traffic to port 443 if you specify a secure connection URL (that is, https ), or to port 80 if you specify a non-secure connection URL (that is, http ). If you want external clients to load balance connections across the brokers in the cluster: Enable load balancing by configuring the haproxy.router.openshift.io/balance roundrobin option on the OpenShift route for each broker pod. If the external client uses the Core protocol, by default, the useTopologyForLoadBalancing configuration option is set to true . Make sure that this value is not set to false in the connection URL. If your brokers have durable subscription queues or request/reply queues, be aware of the caveats associated with using these queues when load balancing client connections. For more information, see Section 4.8.4.4, "Caveats to load balancing client connections when you have durable subscription queues or reply/request queues" . If you don't want external clients to load balance connections across the brokers in the cluster: Set the useTopologyForLoadBalancing=false key in the connection URL that each client uses. In each client's connection URL, specify the full host name of the route for each broker pod. The client attempts to connect to the first host name in the connection URL. However, if the first host name is unavailable, the client automatically connects to the next host name in the connection URL, and so on. For non-HTTP connections: Clients must explicitly specify the port number (for example, port 443) as part of the connection URL. For one-way TLS, the client must specify the path to its trust store and the corresponding password, as part of the connection URL. For two-way TLS, the client must also specify the path to its key store and the corresponding password, as part of the connection URL. Some example client connection URLs, for supported messaging protocols, are shown below. External Core client, using one-way TLS Note The useTopologyForLoadBalancing key is explicitly set to false in the connection URL because an external Core client cannot use topology information returned by the broker. If this key is set to true or you do not specify a value, it results in a DEBUG log message. External Core client, using two-way TLS External OpenWire client, using one-way TLS External OpenWire client, using two-way TLS External AMQP client, using one-way TLS External AMQP client, using two-way TLS 4.8.4.3. Connecting to the Broker using a NodePort As an alternative to using a route, an OpenShift administrator can configure a NodePort to connect to a broker pod from a client outside OpenShift. The NodePort should map to one of the protocol-specific ports specified by the acceptors configured for the broker. By default, NodePorts are in the range 30000 to 32767, which means that a NodePort typically does not match the intended port on the broker Pod.
To connect from a client outside OpenShift to the broker via a NodePort, you specify a URL in the format <protocol> :// <ocp_node_ip> : <node_port_number> . 4.8.4.4. Caveats to load balancing client connections when you have durable subscription queues or reply/request queues Durable subscriptions A durable subscription is represented as a queue on a broker and is created when a durable subscriber first connects to the broker. This queue exists and receives messages until the client unsubscribes. If the client reconnects to a different broker, another durable subscription queue is created on that broker. This can cause the following issues.

Issue: Messages may get stranded in the original subscription queue. Mitigation: Ensure that message redistribution is enabled. For more information, see Enabling message redistribution .

Issue: Messages may be received in the wrong order, because there is a window during message redistribution when other messages are still routed. Mitigation: None.

Issue: When a client unsubscribes, it deletes the queue only on the broker it last connected to. This means that the other queues can still exist and receive messages. Mitigation: To delete other empty queues that may exist for a client that unsubscribed, configure both of the following properties: Set the auto-delete-queues-message-count property to 0 so that a queue can only be deleted if there are no messages in the queue. Set the auto-delete-queues-delay property to delete a queue that has no messages after it has not been used for a specified number of milliseconds. For more information, see Configuring automatic creation and deletion of addresses and queues .

Request/Reply queues When a JMS producer creates a temporary reply queue, the queue is created on the broker. If the client that is consuming from the work queue and replying to the temporary queue connects to a different broker, the following issues can occur.

Issue: Since the reply queue does not exist on the broker that the client is connected to, the client may generate an error. Mitigation: Ensure that the auto-create-queues property is set to true . For more information, see Configuring automatic creation and deletion of addresses and queues .

Issue: Messages sent to the work queue may not be distributed. Mitigation: Ensure that messages are load balanced on demand by setting the message-load-balancing property to ON-DEMAND . Also, ensure that message redistribution is enabled. For more information, see Enabling message redistribution .

Additional resources For more information about using methods such as Routes and NodePorts for communicating from outside an OpenShift cluster with services running in the cluster, see: Configuring ingress cluster traffic overview in the OpenShift Container Platform documentation. 4.9. Configuring large message handling for AMQP messages Clients might send large AMQP messages that can exceed the size of the broker's internal buffer, causing unexpected errors. To prevent this situation, you can configure the broker to store messages as files when the messages are larger than a specified minimum value. Handling large messages in this way means that the broker does not hold the messages in memory. Instead, the broker stores the messages in a dedicated directory used for storing large message files. For a broker deployment on OpenShift Container Platform, the large messages directory is /opt/ <custom_resource_name> /data/large-messages on the Persistent Volume (PV) used by the broker for message storage.
When the broker stores a message as a large message, the queue retains a reference to the file in the large messages directory. Important For Operator-based broker deployments in AMQ Broker 7.10, large message handling is available only for the AMQP protocol. 4.9.1. Configuring AMQP acceptors for large message handling The following procedure shows how to configure an acceptor to handle an AMQP message larger than a specified size as a large message. Prerequisites You should be familiar with how to configure acceptors for Operator-based broker deployments. See Section 4.8.1, "Configuring acceptors" . To store large AMQP messages in a dedicated large messages directory, your broker deployment must be using persistent storage (that is, persistenceEnabled is set to true in the Custom Resource (CR) instance used to create the deployment). For more information about configuring persistent storage, see: Section 2.5, "Operator deployment notes" Section 8.1, "Custom Resource configuration reference" Procedure Open the Custom Resource (CR) instance in which you previously defined an AMQP acceptor. Using the OpenShift command-line interface: $ oc edit -f <path/to/custom_resource_instance> .yaml Using the OpenShift Container Platform web console: In the left navigation menu, click Administration Custom Resource Definitions Click the ActiveMQArtemis CRD. Click the Instances tab. Locate the CR instance that corresponds to your project namespace. A previously-configured AMQP acceptor might resemble the following: spec: ... acceptors: - name: my-acceptor protocols: amqp port: 5672 connectionsAllowed: 5 expose: true sslEnabled: true ... Specify the minimum size, in bytes, of an AMQP message that the broker handles as a large message. For example: spec: ... acceptors: - name: my-acceptor protocols: amqp port: 5672 connectionsAllowed: 5 expose: true sslEnabled: true amqpMinLargeMessageSize: 204800 ... ... In the preceding example, the broker is configured to accept AMQP messages on port 5672. Based on the value of amqpMinLargeMessageSize , if the acceptor receives an AMQP message with a body larger than or equal to 204800 bytes (that is, 200 kilobytes), the broker stores the message as a large message. The broker stores the message in the large messages directory ( /opt/ <custom_resource_name> /data/large-messages , by default) on the persistent volume (PV) used by the broker for message storage. If you do not explicitly specify a value for the amqpMinLargeMessageSize property, the broker uses a default value of 102400 (that is, 100 kilobytes). If you set amqpMinLargeMessageSize to a value of -1 , large message handling for AMQP messages is disabled. 4.10. Configuring broker health checks You can configure periodic health checks on a running broker container by using liveness and readiness probes. A liveness probe checks if the broker is running by pinging the broker's HTTP port. A readiness probe checks if the broker can accept network traffic by opening a connection to each of the acceptor ports configured for the broker. A limitation of validating the broker's health by using basic liveness and readiness probes to open connections to HTTP and acceptor ports is that these checks are unable to identify underlying issues, for example, issues with the broker's file system. You can incorporate the broker's command-line utility, artemis , into a liveness or readiness probe configuration to create more comprehensive health checks that include sending messages to the broker.
4.10.1. Configuring liveness and readiness probes The following example shows how to configure the main Custom Resource (CR) instance for your broker deployment to run health checks by using liveness and readiness probes. Prerequisites You should be familiar with how to use a CR instance to create a basic broker deployment. See Section 3.4.1, "Deploying a basic broker instance" . Procedure Create a CR instance. Using the OpenShift command-line interface: Log in to OpenShift as a user that has privileges to deploy CRs in the project for the broker deployment. Open the sample CR file called broker_activemqartemis_cr.yaml that was included in the deploy/crs directory of the Operator installation archive that you downloaded and extracted. Using the OpenShift Container Platform web console: Log in to the console as a user that has privileges to deploy CRs in the project for the broker deployment. Start a new CR instance based on the main broker CRD. In the left pane, click Administration Custom Resource Definitions . Click the ActiveMQArtemis CRD. Click the Instances tab. Click Create ActiveMQArtemis . Within the console, a YAML editor opens, enabling you to configure a CR instance. To configure a liveness probe, in the deploymentPlan section of the CR, add a livenessProbe section. For example: spec: deploymentPlan: livenessProbe: initialDelaySeconds: 5 periodSeconds: 5 initialDelaySeconds The delay, in seconds, before the probe runs after the container starts. The default is 5 . periodSeconds The interval, in seconds, at which the probe runs. The default is 5 . Note If you don't configure a liveness probe or if the handler is missing from a configured probe, the AMQ Operator creates a default TCP probe that has the following configuration. The default TCP probe attempts to open a socket to the broker container on the specified port. spec: deploymentPlan: livenessProbe: tcpSocket: port: 8181 initialDelaySeconds: 30 timeoutSeconds: 5 To configure a readiness probe, in the deploymentPlan section of the CR, add a readinessProbe section. For example: spec: deploymentPlan: readinessProbe: initialDelaySeconds: 5 periodSeconds: 5 If you don't configure a readiness probe, a built-in script checks if all acceptors can accept connections. If you want to configure more comprehensive health checks, add the artemis check command-line utility to the liveness or readiness probe configuration. If you want to configure a health check that creates a full client connection to the broker, in the livenessProbe or readinessProbe section, add an exec section. In the exec section, add a command section. In the command section, add the artemis check node command syntax. For example: spec: deploymentPlan: readinessProbe: exec: command: - bash - '-c' - /home/jboss/amq-broker/bin/artemis - check - node - '--silent' - '--acceptor' - < acceptor name > - '--user' - $AMQ_USER - '--password' - $AMQ_PASSWORD initialDelaySeconds: 30 timeoutSeconds: 5 By default, the artemis check node command uses the URI of an acceptor called artemis . If the broker has an acceptor called artemis , you can exclude the --acceptor <acceptor name> option from the command. Note $AMQ_USER and $AMQ_PASSWORD are environment variables that are configured by the AMQ Operator. If you want to configure a health check that produces and consumes messages, which also validates the health of the broker's file system, in the livenessProbe or readinessProbe section, add an exec section. In the exec section, add a command section.
In the command section, add the artemis check queue command syntax. For example: spec: deploymentPlan: readinessProbe: exec: command: - bash - '-c' - /home/jboss/amq-broker/bin/artemis - check - queue - '--name' - livenessqueue - '--produce' - "1" - '--consume' - "1" - '--silent' - '--user' - $AMQ_USER - '--password' - $AMQ_PASSWORD initialDelaySeconds: 30 timeoutSeconds: 5 Note The queue name that you specify must be configured on the broker and have a routingType of anycast . For example: apiVersion: broker.amq.io/v1beta1 kind: ActiveMQArtemisAddress metadata: name: livenessqueue namespace: activemq-artemis-operator spec: addressName: livenessqueue queueConfiguration: purgeOnNoConsumers: false maxConsumers: -1 durable: true enabled: true queueName: livenessqueue routingType: anycast Deploy the CR instance. Using the OpenShift command-line interface: Save the CR file. Switch to the project in which you are creating the broker deployment. Create the CR instance. Using the OpenShift web console: When you finish configuring the CR, click Create . Additional resources For more information about liveness and readiness probes in OpenShift Container Platform, see Monitoring application health by using health checks in the OpenShift Container Platform documentation. 4.11. High availability and message migration 4.11.1. High availability The term high availability refers to a system that can remain operational even when part of that system fails or is shut down. For AMQ Broker on OpenShift Container Platform, this means ensuring the integrity and availability of messaging data if a broker Pod fails, or shuts down due to intentional scaledown of your deployment. To allow high availability for AMQ Broker on OpenShift Container Platform, you run multiple broker Pods in a broker cluster. Each broker Pod writes its message data to an available Persistent Volume (PV) that you have claimed for use with a Persistent Volume Claim (PVC). If a broker Pod fails or is shut down, the message data stored in the PV is migrated to another available broker Pod in the broker cluster. The other broker Pod stores the message data in its own PV. The following figure shows a StatefulSet-based broker deployment. In this case, the two broker Pods in the broker cluster are still running. When a broker Pod shuts down, the AMQ Broker Operator automatically starts a scaledown controller that performs the migration of messages to another broker Pod that is still running in the broker cluster. This message migration process is also known as Pod draining . The section that follows describes message migration. 4.11.2. Message migration Message migration is how you ensure the integrity of messaging data when a broker in a clustered deployment shuts down due to an intentional scaledown of the deployment. Also known as Pod draining , this process refers to removal and redistribution of messages from a broker Pod that has shut down. Note The scaledown controller that performs message migration can operate only within a single OpenShift project. The controller cannot migrate messages between brokers in separate projects. To use message migration, you must have a minimum of two brokers in your deployment. A deployment with two or more brokers is clustered by default. For an Operator-based broker deployment, you enable message migration by setting messageMigration to true in the main broker Custom Resource for your deployment.
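For example, the relevant part of the main broker CR might look like the following sketch; the deployment size shown is illustrative:

spec:
  deploymentPlan:
    size: 2
    persistenceEnabled: true
    messageMigration: true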
The message migration process follows these steps: When a broker Pod in the deployment shuts down due to an intentional scaledown of the deployment, the Operator automatically starts a scaledown controller to prepare for message migration. The scaledown controller runs in the same OpenShift project as the broker cluster. The scaledown controller registers itself and listens for Kubernetes events that are related to Persistent Volume Claims (PVCs) in the project. To check for Persistent Volumes (PVs) that have been orphaned, the scaledown controller looks at the ordinal on the volume claim. The controller compares the ordinal on the volume claim to that of the broker Pods that are still running in the StatefulSet (that is, the broker cluster) in the project. If the ordinal on the volume claim is higher than the ordinal on any of the broker Pods still running in the broker cluster, the scaledown controller determines that the broker Pod at that ordinal has been shut down and that messaging data must be migrated to another broker Pod. The scaledown controller starts a drainer Pod. The drainer Pod runs the broker and executes the message migration. Then, the drainer Pod identifies an alternative broker Pod to which the orphaned messages can be migrated. Note There must be at least one broker Pod still running in your deployment for message migration to occur. The following figure illustrates how the scaledown controller (also known as a drain controller ) migrates messages to a running broker Pod. After the messages are successfully migrated to an operational broker Pod, the drainer Pod shuts down and the scaledown controller removes the PVC for the orphaned PV. The PV is returned to a "Released" state. Note If you scale a broker deployment down to 0 (zero), message migration does not occur, since there is no running broker Pod to which messaging data can be migrated. However, if you scale a deployment down to zero and then back up to a size that is smaller than the original deployment, drainer Pods are started for the brokers that remain shut down. Additional resources For an example of message migration when you scale down a broker deployment, see Migrating messages upon scaledown . 4.11.3. Migrating messages upon scaledown To migrate messages upon scaledown of your broker deployment, use the main broker Custom Resource (CR) to enable message migration. The AMQ Broker Operator automatically runs a dedicated scaledown controller to execute message migration when you scale down a clustered broker deployment. With message migration enabled, the scaledown controller within the Operator detects shutdown of a broker Pod and starts a drainer Pod to execute message migration. The drainer Pod connects to one of the other live broker Pods in the cluster and migrates messages to that live broker Pod. After migration is complete, the scaledown controller shuts down. Note A scaledown controller operates only within a single OpenShift project. The controller cannot migrate messages between brokers in separate projects. If you scale a broker deployment down to 0 (zero), message migration does not occur, since there is no running broker Pod to which the messaging data can be migrated. However, if you scale a deployment down to zero brokers and then back up to only some of the brokers that were in the original deployment, drainer Pods are started for the brokers that remain shut down. The following example procedure shows the behavior of the scaledown controller.
Prerequisites You already have a basic broker deployment. See Section 3.4.1, "Deploying a basic broker instance" . You should understand how message migration works. For more information, see Section 4.11.2, "Message migration" . Procedure In the deploy/crs directory of the Operator repository that you originally downloaded and extracted, open the main broker CR, broker_activemqartemis_cr.yaml . In the main broker CR, set messageMigration and persistenceEnabled to true . These settings mean that when you later scale down the size of your clustered broker deployment, the Operator automatically starts a scaledown controller and migrates messages to a broker Pod that is still running. In your existing broker deployment, verify which Pods are running. $ oc get pods The output shows that there are three Pods running: one for the broker Operator itself, and a separate Pod for each broker in the deployment. Log into each Pod and send some messages to each broker. Supposing that Pod ex-aao-ss-0 has a cluster IP address of 172.17.0.6 , run the following command: $ /opt/amq/bin/artemis producer --url tcp://172.17.0.6:61616 --user admin --password admin Supposing that Pod ex-aao-ss-1 has a cluster IP address of 172.17.0.7 , run the following command: $ /opt/amq/bin/artemis producer --url tcp://172.17.0.7:61616 --user admin --password admin The preceding commands create a queue called TEST on each broker and add 1000 messages to each queue. Scale the cluster down from two brokers to one. Open the main broker CR, broker_activemqartemis_cr.yaml . In the CR, set deploymentPlan.size to 1 . At the command line, apply the change: $ oc apply -f deploy/crs/broker_activemqartemis_cr.yaml You see that the Pod ex-aao-ss-1 starts to shut down. The scaledown controller starts a new drainer Pod of the same name. This drainer Pod also shuts down after it migrates all messages from broker Pod ex-aao-ss-1 to the other broker Pod in the cluster, ex-aao-ss-0 . When the drainer Pod is shut down, check the message count on the TEST queue of broker Pod ex-aao-ss-0 . You see that the number of messages in the queue is 2000, indicating that the drainer Pod successfully migrated 1000 messages from the broker Pod that shut down. 4.12. Controlling placement of broker pods on OpenShift Container Platform nodes You can control the placement of AMQ Broker pods on OpenShift Container Platform nodes by using node selectors, tolerations, or affinity and anti-affinity rules. Node selectors A node selector allows you to schedule a broker pod on a specific node. Tolerations A toleration enables a broker pod to be scheduled on a node if the toleration matches a taint configured for the node. Without a matching pod toleration, a taint allows a node to refuse to accept a pod. Affinity/Anti-affinity Node affinity rules control which nodes a pod can be scheduled on based on the node's labels. Pod affinity and anti-affinity rules control which nodes a pod can be scheduled on based on the pods already running on that node. 4.12.1. Placing pods on specific nodes using node selectors A node selector specifies a key-value pair that requires the broker pod to be scheduled on a node that has a matching key-value pair in the node label. The following example shows how to configure a node selector to schedule a broker pod on a specific node. Prerequisites You should be familiar with how to use a CR instance to create a basic broker deployment.
See Section 3.4.1, "Deploying a basic broker instance" . Add a label to the OpenShift Container Platform node on which you want to schedule the broker pod. For more information about adding node labels, see Using node selectors to control pod placement in the OpenShift Container Platform documentation. Procedure Create a Custom Resource (CR) instance based on the main broker CRD. Using the OpenShift command-line interface: Log in to OpenShift as a user that has privileges to deploy CRs in the project for the broker deployment. Open the sample CR file called broker_activemqartemis_cr.yaml that was included in the deploy/crs directory of the Operator installation archive that you downloaded and extracted. Using the OpenShift Container Platform web console: Log in to the console as a user that has privileges to deploy CRs in the project for the broker deployment. Start a new CR instance based on the main broker CRD. In the left pane, click Administration Custom Resource Definitions . Click the ActiveMQArtemis CRD. Click the Instances tab. Click Create ActiveMQArtemis . Within the console, a YAML editor opens, enabling you to configure a CR instance. In the deploymentPlan section of the CR, add a nodeSelector section and add the node label that you want to match to select a node for the pod. For example: spec: deploymentPlan: nodeSelector: app: broker1 In this example, the broker pod is scheduled on a node that has an app: broker1 label. Deploy the CR instance. Using the OpenShift command-line interface: Save the CR file. Switch to the project in which you are creating the broker deployment. Create the CR instance. Using the OpenShift web console: When you have finished configuring the CR, click Create . Additional resources For more information about node selectors in OpenShift Container Platform, see Placing pods on specific nodes using node selectors in the OpenShift Container Platform documentation. 4.12.2. Controlling pod placement using tolerations Taints and tolerations control whether pods can or cannot be scheduled on specific nodes. A taint allows a node to refuse to schedule a pod unless the pod has a matching toleration. You can use taints to exclude pods from a node so the node is reserved for specific pods, such as broker pods, that have a matching toleration. Having a matching toleration permits a broker pod to be scheduled on a node but does not guarantee that the pod is scheduled on that node. To guarantee that the broker pod is scheduled on the node that has a taint configured, you can configure affinity rules. For more information, see Section 4.12.3, "Controlling pod placement using affinity and anti-affinity rules" . The following example shows how to configure a toleration to match a taint that is configured on a node. Prerequisites You should be familiar with how to use a CR instance to create a basic broker deployment. See Section 3.4.1, "Deploying a basic broker instance" . Apply a taint to the nodes that you want to reserve for scheduling broker pods. A taint consists of a key, value, and effect. The taint effect determines whether: existing pods on the node are evicted; existing pods are allowed to remain on the node but new pods cannot be scheduled unless they have a matching toleration; or new pods can be scheduled on the node if necessary, but the preference is not to schedule new pods on the node. For more information about applying taints, see Controlling pod placement using node taints in the OpenShift Container Platform documentation.
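For example, the following sketch applies a taint that matches the toleration configured later in this procedure; the node name is a placeholder:

$ oc adm taint nodes <node_name> app=amq-broker:NoSchedule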
Procedure Create a Custom Resource (CR) instance based on the main broker CRD. Using the OpenShift command-line interface: Log in to OpenShift as a user that has privileges to deploy CRs in the project for the broker deployment. Open the sample CR file called broker_activemqartemis_cr.yaml that was included in the deploy/crs directory of the Operator installation archive that you downloaded and extracted. Using the OpenShift Container Platform web console: Log in to the console as a user that has privileges to deploy CRs in the project for the broker deployment. Start a new CR instance based on the main broker CRD. In the left pane, click Administration Custom Resource Definitions . Click the ActiveMQArtemis CRD. Click the Instances tab. Click Create ActiveMQArtemis . Within the console, a YAML editor opens, enabling you to configure a CR instance. In the deploymentPlan section of the CR, add a tolerations section. In the tolerations section, add a toleration for the node taint that you want to match. For example: spec: deploymentPlan: tolerations: - key: "app" value: "amq-broker" effect: "NoSchedule" In this example, the toleration matches a node taint of app=amq-broker:NoSchedule , so the pod can be scheduled on a node that has this taint configured. Note To ensure that the broker pods are scheduled correctly, do not specify a tolerationSeconds attribute in the tolerations section of the CR. Deploy the CR instance. Using the OpenShift command-line interface: Save the CR file. Switch to the project in which you are creating the broker deployment. Create the CR instance. Using the OpenShift web console: When you have finished configuring the CR, click Create . Additional resources For more information about taints and tolerations in OpenShift Container Platform, see Controlling pod placement using node taints in the OpenShift Container Platform documentation. 4.12.3. Controlling pod placement using affinity and anti-affinity rules You can control pod placement using node affinity, pod affinity, or pod anti-affinity rules. Node affinity allows a pod to specify an affinity towards a group of target nodes. Pod affinity and anti-affinity allow you to specify rules about how pods can or cannot be scheduled relative to other pods that are already running on a node. 4.12.3.1. Controlling pod placement using node affinity rules Node affinity allows a broker pod to specify an affinity towards a group of nodes that it can be placed on. A broker pod can be scheduled on any node that has a label with the same key-value pair as the affinity rule that you create for a pod. The following example shows how to configure a broker to control pod placement by using node affinity rules. Prerequisites You should be familiar with how to use a CR instance to create a basic broker deployment. See Section 3.4.1, "Deploying a basic broker instance" . Assign a common label to the nodes in your OpenShift Container Platform cluster that can schedule the broker pod, for example, zone: emea . Procedure Create a Custom Resource (CR) instance based on the main broker CRD. Using the OpenShift command-line interface: Log in to OpenShift as a user that has privileges to deploy CRs in the project for the broker deployment. Open the sample CR file called broker_activemqartemis_cr.yaml that was included in the deploy/crs directory of the Operator installation archive that you downloaded and extracted.
Using the OpenShift Container Platform web console: Log in to the console as a user that has privileges to deploy CRs in the project for the broker deployment. Start a new CR instance based on the main broker CRD. In the left pane, click Administration Custom Resource Definitions . Click the ActiveMQArtemis CRD. Click the Instances tab. Click Create ActiveMQArtemis . Within the console, a YAML editor opens, enabling you to configure a CR instance. In the deploymentPlan section of the CR, add the following sections: affinity , nodeAffinity , requiredDuringSchedulingIgnoredDuringExecution , and nodeSelectorTerms . In the nodeSelectorTerms section, add the - matchExpressions parameter and specify the key-value string of a node label to match. For example: spec: deploymentPlan: affinity: nodeAffinity: requiredDuringSchedulingIgnoredDuringExecution: nodeSelectorTerms: - matchExpressions: - key: zone operator: In values: - emea In this example, the affinity rule allows the pod to be scheduled on any node that has a label with a key of zone and a value of emea . Deploy the CR instance. Using the OpenShift command-line interface: Save the CR file. Switch to the project in which you are creating the broker deployment. Create the CR instance. Using the OpenShift web console: When you have finished configuring the CR, click Create . Additional resources For more information about affinity rules in OpenShift Container Platform, see Controlling pod placement on nodes using node affinity rules in the OpenShift Container Platform documentation. 4.12.3.2. Placing pods relative to other pods using anti-affinity rules Anti-affinity rules allow you to constrain which nodes the broker pods can be scheduled on based on the labels of pods already running on that node. A use case for using anti-affinity rules is to ensure that multiple broker pods in a cluster are not scheduled on the same node, which creates a single point of failure. If you do not control the placement of pods, 2 or more broker pods in a cluster can be scheduled on the same node. The following example shows how to configure anti-affinity rules to prevent 2 broker pods in a cluster from being scheduled on the same node. Prerequisites You should be familiar with how to use a CR instance to create a basic broker deployment. See Section 3.4.1, "Deploying a basic broker instance" . Procedure Create a CR instance for the first broker in the cluster based on the main broker CRD. Using the OpenShift command-line interface: Log in to OpenShift as a user that has privileges to deploy CRs in the project for the broker deployment. Open the sample CR file called broker_activemqartemis_cr.yaml that was included in the deploy/crs directory of the Operator installation archive that you downloaded and extracted. Using the OpenShift Container Platform web console: Log in to the console as a user that has privileges to deploy CRs in the project for the broker deployment. Start a new CR instance based on the main broker CRD. In the left pane, click Administration Custom Resource Definitions . Click the ActiveMQArtemis CRD. Click the Instances tab. Click Create ActiveMQArtemis . Within the console, a YAML editor opens, enabling you to configure a CR instance. In the deploymentPlan section of the CR, add a labels section. Create an identifying label for the first broker pod so that you can create an anti-affinity rule on the second broker pod to prevent both pods from being scheduled on the same node. 
For example: spec: deploymentPlan: labels: name: broker1 Deploy the CR instance. Using the OpenShift command-line interface: Save the CR file. Switch to the project in which you are creating the broker deployment. Create the CR instance. Using the OpenShift web console: When you have finished configuring the CR, click Create . Create a CR instance for the second broker in the cluster based on the main broker CRD. In the deploymentPlan section of the CR, add the following sections: affinity , podAntiAffinity , requiredDuringSchedulingIgnoredDuringExecution , and labelSelector . In the labelSelector section, add the - matchExpressions parameter and specify the key-value string of the broker pod label to match, so this pod is not scheduled on the same node. spec: deploymentPlan: affinity: podAntiAffinity: requiredDuringSchedulingIgnoredDuringExecution: labelSelector: - matchExpressions: - key: name operator: In values: - broker1 topologyKey: topology.kubernetes.io/zone In this example, the pod anti-affinity rule prevents the pod from being placed on the same node as a pod that has a label with a key of name and a value of broker1 , which is the label assigned to the first broker in the cluster. Deploy the CR instance. Using the OpenShift command-line interface: Save the CR file. Switch to the project in which you are creating the broker deployment. Create the CR instance. Using the OpenShift web console: When you have finished configuring the CR, click Create . Additional resources For more information about affinity rules in OpenShift Container Platform, see Controlling pod placement on nodes using node affinity rules in the OpenShift Container Platform documentation.
[ "<address-settings> <!-- if you define auto-create on certain queues, management has to be auto-create --> <address-setting match=\"activemq.management#\"> <dead-letter-address>DLQ</dead-letter-address> <expiry-address>ExpiryQueue</expiry-address> <redelivery-delay>0</redelivery-delay> <!-- with -1 only the global-max-size is in use for limiting --> <max-size-bytes>-1</max-size-bytes> <message-counter-history-day-limit>10</message-counter-history-day-limit> <address-full-policy>PAGE</address-full-policy> <auto-create-queues>true</auto-create-queues> <auto-create-addresses>true</auto-create-addresses> <auto-create-jms-queues>true</auto-create-jms-queues> <auto-create-jms-topics>true</auto-create-jms-topics> </address-setting> <!-- default for catch all --> <address-setting match=\"#\"> <dead-letter-address>DLQ</dead-letter-address> <expiry-address>ExpiryQueue</expiry-address> <redelivery-delay>0</redelivery-delay> <!-- with -1 only the global-max-size is in use for limiting --> <max-size-bytes>-1</max-size-bytes> <message-counter-history-day-limit>10</message-counter-history-day-limit> <address-full-policy>PAGE</address-full-policy> <auto-create-queues>true</auto-create-queues> <auto-create-addresses>true</auto-create-addresses> <auto-create-jms-queues>true</auto-create-jms-queues> <auto-create-jms-topics>true</auto-create-jms-topics> </address-setting> <address-settings>", "login -u <user> -p <password> --server= <host:port>", "apiVersion: broker.amq.io/v1beta1 kind: ActiveMQArtemisAddress metadata: name: myAddressDeployment0 namespace: myProject spec: addressName: myAddress0 queueName: myQueue0 routingType: anycast", "oc project <project_name>", "oc create -f <path/to/address_custom_resource_instance> .yaml", "oc delete -f <path/to/address_custom_resource_instance> .yaml", "login -u <user> -p <password> --server= <host:port>", "apiVersion: broker.amq.io/v1beta1 kind: ActiveMQArtemisAddress metadata: name: ex-aaoaddress spec: addressName: myDeadLetterAddress queueName: myDeadLetterQueue routingType: anycast", "oc project <project_name>", "oc create -f <path/to/address_custom_resource_instance> .yaml", "apiVersion: broker.amq.io/v1beta1 kind: ActiveMQArtemis metadata: name: ex-aao application: ex-aao-app spec: deploymentPlan: size: 1 image: placeholder requireLogin: false persistenceEnabled: true journalType: nio messageMigration: true", "spec: deploymentPlan: size: 1 image: placeholder requireLogin: false persistenceEnabled: true journalType: nio messageMigration: true addressSettings: addressSetting:", "spec: deploymentPlan: size: 1 image: placeholder requireLogin: false persistenceEnabled: true journalType: nio messageMigration: true addressSettings: addressSetting: - match: myAddress", "spec: deploymentPlan: size: 1 image: placeholder requireLogin: false persistenceEnabled: true journalType: nio messageMigration: true addressSettings: addressSetting: - match: myAddress deadLetterAddress: myDeadLetterAddress maxDeliveryAttempts: 5", "spec: deploymentPlan: size: 1 image: placeholder requireLogin: false persistenceEnabled: true journalType: nio messageMigration: true addressSettings: addressSetting: - match: myAddress deadLetterAddress: myDeadLetterAddress maxDeliveryAttempts: 5 - match: 'myOtherAddresses*' deadLetterAddress: myDeadLetterAddress maxDeliveryAttempts: 3", "spec: deploymentPlan: size: 1 image: placeholder requireLogin: false persistenceEnabled: true journalType: nio messageMigration: true addressSettings: applyRule: merge_all addressSetting: - match: myAddress 
deadLetterAddress: myDeadLetterAddress maxDeliveryAttempts: 5 - match: 'myOtherAddresses*' deadLetterAddress: myDeadLetterAddress maxDeliveryAttempts: 3", "oc create -f <path/to/broker_custom_resource_instance> .yaml", "login -u <user> -p <password> --server= <host:port>", "apiVersion: broker.amq.io/v1beta1 kind: ActiveMQArtemisSecurity metadata: name: ex-prop spec: loginModules: propertiesLoginModules: - name: \"prop-module\" users: - name: \"sam\" password: \"samspassword\" roles: - \"sender\" - name: \"rob\" password: \"robspassword\" roles: - \"receiver\" securityDomains: brokerDomain: name: \"activemq\" loginModules: - name: \"prop-module\" flag: \"sufficient\" securitySettings: broker: - match: \"#\" permissions: - operationType: \"send\" roles: - \"sender\" - operationType: \"createAddress\" roles: - \"sender\" - operationType: \"createDurableQueue\" roles: - \"sender\" - operationType: \"consume\" roles: - \"receiver\"", "oc project <project_name>", "oc create -f <path/to/address_custom_resource_instance> .yaml", "create secret generic security-properties-prop-module --from-literal=sam=samspassword --from-literal=rob=robspassword", "apiVersion: broker.amq.io/v1beta1 kind: ActiveMQArtemisSecurity metadata: name: ex-prop spec: loginModules: propertiesLoginModules: - name: \"prop-module\" users: - name: \"sam\" roles: - \"sender\" - name: \"rob\" roles: - \"receiver\" securityDomains: brokerDomain: name: \"activemq\" loginModules: - name: \"prop-module\" flag: \"sufficient\" securitySettings: broker: - match: \"#\" permissions: - operationType: \"send\" roles: - \"sender\" - operationType: \"createAddress\" roles: - \"sender\" - operationType: \"createDurableQueue\" roles: - \"sender\" - operationType: \"consume\" roles: - \"receiver\"", "oc project <project_name>", "oc create -f <path/to/address_custom_resource_instance> .yaml", "login -u <user> -p <password> --server= <host:port>", "apiVersion: broker.amq.io/v1beta1 kind: ActiveMQArtemis metadata: name: ex-aao application: ex-aao-app spec: deploymentPlan: size: 1 image: placeholder requireLogin: false persistenceEnabled: true journalType: nio messageMigration: true", "spec: deploymentPlan: size: 1 image: placeholder requireLogin: false persistenceEnabled: true journalType: nio messageMigration: true storage: size: 4Gi", "spec: deploymentPlan: size: 1 image: placeholder requireLogin: false persistenceEnabled: true journalType: nio messageMigration: true storage: size: 4Gi storageClassName: gp3", "oc project <project_name>", "oc create -f <path/to/custom_resource_instance> .yaml", "login -u <user> -p <password> --server= <host:port>", "apiVersion: broker.amq.io/v1beta1 kind: ActiveMQArtemis metadata: name: ex-aao application: ex-aao-app spec: deploymentPlan: size: 1 image: placeholder requireLogin: false persistenceEnabled: true journalType: nio messageMigration: true", "spec: deploymentPlan: size: 1 image: placeholder requireLogin: false persistenceEnabled: true journalType: nio messageMigration: true resources: limits: cpu: \"500m\" memory: \"1024M\" requests: cpu: \"250m\" memory: \"512M\"", "oc project <project_name>", "oc create -f <path/to/custom_resource_instance> .yaml", "login -u <user> -p <password> --server= <host:port>", "apiVersion: broker.amq.io/v1beta1 kind: ActiveMQArtemis metadata: name: ex-aao application: ex-aao-app spec: deploymentPlan: size: 1 image: placeholder requireLogin: false persistenceEnabled: true journalType: nio messageMigration: true", "spec: brokerProperties: - globalMaxSize=500m", "oc project 
<project_name>", "oc apply -f <path/to/broker_custom_resource_instance> .yaml", "FROM registry.redhat.io/amq7/amq-broker-init-rhel8:7.10", "login -u <user> -p <password> --server= <host:port>", "apiVersion: broker.amq.io/v1beta1 kind: ActiveMQArtemis metadata: name: ex-aao application: ex-aao-app spec: deploymentPlan: size: 1 image: placeholder requireLogin: false persistenceEnabled: true journalType: nio messageMigration: true", "apiVersion: broker.amq.io/v1beta1 kind: ActiveMQArtemis metadata: name: ex-aao application: ex-aao-app spec: deploymentPlan: size: 1 image: placeholder initImage: requireLogin: false persistenceEnabled: true journalType: nio messageMigration: true", "apiVersion: broker.amq.io/v1beta1 kind: ActiveMQArtemis metadata: name: ex-aao application: ex-aao-app spec: deploymentPlan: size: 1 image: placeholder initImage: <custom_init_container_image_url> requireLogin: false persistenceEnabled: true journalType: nio messageMigration: true", "oc project <project_name>", "oc create -f <path/to/custom_resource_instance> .yaml", "spec: acceptors: - name: my-acceptor protocols: amqp port: 5672", "spec: acceptors: - name: my-acceptor protocols: amqp,openwire port: 5672", "spec: acceptors: - name: my-acceptor protocols: amqp,openwire port: 5672 connectionsAllowed: 5", "spec: acceptors: - name: my-acceptor protocols: amqp,openwire port: 5672 connectionsAllowed: 5 expose: true", "spec: acceptors: - name: my-acceptor protocols: amqp,openwire port: 5672 connectionsAllowed: 5 expose: true sslEnabled: true", "CN=my-broker-deployment-0-svc-rte-my-openshift-project.my-openshift-domain", "CN=my-broker-deployment-*-svc-rte-my-openshift-project.my-openshift-domain", "\"SAN=DNS:my-broker-deployment-0-svc-rte-my-openshift-project.my-openshift-domain,DNS:my-broker-deployment-1-svc-rte-my-openshift-project.my-openshift-domain,...\"", "keytool -genkey -alias broker -keyalg RSA -keystore ~/broker.ks", "keytool -export -alias broker -keystore ~/broker.ks -file ~/broker_cert.pem", "keytool -import -alias broker -keystore ~/client.ts -file ~/broker_cert.pem", "oc login -u system:admin", "oc project <my_openshift_project>", "oc create secret generic my-tls-secret --from-file=broker.ks=~/broker.ks --from-file=client.ts=~/client.ks --from-literal=keyStorePassword= <password> --from-literal=trustStorePassword= <password>", "oc secrets link sa/amq-broker-operator secret/my-tls-secret", "spec: acceptors: - name: my-acceptor protocols: amqp,openwire port: 5672 sslEnabled: true sslSecret: my-tls-secret expose: true connectionsAllowed: 5", "keytool -genkey -alias broker -keyalg RSA -keystore ~/broker.ks", "keytool -export -alias broker -keystore ~/broker.ks -file ~/broker_cert.pem", "keytool -import -alias broker -keystore ~/client.ts -file ~/broker_cert.pem", "keytool -genkey -alias broker -keyalg RSA -keystore ~/client.ks", "keytool -export -alias broker -keystore ~/client.ks -file ~/client_cert.pem", "keytool -import -alias broker -keystore ~/broker.ts -file ~/client_cert.pem", "oc login -u system:admin", "oc project <my_openshift_project>", "oc create secret generic my-tls-secret --from-file=broker.ks=~/broker.ks --from-file=client.ts=~/broker.ts --from-literal=keyStorePassword= <password> --from-literal=trustStorePassword= <password>", "oc secrets link sa/amq-broker-operator secret/my-tls-secret", "spec: acceptors: - name: my-acceptor protocols: amqp,openwire port: 5672 sslEnabled: true sslSecret: my-tls-secret expose: true connectionsAllowed: 5", "tcp://ex-aao-ss-0:<port>", "curl 
https://my-broker-deployment-0-svc-rte-my-openshift-project.my-openshift-domain", "tcp://my-broker-deployment-0-svc-rte-my-openshift-project.my-openshift-domain:443?useTopologyForLoadBalancing=false&sslEnabled=true &trustStorePath=~/client.ts&trustStorePassword= <password>", "tcp://my-broker-deployment-0-svc-rte-my-openshift-project.my-openshift-domain:443?useTopologyForLoadBalancing=false&sslEnabled=true &keyStorePath=~/client.ks&keyStorePassword= <password> &trustStorePath=~/client.ts&trustStorePassword= <password>", "ssl://my-broker-deployment-0-svc-rte-my-openshift-project.my-openshift-domain:443\" Also, specify the following JVM flags -Djavax.net.ssl.trustStore=~/client.ts -Djavax.net.ssl.trustStorePassword= <password>", "ssl://my-broker-deployment-0-svc-rte-my-openshift-project.my-openshift-domain:443\" Also, specify the following JVM flags -Djavax.net.ssl.keyStore=~/client.ks -Djavax.net.ssl.keyStorePassword= <password> -Djavax.net.ssl.trustStore=~/client.ts -Djavax.net.ssl.trustStorePassword= <password>", "amqps://my-broker-deployment-0-svc-rte-my-openshift-project.my-openshift-domain:443?transport.verifyHost=true &transport.trustStoreLocation=~/client.ts&transport.trustStorePassword= <password>", "amqps://my-broker-deployment-0-svc-rte-my-openshift-project.my-openshift-domain:443?transport.verifyHost=true &transport.keyStoreLocation=~/client.ks&transport.keyStorePassword= <password> &transport.trustStoreLocation=~/client.ts&transport.trustStorePassword= <password>", "oc edit -f <path/to/custom_resource_instance> .yaml", "spec: acceptors: - name: my-acceptor protocols: amqp port: 5672 connectionsAllowed: 5 expose: true sslEnabled: true", "spec: acceptors: - name: my-acceptor protocols: amqp port: 5672 connectionsAllowed: 5 expose: true sslEnabled: true amqpMinLargeMessageSize: 204800", "login -u <user> -p <password> --server= <host:port>", "spec: deploymentPlan: livenessProbe: initialDelaySeconds: 5 periodSeconds: 5", "spec: deploymentPlan: livenessProbe: tcpSocket: port: 8181 initialDelaySeconds: 30 timeoutSeconds: 5", "spec: deploymentPlan: readinessProbe: initialDelaySeconds: 5 periodSeconds: 5", "spec: deploymentPlan: readinessProbe: exec: command: - bash - '-c' - /home/jboss/amq-broker/bin/artemis - check - node - '--silent' - '--acceptor' - < acceptor name > - '--user' - USDAMQ_USER - '--password' - USDAMQ_PASSWORD initialDelaySeconds: 30 timeoutSeconds: 5", "spec: deploymentPlan: readinessProbe: exec: command: - bash - '-c' - /home/jboss/amq-broker/bin/artemis - check - queue - '--name' - livenessqueue - '--produce' - \"1\" - '--consume' - \"1\" - '--silent' - '--user' - USDAMQ_USER - '--password' - USDAMQ_PASSWORD initialDelaySeconds: 30 timeoutSeconds: 5", "apiVersion: broker.amq.io/v1beta1 kind: ActiveMQArtemisAddress metadata: name: livenessqueue namespace: activemq-artemis-operator spec: addressName: livenessqueue queueConfiguration: purgeOnNoConsumers: false maxConsumers: -1 durable: true enabled: true queueName: livenessqueue routingType: anycast", "oc project <project_name>", "oc create -f <path/to/custom_resource_instance> .yaml", "oc get pods", "activemq-artemis-operator-8566d9bf58-9g25l 1/1 Running 0 3m38s ex-aao-ss-0 1/1 Running 0 112s ex-aao-ss-1 1/1 Running 0 8s", "/opt/amq/bin/artemis producer --url tcp://172.17.0.6:61616 --user admin --password admin", "/opt/amq/bin/artemis producer --url tcp://172.17.0.7:61616 --user admin --password admin", "oc apply -f deploy/crs/broker_activemqartemis_cr.yaml", "login -u <user> -p <password> --server= <host:port>", "spec: 
deploymentPlan: nodeSelector: app: broker1", "oc project <project_name>", "oc create -f <path/to/custom_resource_instance> .yaml", "oc login -u <user> -p <password> --server= <host:port>", "spec: deploymentPlan: tolerations: - key: \"app\" value: \"amq-broker\" effect: \"NoSchedule\"", "oc project <project_name>", "oc create -f <path/to/custom_resource_instance> .yaml", "oc login -u <user> -p <password> --server= <host:port>", "spec: deploymentPlan: affinity: nodeAffinity: requiredDuringSchedulingIgnoredDuringExecution: nodeSelectorTerms: - matchExpressions: - key: zone operator: In values: - emea", "oc project <project_name>", "oc create -f <path/to/custom_resource_instance> .yaml", "oc login -u <user> -p <password> --server= <host:port>", "spec: deploymentPlan: labels: name: broker1", "oc project <project_name>", "oc create -f <path/to/custom_resource_instance> .yaml", "spec: deploymentPlan: affinity: podAntiAffinity: requiredDuringSchedulingIgnoredDuringExecution: labelSelector: - matchExpressions: - key: name operator: In values: - broker1 topologyKey: topology.kubernetes.io/zone", "oc project <project_name>", "oc create -f <path/to/custom_resource_instance> .yaml" ]
https://docs.redhat.com/en/documentation/red_hat_amq_broker/7.10/html/deploying_amq_broker_on_openshift/assembly-br-configuring-operator-based-deployments_broker-ocp
Chapter 1. Introduction to Service Telemetry Framework 1.5
Chapter 1. Introduction to Service Telemetry Framework 1.5 Service Telemetry Framework (STF) collects monitoring data from Red Hat OpenStack Platform (RHOSP) or third-party nodes. You can use STF to perform the following tasks: Store or archive the monitoring data for historical information. View the monitoring data graphically on the dashboard. Use the monitoring data to trigger alerts or warnings. The monitoring data can be either metric or event: Metric A numeric measurement of an application or system. Event Irregular and discrete occurrences that happen in a system. The components of STF use a message bus for data transport. Other modular components that receive and store data are deployed as containers on Red Hat OpenShift Container Platform. Important STF is compatible with Red Hat OpenShift Container Platform Extended Update Support (EUS) release versions 4.14 and 4.16. Additional resources Red Hat OpenShift Container Platform product documentation Service Telemetry Framework Performance and Scaling OpenShift Container Platform 4.16 Documentation Red Hat OpenShift Container Platform Life Cycle Policy 1.1. Support for Service Telemetry Framework Red Hat supports the core Operators and workloads, including AMQ Interconnect, Cluster Observability Operator (Prometheus, Alertmanager), Service Telemetry Operator, and Smart Gateway Operator. Red Hat does not support the community Operators or workload components, inclusive of Elasticsearch, Grafana, and their Operators. You can deploy Service Telemetry Framework (STF) in fully connected network environments or in Red Hat OpenShift Container Platform-disconnected environments. You cannot deploy STF in network proxy environments. For more information about STF life cycle and support status, see the Service Telemetry Framework Supported Version Matrix . 1.2. Service Telemetry Framework architecture Service Telemetry Framework (STF) uses a client-server architecture, in which Red Hat OpenStack Platform (RHOSP) is the client and Red Hat OpenShift Container Platform is the server. By default, STF collects, transports, and stores metrics information. You can collect RHOSP events data, transport it with the message bus, and forward it to a user-provided Elasticsearch from the Smart Gateways, but this option is deprecated. STF consists of the following components: Data collection collectd: Collects infrastructure metrics and events on RHOSP. Ceilometer: Collects RHOSP metrics and events on RHOSP. Transport AMQ Interconnect: An AMQP 1.x compatible messaging bus that provides fast and reliable data transport to transfer the metrics from RHOSP to STF for storage or forwarding. Smart Gateway: A Golang application that takes metrics and events from the AMQP 1.x bus to deliver to Prometheus or an external Elasticsearch. Data storage Prometheus: Time-series data storage that stores STF metrics received from the Smart Gateway. Alertmanager: An alerting tool that uses Prometheus alert rules to manage alerts. User provided components Grafana: A visualization and analytics application that you can use to query, visualize, and explore data. Elasticsearch: Events data storage that stores RHOSP events received and forwarded by the Smart Gateway. The following table describes the application of the client and server components: Table 1.1. 
Client and server components of STF

Component | Client | Server
An AMQP 1.x compatible messaging bus | yes | yes
Smart Gateway | no | yes
Prometheus | no | yes
Elasticsearch | no | yes
Grafana | no | yes
collectd | yes | no
Ceilometer | yes | no

Important To ensure that the monitoring platform can report operational problems with your cloud, do not install STF on the same infrastructure that you are monitoring. Figure 1.1. Service Telemetry Framework architecture overview For client-side metrics, collectd provides infrastructure metrics without project data, and Ceilometer provides RHOSP platform data based on projects or user workload. Both Ceilometer and collectd deliver data to Prometheus by using the AMQ Interconnect transport, which carries the data over the message bus. On the server side, a Golang application called the Smart Gateway takes the data stream from the bus and exposes it as a local scrape endpoint for Prometheus. When you collect and store events, collectd and Ceilometer deliver event data to the server side by using the AMQ Interconnect transport. Another Smart Gateway forwards the data to a user-provided Elasticsearch datastore. Server-side STF monitoring infrastructure consists of the following layers: Service Telemetry Framework 1.5 Red Hat OpenShift Container Platform Extended Update Support (EUS) releases 4.14 and 4.16 Infrastructure platform For more information about the Red Hat OpenShift Container Platform EUS releases, see Red Hat OpenShift Container Platform Life Cycle Policy . Figure 1.2. Server-side STF monitoring infrastructure 1.2.1. STF Architecture Changes In releases of STF prior to 1.5.3, the Service Telemetry Operator requested instances of Elasticsearch from the Elastic Cloud on Kubernetes (ECK) Operator. STF now uses a forwarding model, where events are forwarded from a Smart Gateway instance to a user-provided instance of Elasticsearch. Note The management of Elasticsearch instances by the Service Telemetry Operator is deprecated. In new ServiceTelemetry deployments, the observabilityStrategy parameter has a value of use_redhat , which does not request Elasticsearch instances from ECK. Deployments of ServiceTelemetry that ran STF version 1.5.2 or older and were updated to 1.5.3 have the observabilityStrategy parameter set to use_community , which matches the previous architecture. If a user previously deployed an Elasticsearch instance with STF, the Service Telemetry Operator updates the ServiceTelemetry custom resource object to have the observabilityStrategy parameter set to use_community , and the deployment functions similarly to previous releases. For more information about observability strategies, see Section 2.1, "Observability Strategy in Service Telemetry Framework" . It is recommended that users of STF migrate to the use_redhat observability strategy. For more information about migration to the use_redhat observability strategy, see the Red Hat Knowledge Base article Migrating Service Telemetry Framework to fully supported operators . 1.3. Installation size of Red Hat OpenShift Container Platform The size of your Red Hat OpenShift Container Platform installation depends on the following factors: The infrastructure that you select. The number of nodes that you want to monitor. The number of metrics that you want to collect. The resolution of metrics. The length of time that you want to store the data. Installation of Service Telemetry Framework (STF) depends on an existing Red Hat OpenShift Container Platform environment.
For more information about minimum resource requirements when you install Red Hat OpenShift Container Platform on bare metal, see Minimum resource requirements in the Installing a cluster on bare metal guide. For the installation requirements of the various public and private cloud platforms on which you can install, see the corresponding installation documentation for your cloud platform of choice.
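For reference, the observabilityStrategy parameter described earlier is set in the spec of the ServiceTelemetry object. The following is a minimal hedged sketch rather than a complete deployment example: it assumes the infra.watch/v1beta1 API version and the service-telemetry namespace conventionally used by the Service Telemetry Operator, so verify both against your installed Operator before applying anything like it.

apiVersion: infra.watch/v1beta1
kind: ServiceTelemetry
metadata:
  name: default
  namespace: service-telemetry
spec:
  observabilityStrategy: use_redhat  # recommended strategy; use_community matches pre-1.5.3 deployments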
null
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/17.1/html/service_telemetry_framework_1.5/assembly-introduction-to-stf_assembly
B.21.2. RHSA-2010:0966 - Critical: firefox security update
B.21.2. RHSA-2010:0966 - Critical: firefox security update Updated firefox packages that fix several security issues are now available for Red Hat Enterprise Linux 4, 5, and 6. The Red Hat Security Response Team has rated this update as having critical security impact. Common Vulnerability Scoring System (CVSS) base scores, which give a detailed severity rating, are available for each vulnerability from the CVE link(s) associated with each description below. Mozilla Firefox is an open source web browser. CVE-2010-3766 , CVE-2010-3767 , CVE-2010-3772 , CVE-2010-3776 , CVE-2010-3777 Several flaws were found in the processing of malformed web content. A web page containing malicious content could cause Firefox to crash or, potentially, execute arbitrary code with the privileges of the user running Firefox. CVE-2010-3771 A flaw was found in the way Firefox handled malformed JavaScript. A website with an object containing malicious JavaScript could cause Firefox to execute that JavaScript with the privileges of the user running Firefox. CVE-2010-3768 This update adds support for the Sanitiser for OpenType (OTS) library to Firefox. This library helps prevent potential exploits in malformed OpenType fonts by verifying the font file prior to use. CVE-2010-3775 A flaw was found in the way Firefox loaded Java LiveConnect scripts. Malicious web content could load a Java LiveConnect script in a way that would result in the plug-in object having elevated privileges, allowing it to execute Java code with the privileges of the user running Firefox. CVE-2010-3773 It was found that the fix for CVE-2010-0179 was incomplete when the Firebug add-on was used. If a user visited a website containing malicious JavaScript while the Firebug add-on was enabled, it could cause Firefox to execute arbitrary JavaScript with the privileges of the user running Firefox. CVE-2010-3774 A flaw was found in the way Firefox presented the location bar to users. A malicious website could trick a user into thinking they are visiting the site reported by the location bar, when the page is actually content controlled by an attacker. CVE-2010-3770 A cross-site scripting (XSS) flaw was found in the Firefox x-mac-arabic, x-mac-farsi, and x-mac-hebrew character encodings. Certain characters were converted to angle brackets when displayed. If server-side script filtering missed these cases, it could result in Firefox executing JavaScript code with the permissions of a different website. For technical details regarding these flaws, refer to the Mozilla security advisories for Firefox 3.6.13: http://www.mozilla.org/security/known-vulnerabilities/firefox36.html#firefox3.6.13 All Firefox users should upgrade to these updated packages, which contain Firefox version 3.6.13, which corrects these issues. After installing the update, Firefox must be restarted for the changes to take effect.
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.0_technical_notes/rhsa-2010-0966
Chapter 19. Tracking the last login time without setting a lockout policy
Chapter 19. Tracking the last login time without setting a lockout policy You can use the Account Policy plug-in to track user login times without setting an expiration time or inactivity period. In this case, the plug-in adds the lastLoginTime attribute to user entries. 19.1. Configuring the Account Policy plug-in to record the last login time Follow this procedure to record the last login time of users in the lastLoginTime attribute of user entries. Procedure Enable the Account Policy plug-in: # dsconf -D " cn=Directory Manager " ldap://server.example.com plugin account-policy enable Create the plug-in configuration entry to record login times: # dsconf -D " cn=Directory Manager " ldap://server.example.com plugin account-policy config-entry set " cn=config,cn=Account Policy Plugin,cn=plugins,cn=config " --always-record-login yes --state-attr lastLoginTime This command uses the following options: --always-record-login yes : Enables logging of the login time. --state-attr lastLoginTime : Configures the Account Policy plug-in to store the last login time in the lastLoginTime attribute of users. Restart the instance: # dsctl instance_name restart Verification Log in to Directory Server as a user. For example, run a search: # ldapsearch -H ldap://server.example.com -x -D " uid=example,ou=People,dc=example,dc=com " -W -b " dc=example,dc=com " Display the lastLoginTime attribute of the user you used in the previous step: # ldapsearch -H ldap://server.example.com -x -D " cn=Directory Manager " -W -b " uid=example,ou=people,dc=example,dc=com " lastLoginTime ... dn: uid=example,ou=People,dc=example,dc=com lastLoginTime: 20210913091435Z If the lastLoginTime attribute exists and Directory Server updated its value, recording of the last login time works.
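Because lastLoginTime holds a generalized time value, you can also filter on it directly, for example to find accounts that have not logged in since a given date. The following search is an illustrative sketch: the base DN and the cut-off timestamp are placeholders, and it assumes that your directory schema permits ordering matches on the lastLoginTime attribute.

# ldapsearch -H ldap://server.example.com -x -D "cn=Directory Manager" -W -b "ou=People,dc=example,dc=com" "(&(objectClass=person)(lastLoginTime<=20210101000000Z))" lastLoginTime

Accounts returned by this search last logged in before January 1, 2021. Accounts that have never logged in since the plug-in was enabled have no lastLoginTime attribute and are therefore not matched by this filter.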
[ "dsconf -D \" cn=Directory Manager \" ldap://server.example.com plugin account-policy enable", "dsconf -D \" cn=Directory Manager \" ldap://server.example.com plugin account-policy config-entry set \" cn=config,cn=Account Policy Plugin,cn=plugins,cn=config \" --always-record-login yes --state-attr lastLoginTime", "dsctl instance_name restart", "ldapsearch -H ldap://server.example.com -x -D \" uid=example,ou=People,dc=example,dc=com \" -W -b \" dc=example,dc=com \"", "ldapsearch -H ldap://server.example.com -x -D \" cn=Directory Manager \" -W -b \" uid=example,ou=people,dc=example,dc=com \" lastLoginTime dn: uid=example,ou=People,dc=example,dc=com lastLoginTime: 20210913091435Z" ]
https://docs.redhat.com/en/documentation/red_hat_directory_server/12/html/securing_red_hat_directory_server/assembly_tracking-the-last-login-time-without-setting-a-lockout-policy_securing-rhds
Windows Container Support for OpenShift
Windows Container Support for OpenShift OpenShift Container Platform 4.9 Red Hat OpenShift for Windows Containers Guide Red Hat OpenShift Documentation Team
[ "Path : Microsoft.PowerShell.Core\\FileSystem::C:\\var\\run\\secrets\\kubernetes.io\\serviceaccount\\..2021_08_31_22_22_18.318230061\\ca.crt Owner : BUILTIN\\Administrators Group : NT AUTHORITY\\SYSTEM Access : NT AUTHORITY\\SYSTEM Allow FullControl BUILTIN\\Administrators Allow FullControl BUILTIN\\Users Allow ReadAndExecute, Synchronize Audit : Sddl : O:BAG:SYD:AI(A;ID;FA;;;SY)(A;ID;FA;;;BA)(A;ID;0x1200a9;;;BU)", "apiVersion: v1 kind: Namespace metadata: name: openshift-windows-machine-config-operator 1 labels: openshift.io/cluster-monitoring: \"true\" 2", "oc create -f <file-name>.yaml", "oc create -f wmco-namespace.yaml", "apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: windows-machine-config-operator namespace: openshift-windows-machine-config-operator spec: targetNamespaces: - openshift-windows-machine-config-operator", "oc create -f <file-name>.yaml", "oc create -f wmco-og.yaml", "apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: windows-machine-config-operator namespace: openshift-windows-machine-config-operator spec: channel: \"stable\" 1 installPlanApproval: \"Automatic\" 2 name: \"windows-machine-config-operator\" source: \"redhat-operators\" 3 sourceNamespace: \"openshift-marketplace\" 4", "oc create -f <file-name>.yaml", "oc create -f wmco-sub.yaml", "oc get csv -n openshift-windows-machine-config-operator", "NAME DISPLAY VERSION REPLACES PHASE windows-machine-config-operator.2.0.0 Windows Machine Config Operator 2.0.0 Succeeded", "oc create secret generic cloud-private-key --from-file=private-key.pem=USD{HOME}/.ssh/<key> -n openshift-windows-machine-config-operator 1", "aws ec2 describe-images --region <aws region name> --filters \"Name=name,Values=Windows_Server-2019*English*Full*Containers*\" \"Name=is-public,Values=true\" --query \"reverse(sort_by(Images, &CreationDate))[*].{name: Name, id: ImageId}\" --output table", "apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 name: <infrastructure_id>-windows-worker-<zone> 2 namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 3 machine.openshift.io/cluster-api-machineset: <infrastructure_id>-windows-worker-<zone> 4 template: metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 5 machine.openshift.io/cluster-api-machine-role: worker machine.openshift.io/cluster-api-machine-type: worker machine.openshift.io/cluster-api-machineset: <infrastructure_id>-windows-worker-<zone> 6 machine.openshift.io/os-id: Windows 7 spec: metadata: labels: node-role.kubernetes.io/worker: \"\" 8 providerSpec: value: ami: id: <windows_container_ami> 9 apiVersion: awsproviderconfig.openshift.io/v1beta1 blockDevices: - ebs: iops: 0 volumeSize: 120 volumeType: gp2 credentialsSecret: name: aws-cloud-credentials deviceIndex: 0 iamInstanceProfile: id: <infrastructure_id>-worker-profile 10 instanceType: m5a.large kind: AWSMachineProviderConfig placement: availabilityZone: <zone> 11 region: <region> 12 securityGroups: - filters: - name: tag:Name values: - <infrastructure_id>-worker-sg 13 subnet: filters: - name: tag:Name values: - <infrastructure_id>-private-<zone> 14 tags: - name: kubernetes.io/cluster/<infrastructure_id> 15 value: owned userDataSecret: name: windows-user-data 16 namespace: openshift-machine-api", "oc get -o jsonpath='{.status.infrastructureName}{\"\\n\"}' infrastructure cluster", "oc get 
machinesets -n openshift-machine-api", "NAME DESIRED CURRENT READY AVAILABLE AGE agl030519-vplxk-worker-us-east-1a 1 1 1 1 55m agl030519-vplxk-worker-us-east-1b 1 1 1 1 55m agl030519-vplxk-worker-us-east-1c 1 1 1 1 55m agl030519-vplxk-worker-us-east-1d 0 0 55m agl030519-vplxk-worker-us-east-1e 0 0 55m agl030519-vplxk-worker-us-east-1f 0 0 55m", "oc get machineset <machineset_name> -n openshift-machine-api -o yaml", "apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 name: <infrastructure_id>-<role> 2 namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> template: metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machine-role: <role> machine.openshift.io/cluster-api-machine-type: <role> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> spec: providerSpec: 3", "oc create -f <file_name>.yaml", "oc get machineset -n openshift-machine-api", "NAME DESIRED CURRENT READY AVAILABLE AGE agl030519-vplxk-windows-worker-us-east-1a 1 1 1 1 11m agl030519-vplxk-worker-us-east-1a 1 1 1 1 55m agl030519-vplxk-worker-us-east-1b 1 1 1 1 55m agl030519-vplxk-worker-us-east-1c 1 1 1 1 55m agl030519-vplxk-worker-us-east-1d 0 0 55m agl030519-vplxk-worker-us-east-1e 0 0 55m agl030519-vplxk-worker-us-east-1f 0 0 55m", "apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 name: <windows_machine_set_name> 2 namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 3 machine.openshift.io/cluster-api-machineset: <windows_machine_set_name> 4 template: metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 5 machine.openshift.io/cluster-api-machine-role: worker machine.openshift.io/cluster-api-machine-type: worker machine.openshift.io/cluster-api-machineset: <windows_machine_set_name> 6 machine.openshift.io/os-id: Windows 7 spec: metadata: labels: node-role.kubernetes.io/worker: \"\" 8 providerSpec: value: apiVersion: azureproviderconfig.openshift.io/v1beta1 credentialsSecret: name: azure-cloud-credentials namespace: openshift-machine-api image: 9 offer: WindowsServer publisher: MicrosoftWindowsServer resourceID: \"\" sku: 2019-Datacenter-with-Containers version: latest kind: AzureMachineProviderSpec location: <location> 10 managedIdentity: <infrastructure_id>-identity 11 networkResourceGroup: <infrastructure_id>-rg 12 osDisk: diskSizeGB: 128 managedDisk: storageAccountType: Premium_LRS osType: Windows publicIP: false resourceGroup: <infrastructure_id>-rg 13 subnet: <infrastructure_id>-worker-subnet userDataSecret: name: windows-user-data 14 namespace: openshift-machine-api vmSize: Standard_D2s_v3 vnet: <infrastructure_id>-vnet 15 zone: \"<zone>\" 16", "oc get -o jsonpath='{.status.infrastructureName}{\"\\n\"}' infrastructure cluster", "oc get machinesets -n openshift-machine-api", "NAME DESIRED CURRENT READY AVAILABLE AGE agl030519-vplxk-worker-us-east-1a 1 1 1 1 55m agl030519-vplxk-worker-us-east-1b 1 1 1 1 55m agl030519-vplxk-worker-us-east-1c 1 1 1 1 55m agl030519-vplxk-worker-us-east-1d 0 0 55m agl030519-vplxk-worker-us-east-1e 0 0 55m agl030519-vplxk-worker-us-east-1f 0 0 55m", "oc get machineset 
<machineset_name> -n openshift-machine-api -o yaml", "apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 name: <infrastructure_id>-<role> 2 namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> template: metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machine-role: <role> machine.openshift.io/cluster-api-machine-type: <role> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> spec: providerSpec: 3", "oc create -f <file_name>.yaml", "oc get machineset -n openshift-machine-api", "NAME DESIRED CURRENT READY AVAILABLE AGE agl030519-vplxk-windows-worker-us-east-1a 1 1 1 1 11m agl030519-vplxk-worker-us-east-1a 1 1 1 1 55m agl030519-vplxk-worker-us-east-1b 1 1 1 1 55m agl030519-vplxk-worker-us-east-1c 1 1 1 1 55m agl030519-vplxk-worker-us-east-1d 0 0 55m agl030519-vplxk-worker-us-east-1e 0 0 55m agl030519-vplxk-worker-us-east-1f 0 0 55m", "exclude-nics=", "C:\\> ipconfig", "PS C:\\> Get-Service -Name VMTools | Select Status, StartType", "PS C:\\> New-NetFirewallRule -DisplayName \"ContainerLogsPort\" -LocalPort 10250 -Enabled True -Direction Inbound -Protocol TCP -Action Allow -EdgeTraversalPolicy Allow", "C:\\> C:\\Windows\\System32\\Sysprep\\sysprep.exe /generalize /oobe /shutdown /unattend:<path_to_unattend.xml> 1", "<?xml version=\"1.0\" encoding=\"UTF-8\"?> <unattend xmlns=\"urn:schemas-microsoft-com:unattend\"> <settings pass=\"specialize\"> <component xmlns:wcm=\"http://schemas.microsoft.com/WMIConfig/2002/State\" xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\" name=\"Microsoft-Windows-International-Core\" processorArchitecture=\"amd64\" publicKeyToken=\"31bf3856ad364e35\" language=\"neutral\" versionScope=\"nonSxS\"> <InputLocale>0409:00000409</InputLocale> <SystemLocale>en-US</SystemLocale> <UILanguage>en-US</UILanguage> <UILanguageFallback>en-US</UILanguageFallback> <UserLocale>en-US</UserLocale> </component> <component xmlns:wcm=\"http://schemas.microsoft.com/WMIConfig/2002/State\" xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\" name=\"Microsoft-Windows-Security-SPP-UX\" processorArchitecture=\"amd64\" publicKeyToken=\"31bf3856ad364e35\" language=\"neutral\" versionScope=\"nonSxS\"> <SkipAutoActivation>true</SkipAutoActivation> </component> <component xmlns:wcm=\"http://schemas.microsoft.com/WMIConfig/2002/State\" xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\" name=\"Microsoft-Windows-SQMApi\" processorArchitecture=\"amd64\" publicKeyToken=\"31bf3856ad364e35\" language=\"neutral\" versionScope=\"nonSxS\"> <CEIPEnabled>0</CEIPEnabled> </component> <component xmlns:wcm=\"http://schemas.microsoft.com/WMIConfig/2002/State\" xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\" name=\"Microsoft-Windows-Shell-Setup\" processorArchitecture=\"amd64\" publicKeyToken=\"31bf3856ad364e35\" language=\"neutral\" versionScope=\"nonSxS\"> <ComputerName>winhost</ComputerName> 1 </component> </settings> <settings pass=\"oobeSystem\"> <component xmlns:wcm=\"http://schemas.microsoft.com/WMIConfig/2002/State\" xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\" name=\"Microsoft-Windows-Shell-Setup\" processorArchitecture=\"amd64\" publicKeyToken=\"31bf3856ad364e35\" language=\"neutral\" versionScope=\"nonSxS\"> <AutoLogon> <Enabled>false</Enabled> 2 
</AutoLogon> <OOBE> <HideEULAPage>true</HideEULAPage> <HideLocalAccountScreen>true</HideLocalAccountScreen> <HideOEMRegistrationScreen>true</HideOEMRegistrationScreen> <HideOnlineAccountScreens>true</HideOnlineAccountScreens> <HideWirelessSetupInOOBE>true</HideWirelessSetupInOOBE> <NetworkLocation>Work</NetworkLocation> <ProtectYourPC>1</ProtectYourPC> <SkipMachineOOBE>true</SkipMachineOOBE> <SkipUserOOBE>true</SkipUserOOBE> </OOBE> <RegisteredOrganization>Organization</RegisteredOrganization> <RegisteredOwner>Owner</RegisteredOwner> <DisableAutoDaylightTimeSet>false</DisableAutoDaylightTimeSet> <TimeZone>Eastern Standard Time</TimeZone> <UserAccounts> <AdministratorPassword> <Value>MyPassword</Value> 3 <PlainText>true</PlainText> </AdministratorPassword> </UserAccounts> </component> </settings> </unattend>", "apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 name: <windows_machine_set_name> 2 namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 3 machine.openshift.io/cluster-api-machineset: <windows_machine_set_name> 4 template: metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 5 machine.openshift.io/cluster-api-machine-role: worker machine.openshift.io/cluster-api-machine-type: worker machine.openshift.io/cluster-api-machineset: <windows_machine_set_name> 6 machine.openshift.io/os-id: Windows 7 spec: metadata: labels: node-role.kubernetes.io/worker: \"\" 8 providerSpec: value: apiVersion: vsphereprovider.openshift.io/v1beta1 credentialsSecret: name: vsphere-cloud-credentials diskGiB: 128 9 kind: VSphereMachineProviderSpec memoryMiB: 16384 network: devices: - networkName: \"<vm_network_name>\" 10 numCPUs: 4 numCoresPerSocket: 1 snapshot: \"\" template: <windows_vm_template_name> 11 userDataSecret: name: windows-user-data 12 workspace: datacenter: <vcenter_datacenter_name> 13 datastore: <vcenter_datastore_name> 14 folder: <vcenter_vm_folder_path> 15 resourcePool: <vsphere_resource_pool> 16 server: <vcenter_server_ip> 17", "oc get -o jsonpath='{.status.infrastructureName}{\"\\n\"}' infrastructure cluster", "oc get machinesets -n openshift-machine-api", "NAME DESIRED CURRENT READY AVAILABLE AGE agl030519-vplxk-worker-us-east-1a 1 1 1 1 55m agl030519-vplxk-worker-us-east-1b 1 1 1 1 55m agl030519-vplxk-worker-us-east-1c 1 1 1 1 55m agl030519-vplxk-worker-us-east-1d 0 0 55m agl030519-vplxk-worker-us-east-1e 0 0 55m agl030519-vplxk-worker-us-east-1f 0 0 55m", "oc get machineset <machineset_name> -n openshift-machine-api -o yaml", "apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 name: <infrastructure_id>-<role> 2 namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> template: metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machine-role: <role> machine.openshift.io/cluster-api-machine-type: <role> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> spec: providerSpec: 3", "oc create -f <file_name>.yaml", "oc get machineset -n openshift-machine-api", "NAME DESIRED CURRENT READY AVAILABLE AGE agl030519-vplxk-windows-worker-us-east-1a 1 1 1 1 11m 
agl030519-vplxk-worker-us-east-1a 1 1 1 1 55m agl030519-vplxk-worker-us-east-1b 1 1 1 1 55m agl030519-vplxk-worker-us-east-1c 1 1 1 1 55m agl030519-vplxk-worker-us-east-1d 0 0 55m agl030519-vplxk-worker-us-east-1e 0 0 55m agl030519-vplxk-worker-us-east-1f 0 0 55m", "apiVersion: node.k8s.io/v1beta1 kind: RuntimeClass metadata: name: <runtime_class_name> 1 handler: 'docker' scheduling: nodeSelector: 2 kubernetes.io/os: 'windows' kubernetes.io/arch: 'amd64' node.kubernetes.io/windows-build: '10.0.17763' tolerations: 3 - effect: NoSchedule key: os operator: Equal value: \"Windows\"", "oc create -f <file-name>.yaml", "oc create -f runtime-class.yaml", "apiVersion: v1 kind: Pod metadata: name: my-windows-pod spec: runtimeClassName: <runtime_class_name> 1", "apiVersion: v1 kind: Service metadata: name: win-webserver labels: app: win-webserver spec: ports: # the port that this service should serve on - port: 80 targetPort: 80 selector: app: win-webserver type: LoadBalancer", "apiVersion: apps/v1 kind: Deployment metadata: labels: app: win-webserver name: win-webserver spec: selector: matchLabels: app: win-webserver replicas: 1 template: metadata: labels: app: win-webserver name: win-webserver spec: tolerations: - key: \"os\" value: \"Windows\" Effect: \"NoSchedule\" containers: - name: windowswebserver image: mcr.microsoft.com/windows/servercore:ltsc2019 imagePullPolicy: IfNotPresent command: - powershell.exe - -command - USDlistener = New-Object System.Net.HttpListener; USDlistener.Prefixes.Add('http://*:80/'); USDlistener.Start();Write-Host('Listening at http://*:80/'); while (USDlistener.IsListening) { USDcontext = USDlistener.GetContext(); USDresponse = USDcontext.Response; USDcontent='<html><body><H1>Red Hat OpenShift + Windows Container Workloads</H1></body></html>'; USDbuffer = [System.Text.Encoding]::UTF8.GetBytes(USDcontent); USDresponse.ContentLength64 = USDbuffer.Length; USDresponse.OutputStream.Write(USDbuffer, 0, USDbuffer.Length); USDresponse.Close(); }; securityContext: runAsNonRoot: false windowsOptions: runAsUserName: \"ContainerAdministrator\" nodeSelector: kubernetes.io/os: windows", "oc get machinesets -n openshift-machine-api", "oc get machine -n openshift-machine-api", "oc annotate machine/<machine_name> -n openshift-machine-api machine.openshift.io/cluster-api-delete-machine=\"true\"", "oc adm cordon <node_name> oc adm drain <node_name>", "oc scale --replicas=2 machineset <machineset> -n openshift-machine-api", "oc edit machineset <machineset> -n openshift-machine-api", "apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: name: <machineset> namespace: openshift-machine-api spec: replicas: 2", "oc get machines", "kind: ConfigMap apiVersion: v1 metadata: name: windows-instances namespace: openshift-windows-machine-config-operator data: 10.1.42.1: |- 1 username=Administrator 2 instance.example.com: |- username=core", "kind: ConfigMap apiVersion: v1 metadata: name: windows-instances namespace: openshift-windows-machine-config-operator data: instance.example.com: |- username=core", "oc get machine -n openshift-machine-api", "oc delete machine <machine> -n openshift-machine-api", "oc delete --all pods --namespace=openshift-windows-machine-config-operator", "oc get pods --namespace openshift-windows-machine-config-operator", "oc delete namespace openshift-windows-machine-config-operator" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.9/html-single/windows_container_support_for_openshift/index
Appendix B. Topic configuration parameters
Appendix B. Topic configuration parameters cleanup.policy Type: list Default: delete Valid Values: [compact, delete] Server Default Property: log.cleanup.policy Importance: medium A string that is either "delete" or "compact" or both. This string designates the retention policy to use on old log segments. The default policy ("delete") will discard old segments when their retention time or size limit has been reached. The "compact" setting will enable log compaction on the topic. compression.type Type: string Default: producer Valid Values: [uncompressed, zstd, lz4, snappy, gzip, producer] Server Default Property: compression.type Importance: medium Specify the final compression type for a given topic. This configuration accepts the standard compression codecs ('gzip', 'snappy', 'lz4', 'zstd'). It additionally accepts 'uncompressed' which is equivalent to no compression; and 'producer' which means retain the original compression codec set by the producer. delete.retention.ms Type: long Default: 86400000 (1 day) Valid Values: [0,... ] Server Default Property: log.cleaner.delete.retention.ms Importance: medium The amount of time to retain delete tombstone markers for log compacted topics. This setting also gives a bound on the time in which a consumer must complete a read if they begin from offset 0 to ensure that they get a valid snapshot of the final stage (otherwise delete tombstones may be collected before they complete their scan). file.delete.delay.ms Type: long Default: 60000 (1 minute) Valid Values: [0,... ] Server Default Property: log.segment.delete.delay.ms Importance: medium The time to wait before deleting a file from the filesystem. flush.messages Type: long Default: 9223372036854775807 Valid Values: [0,... ] Server Default Property: log.flush.interval.messages Importance: medium This setting allows specifying an interval at which we will force an fsync of data written to the log. For example if this was set to 1 we would fsync after every message; if it were 5 we would fsync after every five messages. In general we recommend you not set this and use replication for durability and allow the operating system's background flush capabilities as it is more efficient. This setting can be overridden on a per-topic basis (see the per-topic configuration section ). flush.ms Type: long Default: 9223372036854775807 Valid Values: [0,... ] Server Default Property: log.flush.interval.ms Importance: medium This setting allows specifying a time interval at which we will force an fsync of data written to the log. For example if this was set to 1000 we would fsync after 1000 ms had passed. In general we recommend you not set this and use replication for durability and allow the operating system's background flush capabilities as it is more efficient. follower.replication.throttled.replicas Type: list Default: "" Valid Values: [partitionId]:[brokerId],[partitionId]:[brokerId],... Server Default Property: follower.replication.throttled.replicas Importance: medium A list of replicas for which log replication should be throttled on the follower side. The list should describe a set of replicas in the form [PartitionId]:[BrokerId],[PartitionId]:[BrokerId]:... or alternatively the wildcard '*' can be used to throttle all replicas for this topic. index.interval.bytes Type: int Default: 4096 (4 kibibytes) Valid Values: [0,... ] Server Default Property: log.index.interval.bytes Importance: medium This setting controls how frequently Kafka adds an index entry to its offset index. 
The default setting ensures that we index a message roughly every 4096 bytes. More indexing allows reads to jump closer to the exact position in the log but makes the index larger. You probably don't need to change this. leader.replication.throttled.replicas Type: list Default: "" Valid Values: [partitionId]:[brokerId],[partitionId]:[brokerId],... Server Default Property: leader.replication.throttled.replicas Importance: medium A list of replicas for which log replication should be throttled on the leader side. The list should describe a set of replicas in the form [PartitionId]:[BrokerId],[PartitionId]:[BrokerId]:... or alternatively the wildcard '*' can be used to throttle all replicas for this topic. max.compaction.lag.ms Type: long Default: 9223372036854775807 Valid Values: [1,... ] Server Default Property: log.cleaner.max.compaction.lag.ms Importance: medium The maximum time a message will remain ineligible for compaction in the log. Only applicable for logs that are being compacted. max.message.bytes Type: int Default: 1048588 Valid Values: [0,... ] Server Default Property: message.max.bytes Importance: medium The largest record batch size allowed by Kafka (after compression if compression is enabled). If this is increased and there are consumers older than 0.10.2, the consumers' fetch size must also be increased so that they can fetch record batches this large. In the latest message format version, records are always grouped into batches for efficiency. In previous message format versions, uncompressed records are not grouped into batches and this limit only applies to a single record in that case. message.format.version Type: string Default: 2.8-IV1 Valid Values: [0.8.0, 0.8.1, 0.8.2, 0.9.0, 0.10.0-IV0, 0.10.0-IV1, 0.10.1-IV0, 0.10.1-IV1, 0.10.1-IV2, 0.10.2-IV0, 0.11.0-IV0, 0.11.0-IV1, 0.11.0-IV2, 1.0-IV0, 1.1-IV0, 2.0-IV0, 2.0-IV1, 2.1-IV0, 2.1-IV1, 2.1-IV2, 2.2-IV0, 2.2-IV1, 2.3-IV0, 2.3-IV1, 2.4-IV0, 2.4-IV1, 2.5-IV0, 2.6-IV0, 2.7-IV0, 2.7-IV1, 2.7-IV2, 2.8-IV0, 2.8-IV1] Server Default Property: log.message.format.version Importance: medium Specify the message format version the broker will use to append messages to the logs. The value should be a valid ApiVersion. Some examples are: 0.8.2, 0.9.0.0, 0.10.0, check ApiVersion for more details. By setting a particular message format version, the user is certifying that all the existing messages on disk are smaller than or equal to the specified version. Setting this value incorrectly will cause consumers with older versions to break as they will receive messages with a format that they don't understand. message.timestamp.difference.max.ms Type: long Default: 9223372036854775807 Valid Values: [0,... ] Server Default Property: log.message.timestamp.difference.max.ms Importance: medium The maximum difference allowed between the timestamp when a broker receives a message and the timestamp specified in the message. If message.timestamp.type=CreateTime, a message will be rejected if the difference in timestamp exceeds this threshold. This configuration is ignored if message.timestamp.type=LogAppendTime. message.timestamp.type Type: string Default: CreateTime Valid Values: [CreateTime, LogAppendTime] Server Default Property: log.message.timestamp.type Importance: medium Define whether the timestamp in the message is message create time or log append time. The value should be either CreateTime or LogAppendTime . min.cleanable.dirty.ratio Type: double Default: 0.5 Valid Values: [0,...
,1] Server Default Property: log.cleaner.min.cleanable.ratio Importance: medium This configuration controls how frequently the log compactor will attempt to clean the log (assuming log compaction is enabled). By default we will avoid cleaning a log where more than 50% of the log has been compacted. This ratio bounds the maximum space wasted in the log by duplicates (at 50% at most 50% of the log could be duplicates). A higher ratio will mean fewer, more efficient cleanings but will mean more wasted space in the log. If the max.compaction.lag.ms or the min.compaction.lag.ms configurations are also specified, then the log compactor considers the log to be eligible for compaction as soon as either: (i) the dirty ratio threshold has been met and the log has had dirty (uncompacted) records for at least the min.compaction.lag.ms duration, or (ii) if the log has had dirty (uncompacted) records for at most the max.compaction.lag.ms period. min.compaction.lag.ms Type: long Default: 0 Valid Values: [0,... ] Server Default Property: log.cleaner.min.compaction.lag.ms Importance: medium The minimum time a message will remain uncompacted in the log. Only applicable for logs that are being compacted. min.insync.replicas Type: int Default: 1 Valid Values: [1,... ] Server Default Property: min.insync.replicas Importance: medium When a producer sets acks to "all" (or "-1"), this configuration specifies the minimum number of replicas that must acknowledge a write for the write to be considered successful. If this minimum cannot be met, then the producer will raise an exception (either NotEnoughReplicas or NotEnoughReplicasAfterAppend). When used together, min.insync.replicas and acks allow you to enforce greater durability guarantees. A typical scenario would be to create a topic with a replication factor of 3, set min.insync.replicas to 2, and produce with acks of "all". This will ensure that the producer raises an exception if a majority of replicas do not receive a write. preallocate Type: boolean Default: false Server Default Property: log.preallocate Importance: medium True if we should preallocate the file on disk when creating a new log segment. retention.bytes Type: long Default: -1 Server Default Property: log.retention.bytes Importance: medium This configuration controls the maximum size a partition (which consists of log segments) can grow to before we will discard old log segments to free up space if we are using the "delete" retention policy. By default there is no size limit only a time limit. Since this limit is enforced at the partition level, multiply it by the number of partitions to compute the topic retention in bytes. retention.ms Type: long Default: 604800000 (7 days) Valid Values: [-1,... ] Server Default Property: log.retention.ms Importance: medium This configuration controls the maximum time we will retain a log before we will discard old log segments to free up space if we are using the "delete" retention policy. This represents an SLA on how soon consumers must read their data. If set to -1, no time limit is applied. segment.bytes Type: int Default: 1073741824 (1 gibibyte) Valid Values: [14,... ] Server Default Property: log.segment.bytes Importance: medium This configuration controls the segment file size for the log. Retention and cleaning is always done a file at a time so a larger segment size means fewer files but less granular control over retention. segment.index.bytes Type: int Default: 10485760 (10 mebibytes) Valid Values: [0,... 
] Server Default Property: log.index.size.max.bytes Importance: medium This configuration controls the size of the index that maps offsets to file positions. We preallocate this index file and shrink it only after log rolls. You generally should not need to change this setting. segment.jitter.ms Type: long Default: 0 Valid Values: [0,... ] Server Default Property: log.roll.jitter.ms Importance: medium The maximum random jitter subtracted from the scheduled segment roll time to avoid thundering herds of segment rolling. segment.ms Type: long Default: 604800000 (7 days) Valid Values: [1,... ] Server Default Property: log.roll.ms Importance: medium This configuration controls the period of time after which Kafka will force the log to roll even if the segment file isn't full to ensure that retention can delete or compact old data. unclean.leader.election.enable Type: boolean Default: false Server Default Property: unclean.leader.election.enable Importance: medium Indicates whether to enable replicas not in the ISR set to be elected as leader as a last resort, even though doing so may result in data loss. message.downconversion.enable Type: boolean Default: true Server Default Property: log.message.downconversion.enable Importance: low This configuration controls whether down-conversion of message formats is enabled to satisfy consume requests. When set to false , the broker will not perform down-conversion for consumers expecting an older message format. The broker responds with an UNSUPPORTED_VERSION error for consume requests from such older clients. This configuration does not apply to any message format conversion that might be required for replication to followers.
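As an illustrative sketch only (not part of this reference; the broker address and topic name are placeholder assumptions), topic-level overrides such as min.insync.replicas and retention.ms can typically be changed at runtime with the kafka-configs.sh tool shipped with Kafka:
bin/kafka-configs.sh --bootstrap-server localhost:9092 --entity-type topics --entity-name my-topic --alter --add-config min.insync.replicas=2,retention.ms=86400000
A setting changed this way applies only to the named topic; the server default properties listed above continue to apply to topics without an override.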
null
https://docs.redhat.com/en/documentation/red_hat_amq/2021.q3/html/using_amq_streams_on_rhel/topic-configuration-parameters-str
Builds using Shipwright
Builds using Shipwright OpenShift Container Platform 4.17 An extensible build framework to build container images on an OpenShift cluster Red Hat OpenShift Documentation Team
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.17/html-single/builds_using_shipwright/index
3.5. Storage
3.5. Storage Storage for virtual machines is abstracted from the physical storage used by the virtual machine. It is attached to the virtual machine using the paravirtualized or emulated block device drivers. 3.5.1. Storage Pools A storage pool is a file, directory, or storage device managed by libvirt for the purpose of providing storage to virtual machines. Storage pools are divided into storage volumes that store virtual machine images or are attached to virtual machines as additional storage. Multiple guests can share the same storage pool, allowing for better allocation of storage resources. Refer to the Red Hat Enterprise Linux 6 Virtualization Administration Guide for more information. Local storage pools Local storage pools are attached directly to the host server. They include local directories, directly attached disks, physical partitions, and Logical Volume Management (LVM) volume groups on local devices. Local storage pools are useful for development, testing and small deployments that do not require migration or large numbers of virtual machines. Local storage pools may not be suitable for many production environments, because they do not support live migration. Networked (shared) storage pools Networked storage pools include storage devices shared over a network using standard protocols. Networked storage is required when migrating virtual machines between hosts with virt-manager , but is optional when migrating with virsh . Networked storage pools are managed by libvirt .
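As a hedged sketch of these concepts (the pool name and target path are illustrative assumptions, not taken from this guide), a directory-based local storage pool can be created and started with the virsh utility:
virsh pool-define-as guest_images dir --target /var/lib/libvirt/images
virsh pool-build guest_images
virsh pool-start guest_images
virsh pool-autostart guest_images
Storage volumes carved from such a pool can then be attached to guests as additional storage.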
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/virtualization_getting_started_guide/sec-storage
6.12. Desktop
6.12. Desktop python component, BZ# 1114434 In a multi-threaded Python program, if a non-main thread receives a signal while the signal.pause() function is in use in the main thread, signal.pause() does not return or otherwise handle the received signal; signal.pause() works only when the main thread is signaled. As a consequence, a Python program could become unresponsive. To work around this problem, avoid calling signal.pause() in the main thread. mesa-private-llvm component, BZ# 1121576 The mesa-private-llvm packages have a syntax error in their %postun script in versions prior to 3.4. As a consequence, when updating mesa-private-llvm to a later version, the following error message is displayed: This message is harmless and does not affect the user. xorg-x11-drv-fbdev component, BZ# 1011657 In Red Hat Enterprise Linux 6.5, when the X server was presented with an xorg.conf file that identified both a PCI device and an fbdev device, the fbdev driver was ignored and only the PCI device was initialized. In Red Hat Enterprise Linux 6.6, the server can attempt to initialize both devices present in such a configuration file. As a consequence, installations in this scenario initialize on all screens, which can cause a loss of functionality. To work around this problem, edit xorg.conf manually: remove the fbdev device stanza or edit it appropriately. As a result, a single X server can now drive both PCI devices using native drivers and non-PCI devices with the fbdev driver. gnome-panel component, BZ# 1017631 The gnome-panel utility can sometimes terminate unexpectedly on the 64-bit PowerPC architecture when using the XDMCP protocol. xorg-x11-drv-intel component, BZ# 889574 The Red Hat Enterprise Linux 6 graphics stack does not support NVIDIA Optimus hardware configurations. On laptops with both Intel and NVIDIA GPUs, some or all external video ports may not function correctly when using the Intel GPU. If external video ports are needed, configure the BIOS to use the NVIDIA GPU instead of the Intel GPU if possible. xorg-x11-drv-synaptics component, BZ# 873721 Two-finger scrolling is the default for devices that announce two-finger capability. However, on certain machines, although the touchpad announces two-finger capability, events generated by the device only contain a single finger position at a time, and two-finger scrolling therefore does not work. To work around this problem, use edge scrolling instead. firefox component In certain environments, storing personal Firefox configuration files (~/.mozilla/) on an NFS share, such as when your home directory is on an NFS share, led to Firefox functioning incorrectly, for example, navigation buttons not working as expected, and bookmarks not saving. This update adds a new configuration option, storage.nfs_filesystem, that can be used to resolve this issue. If you experience this issue: Start Firefox . Type about:config into the URL bar and press the Enter key. If prompted with "This might void your warranty!", click the I'll be careful, I promise! button. Right-click in the Preference Name list. In the menu that opens, select New Boolean . Type "storage.nfs_filesystem" (without quotes) for the preference name and then click the OK button. Select true for the boolean value and then press the OK button. wacomcpl component, BZ# 769466 The wacomcpl package has been deprecated and has been removed from the package set. The wacomcpl package provided graphical configuration of Wacom tablet settings. This functionality is now integrated into the GNOME Control Center.
acroread component Running an AMD64 system that uses SSSD for getting information about users without the sssd-client.i686 package installed causes acroread to fail to start. To work around this issue, manually install the sssd-client.i686 package. kernel component, BZ# 681257 With newer kernels, such as the kernel shipped in Red Hat Enterprise Linux 6.1, Nouveau has corrected the Transition Minimized Differential Signaling (TMDS) bandwidth limits for pre-G80 NVIDIA chipsets. Consequently, the resolution auto-detected by X for some monitors may differ from that used in Red Hat Enterprise Linux 6.0. fprintd component When enabled, fingerprint authentication is the default authentication method to unlock a workstation, even if the fingerprint reader device is not accessible. However, after a 30-second wait, password authentication will become available. evolution component Evolution's IMAP backend only refreshes folder contents under the following circumstances: when the user switches into or out of a folder, when the auto-refresh period expires, or when the user manually refreshes a folder (that is, using the menu item Folder Refresh ). Consequently, when replying to a message in the Sent folder, the new message does not immediately appear in the Sent folder. To see the message, force a refresh using one of the methods described above. anaconda component The clock applet in the GNOME panel has a default location of Boston, USA. Additional locations are added via the applet's preferences dialog. Additionally, to change the default location, left-click the applet, hover over the desired location in the Locations section, and click the Set... button that appears. xorg-x11-server component, BZ# 623169 In some multi-monitor configurations (for example, dual monitors with both rotated), the cursor confinement code produces incorrect results. For example, the cursor may be permitted to disappear off the screen when it should not, or be prevented from entering some areas where it should be allowed to go. Currently, the only workaround for this issue is to disable monitor rotation.
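If the graphical about:config steps are impractical, a minimal sketch of an alternative (the profile directory name is an assumption and differs per user) is to append the preference to the Firefox profile's user.js file, which Firefox reads at startup:
echo 'user_pref("storage.nfs_filesystem", true);' >> ~/.mozilla/firefox/<profile>/user.js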
[ "Upgrading from mesa-private-llvm-3.3-0.3.rc3.el6.x86_64 causes: /sbin/ldconfig: relative path `1' used to build cache warning: %postun(mesa-private-llvm-3.3-0.3.rc3.el6.x86_64) scriptlet failed, exit status 1" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.6_technical_notes/desktop_issues
Chapter 1. Limiting access to cost management resources
Chapter 1. Limiting access to cost management resources You may not want users to have access to all cost data, but instead only data specific to their projects or organization. Using role-based access control, you can limit the visibility of resources involved in cost management reports. For example, you may want to restrict a user's view to only AWS integrations, rather than the entire environment. Role-based access control works by organizing users into groups, which can be associated with one or more roles. A role defines a permission and a set of resource definitions. By default, a user who is not an administrator or viewer will not have access to data, but instead must be granted access to resources. Account administrators can view all data without any further role-based access control configuration. Note A Red Hat account user with Organization Administrator entitlements is required to configure account users on Red Hat Hybrid Cloud Console . This Red Hat login allows you to look up users, add them to groups, and assign roles that control visibility to resources. For more information about Red Hat account roles, see User Access Configuration Guide For Role-Based Access Control (RBAC) in the Red Hat Hybrid Cloud Console documentation. 1.1. Default user roles in cost management You can configure custom user access roles for cost management, or assign each user a predefined role within the Red Hat Hybrid Cloud Console . To use a default role, determine the required level of access to permit your users based on the following predefined cost management related roles: Administrator roles Organization Administrator : Can configure and manage user access and is the only user with access to cost management settings . User Access Administrator : Can configure and manage user access to services hosted on Red Hat Hybrid Cloud Console . Cloud Administrator : Can perform any available operation on any integration. Cost Administrator : Can read and write to all resources in cost management. Cost Price List Administrator : Can read and write on all cost models. Viewer roles Cost Cloud Viewer : Has read permissions on cost reports related to cloud integrations. Cost OpenShift Viewer : Has read permissions on cost reports related to OpenShift integrations. Cost Price List Viewer : Has read permissions on price list rates. In addition to using these predefined roles, you can create and manage custom User Access roles with granular permissions for one or more applications in Red Hat Hybrid Cloud Console . For more information, see Adding custom User Access roles in the Red Hat Hybrid Cloud Console documentation. 1.2. Adding a role to a group Once you have decided on the correct roles for your organization, you must add each role to a group to manage and limit the scope of information that members in that group can see within cost management. The Member tab shows all users that you can add to the group. When you add users to a group, they become members of that group. A group member inherits the roles of all other groups they belong to. Prerequisites You must be an Organization Administrator. If you are not an Organization Administrator, you must be a member of a group that has the User Access Administrator role assigned to it. Note Only the Organization Administrator can assign the User Access Administrator role to a group. Procedure Log in to your Red Hat organization account at Red Hat Hybrid Cloud Console .
Click Settings > Identity & Access Management to open the Red Hat Hybrid Cloud Console Settings page. In the global navigation, click User Access > Groups . Click Create group . Follow the guided actions provided by the wizard to add a group name, roles, and members. To grant additional group access, edit the group and add additional roles. Your new group is listed in the Groups list on the User Access screen. Verification To verify your configuration, log out of the cost management application and log back in as a user added to the group. For more information about configuring Red Hat account roles and groups, see User Access Configuration Guide For Role-Based Access Control (RBAC) in the Red Hat Hybrid Cloud Console documentation.
null
https://docs.redhat.com/en/documentation/cost_management_service/1-latest/html/limiting_access_to_cost_management_resources/assembly-limiting-access-cost-resources-rbac
Chapter 4. Initial Load Balancer Configuration with Keepalived
Chapter 4. Initial Load Balancer Configuration with Keepalived After installing Load Balancer packages, you must take some basic steps to set up the LVS router and the real servers for use with Keepalived. This chapter covers these initial steps in detail. 4.1. A Basic Keepalived configuration In this basic example, two systems are configured as load balancers. LB1 (Active) and LB2 (Backup) will be routing requests for a pool of four Web servers running httpd with real IP addresses numbered 192.168.1.20 to 192.168.1.23, sharing a virtual IP address of 10.0.0.1. Each load balancer has two interfaces ( eth0 and eth1 ), one for handling external Internet traffic, and the other for routing requests to the real servers. The load balancing algorithm used is Round Robin and the routing method will be Network Address Translation. 4.1.1. Creating the keepalived.conf file Keepalived is configured by means of the keepalived.conf file in each system configured as a load balancer. To create a load balancer topology like the example shown in Section 4.1, "A Basic Keepalived configuration" , use a text editor to open keepalived.conf in both the active and backup load balancers, LB1 and LB2. For example: A basic load balanced system with the configuration as detailed in Section 4.1, "A Basic Keepalived configuration" has a keepalived.conf file as explained in the following code sections. In this example, the keepalived.conf file is the same on both the active and backup routers with the exception of the VRRP instance, as noted in Section 4.1.1.2, "VRRP Instance" . 4.1.1.1. Global Definitions The Global Definitions section of the keepalived.conf file allows administrators to specify notification details when changes to the load balancer occur. Note that the Global Definitions are optional and are not required for Keepalived configuration. This section of the keepalived.conf file is the same on both LB1 and LB2. The notification_email value is the email address of the load balancer administrator, while notification_email_from is the address from which load balancer state change notifications are sent. The SMTP-specific configuration specifies the mail server from which the notifications are mailed. 4.1.1.2. VRRP Instance The following examples show the vrrp_sync_group stanza of the keepalived.conf file in the master router and the backup router. Note that the state and priority values differ between the two systems. The following example shows the vrrp_sync_group stanza for the keepalived.conf file in LB1, the master router. The following example shows the vrrp_sync_group stanza of the keepalived.conf file for LB2, the backup router. In these examples, the vrrp_sync_group stanza defines the VRRP group that stays together through any state changes (such as failover). There is an instance defined for the external interface that communicates with the Internet (RH_EXT), as well as one for the internal interface (RH_INT). The vrrp_instance line details the virtual interface configuration for the VRRP service daemon, which creates virtual IP instances. The state MASTER designates the active server; the state BACKUP designates the backup server. The interface parameter assigns the physical interface name to this particular virtual IP instance. virtual_router_id is a numerical identifier for the Virtual Router instance. It must be the same on all LVS Router systems participating in this Virtual Router. It is used to differentiate multiple instances of keepalived running on the same network interface.
The priority specifies the order in which the assigned interface takes over in a failover; the higher the number, the higher the priority. This priority value must be within the range of 0 to 255, and the Load Balancing server configured as state MASTER should have a priority value set to a higher number than the priority value of the server configured as state BACKUP . The authentication block specifies the authentication type ( auth_type ) and password ( auth_pass ) used to authenticate servers for failover synchronization. PASS specifies password authentication; Keepalived also supports AH , or Authentication Headers, for connection integrity. Finally, the virtual_ipaddress option specifies the interface virtual IP address. 4.1.1.3. Virtual Server Definitions The Virtual Server definitions section of the keepalived.conf file is the same on both LB1 and LB2. In this block, the virtual_server is configured first with the IP address. Then a delay_loop configures the amount of time (in seconds) between health checks. The lb_algo option specifies the kind of algorithm used for availability (in this case, rr for Round-Robin; for a list of possible lb_algo values see Table 4.1, "lv_algo Values for Virtual Server" ). The lb_kind option determines the routing method; in this case, Network Address Translation ( nat ) is used. After configuring the Virtual Server details, the real_server options are configured, again by specifying the IP address first. The TCP_CHECK stanza checks for availability of the real server using TCP. The connect_timeout configures the time in seconds before a timeout occurs. Note Accessing the virtual IP from the load balancers or one of the real servers is not supported. Likewise, configuring a load balancer on the same machines as a real server is not supported. Table 4.1. lv_algo Values for Virtual Server Algorithm Name lv_algo value Round-Robin rr Weighted Round-Robin wrr Least-Connection lc Weighted Least-Connection wlc Locality-Based Least-Connection lblc Locality-Based Least-Connection Scheduling with Replication lblcr Destination Hash dh Source Hash sh Shortest Expected Delay sed Never Queue nq
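After keepalived.conf is in place on both routers, an illustrative way (not prescribed by this chapter) to bring the configuration up and confirm that the virtual server table was built is:
systemctl start keepalived
systemctl enable keepalived
ipvsadm -L -n
On a healthy setup, the ipvsadm output should list the virtual IP address 10.0.0.1:80 with the four real servers beneath it.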
[ "vi /etc/keepalived/keepalived.conf", "global_defs { notification_email { [email protected] } notification_email_from [email protected] smtp_server 127.0.0.1 smtp_connect_timeout 60 }", "vrrp_sync_group VG1 { group { RH_EXT RH_INT } } vrrp_instance RH_EXT { state MASTER interface eth0 virtual_router_id 50 priority 100 advert_int 1 authentication { auth_type PASS auth_pass passw123 } virtual_ipaddress { 10.0.0.1 } } vrrp_instance RH_INT { state MASTER interface eth1 virtual_router_id 2 priority 100 advert_int 1 authentication { auth_type PASS auth_pass passw123 } virtual_ipaddress { 192.168.1.1 } }", "vrrp_sync_group VG1 { group { RH_EXT RH_INT } } vrrp_instance RH_EXT { state BACKUP interface eth0 virtual_router_id 50 priority 99 advert_int 1 authentication { auth_type PASS auth_pass passw123 } virtual_ipaddress { 10.0.0.1 } } vrrp_instance RH_INT { state BACKUP interface eth1 virtual_router_id 2 priority 99 advert_int 1 authentication { auth_type PASS auth_pass passw123 } virtual_ipaddress { 192.168.1.1 } }", "virtual_server 10.0.0.1 80 { delay_loop 6 lb_algo rr lb_kind NAT protocol TCP real_server 192.168.1.20 80 { TCP_CHECK { connect_timeout 10 } } real_server 192.168.1.21 80 { TCP_CHECK { connect_timeout 10 } } real_server 192.168.1.22 80 { TCP_CHECK { connect_timeout 10 } } real_server 192.168.1.23 80 { TCP_CHECK { connect_timeout 10 } } }" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/load_balancer_administration/ch-initial-setup-vsa
Chapter 11. Editing applications
Chapter 11. Editing applications You can edit the configuration and the source code of the application you create using the Topology view. 11.1. Prerequisites You have created and deployed an application on OpenShift Dedicated using the Developer perspective . You have logged in to the web console and have switched to the Developer perspective. 11.2. Editing the source code of an application using the Developer perspective You can use the Topology view in the Developer perspective to edit the source code of your application. Procedure In the Topology view, click the Edit Source code icon, displayed at the bottom-right of the deployed application, to access your source code and modify it. Note This feature is available only when you create applications using the From Git , From Catalog , and the From Dockerfile options. 11.3. Editing the application configuration using the Developer perspective You can use the Topology view in the Developer perspective to edit the configuration of your application. Note Currently, only configurations of applications created by using the From Git , Container Image , From Catalog , or From Dockerfile options in the Add workflow of the Developer perspective can be edited. Configurations of applications created by using the CLI or the YAML option from the Add workflow cannot be edited. Prerequisites Ensure that you have created an application using the From Git , Container Image , From Catalog , or From Dockerfile options in the Add workflow. Procedure After you have created an application and it is displayed in the Topology view, right-click the application to see the edit options available. Figure 11.1. Edit application Click Edit application-name to see the Add workflow you used to create the application. The form is pre-populated with the values you had added while creating the application. Edit the necessary values for the application. Note You cannot edit the Name field in the General section, the CI/CD pipelines, or the Create a route to the application field in the Advanced Options section. Click Save to restart the build and deploy a new image. Figure 11.2. Edit and redeploy application
null
https://docs.redhat.com/en/documentation/openshift_dedicated/4/html/building_applications/odc-editing-applications
Chapter 14. Starting and stopping a Directory Server instance
Chapter 14. Starting and stopping a Directory Server instance You can start, stop, and restart a Directory Server instance by using the command line or the web console. 14.1. Starting and stopping a Directory Server instance by using the command line Use the dsctl utility to start, stop, or restart a Directory Server instance. Important The dsctl utility is the only correct way to stop a Directory Server instance. Do not use the kill command to terminate the ns-slapd process, because doing so can cause data loss and corruption. Procedure To start the instance, run: To stop the instance, run: To restart the instance, run: Optionally, you can enable Directory Server instances to automatically start when the system boots: For a single instance, run: For all instances on a server, run: Verification You can check the instance status by using the dsctl or systemctl utility: To view the instance status by using the dsctl utility, run: To view the instance status by using the systemctl utility, run: Additional resources Managing system services with systemctl 14.2. Starting and stopping a Directory Server instance by using the web console You can use the web console to start, stop, or restart a Directory Server instance. Prerequisites You are logged in to the web console. For more details, see Logging in to the Directory Server by using the web console . Procedure Select the Directory Server instance. Click the Actions button and select the action to execute: Start Instance Stop Instance Restart Instance Verification Ensure that the Directory Server instance is running. When the instance is not running, the web console displays the following message:
[ "dsctl instance_name start", "dsctl instance_name stop", "dsctl instance_name restart", "systemctl enable dirsrv@ instance_name", "systemctl enable dirsrv.target", "dsctl instance_name status", "systemctl status dirsrv@ instance_name", "This server instance is not running, either start it from the Actions dropdown menu, or choose a different instance." ]
https://docs.redhat.com/en/documentation/red_hat_directory_server/12/html/installing_red_hat_directory_server/assembly_starting-and-stopping-instance_installing-rhds
Providing feedback on Red Hat documentation
Providing feedback on Red Hat documentation We appreciate your input on our documentation. Please let us know how we could make it better. To submit your feedback, create a Bugzilla ticket: Go to the Bugzilla website. As the Component, use Documentation . Fill in the Description field with your suggestion for improvement. Include a link to the relevant part(s) of the documentation. Click Submit Bug .
null
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/upgrading_from_rhel_6_to_rhel_7/proc_providing-feedback-on-red-hat-documentation_upgrading-from-rhel-6-to-rhel-7
Making open source more inclusive
Making open source more inclusive Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright's message .
null
https://docs.redhat.com/en/documentation/red_hat_single_sign-on/7.4/html/authorization_services_guide/making-open-source-more-inclusive
Chapter 5. Profiles
Chapter 5. Profiles There are features in Red Hat Single Sign-On that are not enabled by default; these include features that are not fully supported. In addition, there are some features that are enabled by default, but that can be disabled. The features that can be enabled and disabled are: Name Description Enabled by default Support level account2 New Account Management Console No Preview account_api Account Management REST API No Preview admin_fine_grained_authz Fine-Grained Admin Permissions No Preview docker Docker Registry protocol No Supported impersonation Ability for admins to impersonate users Yes Supported openshift_integration Extension to enable securing OpenShift No Preview scripts Write custom authenticators using JavaScript No Preview token_exchange Token Exchange Service No Preview upload_scripts Upload scripts through the Red Hat Single Sign-On REST API No Deprecated web_authn W3C Web Authentication (WebAuthn) No Preview To enable all preview features, start the server with: You can set this permanently by creating the file standalone/configuration/profile.properties (or domain/servers/server-one/configuration/profile.properties for server-one in domain mode). Add the following to the file: To enable a specific feature, start the server with: For example, to enable Docker, use -Dkeycloak.profile.feature.docker=enabled . You can set this permanently in the profile.properties file by adding: To disable a specific feature, start the server with: For example, to disable Impersonation, use -Dkeycloak.profile.feature.impersonation=disabled . You can set this permanently in the profile.properties file by adding:
[ "bin/standalone.sh|bat -Dkeycloak.profile=preview", "profile=preview", "bin/standalone.sh|bat -Dkeycloak.profile.feature.<feature name>=enabled", "feature.docker=enabled", "bin/standalone.sh|bat -Dkeycloak.profile.feature.<feature name>=disabled", "feature.impersonation=disabled" ]
https://docs.redhat.com/en/documentation/red_hat_single_sign-on/7.4/html/server_installation_and_configuration_guide/profiles
Server Guide
Server Guide Red Hat build of Keycloak 22.0 Red Hat Customer Content Services
null
https://docs.redhat.com/en/documentation/red_hat_build_of_keycloak/22.0/html/server_guide/index
Chapter 5. Preparing network-based repositories
Chapter 5. Preparing network-based repositories You must prepare installation repositories to install RHEL over the network. 5.1. Ports for network-based installation The following table lists the ports that must be open on the server for providing the files for each type of network-based installation. Table 5.1. Ports for network-based installation Protocol used Ports to open HTTP 80 HTTPS 443 FTP 21 NFS 2049, 111, 20048 TFTP 69 Additional resources Securing networks 5.2. Creating an installation source on an NFS server You can use this installation method to install multiple systems from a single source, without having to connect to physical media. Prerequisites You have administrator-level access to a server with Red Hat Enterprise Linux 8, and this server is on the same network as the system to be installed. You have downloaded the full installation DVD ISO from the Product Downloads page. You have created a bootable CD, DVD, or USB device from the image file. You have verified that your firewall allows the system you are installing to access the remote installation source. For more information, see Ports for network-based installation . Important Ensure that you use different paths in inst.ks and inst.repo . When using NFS to host the installation source, you cannot use the same NFS share to host the Kickstart. Procedure Install the nfs-utils package: Copy the DVD ISO image to a directory on the NFS server. Open the /etc/exports file using a text editor and add a line with the following syntax: Replace /exported_directory/ with the full path to the directory with the ISO image. Replace clients with one of the following: The host name or IP address of the target system The subnetwork that all target systems can use to access the ISO image To allow any system with network access to the NFS server to use the ISO image, the asterisk sign ( * ) See the exports(5) man page for detailed information about the format of this field. For example, a basic configuration that makes the /rhel8-install/ directory available as read-only to all clients is: Save the /etc/exports file and exit the text editor. Start the nfs service: If the service was running before you changed the /etc/exports file, reload the NFS server configuration: The ISO image is now accessible over NFS and ready to be used as an installation source. When configuring the installation source, use nfs: as the protocol, the server host name or IP address, the colon sign (:) , and the directory holding the ISO image. For example, if the server host name is myserver.example.com and you have saved the ISO image in /rhel8-install/ , specify nfs:myserver.example.com:/rhel8-install/ as the installation source. 5.3. Creating an installation source using HTTP or HTTPS You can create an installation source for a network-based installation using an installation tree, which is a directory containing extracted contents of the DVD ISO image and a valid .treeinfo file. The installation source is accessed over HTTP or HTTPS. Prerequisites You have administrator-level access to a server with Red Hat Enterprise Linux 8, and this server is on the same network as the system to be installed. You have downloaded the full installation DVD ISO from the Product Downloads page. You have created a bootable CD, DVD, or USB device from the image file. You have verified that your firewall allows the system you are installing to access the remote installation source. For more information, see Ports for network-based installation . The httpd package is installed. 
The mod_ssl package is installed, if you use the https installation source. Warning If your Apache web server configuration enables SSL security, prefer to enable the TLSv1.3 protocol. By default, TLSv1.2 is enabled and you may use the TLSv1 (LEGACY) protocol. Important If you use an HTTPS server with a self-signed certificate, you must boot the installation program with the noverifyssl option. Procedure Copy the DVD ISO image to the HTTP(S) server. Create a suitable directory for mounting the DVD ISO image, for example: Mount the DVD ISO image to the directory: Replace /image_directory/image.iso with the path to the DVD ISO image. Copy the files from the mounted image to the HTTP(S) server root. This command creates the /var/www/html/rhel8-install/ directory with the content of the image. Note that some other copying methods might skip the .treeinfo file, which is required for a valid installation source. Entering the cp command for entire directories as shown in this procedure copies .treeinfo correctly. Start the httpd service: The installation tree is now accessible and ready to be used as the installation source. Note When configuring the installation source, use http:// or https:// as the protocol, the server host name or IP address, and the directory that contains the files from the ISO image, relative to the HTTP server root. For example, if you use HTTP, the server host name is myserver.example.com , and you have copied the files from the image to /var/www/html/rhel8-install/ , specify http://myserver.example.com/rhel8-install/ as the installation source. Additional resources Deploying different types of servers 5.4. Creating an installation source using FTP You can create an installation source for a network-based installation using an installation tree, which is a directory containing extracted contents of the DVD ISO image and a valid .treeinfo file. The installation source is accessed over FTP. Prerequisites You have administrator-level access to a server with Red Hat Enterprise Linux 8, and this server is on the same network as the system to be installed. You have downloaded the full installation DVD ISO from the Product Downloads page. You have created a bootable CD, DVD, or USB device from the image file. You have verified that your firewall allows the system you are installing to access the remote installation source. For more information, see Ports for network-based installation . The vsftpd package is installed. Procedure Open and edit the /etc/vsftpd/vsftpd.conf configuration file in a text editor. Change the line anonymous_enable=NO to anonymous_enable=YES . Change the line write_enable=YES to write_enable=NO . Add lines pasv_min_port=< min_port > and pasv_max_port=< max_port > . Replace < min_port > and < max_port > with the port number range used by the FTP server in passive mode, for example, 10021 and 10031 . This step might be necessary in network environments featuring various firewall/NAT setups. Optional: Add custom changes to your configuration. For available options, see the vsftpd.conf(5) man page. This procedure assumes that default options are used. Warning If you configured SSL/TLS security in your vsftpd.conf file, ensure that you enable only the TLSv1 protocol, and disable SSLv2 and SSLv3. This is due to the POODLE SSL vulnerability (CVE-2014-3566). For more information, see the Red Hat Knowledgebase solution Resolution for POODLE SSLv3.0 vulnerability . Configure the server firewall. 
Enable the firewall: Start the firewall: Configure the firewall to allow the FTP port and the port range from the previous step: Replace < min_port > and < max_port > with the port numbers you entered into the /etc/vsftpd/vsftpd.conf configuration file. Reload the firewall to apply the new rules: Copy the DVD ISO image to the FTP server. Create a suitable directory for mounting the DVD ISO image, for example: Mount the DVD ISO image to the directory: Replace /image-directory/image.iso with the path to the DVD ISO image. Copy the files from the mounted image to the FTP server root: This command creates the /var/ftp/rhel8-install/ directory with the content of the image. Some copying methods can skip the .treeinfo file, which is required for a valid installation source. Entering the cp command for whole directories as shown in this procedure will copy .treeinfo correctly. Make sure that the correct SELinux context and access mode are set on the copied content: Start the vsftpd service: If the service was running before you changed the /etc/vsftpd/vsftpd.conf file, restart the service to load the edited file: Enable the vsftpd service to start during the boot process: The installation tree is now accessible and ready to be used as the installation source. When configuring the installation source, use ftp:// as the protocol, the server host name or IP address, and the directory in which you have stored the files from the ISO image, relative to the FTP server root. For example, if the server host name is myserver.example.com and you have copied the files from the image to /var/ftp/rhel8-install/ , specify ftp://myserver.example.com/rhel8-install/ as the installation source.
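Whichever protocol you use, it is worth verifying the source from another machine before starting an installation. As a sketch that reuses the example host name and paths above, an NFS export can be checked with showmount and an HTTP tree by fetching its .treeinfo file:
showmount -e myserver.example.com
curl http://myserver.example.com/rhel8-install/.treeinfo
If the export is listed, or the .treeinfo contents are returned, the installation source is reachable.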
[ "yum install nfs-utils", "/ exported_directory / clients", "/rhel8-install *", "systemctl start nfs-server.service", "systemctl reload nfs-server.service", "mkdir /mnt/rhel8-install/", "mount -o loop,ro -t iso9660 /image_directory/image.iso /mnt/rhel8-install/", "cp -r /mnt/rhel8-install/ /var/www/html/", "systemctl start httpd.service", "systemctl enable firewalld", "systemctl start firewalld", "firewall-cmd --add-port min_port - max_port /tcp --permanent firewall-cmd --add-service ftp --permanent", "firewall-cmd --reload", "mkdir /mnt/rhel8-install", "mount -o loop,ro -t iso9660 /image-directory/image.iso /mnt/rhel8-install", "mkdir /var/ftp/rhel8-install cp -r /mnt/rhel8-install/ /var/ftp/", "restorecon -r /var/ftp/rhel8-install find /var/ftp/rhel8-install -type f -exec chmod 444 {} \\; find /var/ftp/rhel8-install -type d -exec chmod 755 {} \\;", "systemctl start vsftpd.service", "systemctl restart vsftpd.service", "systemctl enable vsftpd" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/interactively_installing_rhel_over_the_network/preparing-network-based-repositories_rhel-installer
Chapter 8. Installation and Booting
Chapter 8. Installation and Booting The NO_DHCP_HOSTNAME option has been added The NO_DHCP_HOSTNAME option can now be specified in the /etc/sysconfig/network configuration file. Previously, in certain situations it was not possible to prevent initialization scripts from obtaining the host name through DHCP, even when using a static configuration. With this update, if the NO_DHCP_HOSTNAME option is set to yes , true , or 1 in the /etc/sysconfig/network file, initialization scripts are prevented from obtaining the host name through DHCP. (BZ# 1157856 )
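As a minimal sketch of the resulting configuration (the static host name is a placeholder), the /etc/sysconfig/network file might contain:
NETWORKING=yes
HOSTNAME=server1.example.com
NO_DHCP_HOSTNAME=yes
With these settings, the initialization scripts keep the statically configured host name even when an interface obtains its address through DHCP.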
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.9_release_notes/new_features_installation_and_booting
Chapter 12. Downloading the Red Hat Process Automation Manager installation files
Chapter 12. Downloading the Red Hat Process Automation Manager installation files You can use the installer JAR file or deployable ZIP files to install Red Hat Process Automation Manager. You can run the installer in interactive or command line interface (CLI) mode. Alternatively, you can extract and configure the Business Central and KIE Server deployable ZIP files. If you want to run Business Central without deploying it to an application server, download the Business Central Standalone JAR file. Download a Red Hat Process Automation Manager distribution that meets your environment and installation requirements. Procedure Navigate to the Software Downloads page in the Red Hat Customer Portal (login required), and select the product and version from the drop-down options: Product: Process Automation Manager Version: 7.13.5 Download one of the following product distributions, depending on your preferred installation method: Note You only need to download one of these distributions. If you want to use the installer to install Red Hat Process Automation Manager on Red Hat JBoss EAP 7.4, download Red Hat Process Automation Manager 7.13.5 Installer ( rhpam-installer-7.13.5.jar ). The installer graphical user interface guides you through the installation process. If you want to install Red Hat Process Automation Manager on Red Hat JBoss EAP 7.4 using the deployable ZIP files, download the following files: Red Hat Process Automation Manager 7.13.5 KIE Server for All Supported EE8 Containers ( rhpam-7.13.5-kie-server-ee8.zip ) Red Hat Process Automation Manager 7.13.5 Business Central Deployable for EAP 7 ( rhpam-7.13.5-business-central-eap7-deployable.zip ) Red Hat Process Automation Manager 7.13.5 Add Ons ( rhpam-7.13.5-add-ons.zip ) To run Business Central without needing to deploy it to an application server, download Red Hat Process Automation Manager 7.13.5 Business Central Standalone ( rhpam-7.13.5-business-central-standalone.jar ).
null
https://docs.redhat.com/en/documentation/red_hat_process_automation_manager/7.13/html/installing_and_configuring_red_hat_process_automation_manager/install-download-proc_install-on-eap
D.16. System Catalog View
D.16. System Catalog View To open Teiid Designer's System Catalog View , click the main menu's Window > Show View > Other... and then click the Teiid Designer > System Catalog view in the dialog. Figure D.27. System Catalog View
null
https://docs.redhat.com/en/documentation/red_hat_jboss_data_virtualization/6.4/html/user_guide_volume_1_teiid_designer/system_catalog_view
Data Grid Performance and Sizing Guide
Data Grid Performance and Sizing Guide Red Hat Data Grid 8.5 Plan and size Data Grid deployments Red Hat Customer Content Services
null
https://docs.redhat.com/en/documentation/red_hat_data_grid/8.5/html/data_grid_performance_and_sizing_guide/index
Chapter 151. KafkaMirrorMaker2ClusterSpec schema reference
Chapter 151. KafkaMirrorMaker2ClusterSpec schema reference Used in: KafkaMirrorMaker2Spec Full list of KafkaMirrorMaker2ClusterSpec schema properties Configures Kafka clusters for mirroring. Use the config properties to configure Kafka options, restricted to those properties not managed directly by Streams for Apache Kafka. For client connections that use a specific cipher suite for a TLS version, you can configure the allowed ssl properties . You can also configure the ssl.endpoint.identification.algorithm property to enable or disable hostname verification. 151.1. KafkaMirrorMaker2ClusterSpec schema properties Property Property type Description alias string Alias used to reference the Kafka cluster. bootstrapServers string A comma-separated list of host:port pairs for establishing the connection to the Kafka cluster. tls ClientTls TLS configuration for connecting MirrorMaker 2 connectors to a cluster. authentication KafkaClientAuthenticationTls , KafkaClientAuthenticationScramSha256 , KafkaClientAuthenticationScramSha512 , KafkaClientAuthenticationPlain , KafkaClientAuthenticationOAuth Authentication configuration for connecting to the cluster. config map The MirrorMaker 2 cluster config. Properties with the following prefixes cannot be set: ssl., sasl., security., listeners, plugin.path, rest., bootstrap.servers, consumer.interceptor.classes, producer.interceptor.classes (with the exception of: ssl.endpoint.identification.algorithm, ssl.cipher.suites, ssl.protocol, ssl.enabled.protocols).
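A minimal sketch of how two such cluster entries might appear in a KafkaMirrorMaker2 resource (the aliases and bootstrap addresses are assumptions, and unrelated fields are omitted):
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaMirrorMaker2
metadata:
  name: my-mirror-maker-2
spec:
  clusters:
    - alias: my-source
      bootstrapServers: source-cluster-kafka-bootstrap:9092
      config:
        ssl.endpoint.identification.algorithm: HTTPS
    - alias: my-target
      bootstrapServers: target-cluster-kafka-bootstrap:9092
Each mirror defined elsewhere in the resource then refers to these clusters by their alias values.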
null
https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.9/html/streams_for_apache_kafka_api_reference/type-kafkamirrormaker2clusterspec-reference
1.7. Issues with Live Migration of VMs in a RHEL cluster
1.7. Issues with Live Migration of VMs in a RHEL cluster Information on support policies for RHEL high availability clusters with virtualized cluster members can be found in Support Policies for RHEL High Availability Clusters - General Conditions with Virtualized Cluster Members . As noted, Red Hat does not support live migration of active cluster nodes across hypervisors or hosts. If you need to perform a live migration, you will first need to stop the cluster services on the VM to remove the node from the cluster, and then start the cluster back up after performing the migration. The following steps outline the procedure for removing a VM from a cluster, migrating the VM, and restoring the VM to the cluster. Note Before performing this procedure, consider the effect on cluster quorum of removing a cluster node. For example, if you have a three-node cluster and you remove one node, your cluster can withstand only one more node failure. If one node of a three-node cluster is already down, removing a second node will cause the cluster to lose quorum. If any preparations need to be made before stopping or moving the resources or software running on the VM to migrate, perform those steps. Move any managed resources off the VM. If there are specific requirements or preferences for where resources should be relocated, then consider creating new location constraints to place the resources on the correct node. Place the VM in standby mode to ensure it is not considered in service, and to cause any remaining resources to be relocated elsewhere or stopped. Run the following command on the VM to stop the cluster software. Perform the live migration of the VM (one illustrative hypervisor command is shown after this procedure). Start cluster services on the VM. Take the VM out of standby mode. If you created any temporary location constraints before putting the VM in standby mode, adjust or remove those constraints to allow resources to go back to their normally preferred locations.
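The live migration command itself depends on the hypervisor and is outside the scope of the cluster steps above; for a libvirt-managed KVM guest, one illustrative form (the guest name and destination host are placeholders) is:
virsh migrate --live VM qemu+ssh://destination-host/system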
[ "pcs cluster standby VM", "pcs cluster stop", "pcs cluster start", "pcs cluster unstandby VM" ]
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/high_availability_add-on_reference/s1-migratinghavmshaar
Chapter 8. TokenRequest [authentication.k8s.io/v1]
Chapter 8. TokenRequest [authentication.k8s.io/v1] Description TokenRequest requests a token for a given service account. Type object Required spec 8.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec object TokenRequestSpec contains client provided parameters of a token request. status object TokenRequestStatus is the result of a token request. 8.1.1. .spec Description TokenRequestSpec contains client provided parameters of a token request. Type object Required audiences Property Type Description audiences array (string) Audiences are the intended audiences of the token. A recipient of a token must identify themself with an identifier in the list of audiences of the token, and otherwise should reject the token. A token issued for multiple audiences may be used to authenticate against any of the audiences listed but implies a high degree of trust between the target audiences. boundObjectRef object BoundObjectReference is a reference to an object that a token is bound to. expirationSeconds integer ExpirationSeconds is the requested duration of validity of the request. The token issuer may return a token with a different validity duration so a client needs to check the 'expiration' field in a response. 8.1.2. .spec.boundObjectRef Description BoundObjectReference is a reference to an object that a token is bound to. Type object Property Type Description apiVersion string API version of the referent. kind string Kind of the referent. Valid kinds are 'Pod' and 'Secret'. name string Name of the referent. uid string UID of the referent. 8.1.3. .status Description TokenRequestStatus is the result of a token request. Type object Required token expirationTimestamp Property Type Description expirationTimestamp Time ExpirationTimestamp is the time of expiration of the returned token. token string Token is the opaque bearer token. 8.2. API endpoints The following API endpoints are available: /api/v1/namespaces/{namespace}/serviceaccounts/{name}/token POST : create token of a ServiceAccount 8.2.1. /api/v1/namespaces/{namespace}/serviceaccounts/{name}/token Table 8.1. Global path parameters Parameter Type Description name string name of the TokenRequest namespace string object name and auth scope, such as for teams and projects Table 8.2. Global query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. 
The value must be less than 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. pretty string If 'true', then the output is pretty printed. HTTP method POST Description create token of a ServiceAccount Table 8.3. Body parameters Parameter Type Description body TokenRequest schema Table 8.4. HTTP responses HTTP code Response body 200 - OK TokenRequest schema 201 - Created TokenRequest schema 202 - Accepted TokenRequest schema 401 - Unauthorized Empty
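As an illustrative sketch of calling the endpoint above (the API server address, namespace, service account name, audience, and token variable are placeholder assumptions), a token can be requested with a POST such as:
curl -k -X POST -H "Authorization: Bearer $TOKEN" -H "Content-Type: application/json" https://api.example.com:6443/api/v1/namespaces/my-namespace/serviceaccounts/my-sa/token -d '{"apiVersion":"authentication.k8s.io/v1","kind":"TokenRequest","spec":{"audiences":["https://kubernetes.default.svc"],"expirationSeconds":3600}}'
The token and expirationTimestamp fields of the returned status then hold the bearer token and its expiry.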
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.13/html/authorization_apis/tokenrequest-authentication-k8s-io-v1
F.7. About Two Phase Commit (2PC)
F.7. About Two Phase Commit (2PC) A Two Phase Commit protocol (2PC) is a consensus protocol used to atomically commit or roll back distributed transactions. It remains correct in cases of temporary system failure, including network node and communication failures, and is therefore widely used.
null
https://docs.redhat.com/en/documentation/red_hat_data_grid/6.6/html/administration_and_configuration_guide/About_Two_Phase_Commit_2PC
Disconnected installation mirroring
Disconnected installation mirroring OpenShift Container Platform 4.15 Mirroring the installation container images Red Hat OpenShift Documentation Team
null
https://docs.redhat.com/en/documentation/openshift_container_platform_installation/4.15/html/disconnected_installation_mirroring/index
Deploying and Upgrading AMQ Streams on OpenShift
Deploying and Upgrading AMQ Streams on OpenShift Red Hat AMQ 2021.q3 For use with AMQ Streams 1.8 on OpenShift Container Platform
null
https://docs.redhat.com/en/documentation/red_hat_amq/2021.q3/html/deploying_and_upgrading_amq_streams_on_openshift/index
Getting started with Red Hat build of Quarkus
Getting started with Red Hat build of Quarkus Red Hat build of Quarkus 3.2 Red Hat Customer Content Services
null
https://docs.redhat.com/en/documentation/red_hat_build_of_quarkus/3.2/html/getting_started_with_red_hat_build_of_quarkus/index